Install ZooKeeper on Kubernetes

Tested On

OS: Ubuntu 18.04
Kubernetes Version: v1.17.0
Zookeeper Version: 3.5.6

In this guide you will learn how to deploy the official ZooKeeper image on Kubernetes. I use local volumes because my cluster runs on bare-metal servers.

Install ZooKeeper

  • create local directories for zookeeper data volumes on all servers that will run zookeeper
sudo mkdir -p /var/lib/k8s/volumes/zookeeper/data
  • apply the following namespace
apiVersion: v1
kind: Namespace
metadata:
  name: kafka
kubectl apply -f namespace.yml
  • apply the following persistent volumes
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-01
  labels:
    name: zookeeper-data
spec:
  capacity:
    storage: 50Gi
  accessModes: 
  - ReadWriteOnce 
  persistentVolumeReclaimPolicy: Retain 
  storageClassName: local-storage 
  local: 
    path: /var/lib/k8s/volumes/zookeeper/data 
  nodeAffinity: 
    required:   
      nodeSelectorTerms: 
      - matchExpressions: 
        - key: kubernetes.io/hostname
          operator: In
          values: 
          - k8s-01 

--- 

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-02
  labels:
    name: zookeeper-data
spec:
  capacity:
    storage: 50Gi
  accessModes: 
  - ReadWriteOnce 
  persistentVolumeReclaimPolicy: Retain 
  storageClassName: local-storage 
  local: 
    path: /var/lib/k8s/volumes/zookeeper/data 
  nodeAffinity: 
    required:   
      nodeSelectorTerms: 
      - matchExpressions: 
        - key: kubernetes.io/hostname
          operator: In
          values: 
          - k8s-02 
---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-03
  labels:
    name: zookeeper-data
spec:
  capacity:
    storage: 50Gi
  accessModes: 
  - ReadWriteOnce 
  persistentVolumeReclaimPolicy: Retain 
  storageClassName: local-storage 
  local: 
    path: /var/lib/k8s/volumes/zookeeper/data 
  nodeAffinity: 
    required:   
      nodeSelectorTerms: 
      - matchExpressions: 
        - key: kubernetes.io/hostname
          operator: In
          values: 
          - k8s-03 
kubectl apply -f zookeeper-pv.yml
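The persistent volumes above reference a local-storage StorageClass. If your cluster does not already define one, a minimal sketch could look like this (the file name local-storage.yml is just an example):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
kubectl apply -f local-storage.yml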
  • apply the following ZooKeeper cluster manifest
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: kafka
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector: 
    app: zk

---

apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: kafka
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector: 
    app: zk

---

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "zookeeper:3.5.6"
        env:
        - name: ZOO_SERVERS
          value: "server.1=zk-0.zk-hs.kafka.svc.cluster.local:2888:3888;2181 server.2=zk-1.zk-hs.kafka.svc.cluster.local:2888:3888;2181 server.3=zk-2.zk-hs.kafka.svc.cluster.local:2888:3888;2181"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        volumeMounts:
        - name: zookeeper-data
          mountPath: /data
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
      initContainers:
      - name: init-myservice
        image: busybox:1.28
        command: ['sh', '-c', 'echo $(( $(echo ${POD_NAME} | cut -d "-" -f 2) + 1 )) > /data/myid']
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: zookeeper-data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: zookeeper-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 50Gi
      selector:
        matchExpressions:
          - {key: name, operator: In, values: [zookeeper-data]}
kubectl apply -f zookeeper.yaml
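Once the pods are running, you can check that the ensemble actually formed. A quick sanity check, assuming zkServer.sh is on the container's PATH (it is in the official image):
for i in 0 1 2; do kubectl -n kafka exec zk-$i -- zkServer.sh status; done
One pod should report Mode: leader and the other two Mode: follower.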

Create User in Kubernetes

Tested On

OS: Ubuntu 18.04
Kubernetes Version: v1.17.0

This guide will show you how to create a user in Kubernetes and use it inside a bash script to run some automated tasks.

Here I will show how to create a Jenkins backup job for a Chef server that runs inside Kubernetes.

Create Kubernetes User

  • Create a jenkins-robot service account and bind it to the cluster-admin role
kubectl -n chef create serviceaccount jenkins-robot
kubectl -n chef create rolebinding jenkins-robot-binding --clusterrole=cluster-admin --serviceaccount=chef:jenkins-robot
  • Get the service account token name and decode the token with base64
TOKEN_NAME=$(kubectl -n chef get serviceaccount jenkins-robot -o go-template --template='{{range .secrets}}{{.name}}{{"\n"}}{{end}}')
kubectl -n chef get secrets ${TOKEN_NAME} -o go-template --template '{{index .data "token"}}' | base64 -d
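Before wiring the token into Jenkins, you can confirm that the role binding works by using impersonation with your admin kubeconfig; this is just a sanity check, not part of the original flow:
kubectl -n chef auth can-i get pods --as=system:serviceaccount:chef:jenkins-robot
kubectl -n chef auth can-i create pods/exec --as=system:serviceaccount:chef:jenkins-robot
Both commands should print yes.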

Create a Jenkins job

  • Upload the jenkins-robot token to Jenkins credentials as secret text
  • Upload the Kubernetes API server certificate to Jenkins credentials as a secret file. Default file location: /etc/kubernetes/pki/apiserver.crt on the control plane server
  • Example of a bash script that I use:
#!/bin/bash
# Configure kubectl
PATH=${PATH}:~/bin/
if [ ! -x ~/bin/kubectl ]
then
  curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
  chmod +x ./kubectl
  mkdir -p ~/bin/
  mv ./kubectl ~/bin/kubectl
fi
kubectl config set-cluster prod --server=https://k8s-cp.example.com:6443 --certificate-authority=${CA}
kubectl config set-credentials jenkins-robot --token=${TOKEN}
kubectl config set-context prod --cluster=prod --namespace=default --user=jenkins-robot
kubectl config use-context prod
POD_NAME=chef-0
kubectl -n chef exec -i ${POD_NAME} -- chef-server-ctl backup --yes
TAR_FILE=$(kubectl -n chef exec -i ${POD_NAME} -- ls -lrt /var/opt/chef-backup/ | tail -1 | awk '{print $NF}')
rm -f chef-backup*.tgz
kubectl -n chef cp ${POD_NAME}:/var/opt/chef-backup/${TAR_FILE} ${TAR_FILE}
  • Upload the tar file to S3 with the publish artifacts to S3 bucket step
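If you prefer to push the backup from the script itself instead of the S3 publisher plugin, a hedged alternative using the AWS CLI (the bucket name here is only an example) would be:
aws s3 cp ${TAR_FILE} s3://my-chef-backups/$(date +%F)/${TAR_FILE}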

Install HA Kubernetes Cluster on BareMetal

Tested On

OS: Ubuntu 18.04
Kubernetes Version: v1.17.0
Docker Version: 19.03.5

Prerequisites

  • Install docker
apt-get update
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
  • Install kubelet kubeadm and kubectl
curl -s  https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get install -y kubelet kubeadm kubectl
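Optionally, you can pin these packages so an unattended upgrade does not bump the cluster components unexpectedly; for example:
apt-mark hold kubelet kubeadm kubectl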
  • Configure docker for kubernetes
cat > /etc/docker/daemon.json <<EOF
 {
   "exec-opts": ["native.cgroupdriver=systemd"],
   "log-driver": "json-file",
   "log-opts": {
     "max-size": "100m"
   },
   "storage-driver": "overlay2"
 }
 EOF
systemctl daemon-reload
systemctl restart docker
  • Disable swap for kubernetes
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
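Depending on your setup, the kubeadm preflight checks may also require bridged traffic to be visible to iptables; a sketch of the sysctl settings commonly applied (adjust to your environment):
modprobe br_netfilter
cat > /etc/sysctl.d/99-kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system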

Installation

  • Initialize the Kubernetes cluster (I have two network interfaces, one public and one private, so I set apiserver-advertise-address to the private address; for an HA cluster the init also needs a control-plane endpoint and uploaded certificates so the other control plane nodes can join later)
kubeadm init --control-plane-endpoint k8s-cp.example.com:6443 --upload-certs --apiserver-advertise-address 172.18.73.71 --apiserver-cert-extra-sans k8s-api.example.com
  • Configure kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pods --all-namespaces
  • Install weaveworks network plugin
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  • If you need to change the kubelet environment on a server (I changed/added resolv-conf and node-ip), edit KUBELET_KUBEADM_ARGS in /var/lib/kubelet/kubeadm-flags.env and restart kubelet
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/etc/resolv.conf --node-ip=172.18.73.71"
service kubelet restart
  • If you want to run containers on the master, remove the master taint
kubectl taint nodes --all node-role.kubernetes.io/master-
  • Join the control plane servers
kubeadm join k8s-cp.example.com:6443 --token xxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxx --control-plane --certificate-key xxxxxxxx --apiserver-advertise-address 172.18.73.72
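The token and certificate key printed by kubeadm init expire after a while. If they are gone by the time you join the other control plane nodes, you can regenerate them on the first control plane server; the values printed will differ in your cluster:
# re-upload the control plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs
# print a fresh join command (append --control-plane --certificate-key <key> for control plane nodes)
kubeadm token create --print-join-command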
  • Join worker servers to kubernetes cluster
kubeadm join 172.18.73.71:6443 --apiserver-advertise-address 172.18.73.72 --token xxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxx
  • If you have multiple network interfaces like me, then you need to add the following route on the worker servers (replace ens1 with your private network interface)
ip route add 10.96.0.1/32 dev ens1
  • Check that all pods are running
kubectl get pods --all-namespaces

Deploy Prometheus and Grafana on Kubernetes

Tested On

OS: Ubuntu 18.04
Kubernetes Version: v1.15.3
Docker Version: 18.09.8
Prometheus Version: 2.12.0

Prometheus Deployment

  • Create the monitoring namespace
kubectl create namespace monitoring
  • apply the following files

prometheus-cluster-role.yaml:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring

prometheus-config-map.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.rules: |-
    groups:
    - name: devopscube demo alert
      rules:
      - alert: High Pod Memory
        expr: sum(container_memory_usage_bytes) > 1
        for: 1m
        labels:
          severity: slack
        annotations:
          summary: High Memory Usage
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    rule_files:
      - /etc/prometheus/prometheus.rules
    alerting:
      alertmanagers:
      - scheme: http
        static_configs:
        - targets:
          - "alertmanager.monitoring.svc:9093"
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
    - job_name: 'kubernetes-cadvisor'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

prometheus-deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-deployment
  labels:
    app: prometheus-server
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          emptyDir: {}

prometheus-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9090

prometheus-ingress-service.yml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
spec:
  tls:
    - hosts:
      - prom.example.com
      secretName: wildcard.example.com.crt
  rules:
    - host: prom.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: prometheus-service
              servicePort: 8080
  • apply commands
kubectl apply -f prometheus-cluster-role.yaml
kubectl apply -f prometheus-config-map.yaml
kubectl apply -f prometheus-deployment.yaml
kubectl apply -f prometheus-service.yaml
kubectl apply -f prometheus-ingress-service.yml 
  • Upload the certificate used for HTTPS (the TLS secret referenced by the Ingress)
kubectl create secret tls -n monitoring wildcard.example.com.crt --key wildcard.example.com.pem --cert wildcard.example.com.crt
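Before putting the Ingress in front of it, you can verify that Prometheus is up and scraping by port-forwarding the service; for example:
kubectl -n monitoring get pods
kubectl -n monitoring port-forward svc/prometheus-service 9090:8080
Then browse to http://localhost:9090/targets and check that the targets are up.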

Grafana Deployment

  • apply the following files:

grafana-configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-grafana-ini
  namespace: monitoring
  labels:
    app.kubernetes.io/name: cluster-monitoring
    app.kubernetes.io/component: grafana
data:
  # Grafana's main configuration file. To learn more about the configuration options available to you,
  # consult https://grafana.com/docs/installation/configuration
  grafana.ini: |
    [analytics]
    check_for_updates = true
    [grafana_net]
    url = https://grafana.example.com
    [log]
    mode = console
    [paths]
    data = /var/lib/grafana/data
    logs = /var/log/grafana
    plugins = /var/lib/grafana/plugins

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-grafana-datasources
  namespace: monitoring
  labels:
    app.kubernetes.io/name: cluster-monitoring
data:
  # A file that specifies data sources for Grafana to use to populate dashboards.
  # To learn more about configuring this, consult https://grafana.com/docs/administration/provisioning/#datasources
  datasources.yaml: |
    apiVersion: 1
    datasources:
    - access: proxy
      isDefault: true
      name: prometheus
      type: prometheus
      url: http://prometheus-service.monitoring:8080
      version: 1

grafana-pv-data.yml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-data
  namespace: monitoring
  labels:
    name: grafana-data
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /var/lib/k8s/volumes/grafana/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-02

grafana-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: cluster-monitoring-grafana
  namespace: monitoring
  labels:
    app.kubernetes.io/name: cluster-monitoring
    app.kubernetes.io/component: grafana
type: Opaque
data:
  # By default, admin-user is set to admin
  admin-user: YWRtaW4=
  admin-password: "base64encodedpassword"
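The admin-password value above is only a placeholder; Secret data must be base64 encoded before it goes into the manifest. For example:
echo -n 'admin' | base64                 # YWRtaW4=
echo -n 'YourGrafanaPassword' | base64   # paste the output as admin-password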

grafana-serviceaccount.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: grafana
  namespace: monitoring

grafana-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: monitoring
  labels:
    k8s-app: grafana
    app.kubernetes.io/name: cluster-monitoring
    app.kubernetes.io/component: grafana
spec:
  ports:
    # Routes port 80 to port 3000 of the Grafana StatefulSet Pods
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    k8s-app: grafana

grafana-statefulset.yaml:

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: cluster-monitoring-grafana
  namespace: monitoring
  labels: &Labels
    k8s-app: grafana
    app.kubernetes.io/name: cluster-monitoring
    app.kubernetes.io/component: grafana
spec:
  serviceName: cluster-monitoring-grafana
  replicas: 1
  selector:
    matchLabels: *Labels
  template:
    metadata:
      labels: *Labels
    spec:
      serviceAccountName: grafana
      # Configure an init container that will chmod 777 Grafana's data directory
      # and volume before the main Grafana container starts up.
      # To learn more about init containers, consult https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
      # from the official Kubernetes docs.
      initContainers:
        - name: "init-chmod-data"
          image: debian:9
          imagePullPolicy: "IfNotPresent"
          command: ["chmod", "777", "/var/lib/grafana"]
          volumeMounts:
          - name: grafana-data
            mountPath: "/var/lib/grafana"
      containers:
        - name: grafana
          # The main Grafana container, which uses the grafana/grafana:6.2.5 image
          # from https://hub.docker.com/r/grafana/grafana
          image: grafana/grafana:6.2.5
          imagePullPolicy: Always
          # Mount in all the previously defined ConfigMaps as volumeMounts
          # as well as the Grafana data volume
          volumeMounts:
            - name: config
              mountPath: "/etc/grafana/"
            - name: datasources
              mountPath: "/etc/grafana/provisioning/datasources/"
            - name: grafana-data
              mountPath: "/var/lib/grafana"
          ports:
            - name: service
              containerPort: 80
              protocol: TCP
            - name: grafana
              containerPort: 3000
              protocol: TCP
          # Set the GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD environment variables
          # using the Secret defined in grafana-secret.yaml
          env:
            - name: GF_SECURITY_ADMIN_USER
              valueFrom:
                secretKeyRef:
                  name: cluster-monitoring-grafana
                  key: admin-user
            - name: GF_SECURITY_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cluster-monitoring-grafana
                  key: admin-password
          # Define a liveness and readiness probe that will hit /api/health using port 3000.
          # To learn more about Liveness and Readiness Probes,
          # consult https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
          # from the official Kubernetes docs.
          livenessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 60
            timeoutSeconds: 30
            failureThreshold: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 60
            timeoutSeconds: 30
            failureThreshold: 10
            periodSeconds: 10
          # Define resource limits and requests of 50m of CPU and 100Mi of memory.
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 100Mi
      # Define configMap volumes for the above ConfigMap files, and a volumeClaimTemplate
      # for Grafana's 200Gi local-storage data volume, which will be mounted to /var/lib/grafana.
      volumes:
        - name: config
          configMap:
            name: cluster-monitoring-grafana-ini
        - name: datasources
          configMap:
            name: cluster-monitoring-grafana-datasources
  volumeClaimTemplates:
  - metadata:
      name: grafana-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 200Gi
      selector:
        matchExpressions:
          - {key: name, operator: In, values: [grafana-data]}

grafana-ingress-service.yml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
spec:
  tls:
    - hosts:
      - grafana.example.com
      secretName: wildcard.example.com.crt
  rules:
    - host: grafana.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: grafana-service
              servicePort: 80
  • apply commands
kubectl apply -f grafana-configmap.yaml
kubectl apply -f grafana-pv-data.yml
kubectl apply -f grafana-secret.yaml
kubectl apply -f grafana-serviceaccount.yaml
kubectl apply -f grafana-service.yaml
kubectl apply -f grafana-statefulset.yaml
kubectl apply -f grafana-ingress-service.yml
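You can then check that Grafana came up before logging in; for example:
kubectl -n monitoring rollout status statefulset/cluster-monitoring-grafana
kubectl -n monitoring get pods -l k8s-app=grafana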
  • Log in to Grafana with admin and your password and import some dashboards to monitor Kubernetes. I used the following dashboards:

https://grafana.com/grafana/dashboards/10000
https://grafana.com/grafana/dashboards/315

Kubernetes Dashboard Behind Ingress-Nginx

Tested On

OS: Ubuntu 18.04
Kubernetes Version: v1.15.3
Docker Version: 18.09.8
Kubernetes Dashboard Version: v1.10.1

Installation

  • Download kubernetes-dashboard yaml file
curl https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml -o kubernetes-dashboard.yaml
  • Edit the file to change the dashboard configuration to use HTTP and an insecure port. Here is the file that I used:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret -------------------

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account -------------------

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding -------------------

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment -------------------

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8444
          protocol: TCP
        args:
          - --enable-insecure-login
          - --port=8443
          - --insecure-port=8444
          - --insecure-bind-address=0.0.0.0
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 8444
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service -------------------

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 8444
  selector:
    k8s-app: kubernetes-dashboard
  • apply the file
kubectl apply -f kubernetes-dashboard.yaml
  • Upload your certificate for ingress-nginx
kubectl create secret tls -n kube-system wildcard.example.com.crt --key wildcard.example.com.pem --cert wildcard.example.com.crt
  • Create Ingress file for nginx configuration
vi ingress-service.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kube-system
spec:
  tls:
    - hosts:
      - k8s-dashboard.example.com
      secretName: wildcard.example.com.crt
  rules:
    - host: k8s-dashboard.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 80
  • Create dashboard admin user
vi dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
kubectl apply -f dashboard-adminuser.yaml
  • Get the token of dashboard admin user
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
  • Browse to your dashboard at https://k8s-dashboard.example.com and log in with the token of the dashboard admin user

Create Ingress-Nginx in Kubernetes on BareMetal

Tested On

OS: Ubuntu 18.04
Kubernetes Version: v1.15.3
Docker Version: 18.09.8

Procedure

I used the “Via the host network” solution described in the ingress-nginx docs: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network

  • Download the mandatory file from ingress-nginx
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml -o mandatory.yaml
  • Edit the mandatory file: change the Deployment to a DaemonSet, remove the replicas field from the DaemonSet spec, and add hostNetwork: true to the pod spec
vi mandatory.yaml
...
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
  • Run kubectl apply
kubectl apply -f mandatory.yaml
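Because the controller runs with hostNetwork, every node that runs a pod of the DaemonSet listens on ports 80 and 443 directly. A quick way to verify (replace <worker-node-ip> with one of your node addresses):
kubectl -n ingress-nginx get pods -o wide
curl -i http://<worker-node-ip>/
A 404 response from the nginx default backend means the controller is serving traffic.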

Install Kubernetes Cluster on BareMetal

Tested On

OS: Ubuntu 18.04
Kubernetes Version: v1.15.3
Docker Version: 18.09.8

Prerequisites

  • Install docker
apt-get update
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
  • Install kubelet kubeadm and kubectl
curl -s  https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get install -y kubelet kubeadm kubectl
  • Configure docker for kubernetes
cat > /etc/docker/daemon.json <<EOF
 {
   "exec-opts": ["native.cgroupdriver=systemd"],
   "log-driver": "json-file",
   "log-opts": {
     "max-size": "100m"
   },
   "storage-driver": "overlay2"
 }
 EOF
systemctl daemon-reload
systemctl restart docker
  • Disable swap for kubernetes
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Installation

  • Initialize the Kubernetes cluster (I have two network interfaces, one public and one private, so I set apiserver-advertise-address to the private address)
kubeadm init --apiserver-advertise-address 172.18.73.71 --apiserver-cert-extra-sans k8s-api.example.com
  • Configure kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pods --all-namespaces
  • Install weaveworks network plugin
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  • If you want to run containers on the master, remove the master taint
kubectl taint nodes --all node-role.kubernetes.io/master-
  • Join worker servers to the Kubernetes cluster
kubeadm join 172.18.73.71:6443 --apiserver-advertise-address 172.18.73.72 --token xxxxxxxxxxx --discovery-token-ca-cert-hash sha256:asdjlkasjfljasfljsldjflsdj
  • If you have multiple network interfaces like me, then you need to add the following route on the worker servers (replace ens1 with your private network interface)
ip route add 10.96.0.1/32 dev ens1
  • Check that all pods are running
kubectl get pods --all-namespaces

Automatic Backup of AWS instances

There is no built-in option in AWS to back up instances automatically, so I created a Ruby script that can run from crontab and create AMI images from EC2 instances automatically.

aws_ami_autobackup.rb works with EC2 tags: the script gets a tag and value and creates an AMI from every instance that has this tag and value.

Here is how to install and use the script.

Prerequisite

  • Install Ruby (I use Ruby 2.2) with the aws-sdk-resources gem.
  • Create an IAM account with privileges to create and remove EC2 snapshots and AMIs, and save its access key and secret key. The quickest way is to use the AmazonEC2FullAccess policy.
  • Create a credentials file for the user that will run the tool in ~/.aws/credentials
vi ~/.aws/credentials
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
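For reference, on Ubuntu the Ruby prerequisite can be installed roughly like this (package and gem names may differ on your distribution):
sudo apt-get install ruby
sudo gem install aws-sdk-resources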

 

How To Use aws_ami_autobackup.rb

Here are a few examples of how to use the tool:

  • To take an AMI of all instances that have the tag daily_backup with a value of true in the us-east-1 region and keep it for 7 days:
/usr/local/bin/aws_ami_autobackup.rb -t daily_backup -v true -r us-east-1 -x 7
  • To take an AMI of all instances that have the tag daily_backup with a value of true in the us-east-1 region, across multiple profiles (AWS accounts):
for i in dev qa test; do /usr/local/bin/aws_ami_autobackup.rb -t daily_backup -v true -r us-east-1 -x 7 -p ${i}; done
  • Create a cronjob that takes an AMI every day at 00:00 and keeps it for 7 days:
00 00 * * * /usr/local/bin/aws_ami_autobackup.rb -t daily_backup -v true -r us-east-1 -x 7

Now you just need to add the right tags to your instances and test it 🙂

Stop Start AWS Instances Automatically

In order to save money in AWS, you can stop dev instances at night and on weekends and start them again in the morning.

I created a wrapper script for the AWS CLI tools (stop_start_aws_instances.sh) that, together with cronjobs, can help you automatically stop AWS instances when you don’t use them.

The script is located here:
https://github.com/nachum234/scripts/blob/master/stop_start_aws_instances.sh
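One way to install it, assuming GitHub's usual raw file URL layout:
curl -o /usr/local/bin/stop_start_aws_instances.sh https://raw.githubusercontent.com/nachum234/scripts/master/stop_start_aws_instances.sh
chmod +x /usr/local/bin/stop_start_aws_instances.sh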

Prerequisite

In order to use the script you need to install and configure the AWS CLI tools.

Here is a quick guide on how to install and configure them:

  • Install aws cli tools
sudo pip install awscli
  • Create an IAM account with privileges to stop and start EC2 instances and save its access key and secret key. The quickest way is to use the AmazonEC2FullAccess policy.
  • Configure the AWS CLI tools. You need to enter the user's access key and secret key
aws configure

Or, if you want to configure a specific profile:

aws --profile dev configure

For more information, see the AWS guide: http://docs.aws.amazon.com/cli/latest/userguide.

How To Use stop_start_aws_instances.sh

Here are a few examples of how to use the script:

  • To stop all instances that have the tag daily-stop with a value of true in the us-east-1 region:
stop_start_aws_instances.sh -p default -a stop-instances -f Name=tag:daily-stop,Values=true -r us-east-1
  • To test which instances the action will apply to, run the script with the describe-instances action:
stop_start_aws_instances.sh -p default -a describe-instances -f Name=tag:daily-stop,Values=true -r us-east-1
  • To stop all instances that have the tag daily-stop with a value of true in the us-east-1 region, across multiple profiles:
for i in dev qa test; do stop_start_aws_instances.sh -p $i -a stop-instances -f Name=tag:daily-stop,Values=true -r us-east-1; done
  • Create cronjobs that start instances every working day at 9:00 and stop instances every day at 19:00:
00 09 * * 1-5 /usr/local/bin/stop_start_aws_instances.sh -p default -a start-instances -f Name=tag:daily-start,Values=true -r us-east-1
00 19 * * * /usr/local/bin/stop_start_aws_instances.sh -p default -a stop-instances -f Name=tag:daily-stop,Values=true -r us-east-1

Now you just need to add the right tags to your instances and test it 🙂