Kubernetes Cluster on Bare Metal System Made Possible using MetalLB

Estimated Reading Time: 11 minutes

If you try to set up a Kubernetes cluster on a bare metal system, you will notice that Services of type LoadBalancer remain in the “pending” state indefinitely when created. This is expected because Kubernetes, by default, does not offer a network load-balancer implementation for bare metal clusters.

In a cloud-hosted Kubernetes cluster, you request a load balancer and your cloud platform assigns an IP address to you. In a bare metal cluster, you need an external load-balancer implementation that is capable of performing this IP allocation.
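
For example, a plain LoadBalancer Service like the minimal sketch below (the name my-service and the label app=my-app are hypothetical, for illustration only) will show its EXTERNAL-IP as <pending> forever on a bare metal cluster until a load-balancer implementation such as MetalLB is installed:

apiVersion: v1
kind: Service
metadata:
  name: my-service            # hypothetical name, for illustration only
spec:
  type: LoadBalancer          # stays <pending> on bare metal without an LB implementation
  selector:
    app: my-app               # assumes pods labelled app=my-app
  ports:
  - port: 80
    targetPort: 80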

Missing Load-Balancer in Bare Metal System

Enter MetalLB…

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters that uses standard routing protocols. It provides a network load-balancer implementation for clusters that do not run on a supported cloud provider, effectively allowing the use of LoadBalancer Services within any cluster. It integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.

Why can’t Ingress help me out here?

Yes, Ingress is one of the better options if you deploy a Kubernetes cluster on bare metal. Ingress lets you configure load balancing of HTTP or HTTPS traffic to your deployed services using software load balancers like NGINX or HAProxy, deployed as pods in your cluster. Ingress also gives you Layer 7 routing for your applications. The problem is that it doesn’t easily route TCP or UDP traffic. The best way to do that is with a Service of type LoadBalancer. However, if you deployed your Kubernetes cluster on bare metal, you didn’t have the option of using a LoadBalancer.

How does MetalLB work?

MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.

It is important to note that MetalLB cannot create IP addresses out of thin air, so you do have to give it pools of IP addresses that it can use. It will then take care of assigning and unassigning individual addresses as services come and go, but it will only ever hand out IPs that are part of its configured pools.

Okay, wait... how will I get an IP address pool for MetalLB?

How you get IP address pools for MetalLB depends on your environment. If you’re running a bare metal cluster in a colocation facility, your hosting provider probably offers IP addresses for lease. In that case, you would lease, say, a /26 of IP space (64 addresses), and provide that range to MetalLB for cluster services.

In this blog post, I will showcase how to set up a 3-node Kubernetes cluster using MetalLB. The steps below have also been tested on ESXi virtual machines and work flawlessly.

Preparing the Infrastructure


  • Machine #1(Master): 10.94.214.206
  • Machine #2(Worker Node1): 10.94.214.210
  • Machine #3(Worker Node2): 10.94.214.213

Assign a hostname to each of these systems and add matching entries to /etc/hosts:

~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       ubuntu1804-1
10.94.214.206    kubemaster.dell.com
10.94.214.210   node1.dell.com
10.94.214.213   node2.dell.com

Installing curl package

$ sudo apt install curl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libcurl4
The following NEW packages will be installed:
  curl libcurl4
0 upgraded, 2 newly installed, 0 to remove and 472 not upgraded.
Need to get 373 kB of archives.
After this operation, 1,036 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcurl4 amd64 7.58.0-2ubuntu3.7 [214 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 curl amd64 7.58.0-2ubuntu3.7 [159 kB]
Fetched 373 kB in 2s (164 kB/s)
Selecting previously unselected package libcurl4:amd64.
(Reading database ... 128791 files and directories currently installed.)
Preparing to unpack .../libcurl4_7.58.0-2ubuntu3.7_amd64.deb ...
Unpacking libcurl4:amd64 (7.58.0-2ubuntu3.7) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.58.0-2ubuntu3.7_amd64.deb ...
Unpacking curl (7.58.0-2ubuntu3.7) ...
Setting up libcurl4:amd64 (7.58.0-2ubuntu3.7) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2) ...
Setting up curl (7.58.0-2ubuntu3.7) ...

Installing Docker

$ sudo curl -sSL https://get.docker.com/ | sh
# Executing docker install script, commit: 2f4ae48
+ sudo -E sh -c apt-get update -qq >/dev/null
+ sudo -E sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sudo -E sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sudo -E sh -c echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" > /etc/apt/sources.list.d/docker.list
+ sudo -E sh -c apt-get update -qq >/dev/null
+ [ -n  ]
+ sudo -E sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
+ sudo -E sh -c docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        2d0083d
 Built:             Thu Jun 27 17:56:23 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       2d0083d
  Built:            Thu Jun 27 17:23:02 2019
  OS/Arch:          linux/amd64
  Experimental:     false
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker cse

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
cse@kubemaster:~$
~$ sudo docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        2d0083d
 Built:             Thu Jun 27 17:56:23 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       2d0083d
  Built:            Thu Jun 27 17:23:02 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Add the Kubernetes signing key on all the nodes

$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
OK

Adding the Xenial Kubernetes Repository on all the nodes

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Installing Kubeadm

sudo apt install kubeadm

Verifying Kubeadm installation

$ sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Disable swap memory (if running) on all the nodes

sudo swapoff -a
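
To keep swap disabled across reboots, you can optionally comment out the swap entry in /etc/fstab as well (a common approach; adjust if your fstab layout differs):

# comment out any swap line so it is not re-enabled after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab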

Steps to setup K8s Cluster

sudo kubeadm init --apiserver-advertise-address $(hostname -i)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -n kube-system -f \
    "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"

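Once the Weave Net add-on has been applied, a quick optional check is to confirm that the control-plane and weave-net pods reach the Running state and the master node reports Ready:

kubectl get pods -n kube-system -o wide
kubectl get nodes
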
In case you face any issue, just run the below command to see the logs:

journalctl -xeu kubelet

Adding Worker Node

cse@ubuntu1804-1:~$ sudo swapoff -a
cse@ubuntu1804-1:~$ sudo kubeadm join 10.94.214.210:6443 --token aju7kd.5mlhmmo1wlf8d5un     --discovery-token-ca-cert-hash sha256:89541bb9bbe5ee1efafe17b20eab77e6b756bd4ae023d2ff7c67ce73e3e8c7bb
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

cse@ubuntu1804-1:~$

Listing the Nodes

cse@kubemaster:~$ sudo kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
kubemaster         Ready    master   8m17s   v1.15.0
worker1.dell.com   Ready    <none>   5m22s   v1.15.0
cse@kubemaster:~$
cse@kubemaster:~$ sudo kubectl describe node worker1.dell.com
Name:               worker1.dell.com
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker1.dell.com
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 05 Jul 2019 16:10:33 -0400
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 05 Jul 2019 16:10:55 -0400   Fri, 05 Jul 2019 16:10:55 -0400   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Fri, 05 Jul 2019 16:15:33 -0400   Fri, 05 Jul 2019 16:10:33 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 05 Jul 2019 16:15:33 -0400   Fri, 05 Jul 2019 16:10:33 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 05 Jul 2019 16:15:33 -0400   Fri, 05 Jul 2019 16:10:33 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Fri, 05 Jul 2019 16:15:33 -0400   Fri, 05 Jul 2019 16:11:03 -0400   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.94.214.213
  Hostname:    worker1.dell.com
Capacity:
 cpu:                2
 ephemeral-storage:  102685624Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             4040016Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  94635070922
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3937616Ki
 pods:               110
System Info:
 Machine ID:                 e7573bb6bf1e4cf5b9249413950f0a3d
 System UUID:                2FD93F42-FA94-0C27-83A3-A1F9276469CF
 Boot ID:                    782d6cfc-08a2-4586-82b6-7149389b1f4f
 Kernel Version:             4.15.0-29-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.7
 Kubelet Version:            v1.15.0
 Kube-Proxy Version:         v1.15.0
Non-terminated Pods:         (4 in total)
  Namespace                  Name                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                         ------------  ----------  ---------------  -------------  ---
  default                    my-nginx-68459bd9bb-55wk7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
  default                    my-nginx-68459bd9bb-z5r45    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
  kube-system                kube-proxy-jt4bs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
  kube-system                weave-net-kw9gg              20m (1%)      0 (0%)      0 (0%)           0 (0%)         5m51s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                20m (1%)  0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:
  Type    Reason                   Age                    From                          Message
  ----    ------                   ----                   ----                          -------
  Normal  Starting                 5m51s                  kubelet, worker1.dell.com     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m51s (x2 over 5m51s)  kubelet, worker1.dell.com     Node worker1.dell.com status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m51s (x2 over 5m51s)  kubelet, worker1.dell.com     Node worker1.dell.com status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m51s (x2 over 5m51s)  kubelet, worker1.dell.com     Node worker1.dell.com status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  5m51s                  kubelet, worker1.dell.com     Updated Node Allocatable limit across pods
  Normal  Starting                 5m48s                  kube-proxy, worker1.dell.com  Starting kube-proxy.
  Normal  NodeReady                5m21s                  kubelet, worker1.dell.com     Node worker1.dell.com status is now: NodeReady
cse@kubemaster:~$
$ sudo kubectl run nginx --image nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
~$

Configuring the MetalLB Load Balancer

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
~$ sudo kubectl get ns
NAME              STATUS   AGE
default           Active   23h
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h
metallb-system    Active   13m
$ kubectl get all -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-547d466688-m9xlt   1/1     Running   0          13m
pod/speaker-tb9d7                 1/1     Running   0          13m



NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/speaker   1         1         1       1            1           <none>          13m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           13m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-547d466688   1         1         1       13m

There are two components:

  • Controller – assigns the IP address to the LB
  • Speaker – ensures that you can reach the service through the LB

The controller component is deployed as a Deployment, and the speaker as a DaemonSet that runs on all worker nodes, as shown below.
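
A quick way to confirm this layout is to list the MetalLB pods along with the node each one runs on (pod names will differ in your cluster):

# list MetalLB pods along with the node each one is scheduled on
kubectl get pods -n metallb-system -o wide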

Next, we need to look at config files.

To configure MetalLB, write a config map to metallb-system/config

Link: https://metallb.universe.tf/configuration/

Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP addresses.

 sudo kubectl get nodes -o wide
NAME               STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kubemaster         Ready    master   23h   v1.15.0   10.94.214.210   <none>        Ubuntu 18.04.1 LTS   4.15.0-29-generic   docker://18.9.7
worker1.dell.com   Ready    <none>   23h   v1.15.0   10.94.214.213   <none>        Ubuntu 18.04.1 LTS   4.15.0-29-generic   docker://18.9.7

We need to pay attention to the Internal IPs above; the address pool we hand to MetalLB must come from this same network range.

$ sudo cat <<EOF | kubectl create -f -
> apiVersion: v1
> kind: ConfigMap
> metadata:
>   namespace: metallb-system
>   name: config
> data:
>   config: |
>     address-pools:
>     - name: default
>       protocol: layer2
>       addresses:
>       - 10.94.214.200-10.94.214.255
>
> EOF
configmap/config created
cse@kubemaster:~$ kubectl describe configmap config -n metallb-system
Name:         config
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 10.94.214.200-10.94.214.255

Events:  <none>
kubectl get all
$ kubectl expose deploy nginx --port 80 --type LoadBalancer
service/nginx exposed

Every 2.0s: kubectl get all             kubemaster: Sat Jul  6 15:33:30 2019

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7bb7cd8db5-rc8c4   1/1     Running   0          18m


NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        23h
service/nginx        LoadBalancer   10.105.157.210   10.94.214.200   80:30631/TCP   34s


NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           18m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-7bb7cd8db5   1         1         1       18m


By now, you should be able to browse the NGINX welcome page at http://10.94.214.200, the external IP that MetalLB assigned to the nginx service.
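
You can also verify it quickly from any machine on the same network (assuming 10.94.214.200 is the external IP MetalLB handed out, as shown in the output above):

# expect an HTTP/1.1 200 OK response header from the nginx pod behind the Service
curl -I http://10.94.214.200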

Hurray !!!

Let’s run another nginx service:

~$ kubectl run nginx2 --image nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx2 created
Every 2.0s: kubectl get all             kubemaster: Sat Jul  6 15:37:21 2019

NAME                          READY   STATUS    RESTARTS   AGE
pod/nginx-7bb7cd8db5-rc8c4    1/1     Running   0          21m
pod/nginx2-5746fc444c-4tsls   1/1     Running   0          42s


NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        23h
service/nginx        LoadBalancer   10.105.157.210   10.94.214.200   80:30631/TCP   4m24s


NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx    1/1     1            1           21m
deployment.apps/nginx2   1/1     1            1           42s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-7bb7cd8db5    1         1         1       21m
replicaset.apps/nginx2-5746fc444c   1         1         1       42s
cse@kubemaster:~$ kubectl expose deploy  nginx2 --port 80 --type LoadBalancer
service/nginx2 exposed
cse@kubemaster:~$
Every 2.0s: kubectl get all             kubemaster: Sat Jul  6 15:38:49 2019

NAME                          READY   STATUS    RESTARTS   AGE
pod/nginx-7bb7cd8db5-rc8c4    1/1     Running   0          23m
pod/nginx2-5746fc444c-4tsls   1/1     Running   0          2m10s


NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        23h
service/nginx        LoadBalancer   10.105.157.210   10.94.214.200   80:30631/TCP   5m52s
service/nginx2       LoadBalancer   10.107.32.195    10.94.214.201   80:31390/TCP   15s


NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx    1/1     1            1           23m
deployment.apps/nginx2   1/1     1            1           2m10s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-7bb7cd8db5    1         1         1       23m
replicaset.apps/nginx2-5746fc444c   1         1         1       2m10s

Let’s run the hellowhale example:

cse@kubemaster:~$ sudo kubectl run hellowhale --image ajeetraina/hellowhale
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/hellowhale created
cse@kubemaster:~$
cse@kubemaster:~$ sudo kubectl expose deploy hellowhale --port 89 --type LoadBalancer
service/hellowhale exposed
cse@kubemaster:~$
cse@kubemaster:~$ sudo kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/hellowhale-64ff675cb5-c95qf   1/1     Running   0          99s
pod/nginx-7bb7cd8db5-rc8c4        1/1     Running   0          2d9h
pod/nginx2-5746fc444c-4tsls       1/1     Running   0          2d8h


NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/hellowhale   LoadBalancer   10.100.239.246   10.94.214.203   89:30385/TCP   29s
service/kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        3d8h
service/nginx        LoadBalancer   10.105.157.210   10.94.214.200   80:30631/TCP   2d8h
service/nginx2       LoadBalancer   10.107.32.195    10.94.214.201   80:31390/TCP   2d8h


NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hellowhale   1/1     1            1           99s
deployment.apps/nginx        1/1     1            1           2d9h
deployment.apps/nginx2       1/1     1            1           2d8h

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/hellowhale-64ff675cb5   1         1         1       99s
replicaset.apps/nginx-7bb7cd8db5        1         1         1       2d9h
replicaset.apps/nginx2-5746fc444c       1         1         1       2d8h

As you have seen, it is easy to deploy Kubernetes on bare metal using popular operating systems like Ubuntu Linux, CentOS, SLES or Red Hat Enterprise Linux. Bare metal Kubernetes deployments are no longer second-class deployments. Now you, too, can use LoadBalancer Services in your cluster with MetalLB.

Do you have any queries around Docker, Kubernetes & Cloud? Here’s your chance to meet 850+ Community members via Slack https://tinyurl.com/y973wcq8

In case you’re new and want to start with Docker & Kubernetes, don’t miss out https://dockerlabs.collabnix.com

Top 5 Most Exciting Dockercon EU 2018 Announcements

Estimated Reading Time: 9 minutes

Last week I attended Dockercon EU 2018, which took place at the Centre de Convencions Internacional de Barcelona (CCIB) in Barcelona, Spain. With over 3,000 attendees from around the globe, 52 breakout sessions, 11 Community Theatres, 12 workshops, over 100 total sessions, exciting Hallway Tracks, Hands-on Labs/Trainings, paid trainings, a women’s networking event, DockerPals and so on, Dockercon allowed developers, sysadmins, product managers and industry evangelists to come together and share their wealth of experience around container technology. This time I was lucky enough to get the chance to emcee the Docker for Developers track for the first time. Not only that, I conducted a Hallway Track for the OpenUSM project and the DockerLabs community contribution effort. Around 20-30 participants showed interest in learning more about this system management, monitoring and log analytics tool.

This Dockercon we had the Docker Captains Summit for the first time, where an entire day was dedicated to Captains. On December 3 (10:00 AM till 3:00 PM), we got the chance to interact with Docker staff and put forward all our queries around Docker’s future roadmap. It was amazing to meet the young Captains who joined us this year, and to get familiar with what they have been contributing to during the introductory rounds.

This Dockercon brought a couple of exciting announcements. Three of the new features were targeted at Docker Community Edition, while two were for Docker Enterprise customers. Here’s a rundown of what I think are the 5 most exciting announcements made last week –

#1. Announcement of Cloud Native Application Bundles(CNAB)

Microsoft and Docker captured a great deal of attention with the announcement of CNAB – Cloud Native Application Bundles.

What is CNAB? 

Cloud Native Application Bundles (CNAB) are a standard packaging format for multi-component distributed applications. It allows packages to target different runtimes and architectures. It empowers application distributors to package applications for deployment on a wide variety of cloud platforms, cloud providers, and cloud services. It also provides the capabilities necessary for delivering multi-container applications in disconnected environments.

Is it platform-specific tool?

CNAB is not a platform-specific tool. While it uses containers for encapsulating installation logic, it remains un-opinionated about what cloud environment it runs in. CNAB developers can bundle applications targeting environments spanning IaaS (like OpenStack or Azure), container orchestrators (like Kubernetes or Nomad), container runtimes (like local Docker or ACI), and cloud platform services (like object storage or Database as a Service). CNAB can also be used for packaging other distributed applications, such as IoT or edge computing. In nutshell, CNAB are a package format specification that describes a technology for bundling, installing, and managing distributed applications, that are by design, cloud agnostic.

Why do we need CNAB?

The current distributed computing landscape involves a combination of executable units and supporting API-based services. Executable units include Virtual Machines (VMs), Containers (e.g. Docker and OCI) and Functions-as-a-Service (FaaS), as well as higher-level PaaS services. Along with these executable units, many managed cloud services (from load balancers to databases) are provisioned and interconnected via REST (and similar network-accessible) APIs. The overall goal of CNAB is to provide a packaging format that can enable application providers and developers with a way of installing a multi-component application into a distributed computing environment, supporting all of the above types.


Is it open source? Tell me more about the CNAB format.

It is an open source, cloud-agnostic specification for packaging and running distributed applications. It is a nascent specification that offers a way to repackage distributed computing apps.

The CNAB format is a packaging format for a broad range of distributed applications. It specifies a pairing of a bundle definition (bundle.json) to define the app, and an invocation image to install the app.

The bundle definition is a single file that contains the following information:

  • Information about the bundle, such as name, bundle version, description, and keywords
  • Information about locating and running the invocation image (the installer program)
  • A list of user-overridable parameters that this package recognizes
  • The list of executable images that this bundle will install
  • A list of credential paths or environment variables that this bundle requires to execute

What’s Docker future plan to do with CNAB?

This project was incubated by Microsoft and Docker a year back. The first implementation of the spec is an experimental utility called Docker App, which Docker officially rolled out this Dockercon and which is expected to be integrated with Docker Enterprise in the near future. Microsoft and Docker plan to donate CNAB to an open source foundation, which is expected to happen early next year.

If you have no patience, head over to the Docker App CNAB examples recently posted by Gareth Rushgrove, a Docker employee, which are accessible via https://github.com/garethr/docker-app-cnab-examples

This repository shows some basic examples of using docker-app, in particular some of the CNAB integration details. Check it out –

#2. Support for using Docker Compose on Kubernetes

On the 2nd day of Dockercon, Docker, Inc. open sourced the Compose on Kubernetes project. Docker Enterprise Edition already had this capability starting with Compose file version 3.3, where one can use the same docker-compose.yml file for Swarm deployments as well as for Kubernetes workloads whenever a stack is deployed.

What benefit does this bring to Community Developers?

By making it open source, Docker, Inc. has opened up a simplified way of deploying Kubernetes applications. Docker Swarm gained popularity because of its simple approach to application deployment using a docker-compose.yml file. Now community developers can use that same YAML file to deploy their Kubernetes applications.

Imagine you are using Docker Desktop on your MacBook. Docker Desktop can run both Swarm and Kubernetes. You have your context set to a GKE cluster running on Google Cloud Platform. You just deployed your app using docker-compose.yml on your local MacBook. Now you want to deploy it in the same way, but this time on your GKE cluster. Just use the docker stack deploy command to deploy it to the GKE cluster, as sketched below. Interesting, isn’t it?
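
A minimal sketch of that workflow (it assumes the Compose on Kubernetes controller is installed in the target cluster, your current context points at it, and "mystack" is a hypothetical stack name):

# deploy the same Compose file to the Kubernetes cluster in the current context
docker stack deploy --orchestrator kubernetes -c docker-compose.yml mystack
# list the stacks the controller knows about
docker stack ls --orchestrator kubernetes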

How does Compose on Kubernetes architecture look like?

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

Compose on Kubernetes architecture

If you’re interested in learning more, I would suggest visiting this link.

How can I test it now?

First we need to install the Compose on Kubernetes controller into your Kubernetes cluster (which could be GKE/AKS). You can download the latest binary (as of 12/13/2018) via https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.16.

This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don’t already have one available, remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.
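
Once the controller is installed, a quick way to verify that the Stack API is available (a sketch; the exact API group versions depend on the controller release):

# the compose.docker.com API group should now be registered
kubectl api-versions | grep compose
# stacks deployed through the controller show up as first-class objects (empty at first)
kubectl get stacks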

Check out the latest doc which shows how to make it work with AKS here.

#3. Introducing Docker Desktop Enterprise

The 3rd Big announcement was an introduction to Docker Desktop Enterprise. With this, Docker Inc. made a new addition to their desktop product portfolio which currently includes the free Docker Desktop Community products for MacOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS with a set of features now available for the enterprise.

Desktop Comparison Table

How will Docker Desktop Enterprise be different from Docker Desktop Community Edition?

Good question. Docker Desktop has Docker Engine and Kubernetes built-in and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Not only that, with Docker Desktop Enterprise you get access to the Application Designer, a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards.

source ~ Docker, Inc


For those who are interested in Docker Desktop Enterprise, please note that it is expected to be available for preview in January, with general availability slated for 1H 2019.

#4. From Zero to Docker in Seconds with “docker assemble” CLI

This time, the Docker team announced a very interesting docker subcommand, aptly named "assemble", to the public. Ann Rahma and Gareth Rushgrove from Docker, Inc. announced assemble, a new command that generates optimized images from non-dockerized apps. It will get you from source to an optimized Docker image in seconds.

Here are a few interesting facts about the docker assemble utility:

  • Docker assemble can build an image without a Dockerfile by auto-detecting the code framework.
  • It generates Docker images (and a lot more) from your code with a single command and zero effort, which means no Dockerfile is needed for your app as long as you have a config file (a .pom file, for example).
  • It can analyze your applications, dependencies, and caches, and give you a sweet Docker image without having to author your own Dockerfiles.
  • It is built on top of BuildKit; it auto-detects the framework, versions etc. from a config file (e.g. a .pom file), automatically adds dependencies to the image labels, optimizes image size and pushes.
  • Docker assemble can also figure out what ports need to be published and what healthchecks are relevant.
  • docker assemble builds the app without configuration files and without a Dockerfile: just a Git repository to deploy.

Is it an open source project?

It’s an enterprise feature for now, not in the community version. It is available for a couple of languages and frameworks (like Java, as demonstrated on the Dockercon stage).

How is it different from buildpack?

Reading through its features, docker assemble might look very similar to buildpacks, since buildpacks overlap with some of the things docker assemble does. But the huge benefit of assemble is that it produces more than just an image (also ports, healthchecks, volume mounts, etc.), and it’s integrated into the enterprise toolchain. docker assemble is sort of an enterprise-grade buildpack to help with digitalization.

Keep an eye on my next blog post for more detail around the fancy docker assemble command.

#5. Docker-app & CNAB together for the first time

On the 2nd day of Dockercon, Docker confirmed that they are the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially, CNAB support will be released as part of the docker-app experimental tool for building, packaging and managing cloud-native applications. With this, Docker now lets you package CNAB bundles as Docker images, so you can distribute and share them through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.

Can I test the preview binaries of docker-app which comes with CNAB support?

Yes, you can find some preview binaries of docker-app with CNAB support here. The latest release of Docker App is one such tool that implements the current CNAB spec. It can be used both to build CNAB bundles for Compose (which can then be used with any other CNAB client) and to install, upgrade and uninstall any other CNAB bundle.

In case you have no patience, head over to this recently added example of how to deploy a Helm chart.

You can visit https://github.com/docker/app/releases/tag/cnab-dockercon-preview for accessing preview build.

I hope you found this blog helpful. In my next blog series, I will deep-dive around each of these announcements in terms of implementation and enablements.

Kubernetes Hands-on Lab #4 – Deploy Prometheus Stack using Helm on Play with Kubernetes Platform

Estimated Reading Time: 8 minutes 

Let’s talk about Kubernetes Deployment Challenges…

As monolithic systems become too large to deal with, many enterprises are drawn to breaking them down into a microservices architecture. Whenever we move from a monolithic to a microservices architecture, the application consists of multiple components, with services talking to each other. Each component has its own resources and can be scaled individually. With Kubernetes this can become very complex, with all the objects you need to handle (such as ConfigMaps, Services, Pods and Persistent Volumes) in addition to the number of releases you need to manage. The following challenges might come up:

1. Manage, Edit and Update multiple k8s configuration
2. Deploy Multiple K8s configuration as a SINGLE application
3. Share and reuse K8s configurations and applications
4. Parameterize and support multiple environments
5. Manage application releases: rollout, rollback, diff, history
6. Define deployment lifecycle (control operations to be run in different phases)
7. Validate release state after deployment
These can be managed with Kubernetes Helm, which offers a simple way to package everything into one application and advertises what you can configure.

Helm is a deployment management tool (and NOT JUST a PACKAGE MANAGER) for Kubernetes. It does the heavy lifting of repeatable deployments, management of dependencies (reuse and share), management of multiple configurations, and update, rollback and test of application deployments (releases).
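
As a rough sketch of that release lifecycle with Helm 2 (the release name my-release and the value override are hypothetical; adjust to your chart and values):

helm install --name my-release stable/prometheus                               # rollout (revision 1)
helm upgrade my-release stable/prometheus --set server.service.type=NodePort   # revision 2 with a parameter override
helm history my-release                                                        # list revisions for the release
helm rollback my-release 1                                                     # roll back to revision 1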

In this blog post, we will test drive Helm on top of the Play with Kubernetes platform. Let’s get started.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.

Click on the Login button to authenticate with Docker Hub or GitHub ID.

 

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes Cluster node. It automatically names it as “node1”. Each instance has Docker Community Edition (CE) and Kubeadm already pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

 

You can bootstrap the Kubernetes cluster by initializing the master (node1) node with the script below. Copy the script content into a bootstrap.sh file and make it executable using the “chmod +x bootstrap.sh” command.
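
The script content was originally embedded as an image, so here is a minimal sketch of what such a bootstrap script typically looks like on this playground (it assumes kubeadm and Docker are pre-installed and that you are running as root, as on PWK; the flags may differ from the original):

#!/bin/sh
# bootstrap.sh - initialize node1 as the control-plane node (sketch, adapt as needed)
kubeadm init --apiserver-advertise-address $(hostname -i)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config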

When you execute this script, as part of initialization, kubeadm writes several needed configuration files, sets up RBAC and deploys the Kubernetes control plane components (like kube-apiserver, kube-dns, kube-proxy, etcd, etc.). Control plane components are deployed as Docker containers.

Copy the kubeadm join command printed at the end of the output and save it for the next step. This command will be used to join the other nodes to your cluster.

Adding Worker Nodes

Click on “Add New Instance” to add a new worker node, then run the saved kubeadm join command on it.

Checking the Cluster Status

[node1 ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
node1     Ready      master    18m       v1.11.3
node2     Ready      <none>    4m        v1.11.3
node3     Ready      <none>    39s       v1.11.3
node4     NotReady   <none>    22s       v1.11.3
node5     NotReady   <none>    4s        v1.11.3
[node1 ~]$
[node1 ]$ kubectl get po
No resources found.
[node1 ]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1h
[node1]$

Verifying the running Pods

[node1 ~]$ kubectl get nodes -o json |
>       jq ".items[] | {name:.metadata.name} + .status.capacity"

{
  "name": "node1",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node2",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node3",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node4",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node5",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}

Installing OpenSSL

[node1 ~]$ yum install -y openssl

Installing Helm

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
[node1 ~]$ sh get_helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
get_helm.sh: line 177: which: command not found
Run 'helm init' to configure helm.
[node1 ~]$ helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming

Installing Prometheus 

Let us try to install the Prometheus stack on top of the 5-node K8s cluster using Helm.

First, one can search for an application stack using the helm search <packagename> option.

[node1 ~]$ helm search prometheus
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
stable/prometheus                       7.3.4           2.4.3           Prometheus is a monitoring system and time series database.
stable/prometheus-adapter               v0.2.0          v0.2.1          A Helm chart for k8s prometheus adapter
stable/prometheus-blackbox-exporter     0.1.3           0.12.0          Prometheus Blackbox Exporter
stable/prometheus-cloudwatch-exporter   0.2.1           0.5.0           A Helm chart for prometheus cloudwatch-exporter
stable/prometheus-couchdb-exporter      0.1.0           1.0             A Helm chart to export the metrics from couchdb in Promet...
stable/prometheus-mysql-exporter        0.2.1           v0.11.0         A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/prometheus-node-exporter         0.5.0           0.16.0          A Helm chart for prometheus node-exporter
stable/prometheus-operator              0.1.7           0.24.0          Provides easy monitoring definitions for Kubernetes servi...
stable/prometheus-postgres-exporter     0.5.0           0.4.6           A Helm chart for prometheus postgres-exporter
stable/prometheus-pushgateway           0.1.3           0.6.0           A Helm chart for prometheus pushgateway
stable/prometheus-rabbitmq-exporter     0.1.4           v0.28.0         Rabbitmq metrics exporter for prometheus
stable/prometheus-redis-exporter        0.3.2           0.21.1          Prometheus exporter for Redis metrics
stable/prometheus-to-sd                 0.1.1           0.2.2           Scrape metrics stored in prometheus format and push them ...
stable/elasticsearch-exporter           0.4.0           1.0.2           Elasticsearch stats exporter for Prometheus
stable/karma                            1.1.2           v0.14           A Helm chart for Karma - an UI for Prometheus Alertmanager
stable/stackdriver-exporter             0.0.4           0.5.1           Stackdriver exporter for Prometheus
stable/weave-cloud                      0.3.0           1.1.0           Weave Cloud is a add-on to Kubernetes which provides Cont...
stable/kube-state-metrics               0.9.0           1.4.0           Install kube-state-metrics to generate and expose cluster...
stable/mariadb                          5.2.2           10.1.36         Fast, reliable, scalable, and easy to use open-source rel...
[node1 ~]$

Update the Repo

[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Installing Prometheus

$ helm install stable/prometheus

Error: namespaces “default” is forbidden: User “system:serviceaccount:kube-system:default” cannot get namespaces in the namespace “default”

How to fix?

To fix this issue, you need to follow the steps below:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
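
You can confirm that Tiller was redeployed with the new service account (a quick check using the default Helm 2 names):

# the Tiller deployment should now reference the tiller service account
kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'
# and the Tiller pod should be running
kubectl -n kube-system get pods -l app=helm,name=tiller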

Listing Helm

[node1 ~]$ helm list
NAME            REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
excited-elk     1               Sun Oct 28 10:00:02 2018        DEPLOYED        prometheus-7.3.4        2.4.3           default
[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[node1 ~]$ helm install stable/prometheus
NAME:   excited-elk
LAST DEPLOYED: Sun Oct 28 10:00:02 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/DaemonSet
NAME                                  AGE
excited-elk-prometheus-node-exporter  1s

==> v1/Pod(related)

NAME                                                        READY  STATUS             RESTARTS  AGE
excited-elk-prometheus-node-exporter-7bjqc                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-gbcd7                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tk56q                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tkk9b                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz        0/2    Pending            0         1s
excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj  0/1    ContainerCreating  0         1s
excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69         0/1    ContainerCreating  0         1s
excited-elk-prometheus-server-5958586794-b97xn              0/2    Pending            0         1s

==> v1/ConfigMap

NAME                                 AGE
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1/ServiceAccount
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1beta1/ClusterRole
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1beta1/Deployment
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1/PersistentVolumeClaim
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1beta1/ClusterRoleBinding
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1/Service
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
excited-elk-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[node1 ~]$ kubectl get all
NAME                                                             READY     STATUS    RESTARTS   AGE
pod/excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz         0/2       Pending   0          3m
pod/excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-7bjqc                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-gbcd7                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tk56q                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tkk9b                   1/1       Running   0          3m
pod/excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69          1/1       Running   0          3m
pod/excited-elk-prometheus-server-5958586794-b97xn               0/2       Pending   0          3m

NAME                                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/excited-elk-prometheus-alertmanager         ClusterIP   10.106.159.46   <none>        80/TCP     3m
service/excited-elk-prometheus-kube-state-metrics   ClusterIP   None            <none>        80/TCP     3m
service/excited-elk-prometheus-node-exporter        ClusterIP   None            <none>        9100/TCP   3m
service/excited-elk-prometheus-pushgateway          ClusterIP   10.106.88.15    <none>        9091/TCP   3m
service/excited-elk-prometheus-server               ClusterIP   10.107.15.64    <none>        80/TCP     3m
service/kubernetes                                  ClusterIP   10.96.0.1       <none>        443/TCP    37m

NAME                                                  DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/excited-elk-prometheus-node-exporter   4         4         4         4            4           <none>          3m

NAME                                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/excited-elk-prometheus-alertmanager         1         1         1            0           3m
deployment.apps/excited-elk-prometheus-kube-state-metrics   1         1         1            1           3m
deployment.apps/excited-elk-prometheus-pushgateway          1         1         1            1           3m
deployment.apps/excited-elk-prometheus-server               1         1         1            0           3m

NAME                                                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/excited-elk-prometheus-alertmanager-68f4f57c97         1         1         0         3m
replicaset.apps/excited-elk-prometheus-kube-state-metrics-858d44dfdc   1         1         1         3m
replicaset.apps/excited-elk-prometheus-pushgateway-58bfd54d6d          1         1         1         3m
replicaset.apps/excited-elk-prometheus-server-5958586794               1         1         0         3m
[node1 ~]$

Wait for a few minutes and you should be able to access the Prometheus UI at http://<external-ip>:9090 once the server pod is running (see the sketch below for one way to expose it on this playground). In the upcoming blog series, I will bring more interesting stuff around Helm on the Play with Kubernetes playground.
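
One simple way to reach the UI from outside the cluster on this playground is to switch the Prometheus server Service to NodePort and use the port it gets assigned (a sketch, assuming the release name excited-elk from the output above; the UI will then be reachable on the node IP at the assigned NodePort rather than 9090):

kubectl patch svc excited-elk-prometheus-server -n default -p '{"spec": {"type": "NodePort"}}'
kubectl get svc excited-elk-prometheus-server -n default    # note the assigned NodePort (3xxxx)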

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Kubernetes Hands-on Lab #1 – Setting up 5-Node K8s Cluster