A Complete Guide to Building a Certified Kubernetes Cluster Using Docker Enterprise 3.0 on a Bare Metal System – Part I

Estimated Reading Time: 15 minutes

Did you know? More than 800 enterprise organizations use Docker Enterprise for everything from modernizing traditional applications to microservices and data science.

Over 96% of enterprise IT organizations are unable to manage Kubernetes on their own, and one of the most common reasons is its inherent complexity. Undoubtedly, Kubernetes is a powerful orchestration technology for deploying, scaling and managing distributed applications, and it has taken the industry by storm over the past few years. But if you are looking for an on-premises certified Kubernetes distribution, you need an enterprise container platform that lets you leverage your existing team and processes to adopt and operationalize Kubernetes, offering streamlined lifecycle management and a secure-by-design approach. Docker Enterprise 3.0 is the only desktop-to-cloud enterprise container platform that enables organizations to build and share any application and securely run it anywhere – from hybrid cloud to the edge.

Kubernetes Made Easy with DKS

At the last DockerCon, Docker Inc. announced DKS – Docker Kubernetes Service. DKS is a unified operational model that simplifies the use of Kubernetes for developers and operators. It gives organizations the ability to leverage Kubernetes for their application delivery environment without the need to hire a team of Kubernetes experts. Docker Enterprise with DKS makes Kubernetes easier to use and more secure for the entire organization without slowing down software delivery.

It is a certified Kubernetes distribution that is included with Docker Enterprise 3.0 and is designed to solve this fundamental challenge. It’s the only offering that integrates Kubernetes from the developer desktop to production servers, with ‘sensible secure defaults’ out-of-the-box. Simply put, DKS makes Kubernetes easy to use and more secure for the entire organization. Here are three things that DKS does to simplify (and accelerate) Kubernetes adoption for the enterprise:

  • Consistent, seamless Kubernetes experience for developers and operators ~ With the use of Version Packs, developers’ Kubernetes environments running in Docker Desktop Enterprise stay in sync with production environments for a complete, seamless Kubernetes experience. 
  • Streamlined Kubernetes lifecycle management (Day 1 and Day 2 operations) ~ New Cluster Management CLI Plugin to enable operations teams to easily deploy, scale, backup and restore and upgrade a certified Kubernetes environment using a set of simple CLI commands.
  • Enhanced security with ‘sensible defaults’ ~ out-of-the-box configurations for security, encryption, access control, and lifecycle management, all without having to become a Kubernetes expert.

DKS is compatible with Kubernetes YAML, Helm charts, and super cool Docker Compose tool for creating multi-container applications. It also provides an automated way to install and configure Kubernetes applications across hybrid and multi-cloud deployments. Capabilities include security, access control, and lifecycle management. Additionally, it uses Docker Swarm Mode to orchestrate Docker containers.

Kubernetes 1.14+ in Docker Enterprise

Docker Enterprise 3.0 comes with the following components:

  • Containerd 1.2.6
  • Docker Engine 19.03.1
  • Runc 1.0.0-rc8
  • docker-init 0.18.0
  • Universal Control Plane 3.2.0
  • Docker Trusted Registry 2.7
  • Kubernetes 1.14+
  • Calico v3.5.7

This is going to be a series of blog posts around Kubernetes support and capabilities in Docker EE 3.0, covering the topics listed below:

  • Deploying certified Kubernetes Cluster using Docker Enterprise 3.0 running on Bare Metal System
  • Deploying certified Kubernetes Cluster on AWS Cloud using Docker Cluster CLI Plugin
  • Support of Kubernetes on Windows 2019
  • Implementing Persistent storage for Kubernetes workload using iSCSI
  • Implementing Cluster Ingress for Kubernetes

In this first post of the series, I will demonstrate the following:

  • How to deploy Docker Enterprise 3.0 on bare metal/on-premises.
  • How to install Docker Client Bundle and add Linux worker nodes to the existing Cluster
  • How to add Windows worker nodes to the existing Kubernetes Cluster
  • How to install Kubectl
  • Enabling Helm and Tiller using UCP

Pre-Requisites:

  • Ubuntu 18.04 (at least 2 nodes to set up a multi-node cluster)
  • A minimum of 4 GB RAM is required for UCP 3.2.0
  • Go to https://hub.docker.com/my-content.
  • Click the Setup button for Docker Enterprise Edition for Ubuntu.
  • Copy the URL from the field labeled Copy and paste this URL to download your Edition.

Install packages to allow apt to use a repository over HTTPS:

$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

Adding the $DOCKER_EE_URL variable to your environment

Replace the sub-XXX portion of the URLs below with the subscription URL you noted down in the prerequisites.
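
Since the heading above refers to the $DOCKER_EE_URL variable, you can optionally store your subscription URL in an environment variable once and reuse it in the commands that follow. This is only a sketch; the sub-XXX value is a placeholder for your own subscription URL:

$ export DOCKER_EE_URL="https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu"
$ export DOCKER_EE_VERSION=19.03
$ curl -fsSL "$DOCKER_EE_URL/gpg" | sudo apt-key add -
$ sudo add-apt-repository \
   "deb [arch=amd64] $DOCKER_EE_URL \
   $(lsb_release -cs) \
   stable-$DOCKER_EE_VERSION"

The commands below do the same thing with the URL pasted in directly.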

$ curl -fsSL https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu/gpg | sudo apt-key add -

Adding the stable Repository

$ sudo add-apt-repository \
   "deb [arch=amd64] https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu \
   $(lsb_release -cs) \
   stable-19.03"

Installing Docker Enterprise

$ sudo apt-get install docker-ee docker-ee-cli containerd.io
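
If you want to check which engine versions the repository offers, or pin a specific one, standard apt tooling works here. This is an optional sketch, where <VERSION_STRING> is whichever version string the first command prints:

$ apt-cache madison docker-ee
$ sudo apt-get install docker-ee=<VERSION_STRING> docker-ee-cli=<VERSION_STRING> containerd.io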

Verifying Docker Enterprise Version

$ sudo docker version
Client: Docker Engine - Enterprise
 Version:           19.03.1
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        f660560
 Built:             Thu Jul 25 20:59:23 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Enterprise
 Engine:
  Version:          19.03.1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       f660560
  Built:            Thu Jul 25 20:57:45 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
cse@ubuntu1804-1:~$

Testing the Hello World Example

$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:6540fc08ee6e6b7b63468dc3317e3303aae178cb8a45ed3123180328bcc1d20f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/


Installing Universal Control Plane v3.2.0

$ sudo docker container run --rm -it --name ucp \
>   -v /var/run/docker.sock:/var/run/docker.sock \
>   docker/ucp:3.2.0 install \
>   --host-address 10.94.214.115 \
>   --interactive
Unable to find image 'docker/ucp:3.2.0' locally
3.2.0: Pulling from docker/ucp
050382585609: Pull complete
de0f2e3c5141: Pull complete
ef4440c639ab: Pull complete
Digest: sha256:f9049801c3fca01f1f08772013911bd8f9b616224b9f8d5252d91faec316424a
Status: Downloaded newer image for docker/ucp:3.2.0
INFO[0000] Your Docker daemon version 19.03.1, build f660560 (4.15.0-29-generic) is compatible with UCP 3.2.0 (586d782)
INFO[0000] Initializing New Docker Swarm
Admin Username: ajeetraina
Admin Password:
Confirm Admin Password:
WARN[0014] None of the Subject Alternative Names we'll be using in the UCP certificates ["ubuntu1804-1"] contain a domain component. Your generated certs may fail TLS validation unless you only use one of these shortnames or IP addresses to connect. You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0019] Checking required ports for connectivity
INFO[0035] Checking required container images
INFO[0035] Pulling required images... (this may take a while)
INFO[0035] Pulling image: docker/ucp-agent:3.2.0
INFO[0042] Pulling image: docker/ucp-auth:3.2.0
INFO[0049] Pulling image: docker/ucp-auth-store:3.2.0
INFO[0061] Pulling image: docker/ucp-azure-ip-allocator:3.2.0
INFO[0067] Pulling image: docker/ucp-calico-cni:3.2.0
INFO[0079] Pulling image: docker/ucp-calico-kube-controllers:3.2.0
INFO[0092] Pulling image: docker/ucp-calico-node:3.2.0
INFO[0102] Pulling image: docker/ucp-cfssl:3.2.0
INFO[0113] Pulling image: docker/ucp-compose:3.2.0
5
INFO[0180] Pulling image: docker/ucp-controller:3.2.0
INFO[0197] Pulling image: docker/ucp-dsinfo:3.2.0
INFO[0201] Pulling image: docker/ucp-etcd:3.2.0
INFO[0230] Pulling image: docker/ucp-hyperkube:3.2.0
INFO[0266] Pulling image: docker/ucp-interlock:3.2.0
INFO[0272] Pulling image: docker/ucp-interlock-extension:3.2.0
INFO[0278] Pulling image: docker/ucp-interlock-proxy:3.2.0
INFO[0287] Pulling image: docker/ucp-kube-compose:3.2.0
INFO[0293] Pulling image: docker/ucp-kube-compose-api:3.2.0
INFO[0301] Pulling image: docker/ucp-kube-dns:3.2.0
INFO[0307] Pulling image: docker/ucp-kube-dns-dnsmasq-nanny:3.2.0
INFO[0314] Pulling image: docker/ucp-kube-dns-sidecar:3.2.0
INFO[0321] Pulling image: docker/ucp-metrics:3.2.0
INFO[0343] Pulling image: docker/ucp-pause:3.2.0
INFO[0348] Pulling image: docker/ucp-swarm:3.2.0
INFO[0354] Completed pulling required images
INFO[0357] Running install agent container ...
INFO[0000] Loading install configuration
INFO[0000] Running Installation Steps
INFO[0000] Step 1 of 35: [Setup Internal Cluster CA]
INFO[0003] Step 2 of 35: [Setup Internal Client CA]
INFO[0003] Step 3 of 35: [Initialize etcd Cluster]
INFO[0007] Step 4 of 35: [Set Initial Config in etcd]
INFO[0007] Step 5 of 35: [Deploy RethinkDB Server]
INFO[0010] Step 6 of 35: [Initialize RethinkDB Tables]
INFO[0030] Step 7 of 35: [Create Auth Service Encryption Key Secret]
INFO[0030] Step 8 of 35: [Deploy Auth API Server]
INFO[0039] Step 9 of 35: [Setup Auth Configuration]
INFO[0040] Step 10 of 35: [Deploy Auth Worker Server]
INFO[0046] Step 11 of 35: [Deploy UCP Proxy Server]
INFO[0047] Step 12 of 35: [Initialize Swarm v1 Node Inventory]
INFO[0047] Step 13 of 35: [Deploy Swarm v1 Manager Server]
INFO[0048] Step 14 of 35: [Deploy Internal Cluster CA Server]
INFO[0050] Step 15 of 35: [Deploy Internal Client CA Server]
INFO[0052] Step 16 of 35: [Deploy UCP Controller Server]
INFO[0058] Step 17 of 35: [Deploy Kubernetes API Server]
INFO[0067] Step 18 of 35: [Deploy Kubernetes Controller Manager]
INFO[0073] Step 19 of 35: [Deploy Kubernetes Scheduler]
INFO[0078] Step 20 of 35: [Deploy Kubelet]
INFO[0079] Step 21 of 35: [Deploy Kubernetes Proxy]
INFO[0081] Step 22 of 35: [Wait for Healthy UCP Controller and Kubernetes API]
INFO[0082] Step 23 of 35: [Create Kubernetes Pod Security Policies]
INFO[0085] Step 24 of 35: [Install Kubernetes CNI Plugin]
INFO[0113] Step 25 of 35: [Install KubeDNS]
INFO[0121] Step 26 of 35: [Create UCP Controller Kubernetes Service Endpoints]
INFO[0124] Step 27 of 35: [Install Metrics Plugin]
INFO[0131] Step 28 of 35: [Install Kubernetes Compose Plugin]
INFO[0142] Step 29 of 35: [Deploy Manager Node Agent Service]
INFO[0142] Step 30 of 35: [Deploy Worker Node Agent Service]
INFO[0142] Step 31 of 35: [Deploy Windows Worker Node Agent Service]
INFO[0142] Step 32 of 35: [Deploy Cluster Agent Service]
INFO[0142] Step 33 of 35: [Set License]
INFO[0142] Step 34 of 35: [Set Registry CA Certificates]
INFO[0142] Step 35 of 35: [Wait for All Nodes to be Ready]
INFO[0147]     Waiting for 1 nodes to be ready
INFO[0152] All Installation Steps Completed
cse@ubuntu1804-1:~$
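
Note the warning above about Subject Alternative Names: the generated certificates only contain the short hostname. If you plan to reach UCP through a DNS name or an additional IP address, you can pass extra SANs at install time using the --san flag mentioned in the warning. A minimal sketch, where ucp.example.com is a placeholder for your own alias:

$ sudo docker container run --rm -it --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:3.2.0 install \
    --host-address 10.94.214.115 \
    --san ucp.example.com \
    --interactive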

cse@ubuntu1804-1:~$ sudo docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED              STATUS                   PORTS                                                                             NAMES
f8c4666a7646        docker/ucp-agent:3.2.0        "/bin/ucp-agent node…"   58 seconds ago       Up 56 seconds            2376/tcp                                                                          ucp-manager-agent.z5m50h0rl2kh4jehuoe76hj8k.tj89jrf9xkxmw0t42oec3xkb0
611ca05ab239        docker/ucp-agent:3.2.0        "/bin/ucp-agent clus…"   58 seconds ago       Up 56 seconds            2376/tcp                                                                          ucp-cluster-agent.1.omzv7veky1kbmzgl78g25surq
df16260783ea        50810572f8d1                  "/compose-controller…"   About a minute ago   Up About a minute                                                                                          k8s_ucp-kube-compose_compose-57cc55c56d-n7klp_kube-system_a7af24f5-b83b-11e9-86e4-0242ac11000b_0
5079f9dc068d        7f719dba281f                  "/api-server --kubec…"   About a minute ago   Up About a minute                                                                                          k8s_ucp-kube-compose-api_compose-api-556c6d8c86-xf9hc_kube-system_a7aefd70-b83b-11e9-86e4-0242ac11000b_0
506abc0ef18c        9fdd9422f8b8                  "/bin/proxy"             About a minute ago   Up About a minute                                                                                          k8s_ucp-metrics-proxy_ucp-metrics-c7zpb_kube-system_a565c9f2-b83b-11e9-86e4-0242ac11000b_0
a99093a69d39        9fdd9422f8b8                  "/bin/prometheus.sh …"   About a minute ago   Up About a minute                                                                                          k8s_ucp-metrics-prometheus_ucp-metrics-c7zpb_kube-system_a565c9f2-b83b-11e9-86e4-0242ac11000b_0
8d3bd381cdd4        9fdd9422f8b8                  "/bin/sh -c 'while :…"   About a minute ago   Up About a minute                                                                                          k8s_ucp-metrics-inventory_ucp-metrics-c7zpb_kube-system_a565c9f2-b83b-11e9-86e4-0242ac11000b_0
93544ed4f512        docker/ucp-pause:3.2.0        "/pause"                 About a minute ago   Up About a minute                                                                                          k8s_POD_compose-api-556c6d8c86-xf9hc_kube-system_a7aefd70-b83b-11e9-86e4-0242ac11000b_0
579b68869229        docker/ucp-pause:3.2.0        "/pause"                 About a minute ago   Up About a minute                                                                                          k8s_POD_compose-57cc55c56d-n7klp_kube-system_a7af24f5-b83b-11e9-86e4-0242ac11000b_0
3182cfcded2c        docker/ucp-pause:3.2.0        "/pause"                 About a minute ago   Up About a minute                                                                                          k8s_POD_ucp-metrics-c7zpb_kube-system_a565c9f2-b83b-11e9-86e4-0242ac11000b_0
2c88d0f54623        435d88fe6b45                  "/sidecar --v=2 --lo…"   About a minute ago   Up About a minute                                                                                          k8s_ucp-kubedns-sidecar_kube-dns-84cd964544-pljlj_kube-system_a0bb52b9-b83b-11e9-86e4-0242ac11000b_0
add0887ce338        ec8b25117519                  "/dnsmasq-nanny -v=2…"   About a minute ago   Up About a minute                                                                                          k8s_ucp-dnsmasq-nanny_kube-dns-84cd964544-pljlj_kube-system_a0bb52b9-b83b-11e9-86e4-0242ac11000b_0
4bb226feb0af        28b1e608dc41                  "/kube-dns --domain=…"   About a minute ago   Up About a minute                                                                                          k8s_ucp-kubedns_kube-dns-84cd964544-pljlj_kube-system_a0bb52b9-b83b-11e9-86e4-0242ac11000b_0
028ecd2f4ba8        docker/ucp-pause:3.2.0        "/pause"                 About a minute ago   Up About a minute                                                                                          k8s_POD_kube-dns-84cd964544-pljlj_kube-system_a0bb52b9-b83b-11e9-86e4-0242ac11000b_0
529aed9d12fc        eb607f503ccd                  "/usr/bin/kube-contr…"   About a minute ago   Up About a minute                                                                                          k8s_calico-kube-controllers_calico-kube-controllers-5589844c6c-gx7x8_kube-system_8edd2b9f-b83b-11e9-86e4-0242ac11000b_0
a77e677d8688        docker/ucp-pause:3.2.0        "/pause"                 About a minute ago   Up About a minute                                                                                          k8s_POD_calico-kube-controllers-5589844c6c-gx7x8_kube-system_8edd2b9f-b83b-11e9-86e4-0242ac11000b_0
e065fac81ef2        6904e301c3a7                  "/install-cni.sh"        About a minute ago   Up About a minute                                                                                          k8s_install-cni_calico-node-blhvh_kube-system_8e9166f2-b83b-11e9-86e4-0242ac11000b_0
c65d50dafef4        697d2c1dea15                  "start_runit"            About a minute ago   Up About a minute                                                                                          k8s_calico-node_calico-node-blhvh_kube-system_8e9166f2-b83b-11e9-86e4-0242ac11000b_0
1f478e937ee2        docker/ucp-pause:3.2.0        "/pause"                 About a minute ago   Up About a minute                                                                                          k8s_POD_calico-node-blhvh_kube-system_8e9166f2-b83b-11e9-86e4-0242ac11000b_0
56ef4c6e7449        docker/ucp-hyperkube:3.2.0    "kube-proxy --cluste…"   2 minutes ago        Up 2 minutes                                                                                               ucp-kube-proxy
ae412f355aaa        docker/ucp-hyperkube:3.2.0    "/bin/kubelet_entryp…"   2 minutes ago        Up 2 minutes                                                                                               ucp-kubelet
93c0fb13401a        docker/ucp-hyperkube:3.2.0    "kube-scheduler --ku…"   2 minutes ago        Up 2 minutes (healthy)                                                                                     ucp-kube-scheduler
e20bfdd75b9a        docker/ucp-hyperkube:3.2.0    "/bin/controller_man…"   2 minutes ago        Up 2 minutes (healthy)                                                                                     ucp-kube-controller-manager
46aee6f0c836        docker/ucp-hyperkube:3.2.0    "/bin/apiserver_entr…"   2 minutes ago        Up 2 minutes (healthy)   0.0.0.0:12388->12388/tcp                                                          ucp-kube-apiserver
5ad4de889f26        docker/ucp-controller:3.2.0   "/bin/controller ser…"   2 minutes ago        Up 2 minutes (healthy)   0.0.0.0:443->8080/tcp, 0.0.0.0:6443->8081/tcp                                     ucp-controller
b4788ba1fb8f        docker/ucp-cfssl:3.2.0        "/bin/ucp-ca serve -…"   2 minutes ago        Up 2 minutes (healthy)   0.0.0.0:12382->12382/tcp                                                          ucp-client-root-ca
4d54f68a269d        docker/ucp-cfssl:3.2.0        "/bin/ucp-ca serve -…"   2 minutes ago        Up 2 minutes (healthy)   0.0.0.0:12381->12381/tcp                                                          ucp-cluster-root-ca
80c74028f856        docker/ucp-swarm:3.2.0        "/bin/swarm manage -…"   2 minutes ago        Up 2 minutes (healthy)   0.0.0.0:2376->2375/tcp                                                            ucp-swarm-manager
2df245efdbd5        docker/ucp-agent:3.2.0        "/bin/ucp-agent prox…"   2 minutes ago        Up 2 minutes (healthy)   0.0.0.0:6444->6444/tcp, 0.0.0.0:12378->12378/tcp, 0.0.0.0:12376->2376/tcp         ucp-proxy
d1fb6f51567e        docker/ucp-auth:3.2.0         "/usr/local/bin/enzi…"   2 minutes ago        Up 2 minutes (healthy)                                                                                     ucp-auth-worker.z5m50h0rl2kh4jehuoe76hj8k.soauyq1ovtbzu5dsvqm1aulrl
e12173e6f7b9        docker/ucp-auth:3.2.0         "/usr/local/bin/enzi…"   2 minutes ago        Up 2 minutes (healthy)                                                                                     ucp-auth-api.z5m50h0rl2kh4jehuoe76hj8k.zbx3dkew00pro3s76s321wqgs
06667a03ffea        docker/ucp-auth-store:3.2.0   "/bin/entrypoint.sh …"   3 minutes ago        Up 3 minutes (healthy)   0.0.0.0:12383-12384->12383-12384/tcp                                              ucp-auth-store
40d316287979        docker/ucp-etcd:3.2.0         "/bin/entrypoint.sh …"   3 minutes ago        Up 3 minutes (healthy)   2380/tcp, 4001/tcp, 7001/tcp, 0.0.0.0:12380->12380/tcp, 0.0.0.0:12379->2379/tcp   ucp-kv
cse@ubuntu1804-1:~$

Accessing the UCP

Now you should be able to access Docker Universal Control Plane via https://<node-ip>



Click on “Sign In” and you will need to upload the license file to access Docker Enterprise UCP 3.2.0 WebUI as shown below:

Adding Worker Nodes to the Cluster

Let us try to add worker nodes to the cluster. Click on “Shared Resources” in the left pane and then click on “Nodes”. Select “Add Nodes” and you should be able to choose an orchestrator of your choice. It also allows you to add either Linux or Windows nodes to the cluster, as shown below:

I assume that you have a worker node running Ubuntu 18.04 with the latest Docker binaries installed. It can be either Docker Community Edition or Enterprise Edition.

@ubuntu1804-1:~$ sudo curl -sSL https://get.docker.com/ | sh

$ sudo usermod -aG docker cse
$ sudo docker swarm join --token SWMTKN-1-3n4mwkzhXXXXXXt2hip0wonqagmjtos-bch9ezkt5kiroz6jncidrz13x <managernodeip>:2377
This node joined a swarm as a worker.
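
If you don’t have the join token handy, you can regenerate the full worker join command on the manager node at any time using the standard Docker Swarm CLI:

$ sudo docker swarm join-token worker

This prints a ready-to-copy docker swarm join command containing the current worker token and the manager address.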

By now, you should be able to see both the manager node and one worker node under UCP.

If you see a warning on the UCP dashboard stating that the manager and worker node have the same hostname, just change the hostname on the worker node and it will automatically get updated on the UCP dashboard, as shown in the sketch below.
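
A minimal sketch of renaming a worker node on Ubuntu 18.04, assuming you want to call it worker1:

$ sudo hostnamectl set-hostname worker1

You may need to log out and back in (or reboot) for the new name to show up everywhere.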

Adding Windows worker Node to this existing Docker EE 3.0 Cluster

If you want to add a Windows system as a worker node to the existing cluster, that is also possible. You will need Windows Server 2016 or later as the minimal OS.

I assume that Windows 2016 is already installed. Follow the steps below to first install Docker 19.03.x and then add the system to the cluster as a worker node.

Install-WindowsFeature -Name Containers
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force -RequiredVersion 19.03.2
Restart-Computer -Force    # restart the system before starting the Docker service
Start-Service docker
docker --version

Go to UCP window > Resources > Nodes. Select “Add Nodes”. Choose Node Type as “Windows” as shown below:

Once you select the checkbox under Step 2, UCP displays a docker swarm join command which you need to run on the Windows 2016 system to add it to your existing Docker EE 3.0 cluster.
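
Once the join command completes on the Windows node, a quick sanity check from PowerShell is to inspect the engine state; the Swarm section of the docker info output should report that swarm mode is active:

docker info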

Verify that the node gets added successfully by clicking the “Nodes” tab under Resources.

Installing Docker Client Bundle

Click on Dashboard and scroll down to the Docker CLI option. This option allows you to download a client bundle for creating and managing services using the Docker CLI client. Once you click it, a new window appears as shown below:

Click on “user profile page” and it should redirect you to the https://<manager-ip>/manage/profile/clientbundle page shown in the screenshot below:

Click on “Generate Client Bundle” and it will download ucp-bundle-<username>.zip

$ unzip ucp-bundle-ajeetraina.zip
Archive:  ucp-bundle-ajeetraina.zip
 extracting: ca.pem
 extracting: cert.pem
 extracting: key.pem
 extracting: cert.pub
 extracting: kube.yml
 extracting: env.sh
 extracting: env.ps1
 extracting: env.cmd
 extracting: meta.json
 extracting: tls/docker/key.pem
 extracting: tls/kubernetes/ca.pem
 extracting: tls/kubernetes/cert.pem
 extracting: tls/kubernetes/key.pem
 extracting: tls/docker/ca.pem
 extracting: tls/docker/cert.pem
@ubuntu1804-1:~$ eval "$(<env.sh)"


The env script updates the DOCKER_HOST and DOCKER_CERT_PATH environment variables so that the Docker CLI client interacts with UCP and uses the client certificates you downloaded. From now on, when you use the Docker CLI client, it includes your user-specific client certificates as part of each request to UCP.
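
A quick way to confirm that the bundle is active in your current shell is to check where the CLI is pointing and list the cluster nodes through the UCP endpoint; if the certificates loaded correctly, you should see the UCP manager and worker nodes rather than just your local engine:

$ echo $DOCKER_HOST
$ docker node ls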

Installing Kubectl on Docker EE 3.0

Once you know the Kubernetes version of the cluster, install the kubectl client for the relevant operating system. As shown below, we need to install kubectl version 1.14.3.

Setting Kubectl version

@ubuntu1804-1:~$ k8sversion=v1.14.3
@ubuntu1804-1:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$k8sversion/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.1M  100 41.1M    0     0  7494k      0  0:00:05  0:00:05 --:--:-- 9070k
@ubuntu1804-1:~$ chmod +x ./kubectl
@ubuntu1804-1:~$ sudo mv ./kubectl /usr/local/bin/kubectl
@ubuntu1804-1:~$

Verifying the Kubectl Installation

~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.3-docker-2", GitCommit:"7cfcb52617bf94c36953159ee9a2bf14c7fcc7ba", GitTreeState:"clean", BuildDate:"2019-06-06T16:18:13Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Listing out the Kubernetes Nodes

cse@ubuntu1804-1:~$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
node2          Ready    <none>   23h   v1.14.3-docker-2
ubuntu1804-1   Ready    master   23h   v1.14.3-docker-2

Enabling Helm and Tiller with UCP

$ kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
rolebinding.rbac.authorization.k8s.io/default-view created

$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
cse@ubuntu1804-1:~$

Installing Helm

$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7001  100  7001    0     0   6341      0  0:00:01  0:00:01 --:--:--  6347
$ chmod u+x install-helm.sh

$ ./install-helm.sh
Downloading https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.

cse@ubuntu1804-1:~$ helm init
Creating /home/cse/.helm
Creating /home/cse/.helm/repository
Creating /home/cse/.helm/repository/cache
Creating /home/cse/.helm/repository/local
Creating /home/cse/.helm/plugins
Creating /home/cse/.helm/starters
Creating /home/cse/.helm/cache/archive
Creating /home/cse/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/cse/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
cse@ubuntu1804-1:~$
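
Before verifying the client and server versions, you can optionally confirm that the Tiller pod came up in the kube-system namespace (the exact pod name will differ in your cluster):

$ kubectl get pods -n kube-system | grep tiller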

Verifying Helm Installation

$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

Troubleshooting:

In case you face the error “Error: could not find a ready tiller pod”, follow the steps below:

$ /usr/local/bin/tiller
[main] 2019/10/06 00:37:53 Starting Tiller v2.14.3 (tls=false)
[main] 2019/10/06 00:37:53 GRPC listening on :44134
[main] 2019/10/06 00:37:53 Probes listening on :44135
[main] 2019/10/06 00:37:53 Storage driver is ConfigMap
[main] 2019/10/06 00:37:53 Max history per release is 0

...

$ export HELM_HOST=localhost:44134
$ helm version # Should connect to localhost.

Deploying MySQL using Helm on Docker EE 3.0

Let us try deploying MySQL using a Helm chart.

$ helm install --name mysql stable/mysql
NAME:   mysql
LAST DEPLOYED: Wed Aug  7 11:43:01 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME        DATA  AGE
mysql-test  1     0s

==> v1/PersistentVolumeClaim
NAME   STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mysql  Pending  0s

==> v1/Secret
NAME   TYPE    DATA  AGE
mysql  Opaque  2     0s

==> v1/Service
NAME   TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)   AGE
mysql  ClusterIP  10.96.77.83  <none>       3306/TCP  0s

==> v1beta1/Deployment
NAME   READY  UP-TO-DATE  AVAILABLE  AGE
mysql  0/1    0           0          0s


NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}


cse@ubuntu1804-1:~$

Listing out the Releases

The helm list command lists all of the releases. By default, it lists only releases that are deployed or failed. Flags like ‘--deleted’ and ‘--all’ alter this behavior, and such flags can be combined: ‘--deleted --failed’. By default, items are sorted alphabetically. Use the ‘-d’ flag to sort by release date.

$ helm list
NAME    REVISION        UPDATED                         STATUS          CHART           APP VERSION     NAMESPACE
mysql   1               Wed Aug  7 11:43:01 2019        DEPLOYED        mysql-1.3.0     5.7.14          default
$ kubectl get po,deploy,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/mysql-6f6bff58d8-t2kwm   1/1     Running   0          5m35s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/mysql   1/1     1            0           5m35s

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP    28h
service/mysql        ClusterIP   10.96.77.83   <none>        3306/TCP   5m35s
cse@ubuntu1804-1:~$

Hence, you can use Helm flawlessly with UCP under Docker Enterprise 3.0.

In Part 2 of this blog post series, I will talk about iSCSI support for Kubernetes under Docker Enterprise 3.0. Stay tuned.

Are you a beginner looking to build your career in Docker & Kubernetes? Head over to the DockerLabs Slack channel to join 1100+ Slack members: https://tinyurl.com/y973wcq8

Do check https://dockerlabs.collabnix.com to access 500+ FREE tutorials on Docker, Kubernetes & Cloud.

Kubernetes Cluster on Bare Metal System Made Possible using MetalLB

Estimated Reading Time: 11 minutes

If you try to set up a Kubernetes cluster on a bare metal system, you will notice that LoadBalancer services remain in the “pending” state indefinitely when created. This is expected because Kubernetes, by default, does not offer a network load-balancer implementation for bare metal clusters.

In a cloud-enabled Kubernetes cluster, you request a load balancer and your cloud platform assigns an IP address to you. In a bare metal cluster, you need an external load-balancer implementation that is capable of performing IP allocation.

Missing Load-Balancer in Bare Metal System

Enter MetalLB…

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. It aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.

Why can’t Ingress help me out here?

Yes, Ingress could be one of the best options if you deploy a Kubernetes cluster on bare metal. Ingress lets you configure load balancing of HTTP or HTTPS traffic to your deployed services using software load balancers like NGINX or HAProxy deployed as pods in your cluster, and it gives you Layer 7 routing of your applications as well. The problem is that Ingress doesn’t easily route TCP or UDP traffic; the best way to do that is with a LoadBalancer type of service. However, if you deployed your Kubernetes cluster on bare metal, you didn’t have the option of using a LoadBalancer.

How does MetalLB work?

MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.

It is important to note that MetalLB cannot create IP addresses out of thin air, so you do have to give it pools of IP addresses that it can use. It will then take care of assigning and unassigning individual addresses as services come and go, but it will only ever hand out IPs that are part of its configured pools.

Okay wait.. How will I get IP Address pool for MetalLB?

How you get IP address pools for MetalLB depends on your environment. If you’re running a bare metal cluster in a colocation facility, your hosting provider probably offers IP addresses for lease. In that case, you would lease, say, a /26 of IP space (64 addresses), and provide that range to MetalLB for cluster services.

In this blog post, I will showcase how to set up a 3-node Kubernetes cluster using MetalLB. The steps below have also been tested on ESXi virtual machines and work flawlessly.

Preparing the Infrastructure


  • Machine #1(Master): 10.94.214.206
  • Machine #2(Worker Node1): 10.94.214.210
  • Machine #3(Worker Node2): 10.94.214.213

Assign hostname to each of these systems:

~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       ubuntu1804-1
10.94.214.206    kubemaster.dell.com
10.94.214.210   node1.dell.com
10.94.214.213   node2.dell.com

Installing curl package

$ sudo apt install curl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libcurl4
The following NEW packages will be installed:
  curl libcurl4
0 upgraded, 2 newly installed, 0 to remove and 472 not upgraded.
Need to get 373 kB of archives.
After this operation, 1,036 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcurl4 amd64 7.58.0-2ubuntu3.7 [214 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 curl amd64 7.58.0-2ubuntu3.7 [159 kB]
Fetched 373 kB in 2s (164 kB/s)
Selecting previously unselected package libcurl4:amd64.
(Reading database ... 128791 files and directories currently installed.)
Preparing to unpack .../libcurl4_7.58.0-2ubuntu3.7_amd64.deb ...
Unpacking libcurl4:amd64 (7.58.0-2ubuntu3.7) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.58.0-2ubuntu3.7_amd64.deb ...
Unpacking curl (7.58.0-2ubuntu3.7) ...
Setting up libcurl4:amd64 (7.58.0-2ubuntu3.7) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2) ...
Setting up curl (7.58.0-2ubuntu3.7) ...

Installing Docker

$ sudo curl -sSL https://get.docker.com/ | sh
# Executing docker install script, commit: 2f4ae48
+ sudo -E sh -c apt-get update -qq >/dev/null
+ sudo -E sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sudo -E sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sudo -E sh -c echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" > /etc/apt/sources.list.d/docker.list
+ sudo -E sh -c apt-get update -qq >/dev/null
+ [ -n  ]
+ sudo -E sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
+ sudo -E sh -c docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        2d0083d
 Built:             Thu Jun 27 17:56:23 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       2d0083d
  Built:            Thu Jun 27 17:23:02 2019
  OS/Arch:          linux/amd64
  Experimental:     false
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker cse

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
cse@kubemaster:~$
~$ sudo docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        2d0083d
 Built:             Thu Jun 27 17:56:23 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       2d0083d
  Built:            Thu Jun 27 17:23:02 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Add the Kubernetes signing key on both the nodes

$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
OK

Adding Xenial Kubernetes Repository on both the nodes

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Installing Kubeadm

sudo apt install kubeadm

Verifying Kubeadm installation

$ sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Disable swap memory (if running) on both the nodes

sudo swapoff -a

Steps to setup K8s Cluster

sudo kubeadm init --apiserver-advertise-address $(hostname -i)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -n kube-system -f \
    "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"

In case you face any issue, just run the below command to see the logs:

journalctl -xeu kubelet

Adding Worker Node

cse@ubuntu1804-1:~$ sudo swapoff -a
cse@ubuntu1804-1:~$ sudo kubeadm join 10.94.214.210:6443 --token aju7kd.5mlhmmo1wlf8d5un     --discovery-token-ca-cert-hash sha256:89541bb9bbe5ee1efafe17b20eab77e6b756bd4ae023d2ff7c67ce73e3e8c7bb
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

cse@ubuntu1804-1:~$

Listing the Nodes

cse@kubemaster:~$ sudo kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
kubemaster         Ready    master   8m17s   v1.15.0
worker1.dell.com   Ready    <none>   5m22s   v1.15.0
cse@kubemaster:~$
cse@kubemaster:~$ sudo kubectl describe node worker1.dell.com
Name:               worker1.dell.com
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker1.dell.com
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 05 Jul 2019 16:10:33 -0400
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 05 Jul 2019 16:10:55 -0400   Fri, 05 Jul 2019 16:10:55 -0400   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Fri, 05 Jul 2019 16:15:33 -0400   Fri, 05 Jul 2019 16:10:33 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 05 Jul 2019 16:15:33 -0400   Fri, 05 Jul 2019 16:10:33 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 05 Jul 2019 16:15:33 -0400   Fri, 05 Jul 2019 16:10:33 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Fri, 05 Jul 2019 16:15:33 -0400   Fri, 05 Jul 2019 16:11:03 -0400   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.94.214.213
  Hostname:    worker1.dell.com
Capacity:
 cpu:                2
 ephemeral-storage:  102685624Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             4040016Ki
 pods:               110
Allocatable:
 cpu:                2
 ephemeral-storage:  94635070922
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3937616Ki
 pods:               110
System Info:
 Machine ID:                 e7573bb6bf1e4cf5b9249413950f0a3d
 System UUID:                2FD93F42-FA94-0C27-83A3-A1F9276469CF
 Boot ID:                    782d6cfc-08a2-4586-82b6-7149389b1f4f
 Kernel Version:             4.15.0-29-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.7
 Kubelet Version:            v1.15.0
 Kube-Proxy Version:         v1.15.0
Non-terminated Pods:         (4 in total)
  Namespace                  Name                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                         ------------  ----------  ---------------  -------------  ---
  default                    my-nginx-68459bd9bb-55wk7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
  default                    my-nginx-68459bd9bb-z5r45    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
  kube-system                kube-proxy-jt4bs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
  kube-system                weave-net-kw9gg              20m (1%)      0 (0%)      0 (0%)           0 (0%)         5m51s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                20m (1%)  0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:
  Type    Reason                   Age                    From                          Message
  ----    ------                   ----                   ----                          -------
  Normal  Starting                 5m51s                  kubelet, worker1.dell.com     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m51s (x2 over 5m51s)  kubelet, worker1.dell.com     Node worker1.dell.com status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m51s (x2 over 5m51s)  kubelet, worker1.dell.com     Node worker1.dell.com status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m51s (x2 over 5m51s)  kubelet, worker1.dell.com     Node worker1.dell.com status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  5m51s                  kubelet, worker1.dell.com     Updated Node Allocatable limit across pods
  Normal  Starting                 5m48s                  kube-proxy, worker1.dell.com  Starting kube-proxy.
  Normal  NodeReady                5m21s                  kubelet, worker1.dell.com     Node worker1.dell.com status is now: NodeReady
cse@kubemaster:~$
$ sudo kubectl run nginx --image nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
~$

Configuring the MetalLB Load Balancer

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
~$ sudo kubectl get ns
NAME              STATUS   AGE
default           Active   23h
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h
metallb-system    Active   13m
$ kubectl get all -n metallb-system
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-547d466688-m9xlt   1/1     Running   0          13m
pod/speaker-tb9d7                 1/1     Running   0          13m



NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/speaker   1         1         1       1            1           <none>          13m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           13m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-547d466688   1         1         1       13m

There are two components:

  • Controller – Assigns the IP addresses to the LoadBalancer services
  • Speaker – Ensures that you can reach the service through the LB

The controller component is deployed as a Deployment, and the speaker as a DaemonSet that runs on all worker nodes.

Next, we need to look at config files.

To configure MetalLB, write a config map to metallb-system/config

Link: https://metallb.universe.tf/configuration/

Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP addresses.

 sudo kubectl get nodes -o wide
NAME               STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kubemaster         Ready    master   23h   v1.15.0   10.94.214.210   <none>        Ubuntu 18.04.1 LTS   4.15.0-29-generic   docker://18.9.7
worker1.dell.com   Ready    <none>   23h   v1.15.0   10.94.214.213   <none>        Ubuntu 18.04.1 LTS   4.15.0-29-generic   docker://18.9.7

We need to pay attention to the Internal IP addresses above; the MetalLB address pool must come from this same range.

$ sudo cat <<EOF | kubectl create -f -
> apiVersion: v1
> kind: ConfigMap
> metadata:
>   namespace: metallb-system
>   name: config
> data:
>   config: |
>     address-pools:
>     - name: default
>       protocol: layer2
>       addresses:
>       - 10.94.214.200-10.94.214.255
>
> EOF
configmap/config created
cse@kubemaster:~$ kubectl describe configmap config -n metallb-system
Name:         config
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 10.94.214.200-10.94.214.255

Events:  <none>
$ kubectl expose deploy nginx --port 80 --type LoadBalancer
service/nginx exposed

Every 2.0s: kubectl get all             kubemaster: Sat Jul  6 15:33:30 2019

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7bb7cd8db5-rc8c4   1/1     Running   0          18m


NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        23h
service/nginx        LoadBalancer   10.105.157.210   10.94.214.200   80:30631/TCP   34s


NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           18m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-7bb7cd8db5   1         1         1       18m


By now, you should be able to browse the NGINX page at http://10.94.214.200, the external IP that MetalLB assigned to the nginx service.
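
You can also verify this from the command line. Assuming MetalLB handed out 10.94.214.200 as shown in the output above, a simple check is:

$ curl -I http://10.94.214.200

It should return an HTTP 200 response served by NGINX.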

Hurray !!!

Let’s run another nginx service:

~$ kubectl run nginx2 --image nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx2 created
Every 2.0s: kubectl get all             kubemaster: Sat Jul  6 15:37:21 2019

NAME                          READY   STATUS    RESTARTS   AGE
pod/nginx-7bb7cd8db5-rc8c4    1/1     Running   0          21m
pod/nginx2-5746fc444c-4tsls   1/1     Running   0          42s


NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        23h
service/nginx        LoadBalancer   10.105.157.210   10.94.214.200   80:30631/TCP   4m24s


NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx    1/1     1            1           21m
deployment.apps/nginx2   1/1     1            1           42s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-7bb7cd8db5    1         1         1       21m
replicaset.apps/nginx2-5746fc444c   1         1         1       42s
cse@kubemaster:~$ kubectl expose deploy  nginx2 --port 80 --type LoadBalancer
service/nginx2 exposed
cse@kubemaster:~$
Every 2.0s: kubectl get all             kubemaster: Sat Jul  6 15:38:49 2019

NAME                          READY   STATUS    RESTARTS   AGE
pod/nginx-7bb7cd8db5-rc8c4    1/1     Running   0          23m
pod/nginx2-5746fc444c-4tsls   1/1     Running   0          2m10s


NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        23h
service/nginx        LoadBalancer   10.105.157.210   10.94.214.200   80:30631/TCP   5m52s
service/nginx2       LoadBalancer   10.107.32.195    10.94.214.201   80:31390/TCP   15s


NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx    1/1     1            1           23m
deployment.apps/nginx2   1/1     1            1           2m10s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-7bb7cd8db5    1         1         1       23m
replicaset.apps/nginx2-5746fc444c   1         1         1       2m10s

Let’s run the hellowhale example:

cse@kubemaster:~$ sudo kubectl run hellowhale --image ajeetraina/hellowhale
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/hellowhale created
cse@kubemaster:~$
cse@kubemaster:~$ sudo kubectl expose deploy hellowhale --port 89 --type LoadBalancer
service/hellowhale exposed
cse@kubemaster:~$
cse@kubemaster:~$ sudo kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/hellowhale-64ff675cb5-c95qf   1/1     Running   0          99s
pod/nginx-7bb7cd8db5-rc8c4        1/1     Running   0          2d9h
pod/nginx2-5746fc444c-4tsls       1/1     Running   0          2d8h


NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
service/hellowhale   LoadBalancer   10.100.239.246   10.94.214.203   89:30385/TCP   29s
service/kubernetes   ClusterIP      10.96.0.1        <none>          443/TCP        3d8h
service/nginx        LoadBalancer   10.105.157.210   10.94.214.200   80:30631/TCP   2d8h
service/nginx2       LoadBalancer   10.107.32.195    10.94.214.201   80:31390/TCP   2d8h


NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hellowhale   1/1     1            1           99s
deployment.apps/nginx        1/1     1            1           2d9h
deployment.apps/nginx2       1/1     1            1           2d8h

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/hellowhale-64ff675cb5   1         1         1       99s
replicaset.apps/nginx-7bb7cd8db5        1         1         1       2d9h
replicaset.apps/nginx2-5746fc444c       1         1         1       2d8h

Hence, you saw how easy it is to deploy Kubernetes on bare metal using various popular operating systems like Ubuntu Linux, CentOS, SLES or Red Hat Enterprise Linux. Bare metal Kubernetes deployments are no longer second-class deployments. Now you, too, can use LoadBalancer resources on Kubernetes with MetalLB.

Do you have any queries around Docker, Kubernetes & Cloud? Here’s your chance to meet 850+ Community members via Slack https://tinyurl.com/y973wcq8

In case you’re new and want to start with Docker & Kubernetes, don’t miss out https://dockerlabs.collabnix.com

Top 5 Cool Projects around Docker, Raspberry Pi & Blinkt! ~ Monitoring Docker Swarm using LEDs – Part I

Estimated Reading Time: 5 minutes

Two weeks back, I travelled to Jaipur, around 1,000 miles from Bangalore, to deliver a Docker session. I was invited as a guest speaker for the “IIEC Connect” event conducted by LinuxWorld Inc., held in the GD Badaya Auditorium, Jaipur, which accommodated around 500-600 engineering students.

It was an amazing experience, with dozens of questions at the end of the session. The session lasted for 3 hours, and I was amazed when 90% of hands went up as I asked, “How many of you know about Docker?”. I had compiled 120+ slides for this session but skipped straight to the advanced material to keep the audience engaged. I talked about how the industry is using Docker, with some real-world in-house projects like Pico, OpenUSM and Docker in the data centre.

Towards the end of the session, I showcased an interesting demonstration around monitoring a Docker Swarm cluster using the Pimoroni Blinkt! LEDs. It was a great opportunity to excite the students by showcasing such a cool project built around Docker containers running on Raspberry Pi. In this blog post, I will walk through how to achieve it in detail.

Pre-requisites:

Items                                                                     Link   Cost
Raspberry Pi 3 Model B                                                    Buy    2849 INR
Raspberry Pi 3 Model B 4-layer Dog Bone Stack Clear Case Box Enclosure    Buy    4772 INR
Pimoroni Blinkt!                                                          Buy    749 INR
Raspberry Pi 3 Heat Sink Set                                              Buy    159 INR
  • A Raspberry Pi Node Cluster Stack
  • Blinkt

Blinkt! is a strip of eight super-bright RGB LED indicators that is ideal for adding visual notifications to your Raspberry Pi. Inspired by OpenFaaS founder Alex Ellis’ work with his Raspberry Pi Zero Docker cluster, Pimoroni developed these boards for him to use as status indicators. Blinkt! offers eight APA102 pixels in the smallest (and cheapest) form factor, plugging straight onto your Raspberry Pi (a quick hardware sanity check follows the feature list below).

Features

  • Eight APA102 RGB LEDs
  • Individually controllable pixels
  • Sits directly on top of your Pi in a tiny footprint
  • Fits inside most Pi cases
  • Doesn’t interfere with PWM audio
  • Blinkt! pinout
  • Compatible with Raspberry Pi 3B+, 3, 2, B+, A+, Zero, and Zero W
  • Python library
  • Comes fully assembled
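
Before wiring Blinkt! into the Swarm setup, it is worth sanity-checking the hardware on one of the Pis. A minimal sketch, assuming Pimoroni’s one-line installer for the Blinkt! Python library, is to flash all eight pixels green for a couple of seconds and then clear them:

# install the Blinkt! Python library (Pimoroni’s one-line installer)
curl https://get.pimoroni.com/blinkt | bash
# flash all eight pixels green for two seconds, then switch them off
python3 -c "import blinkt, time; blinkt.set_all(0, 255, 0); blinkt.show(); time.sleep(2); blinkt.clear(); blinkt.show()"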

Installing Docker on all 3 Nodes – 1 Manager and 2 Worker Nodes

Follow this link to install Docker 18.09 on all the Raspberry Pi cluster nodes.
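
If you would rather not follow the link, one common approach is Docker’s convenience script; the sketch below installs the latest release available for your distribution (rather than pinning exactly to 18.09) and lets the pi user talk to the daemon without sudo:

# download and run Docker’s convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# allow the pi user to run docker commands without sudo (log out and back in afterwards)
sudo usermod -aG docker pi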

root@raspberrypi:/home/pi# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:57:21 2018
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:57 2018
  OS/Arch:          linux/arm
  Experimental:     false
root@raspberrypi:/home/pi# 

Setting up Swarm Manager Node

root@raspberrypi:/home/pi# docker swarm init --advertise-addr 192.168.43.134 --listen-addr 192.168.43.134:2377
Swarm initialized: current node (j7i394an31gsevxt3fndzvum5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1zbsutds2u5gk5qwx0qbf95uccogrjx1ukszxxxxx-bcptng4inxxxldvvx17tn2l 192.168.43.134:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

root@raspberrypi:/home/pi# 
pi@raspberrypi:~ $ sudo docker swarm join --token SWMTKN-1-1zbsutds2u5gk5qwx0qbf95uccogrjx1ukszysmxxxbcptng4invy1abldvvx17tn2l 192.168.43.134:2377
This node joined a swarm as a worker.
pi@raspberrypi:~ $ 
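
If you need to add more workers later and no longer have the join command handy, the worker token can be reprinted on the manager at any time:

root@raspberrypi:/home/pi# docker swarm join-token worker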

Listing the Nodes

root@raspberrypi:/home/pi# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ijnqkk7vybzts7ohgt63fteoo     raspberrypi         Ready               Active                                  18.09.0
j7i394an31gsevxt3fndzvum5 *   raspberrypi         Ready               Active              Leader              18.09.0
let43cp6uoankngeg5lmd91mn     raspberrypi         Ready               Active                                  18.09.0
root@raspberrypi:/home/pi# 

Running Monitor Service

A special credit to Docker Captain Stefan Scherer for his work building these Docker images.

root@raspberrypi:/home/pi# docker service create --name monitor --mode global --restart-condition any --mount type=bind,src=/sys,dst=/sys --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock stefanscherer/monitor:1.1.0
kvgvohexsc2e8yapol0ulwq5q
overall progress: 3 out of 3 tasks 
ijnqkk7vybzt: running   [==================================================>] 
let43cp6uoan: running   [==================================================>] 
j7i394an31gs: running   [==================================================>] 
verify: Service converged 
root@raspberrypi:/home/pi# 
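
The two bind mounts are what make the monitor tick: /sys gives the container access to the Pi’s GPIO so it can drive the Blinkt! pixels, while /var/run/docker.sock lets it query the local Docker Engine for the containers running on that node (that is my reading of how the stefanscherer/monitor image works; see its documentation for specifics). Because the service runs in global mode, one task lands on every node, which can be confirmed with:

root@raspberrypi:/home/pi# docker service ps monitor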
root@raspberrypi:/home/pi# docker service create --name whoami stefanscherer/whoami:1.1.0
jd5e5hlswu8ruxgfhgbwtww84
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 

Scaling the Service to 3

root@raspberrypi:/home/pi# docker service scale whoami=3
whoami scaled to 3
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 

Scaling the service to 16

root@raspberrypi:/home/pi# docker service scale whoami=16
whoami scaled to 16
overall progress: 16 out of 16 tasks 
1/16: running   [==================================================>] 
2/16: running   [==================================================>] 
3/16: running   [==================================================>] 
4/16: running   [==================================================>] 
5/16: running   [==================================================>] 
6/16: running   [==================================================>] 
7/16: running   [==================================================>] 
8/16: running   [==================================================>] 
9/16: running   [==================================================>] 
10/16: running   [==================================================>] 
11/16: running   [==================================================>] 
12/16: running   [==================================================>] 
13/16: running   [==================================================>] 
14/16: running   [==================================================>] 
15/16: running   [==================================================>] 
16/16: running   [==================================================>] 
verify: Service converged 

Scaling the Service to 32

root@raspberrypi:/home/pi# docker service scale whoami=32
whoami scaled to 32
overall progress: 32 out of 32 tasks 
verify: Service converged 

Scaling the Service back to 4

root@raspberrypi:/home/pi# docker service scale whoami=4
whoami scaled to 4
overall progress: 4 out of 4 tasks 
1/4: running   [==================================================>] 
2/4: running   [==================================================>] 
3/4: running   [==================================================>] 
4/4: running   [==================================================>] 
verify: Service converged 

Listing the Services

root@raspberrypi:/home/pi# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                         PORTS
h7ap83sidbw8        monitor             global              2/2                 stefanscherer/monitor:1.1.0   
root@raspberrypi:/home/pi# 

Listing the Nodes

root@raspberrypi:/home/pi# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ijnqkk7vybzts7ohgt63fteoo     raspberrypi         Ready               Active                                  18.09.0
j7i394an31gsevxt3fndzvum5 *   raspberrypi         Ready               Active              Leader              18.09.0
let43cp6uoankngeg5lmd91mn     raspberrypi         Down                Active                                  18.09.0
root@raspberrypi:/home/pi# 

Rolling Updates

root@raspberrypi:/home/pi# docker service update --image stefanscherer/whoami:1.2.0 \
>   --update-parallelism 4  --update-delay 2s whoami
whoami
overall progress: 2 out of 4 tasks 
1/4: preparing [=================================>                 ] 
2/4: running   [==================================================>] 
3/4: preparing [=================================>                 ] 
4/4: running   [==================================================>] 
root@raspberrypi:/home/pi# 
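
If an update misbehaves, Swarm can also roll the service back to the previously deployed image, for example:

root@raspberrypi:/home/pi# docker service rollback whoami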

Overall, it was an amazing experience to showcase the power of the Raspberry Pi to monitor a Docker Swarm cluster using Pimoroni Blinkt! LEDs. In a future post, I will bring more interesting use cases around Docker & Raspberry Pi.

References: