Did you know? Over 800 enterprise organizations use Docker Enterprise for everything from modernizing traditional applications to microservices and data science.
Over 96% of enterprise IT organizations are unable to manage Kubernetes on their own, and one of the common reasons is its inherent complexity. Kubernetes is undoubtedly a powerful orchestration technology for deploying, scaling and managing distributed applications, and it has taken the industry by storm over the past few years. But if you are looking for an on-premises certified Kubernetes distribution, you need an enterprise container platform that lets you leverage your existing team and processes to adopt and operationalize Kubernetes, with streamlined lifecycle management and security by design. Docker Enterprise 3.0 is the only desktop-to-cloud enterprise container platform that enables organizations to build and share any application and securely run it anywhere – from hybrid cloud to the edge.
Kubernetes Made Easy with DKS
At the last DockerCon, Docker Inc. announced DKS – the Docker Kubernetes Service. DKS is a unified operational model that simplifies the use of Kubernetes for developers and operators. It gives organizations the ability to leverage Kubernetes for their application delivery environment without needing to hire a team of Kubernetes experts. Docker Enterprise with DKS makes Kubernetes easier to use and more secure for the entire organization without slowing down software delivery.
It is a certified Kubernetes distribution that is included with Docker Enterprise 3.0 and is designed to solve this fundamental challenge. It’s the only offering that integrates Kubernetes from the developer desktop to production servers, with ‘sensible secure defaults’ out-of-the-box. Simply put, DKS makes Kubernetes easy to use and more secure for the entire organization. Here are three things that DKS does to simplify (and accelerate) Kubernetes adoption for the enterprise:
- Consistent, seamless Kubernetes experience for developers and operators ~ With the use of Version Packs, developers’ Kubernetes environments running in Docker Desktop Enterprise stay in sync with production environments for a complete, seamless Kubernetes experience.
- Streamlined Kubernetes lifecycle management (Day 1 and Day 2 operations) ~ New Cluster Management CLI Plugin to enable operations teams to easily deploy, scale, backup and restore and upgrade a certified Kubernetes environment using a set of simple CLI commands.
- Enhanced security with ‘sensible defaults’ ~ Out-of-the-box configurations for security, encryption, access control, and lifecycle management, all without having to become a Kubernetes expert.
DKS is compatible with Kubernetes YAML, Helm charts, and the Docker Compose tool for creating multi-container applications. It also provides an automated way to install and configure Kubernetes applications across hybrid and multi-cloud deployments, with capabilities that include security, access control, and lifecycle management. Additionally, it supports Docker Swarm mode for orchestrating Docker containers.
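For instance, a Compose file along the following lines (a minimal, hypothetical example) can be deployed from UCP as a stack on either orchestrator:

```yaml
# Minimal, hypothetical docker-compose.yml; from UCP, DKS can deploy the
# same file as a Kubernetes workload or as a Swarm service.
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2
```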
Kubernetes 1.14+ in Docker Enterprise
Docker Enterprise 3.0 comes with the following components:
- Containerd 1.2.6
- Docker Engine 19.03.1
- Runc 1.0.0-rc8
- docker-init 0.18.0
- Universal Control Plane 3.2.0
- Docker Trusted Registry 2.7
- Kubernetes 1.14+
- Calico v3.5.7
This is the first in a series of blog posts around Kubernetes support and capabilities in Docker Enterprise 3.0, covering:
- Deploying certified Kubernetes Cluster using Docker Enterprise 3.0 running on Bare Metal System
- Deploying certified Kubernetes Cluster on AWS Cloud using Docker Cluster CLI Plugin
- Support of Kubernetes on Windows 2019
- Implementing Persistent storage for Kubernetes workload using iSCSI
- Implementing Cluster Ingress for Kubernetes
In this first post of the series, I will demonstrate the following:
- How to deploy Docker Enterprise 3.0 on bare metal/on-premises.
- How to install Docker Client Bundle and add Linux worker nodes to the existing Cluster
- How to add Windows worker nodes to the existing Kubernetes Cluster
- How to install Kubectl
- Enabling Helm and Tiller using UCP
Pre-Requisites:
- Ubuntu 18.04 (at least 2 nodes to set up a multi-node cluster)
- At least 4 GB of RAM for UCP 3.2.0
- Go to https://hub.docker.com/my-content.
- Click the Setup button for Docker Enterprise Edition for Ubuntu.
- Copy the URL from the field labeled Copy and paste this URL to download your Edition.
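Before installing, you can sanity-check a node against the requirements above. A minimal sketch in POSIX shell (the 4 GB threshold comes from this post; the file argument exists only so the parsing can be exercised on sample input):

```shell
#!/bin/sh
# Pre-flight check against the UCP 3.2.0 requirements listed above.
# mem_total_kb reads /proc/meminfo by default, but accepts a file
# argument so the parsing logic can be tested on sample input.
mem_total_kb() {
    awk '/^MemTotal:/ {print $2}' "${1:-/proc/meminfo}"
}

check_ram() {
    kb=$(mem_total_kb "$@")
    if [ "$kb" -ge $((4 * 1024 * 1024)) ]; then
        echo "RAM OK (${kb} kB)"
    else
        echo "RAM below the 4 GB minimum for UCP 3.2.0 (${kb} kB)"
    fi
}
```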
Install packages to allow apt to use a repository over HTTPS:
$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
Adding the Docker EE repository GPG key
Replace the storebits URL below with the one you noted down in the prerequisites (the sub-XXX segments are masked placeholders for your own subscription ID).
$ curl -fsSL https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu/gpg | sudo apt-key add -
Adding the stable Repository
$ sudo add-apt-repository \
"deb [arch=amd64] https://storebits.docker.com/ee/m/sub-XXX-44fb-XXX-b6bf-XXXXXX/ubuntu \
$(lsb_release -cs) \
stable-19.03"
Installing Docker Enterprise
$ sudo apt-get install docker-ee docker-ee-cli containerd.io
Verifying Docker Enterprise Version
$ sudo docker version
Client: Docker Engine - Enterprise
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: f660560
Built: Thu Jul 25 20:59:23 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Enterprise
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: f660560
Built: Thu Jul 25 20:57:45 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
cse@ubuntu1804-1:~$
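If you script the installation across many nodes, the reported version can also be checked programmatically. A small sketch; in practice you would feed it the output of `docker version --format '{{.Server.Version}}'`, but the check is kept as a pure function here:

```shell
# Extract major.minor from an engine version string and gate on the
# 19.03 release line that Docker Enterprise 3.0 ships with.
engine_major_minor() {
    printf '%s\n' "$1" | cut -d. -f1,2
}

require_19_03() {
    [ "$(engine_major_minor "$1")" = "19.03" ]
}
```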
Testing the Hello World Example
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:6540fc08ee6e6b7b63468dc3317e3303aae178cb8a45ed3123180328bcc1d20f
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
Installing Universal Control Plane v3.2.0
$ sudo docker container run --rm -it --name ucp \
> -v /var/run/docker.sock:/var/run/docker.sock \
> docker/ucp:3.2.0 install \
> --host-address 10.94.214.115 \
> --interactive
Unable to find image 'docker/ucp:3.2.0' locally
3.2.0: Pulling from docker/ucp
050382585609: Pull complete
de0f2e3c5141: Pull complete
ef4440c639ab: Pull complete
Digest: sha256:f9049801c3fca01f1f08772013911bd8f9b616224b9f8d5252d91faec316424a
Status: Downloaded newer image for docker/ucp:3.2.0
INFO[0000] Your Docker daemon version 19.03.1, build f660560 (4.15.0-29-generic) is compatible with UCP 3.2.0 (586d782)
INFO[0000] Initializing New Docker Swarm
Admin Username: ajeetraina
Admin Password:
Confirm Admin Password:
WARN[0014] None of the Subject Alternative Names we'll be using in the UCP certificates ["ubuntu1804-1"] contain a domain component. Your generated certs may fail TLS validation unless you only use one of these shortnames or IP addresses to connect. You can use the --san flag to add more aliases
You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0019] Checking required ports for connectivity
INFO[0035] Checking required container images
INFO[0035] Pulling required images... (this may take a while)
INFO[0035] Pulling image: docker/ucp-agent:3.2.0
INFO[0042] Pulling image: docker/ucp-auth:3.2.0
INFO[0049] Pulling image: docker/ucp-auth-store:3.2.0
INFO[0061] Pulling image: docker/ucp-azure-ip-allocator:3.2.0
INFO[0067] Pulling image: docker/ucp-calico-cni:3.2.0
INFO[0079] Pulling image: docker/ucp-calico-kube-controllers:3.2.0
INFO[0092] Pulling image: docker/ucp-calico-node:3.2.0
INFO[0102] Pulling image: docker/ucp-cfssl:3.2.0
INFO[0113] Pulling image: docker/ucp-compose:3.2.0
INFO[0180] Pulling image: docker/ucp-controller:3.2.0
INFO[0197] Pulling image: docker/ucp-dsinfo:3.2.0
INFO[0201] Pulling image: docker/ucp-etcd:3.2.0
INFO[0230] Pulling image: docker/ucp-hyperkube:3.2.0
INFO[0266] Pulling image: docker/ucp-interlock:3.2.0
INFO[0272] Pulling image: docker/ucp-interlock-extension:3.2.0
INFO[0278] Pulling image: docker/ucp-interlock-proxy:3.2.0
INFO[0287] Pulling image: docker/ucp-kube-compose:3.2.0
INFO[0293] Pulling image: docker/ucp-kube-compose-api:3.2.0
INFO[0301] Pulling image: docker/ucp-kube-dns:3.2.0
INFO[0307] Pulling image: docker/ucp-kube-dns-dnsmasq-nanny:3.2.0
INFO[0314] Pulling image: docker/ucp-kube-dns-sidecar:3.2.0
INFO[0321] Pulling image: docker/ucp-metrics:3.2.0
INFO[0343] Pulling image: docker/ucp-pause:3.2.0
INFO[0348] Pulling image: docker/ucp-swarm:3.2.0
INFO[0354] Completed pulling required images
INFO[0357] Running install agent container ...
INFO[0000] Loading install configuration
INFO[0000] Running Installation Steps
INFO[0000] Step 1 of 35: [Setup Internal Cluster CA]
INFO[0003] Step 2 of 35: [Setup Internal Client CA]
INFO[0003] Step 3 of 35: [Initialize etcd Cluster]
INFO[0007] Step 4 of 35: [Set Initial Config in etcd]
INFO[0007] Step 5 of 35: [Deploy RethinkDB Server]
INFO[0010] Step 6 of 35: [Initialize RethinkDB Tables]
INFO[0030] Step 7 of 35: [Create Auth Service Encryption Key Secret]
INFO[0030] Step 8 of 35: [Deploy Auth API Server]
INFO[0039] Step 9 of 35: [Setup Auth Configuration]
INFO[0040] Step 10 of 35: [Deploy Auth Worker Server]
INFO[0046] Step 11 of 35: [Deploy UCP Proxy Server]
INFO[0047] Step 12 of 35: [Initialize Swarm v1 Node Inventory]
INFO[0047] Step 13 of 35: [Deploy Swarm v1 Manager Server]
INFO[0048] Step 14 of 35: [Deploy Internal Cluster CA Server]
INFO[0050] Step 15 of 35: [Deploy Internal Client CA Server]
INFO[0052] Step 16 of 35: [Deploy UCP Controller Server]
INFO[0058] Step 17 of 35: [Deploy Kubernetes API Server]
INFO[0067] Step 18 of 35: [Deploy Kubernetes Controller Manager]
INFO[0073] Step 19 of 35: [Deploy Kubernetes Scheduler]
INFO[0078] Step 20 of 35: [Deploy Kubelet]
INFO[0079] Step 21 of 35: [Deploy Kubernetes Proxy]
INFO[0081] Step 22 of 35: [Wait for Healthy UCP Controller and Kubernetes API]
INFO[0082] Step 23 of 35: [Create Kubernetes Pod Security Policies]
INFO[0085] Step 24 of 35: [Install Kubernetes CNI Plugin]
INFO[0113] Step 25 of 35: [Install KubeDNS]
INFO[0121] Step 26 of 35: [Create UCP Controller Kubernetes Service Endpoints]
INFO[0124] Step 27 of 35: [Install Metrics Plugin]
INFO[0131] Step 28 of 35: [Install Kubernetes Compose Plugin]
INFO[0142] Step 29 of 35: [Deploy Manager Node Agent Service]
INFO[0142] Step 30 of 35: [Deploy Worker Node Agent Service]
INFO[0142] Step 31 of 35: [Deploy Windows Worker Node Agent Service]
INFO[0142] Step 32 of 35: [Deploy Cluster Agent Service]
INFO[0142] Step 33 of 35: [Set License]
INFO[0142] Step 34 of 35: [Set Registry CA Certificates]
INFO[0142] Step 35 of 35: [Wait for All Nodes to be Ready]
INFO[0147] Waiting for 1 nodes to be ready
INFO[0152] All Installation Steps Completed
cse@ubuntu1804-1:~$
cse@ubuntu1804-1:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f8c4666a7646 docker/ucp-agent:3.2.0 "/bin/ucp-agent node…" 58 seconds ago Up 56 seconds 2376/tcp ucp-manager-agent.z5m50h0rl2kh4jehuoe76hj8k.tj89jrf9xkxmw0t42oec3xkb0
611ca05ab239 docker/ucp-agent:3.2.0 "/bin/ucp-agent clus…" 58 seconds ago Up 56 seconds 2376/tcp ucp-cluster-agent.1.omzv7veky1kbmzgl78g25surq
df16260783ea 50810572f8d1 "/compose-controller…" About a minute ago Up About a minute k8s_ucp-kube-compose_compose-57cc55c56d-n7klp_kube-system_a7af24f5-b83b-11e9-86e4-0242ac11000b_0
5079f9dc068d 7f719dba281f "/api-server --kubec…" About a minute ago Up About a minute k8s_ucp-kube-compose-api_compose-api-556c6d8c86-xf9hc_kube-system_a7aefd70-b83b-11e9-86e4-0242ac11000b_0
506abc0ef18c 9fdd9422f8b8 "/bin/proxy" About a minute ago Up About a minute k8s_ucp-metrics-proxy_ucp-metrics-c7zpb_kube-system_a565c9f2-b83b-11e9-86e4-0242ac11000b_0
a99093a69d39 9fdd9422f8b8 "/bin/prometheus.sh …" About a minute ago Up About a minute k8s_ucp-metrics-prometheus_ucp-metrics-c7zpb_kube-system_a565c9f2-b83b-11e9-86e4-0242ac11000b_0
8d3bd381cdd4 9fdd9422f8b8 "/bin/sh -c 'while :…" About a minute ago Up About a minute k8s_ucp-metrics-inventory_ucp-metrics-c7zpb_kube-system_a565c9f2-b83b-11e9-86e4-0242ac11000b_0
93544ed4f512 docker/ucp-pause:3.2.0 "/pause" About a minute ago Up About a minute k8s_POD_compose-api-556c6d8c86-xf9hc_kube-system_a7aefd70-b83b-11e9-86e4-0242ac11000b_0
579b68869229 docker/ucp-pause:3.2.0 "/pause" About a minute ago Up About a minute k8s_POD_compose-57cc55c56d-n7klp_kube-system_a7af24f5-b83b-11e9-86e4-0242ac11000b_0
3182cfcded2c docker/ucp-pause:3.2.0 "/pause" About a minute ago Up About a minute k8s_POD_ucp-metrics-c7zpb_kube-system_a565c9f2-b83b-11e9-86e4-0242ac11000b_0
2c88d0f54623 435d88fe6b45 "/sidecar --v=2 --lo…" About a minute ago Up About a minute k8s_ucp-kubedns-sidecar_kube-dns-84cd964544-pljlj_kube-system_a0bb52b9-b83b-11e9-86e4-0242ac11000b_0
add0887ce338 ec8b25117519 "/dnsmasq-nanny -v=2…" About a minute ago Up About a minute k8s_ucp-dnsmasq-nanny_kube-dns-84cd964544-pljlj_kube-system_a0bb52b9-b83b-11e9-86e4-0242ac11000b_0
4bb226feb0af 28b1e608dc41 "/kube-dns --domain=…" About a minute ago Up About a minute k8s_ucp-kubedns_kube-dns-84cd964544-pljlj_kube-system_a0bb52b9-b83b-11e9-86e4-0242ac11000b_0
028ecd2f4ba8 docker/ucp-pause:3.2.0 "/pause" About a minute ago Up About a minute k8s_POD_kube-dns-84cd964544-pljlj_kube-system_a0bb52b9-b83b-11e9-86e4-0242ac11000b_0
529aed9d12fc eb607f503ccd "/usr/bin/kube-contr…" About a minute ago Up About a minute k8s_calico-kube-controllers_calico-kube-controllers-5589844c6c-gx7x8_kube-system_8edd2b9f-b83b-11e9-86e4-0242ac11000b_0
a77e677d8688 docker/ucp-pause:3.2.0 "/pause" About a minute ago Up About a minute k8s_POD_calico-kube-controllers-5589844c6c-gx7x8_kube-system_8edd2b9f-b83b-11e9-86e4-0242ac11000b_0
e065fac81ef2 6904e301c3a7 "/install-cni.sh" About a minute ago Up About a minute k8s_install-cni_calico-node-blhvh_kube-system_8e9166f2-b83b-11e9-86e4-0242ac11000b_0
c65d50dafef4 697d2c1dea15 "start_runit" About a minute ago Up About a minute k8s_calico-node_calico-node-blhvh_kube-system_8e9166f2-b83b-11e9-86e4-0242ac11000b_0
1f478e937ee2 docker/ucp-pause:3.2.0 "/pause" About a minute ago Up About a minute k8s_POD_calico-node-blhvh_kube-system_8e9166f2-b83b-11e9-86e4-0242ac11000b_0
56ef4c6e7449 docker/ucp-hyperkube:3.2.0 "kube-proxy --cluste…" 2 minutes ago Up 2 minutes ucp-kube-proxy
ae412f355aaa docker/ucp-hyperkube:3.2.0 "/bin/kubelet_entryp…" 2 minutes ago Up 2 minutes ucp-kubelet
93c0fb13401a docker/ucp-hyperkube:3.2.0 "kube-scheduler --ku…" 2 minutes ago Up 2 minutes (healthy) ucp-kube-scheduler
e20bfdd75b9a docker/ucp-hyperkube:3.2.0 "/bin/controller_man…" 2 minutes ago Up 2 minutes (healthy) ucp-kube-controller-manager
46aee6f0c836 docker/ucp-hyperkube:3.2.0 "/bin/apiserver_entr…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:12388->12388/tcp ucp-kube-apiserver
5ad4de889f26 docker/ucp-controller:3.2.0 "/bin/controller ser…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:443->8080/tcp, 0.0.0.0:6443->8081/tcp ucp-controller
b4788ba1fb8f docker/ucp-cfssl:3.2.0 "/bin/ucp-ca serve -…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:12382->12382/tcp ucp-client-root-ca
4d54f68a269d docker/ucp-cfssl:3.2.0 "/bin/ucp-ca serve -…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:12381->12381/tcp ucp-cluster-root-ca
80c74028f856 docker/ucp-swarm:3.2.0 "/bin/swarm manage -…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:2376->2375/tcp ucp-swarm-manager
2df245efdbd5 docker/ucp-agent:3.2.0 "/bin/ucp-agent prox…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:6444->6444/tcp, 0.0.0.0:12378->12378/tcp, 0.0.0.0:12376->2376/tcp ucp-proxy
d1fb6f51567e docker/ucp-auth:3.2.0 "/usr/local/bin/enzi…" 2 minutes ago Up 2 minutes (healthy) ucp-auth-worker.z5m50h0rl2kh4jehuoe76hj8k.soauyq1ovtbzu5dsvqm1aulrl
e12173e6f7b9 docker/ucp-auth:3.2.0 "/usr/local/bin/enzi…" 2 minutes ago Up 2 minutes (healthy) ucp-auth-api.z5m50h0rl2kh4jehuoe76hj8k.zbx3dkew00pro3s76s321wqgs
06667a03ffea docker/ucp-auth-store:3.2.0 "/bin/entrypoint.sh …" 3 minutes ago Up 3 minutes (healthy) 0.0.0.0:12383-12384->12383-12384/tcp ucp-auth-store
40d316287979 docker/ucp-etcd:3.2.0 "/bin/entrypoint.sh …" 3 minutes ago Up 3 minutes (healthy) 2380/tcp, 4001/tcp, 7001/tcp, 0.0.0.0:12380->12380/tcp, 0.0.0.0:12379->2379/tcp ucp-kv
cse@ubuntu1804-1:~$
Accessing the UCP
Now you should be able to access Docker Universal Control Plane via https://<node-ip>
Click on “Sign In” and you will need to upload the license file to access Docker Enterprise UCP 3.2.0 WebUI as shown below:
Adding Worker Nodes to the Cluster
Let us try to add worker nodes to the cluster. Click on “Shared Resources” in the left pane, then click on “Nodes”. Select “Add Nodes” and you will be able to choose the orchestrator of your choice. It also allows you to add either Linux or Windows nodes to the cluster, as shown below:
I assume that you have a worker node running Ubuntu 18.04 with the latest Docker binaries installed. Note that this can be either Docker Community Edition or Enterprise Edition.
@ubuntu1804-1:~$ sudo curl -sSL https://get.docker.com/ | sh
$ sudo usermod -aG docker cs
$ sudo docker swarm join --token SWMTKN-1-3n4mwkzhXXXXXXt2hip0wonqagmjtos-bch9ezkt5kiroz6jncidrz13x <managernodeip>:2377
This node joined a swarm as a worker.
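If you no longer have the token handy, running `docker swarm join-token worker` on the manager reprints the full join command. For automation, the command can also be templated; a tiny sketch (the token and IP used below are placeholders, not real credentials):

```shell
# Build the worker join command from a token and a manager address.
# Both arguments are placeholders supplied by the caller.
worker_join_cmd() {
    printf 'docker swarm join --token %s %s:2377\n' "$1" "$2"
}
```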
By now, you should be able to see both the manager node and one worker node under UCP.
If you see a warning on the UCP dashboard stating that the manager and worker nodes have the same hostname, just change the hostname on the worker node and it will automatically be updated on the UCP dashboard.
Adding a Windows Worker Node to the existing Docker EE 3.0 Cluster
If you want to add a Windows system as a worker node to the existing cluster, that is also possible. You will need Windows Server 2016 or later as the minimal OS.
I assume that Windows Server 2016 is already installed. Follow the steps below to first install Docker 19.03.x and then add the node to the cluster as a worker.
Install-WindowsFeature -Name Containers
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force -RequiredVersion 19.03.2
Restart the system, then:
Start-Service docker
docker --version
Go to UCP window > Resources > Nodes. Select “Add Nodes”. Choose Node Type as “Windows” as shown below:
Under Step 2, the dialog displays a docker swarm join command, which you need to run on the Windows Server 2016 system to add it to your existing Docker EE 3.0 cluster.
Verify that the node gets added successfully by clicking on the “Nodes” tab under Resources.
Installing Docker Client Bundle
Click on Dashboard and scroll down to the Docker CLI option. This option allows you to download a client bundle to create and manage services using the Docker CLI client. Once you click, a new window appears as shown below:
Click on “user profile page” and it should redirect you to the https://<manager-node-ip>/manage/profile/clientbundle page shown below:
Click on “Generate Client Bundle” and it will download ucp-bundle-<username>.zip
$ unzip ucp-bundle-ajeetraina.zip
Archive: ucp-bundle-ajeetraina.zip
extracting: ca.pem
extracting: cert.pem
extracting: key.pem
extracting: cert.pub
extracting: kube.yml
extracting: env.sh
extracting: env.ps1
extracting: env.cmd
extracting: meta.json
extracting: tls/docker/key.pem
extracting: tls/kubernetes/ca.pem
extracting: tls/kubernetes/cert.pem
extracting: tls/kubernetes/key.pem
extracting: tls/docker/ca.pem
extracting: tls/docker/cert.pem
@ubuntu1804-1:~$ eval "$(<env.sh)"
The env script updates the DOCKER_HOST and DOCKER_CERT_PATH environment variables so that the Docker CLI client talks to UCP using the client certificates you downloaded. From now on, every request you make with the Docker CLI client includes your user-specific client certificates when talking to UCP.
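For reference, env.sh boils down to a handful of exports along these lines (an approximation, not the literal file contents; the IP below is this post's manager address and the cert path assumes you run it from the unzipped bundle directory):

```shell
# Approximate sketch of what the bundle's env.sh sets: point the Docker
# CLI at UCP over TLS, using the certificates extracted from the bundle.
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$PWD"
export DOCKER_HOST=tcp://10.94.214.115:443
```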
Installing Kubectl on Docker EE 3.0
Once you know the Kubernetes version your cluster is running, install the matching kubectl client for your operating system. As shown below, we need to install kubectl version 1.14.3.
Setting Kubectl version
@ubuntu1804-1:~$ k8sversion=v1.14.3
@ubuntu1804-1:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$k8sversion/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 41.1M 100 41.1M 0 0 7494k 0 0:00:05 0:00:05 --:--:-- 9070k
@ubuntu1804-1:~$ chmod +x ./kubectl
@ubuntu1804-1:~$ sudo mv ./kubectl /usr/local/bin/kubectl
@ubuntu1804-1:~$
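Before moving the binary into your PATH, it is good practice to verify its checksum. A sketch of such a check; the expected digest would come from the corresponding .sha256 file published alongside the release binary:

```shell
# Verify a downloaded file against an expected SHA-256 digest before
# installing it. Returns non-zero (and prints a warning) on mismatch.
verify_sha256() {
    file=$1
    expected=$2
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK: $file"
    else
        echo "checksum MISMATCH for $file" >&2
        return 1
    fi
}
```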
Verifying the Kubectl Installation
~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.3-docker-2", GitCommit:"7cfcb52617bf94c36953159ee9a2bf14c7fcc7ba", GitTreeState:"clean", BuildDate:"2019-06-06T16:18:13Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
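Note the "-docker-2" suffix in the server's GitVersion: that is UCP's build of Kubernetes 1.14.3. If you script the kubectl download, you can strip that suffix to recover the upstream version; a small sketch:

```shell
# Strip UCP's "-docker-N" build suffix from the server GitVersion to
# recover the upstream Kubernetes version for the kubectl download URL.
upstream_k8s_version() {
    printf '%s\n' "$1" | sed 's/-docker-[0-9]*$//'
}
```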
Listing out the Kubernetes Nodes
cse@ubuntu1804-1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node2 Ready <none> 23h v1.14.3-docker-2
ubuntu1804-1 Ready master 23h v1.14.3-docker-2
Enabling Helm and Tiller with UCP
$ kubectl create rolebinding default-view --clusterrole=view --serviceaccount=kube-system:default --namespace=kube-system
rolebinding.rbac.authorization.k8s.io/default-view created
$ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
cse@ubuntu1804-1:~$
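The same grants can also be kept in version control as manifests. A declarative sketch equivalent to the two kubectl create commands above:

```yaml
# Declarative equivalent of the two kubectl create commands above.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-view
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: add-on-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```

Apply it with `kubectl apply -f` instead of the imperative commands if you prefer to track the RBAC grants in git.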
Installing Helm
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7001 100 7001 0 0 6341 0 0:00:01 0:00:01 --:--:-- 6347
$ chmod u+x install-helm.sh
$ ./install-helm.sh
Downloading https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
cse@ubuntu1804-1:~$ helm init
Creating /home/cse/.helm
Creating /home/cse/.helm/repository
Creating /home/cse/.helm/repository/cache
Creating /home/cse/.helm/repository/local
Creating /home/cse/.helm/plugins
Creating /home/cse/.helm/starters
Creating /home/cse/.helm/cache/archive
Creating /home/cse/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/cse/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
cse@ubuntu1804-1:~$
Verifying Helm Installation
$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Troubleshooting:
If you face the error “Error: could not find a ready tiller pod”, follow the steps below:
$ /usr/local/bin/tiller
[main] 2019/10/06 00:37:53 Starting Tiller v2.14.3 (tls=false)
[main] 2019/10/06 00:37:53 GRPC listening on :44134
[main] 2019/10/06 00:37:53 Probes listening on :44135
[main] 2019/10/06 00:37:53 Storage driver is ConfigMap
[main] 2019/10/06 00:37:53 Max history per release is 0
...
$ export HELM_HOST=localhost:44134
$ helm version # Should connect to localhost.
Deploying MySQL using Helm on Docker EE 3.0
Let us try deploying MySQL using a Helm chart.
$ helm install --name mysql stable/mysql
NAME: mysql
LAST DEPLOYED: Wed Aug 7 11:43:01 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
mysql-test 1 0s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql Pending 0s
==> v1/Secret
NAME TYPE DATA AGE
mysql Opaque 2 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.96.77.83 <none> 3306/TCP 0s
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mysql 0/1 0 0 0s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local
To get your root password run:
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
$ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
$ mysql -h mysql -p
To connect to your database directly from outside the K8s cluster:
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
# Execute the following command to route the connection:
kubectl port-forward svc/mysql 3306
mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
cse@ubuntu1804-1:~$
Listing out the Releases
The helm list command lists all of the releases. By default, it lists only releases that are deployed or failed. Flags like '--deleted' and '--all' will alter this behavior, and such flags can be combined: '--deleted --failed'. By default, items are sorted alphabetically; use the '-d' flag to sort by release date.
$ helm list
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
mysql 1 Wed Aug 7 11:43:01 2019 DEPLOYED mysql-1.3.0 5.7.14 default
$ kubectl get po,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/mysql-6f6bff58d8-t2kwm 1/1 Running 0 5m35s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.extensions/mysql 1/1 1 0 5m35s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28h
service/mysql ClusterIP 10.96.77.83 <none> 3306/TCP 5m35s
cse@ubuntu1804-1:~$
As you can see, Helm works seamlessly with UCP under Docker Enterprise 3.0.
In Part 2 of this blog series, I will talk about iSCSI support for Kubernetes under Docker Enterprise 3.0. Stay tuned.
Are you a beginner looking to build your career in Docker & Kubernetes? Head over to the DockerLabs Slack channel to join 1100+ Slack members: https://tinyurl.com/y973wcq8
Also check out https://dockerlabs.collabnix.com to access 500+ FREE tutorials on Docker, Kubernetes & Cloud.