Kubernetes Hands-on Lab #4 – Deploy Prometheus Stack using Helm on Play with Kubernetes Platform


Let’s talk about Kubernetes Deployment Challenges…

As monolithic systems become too large to deal with, many enterprises are drawn to breaking them down into a microservices architecture. Whenever we move from a monolith to microservices, the application ends up consisting of multiple components (services) talking to each other. Each component has its own resources and can be scaled individually. With Kubernetes, this can become very complex given all the objects you need to handle ― such as ConfigMaps, Services, Pods and Persistent Volumes ― in addition to the number of releases you need to manage. The following challenges can arise:

1. Manage, edit and update multiple K8s configurations
2. Deploy multiple K8s configurations as a SINGLE application
3. Share and reuse K8s configurations and applications
4. Parameterize and support multiple environments
5. Manage application releases: rollout, rollback, diff, history
6. Define the deployment lifecycle (control operations to be run in different phases)
7. Validate release state after deployment
 
All of these can be managed with Helm, which offers a simple way to package everything into one application and advertise what you can configure.
 

Helm is a deployment management tool (and not just a package manager) for Kubernetes. It does the heavy lifting of repeatable deployments, dependency management (reuse and share), handling multiple configurations, and updating, rolling back and testing application deployments (releases).
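To make that concrete, this is the kind of release workflow Helm (v2, as used in this lab) gives you; the release name my-monitoring is a placeholder:

# Install a chart as a named release
helm install stable/prometheus --name my-monitoring
# Upgrade the release with a changed value
helm upgrade my-monitoring stable/prometheus --set server.persistentVolume.enabled=false
# Inspect the revision history, then roll back to revision 1
helm history my-monitoring
helm rollback my-monitoring 1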

Under this blog post, we will test-drive Helm on top of the Play with Kubernetes platform. Let’s get started.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.

Click on the Login button to authenticate with Docker Hub or GitHub ID.

 

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes cluster node. The instance is automatically named “node1”. Each instance has Docker Community Edition (CE) and kubeadm pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

 

You can bootstrap the Kubernetes cluster by initializing the master node (node1) with a short init script. Copy the script content into a bootstrap.sh file and make it executable using the “chmod +x bootstrap.sh” command.
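The script itself is shown in the PWK terminal banner when your session starts; as a minimal sketch (assuming the defaults PWK suggests, which may differ in your session), it initializes the control plane and installs a pod network add-on:

#!/bin/sh
# bootstrap.sh -- sketch of a PWK master bootstrap
# Initialize the control plane, advertising the node's own IP
kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
# Install a pod network add-on (kube-router is what PWK suggests)
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml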

When you execute this script, as part of initialization, kubeadm writes the several configuration files needed, sets up RBAC and deploys the Kubernetes control plane components (such as kube-apiserver, kube-dns, kube-proxy and etcd). The control plane components are deployed as Docker containers.

Copy the kubeadm join command printed at the end of the init output and save it for the next step. This command will be used to join other nodes to your cluster.

Adding Worker Nodes

Click on “Add New Instance” again to add a new worker node, then join it to the cluster as shown below.
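On each new worker instance, run the kubeadm join command you saved earlier. It looks like the following; the IP, token and hash here are placeholders, so use the ones printed by your own kubeadm init:

kubeadm join 192.168.0.8:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>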

Checking the Cluster Status

[node1 ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
node1     Ready      master    18m       v1.11.3
node2     Ready      <none>    4m        v1.11.3
node3     Ready      <none>    39s       v1.11.3
node4     NotReady   <none>    22s       v1.11.3
node5     NotReady   <none>    4s        v1.11.3
[node1 ~]$
[node1 ]$ kubectl get po
No resources found.
[node1 ]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1h
[node1]$

 

Verifying Node Capacity

The following command uses jq to print each node’s name alongside its reported capacity:

[node1 ~]$ kubectl get nodes -o json |
>       jq ".items[] | {name:.metadata.name} + .status.capacity"

{
  "name": "node1",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node2",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node3",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node4",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node5",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}

Installing OpenSSL

The Helm installer script used below requires openssl, so install it first:

[node1 ~]$ yum install -y openssl

Installing Helm

[node1 ~]$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
[node1 ~]$ chmod 700 get_helm.sh
[node1 ~]$ sh get_helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
get_helm.sh: line 177: which: command not found
Run 'helm init' to configure helm.
[node1 ~]$ helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming

Installing Prometheus

Let us try to install the Prometheus stack on top of our 5-node K8s cluster using Helm.

First, you can search for the chart using the helm search <packagename> command:

[node1 ~]$ helm search prometheus
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
stable/prometheus                       7.3.4           2.4.3           Prometheus is a monitoring system and time series database.
stable/prometheus-adapter               v0.2.0          v0.2.1          A Helm chart for k8s prometheus adapter
stable/prometheus-blackbox-exporter     0.1.3           0.12.0          Prometheus Blackbox Exporter
stable/prometheus-cloudwatch-exporter   0.2.1           0.5.0           A Helm chart for prometheus cloudwatch-exporter
stable/prometheus-couchdb-exporter      0.1.0           1.0             A Helm chart to export the metrics from couchdb in Promet...
stable/prometheus-mysql-exporter        0.2.1           v0.11.0         A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/prometheus-node-exporter         0.5.0           0.16.0          A Helm chart for prometheus node-exporter
stable/prometheus-operator              0.1.7           0.24.0          Provides easy monitoring definitions for Kubernetes servi...
stable/prometheus-postgres-exporter     0.5.0           0.4.6           A Helm chart for prometheus postgres-exporter
stable/prometheus-pushgateway           0.1.3           0.6.0           A Helm chart for prometheus pushgateway
stable/prometheus-rabbitmq-exporter     0.1.4           v0.28.0         Rabbitmq metrics exporter for prometheus
stable/prometheus-redis-exporter        0.3.2           0.21.1          Prometheus exporter for Redis metrics
stable/prometheus-to-sd                 0.1.1           0.2.2           Scrape metrics stored in prometheus format and push them ...
stable/elasticsearch-exporter           0.4.0           1.0.2           Elasticsearch stats exporter for Prometheus
stable/karma                            1.1.2           v0.14           A Helm chart for Karma - an UI for Prometheus Alertmanager
stable/stackdriver-exporter             0.0.4           0.5.1           Stackdriver exporter for Prometheus
stable/weave-cloud                      0.3.0           1.1.0           Weave Cloud is a add-on to Kubernetes which provides Cont...
stable/kube-state-metrics               0.9.0           1.4.0           Install kube-state-metrics to generate and expose cluster...
stable/mariadb                          5.2.2           10.1.36         Fast, reliable, scalable, and easy to use open-source rel...
[node1 ~]$
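Before installing, you can preview what a chart lets you configure, or pull it locally to inspect the templates:

[node1 ~]$ helm inspect values stable/prometheus | less
[node1 ~]$ helm fetch stable/prometheus --untar && ls prometheus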

Update the Repo

[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Installing Prometheus

$ helm install stable/prometheus

Error: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

How to fix?

This error is expected: by default, Tiller runs under the default service account in kube-system, which lacks permission to manage cluster resources. To fix the issue, create a dedicated service account for Tiller, grant it the cluster-admin role, and upgrade Tiller to use it:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
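You can verify that Tiller now runs under the new service account; the tiller-deploy deployment name is what helm init creates by default:

kubectl --namespace kube-system get serviceaccount tiller
kubectl --namespace kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'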

Listing Helm Releases

[node1 ~]$ helm list
NAME            REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
excited-elk     1               Sun Oct 28 10:00:02 2018        DEPLOYED        prometheus-7.3.4        2.4.3           default
[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[node1 ~]$ helm install stable/prometheus
NAME:   excited-elk
LAST DEPLOYED: Sun Oct 28 10:00:02 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/DaemonSet
NAME                                  AGE
excited-elk-prometheus-node-exporter  1s

==> v1/Pod(related)

NAME                                                        READY  STATUS             RESTARTS  AGE
excited-elk-prometheus-node-exporter-7bjqc                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-gbcd7                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tk56q                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tkk9b                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz        0/2    Pending            0         1s
excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj  0/1    ContainerCreating  0         1s
excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69         0/1    ContainerCreating  0         1s
excited-elk-prometheus-server-5958586794-b97xn              0/2    Pending            0         1s

==> v1/ConfigMap

NAME                                 AGE
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1/ServiceAccount
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1beta1/ClusterRole
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1beta1/Deployment
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1/PersistentVolumeClaim
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1beta1/ClusterRoleBinding
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1/Service
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
excited-elk-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[node1 ~]$ kubectl get all
NAME                                                             READY     STATUS    RESTARTS   AGE
pod/excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz         0/2       Pending   0          3m
pod/excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-7bjqc                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-gbcd7                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tk56q                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tkk9b                   1/1       Running   0          3m
pod/excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69          1/1       Running   0          3m
pod/excited-elk-prometheus-server-5958586794-b97xn               0/2       Pending   0          3m

NAME                                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/excited-elk-prometheus-alertmanager         ClusterIP   10.106.159.46   <none>        80/TCP     3m
service/excited-elk-prometheus-kube-state-metrics   ClusterIP   None            <none>        80/TCP     3m
service/excited-elk-prometheus-node-exporter        ClusterIP   None            <none>        9100/TCP   3m
service/excited-elk-prometheus-pushgateway          ClusterIP   10.106.88.15    <none>        9091/TCP   3m
service/excited-elk-prometheus-server               ClusterIP   10.107.15.64    <none>        80/TCP     3m
service/kubernetes                                  ClusterIP   10.96.0.1       <none>        443/TCP    37m

NAME                                                  DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/excited-elk-prometheus-node-exporter   4         4         4         4            4           <none>          3m

NAME                                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/excited-elk-prometheus-alertmanager         1         1         1            0           3m
deployment.apps/excited-elk-prometheus-kube-state-metrics   1         1         1            1           3m
deployment.apps/excited-elk-prometheus-pushgateway          1         1         1            1           3m
deployment.apps/excited-elk-prometheus-server               1         1         1            0           3m

NAME                                                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/excited-elk-prometheus-alertmanager-68f4f57c97         1         1         0         3m
replicaset.apps/excited-elk-prometheus-kube-state-metrics-858d44dfdc   1         1         1         3m
replicaset.apps/excited-elk-prometheus-pushgateway-58bfd54d6d          1         1         1         3m
replicaset.apps/excited-elk-prometheus-server-5958586794               1         1         0         3m
[node1 ~]$

Wait for a few minutes, and you can then access the Prometheus UI on port 9090 via the port-forward commands shown in the NOTES above. In the upcoming blog series, I will bring more interesting stuff around Helm on the PWK playground.
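On Play with Kubernetes, an alternative way to reach the UI from the ports bar at the top of the page is to expose the Prometheus server service as a NodePort. A sketch; the release name excited-elk matches the output above and will differ in your session:

kubectl patch svc excited-elk-prometheus-server -p '{"spec":{"type":"NodePort"}}'
kubectl get svc excited-elk-prometheus-server    # note the high port (3xxxx) assigned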


Building Helm Chart for Kubernetes Cluster running on Docker Enterprise 2.0 using Docker-app 0.6.0


Docker-app allows you to share your applications on Docker Hub directly. This tool not only makes Compose files shareable but provides a simplified approach to sharing multi-service applications (not just Docker images) directly on Docker Hub.

 

Docker-app 0.6.0 was released two weeks back. A few of the notable features included in this release:

  • Support for external files (extra configuration files): when pushing and pulling all files present in the application folder are included.
  • Render can now produce output in multiple formats (YAML or JSON).
  • split and merge now work properly when specifying an image as input and no output.
  • Command line accepts a trailing slash in the application path.

But one of the most important fixes that arrived with this release was related to Helm charts: it fixed multiple issues in Helm chart generation.


Under this blog post, I will showcase how docker-app can help you build a Helm chart for your 3-node Kubernetes cluster running on Docker Enterprise 2.0.

Tested Infrastructure

  • Platform: Google Cloud Platform
  • OS: Ubuntu 18.04
  • Machine Type: n1-standard-4 (4 vCPUs, 15 GB memory)
  • No. of Nodes: 3

I assume that you have 3 Ubuntu 18.04 instances having the above minimal configurations. Make sure all the hosts you want to manage with Docker EE have a minimum of:

  • Docker Enterprise Edition 17.06.2-ee-8 or higher (the n in the -ee-n suffix must be 8 or higher)
  • Linux kernel version 3.10 or higher
  • 4.00 GB of RAM
  • 3.00 GB of available disk space

 


Cloning the Repository

Login to the first Ubuntu 18.04 OS and run the below command to clone the repository.

git clone https://github.com/ajeetraina/docker101
cd docker101/docker-ee/ubuntu

 

Installing Docker EE

The best way to try Docker Enterprise Edition for yourself is to get the 30-day trial available at the Docker Store. Once you get your trial license, you can install Docker EE on your Linux servers.

As soon as you get the one-month trial version of Docker EE, you will be provided with a URL. Copy the section of the URL starting from sub:

https://storebits.docker.com/ee/ubuntu/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Exporting URL

export eeid=sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Installing Docker EE

I have built a script for you to get Docker EE 2.0 up and running flawlessly. Just one command and Docker EE is all set.

sh bootstrap.sh provision_dockeree

Setting up UCP

Run the below command to initiate the container which will set up UCP for you.

sudo sh bootstrap.sh provision_ucp
openusm@master01:~/test/docker101/docker-ee/ubuntu$ sudo sh bootstrap.sh provision_ucp
Unable to find image 'docker/ucp:3.0.5' locally
3.0.5: Pulling from docker/ucp
ff3a5c916c92: Pull complete 
a52011fa0ead: Pull complete 
87e35eb74a08: Pull complete 
Digest: sha256:c8a609209183561de7779e5d32abc5fd9125944c67e8daf262dcbb9f2b1e44ff
Status: Downloaded newer image for docker/ucp:3.0.5
INFO[0000] Your engine version 17.06.2-ee-16, build 9ef4f0a (4.15.0-1021-gcp) is compatible with UCP 3.0.5 (f588f8a) 
Admin Username: collabnix
Admin Password: 
Confirm Admin Password: 
INFO[0043] Pulling required images... (this may take a while) 
INFO[0043] Pulling docker/ucp-auth:3.0.5                
INFO[0049] Pulling docker/ucp-hyperkube:3.0.5           
INFO[0064] Pulling docker/ucp-etcd:3.0.5                
INFO[0070] Pulling docker/ucp-interlock-proxy:3.0.5     
INFO[0086] Pulling docker/ucp-agent:3.0.5               
INFO[0092] Pulling docker/ucp-kube-compose:3.0.5        
INFO[0097] Pulling docker/ucp-dsinfo:3.0.5              
INFO[0104] Pulling docker/ucp-cfssl:3.0.5               
INFO[0107] Pulling docker/ucp-kube-dns-sidecar:3.0.5    
INFO[0112] Pulling docker/ucp-interlock:3.0.5           
INFO[0115] Pulling docker/ucp-kube-dns:3.0.5            
INFO[0120] Pulling docker/ucp-controller:3.0.5          
INFO[0128] Pulling docker/ucp-pause:3.0.5               
INFO[0132] Pulling docker/ucp-calico-kube-controllers:3.0.5 
INFO[0136] Pulling docker/ucp-auth-store:3.0.5          
INFO[0142] Pulling docker/ucp-calico-cni:3.0.5          
INFO[0149] Pulling docker/ucp-calico-node:3.0.5         
INFO[0158] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.0.5 
INFO[0163] Pulling docker/ucp-compose:3.0.5             
INFO[0167] Pulling docker/ucp-swarm:3.0.5               
INFO[0173] Pulling docker/ucp-metrics:3.0.5             
INFO[0179] Pulling docker/ucp-interlock-extension:3.0.5 
WARN[0183] None of the hostnames we'll be using in the UCP certificates [master01 127.0.0.1 172.17.0.1 10.140.0.2] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases 

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2 
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2       
INFO[0009] Installing UCP with host address 10.140.0.2 - If this is incorrect, please specify an alternative address with the '--host-address' flag 
INFO[0009] Deploying UCP Service...                     
INFO[0068] Installation completed on master01 (node slsvy00m1khejbo5itmupk034) 
INFO[0068] UCP Instance ID: omz7lso0zpeyzk17gxubvz72r   
INFO[0068] UCP Server SSL: SHA-256 Fingerprint=24:9B:51:4E:E2:F1:CD:1B:DE:E0:86:0F:DC:E7:29:B5:1E:0E:6B:0C:BF:24:CC:27:85:91:35:A1:6A:39:37:C6 
INFO[0068] Login to UCP at https://10.140.0.2:443       
INFO[0068] Username: collabnix                          
INFO[0068] Password: (your admin password) 

Logging in Docker EE

By now, you should be able to log in to the Docker EE window using a browser. Upload the license and you should be good to see the UCP console.

Installing Kubectl

Undoubtedly, kubectl has been the favourite command-line tool for K8s users. It works great, but it’s painful because you use it to manually run a command for each resource in your Kubernetes application. This is prone to error, because we might forget to deploy one resource, or introduce a typo when writing our kubectl commands. As we add more parts to our application, the probability of these problems occurring increases.
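To make the contrast concrete, here is a sketch; the manifest file names are hypothetical:

# Without Helm: one kubectl command per resource, easy to miss one
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f configmap.yaml
# With Helm: the whole application installs and upgrades as one release
helm install stable/wordpress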

But still, here’s a bonus: execute the below script if you really want to use kubectl on the Docker EE platform.

sudo sh bootstrap.sh install_kubectl

Verify Kubectl Version

@master01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.11", GitCommit:"1df6a8381669a6c753f79cb31ca2e3d57ee7c8a3", GitTreeState:"clean", BuildDate:"2018-04-05T17:24:
03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2
018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying Docker Version

Here you go.. Docker EE 2.0 is all set up with Swarm & Kubernetes running side by side.

 docker version
Client: Docker Enterprise Edition (EE) 2.0
 Version:      17.06.2-ee-16
 API version:  1.30
 Go version:   go1.8.7
 Git commit:   9ef4f0a
 Built:        Thu Jul 26 16:41:28 2018
 OS/Arch:      linux/amd64
Server: Docker Enterprise Edition (EE) 2.0
 Engine:
  Version:      17.06.2-ee-16
  API version:  1.30 (minimum version 1.12)
  Go version:   go1.8.7
  Git commit:   9ef4f0a
  Built:        Thu Jul 26 16:40:18 2018
  OS/Arch:      linux/amd64
  Experimental: false
 Universal Control Plane:
  Version:       3.0.5
  ApiVersion:                   1.30
  Arch:                         amd64
  BuildTime:                    Thu Aug 30 17:47:03 UTC 2018
  GitCommit:                    f588f8a
  GoVersion:                    go1.9.4
  MinApiVersion:                1.20
  Os:                           linux
 Kubernetes:
  Version:      1.8+
  buildDate:                   2018-04-26T16:51:21Z
  compiler:                    gc
  gitCommit:                   8d637aedf46b9c21dde723e29c645b9f27106fa5
  gitTreeState:                clean
  gitVersion:                  v1.8.11-docker-8d637ae
  goVersion:                   go1.8.3
  major:                       1
  minor:                       8+
  platform:                    linux/amd64
 Calico:
  Version:          v3.0.8
  cni:                             v2.0.6
  kube-controllers:                v2.0.5
  node:                            v3.0.8

Verifying the Kubernetes Nodes

@master01:~/test/docker101/docker-ee/ubuntu$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
master01   Ready     master    20m       v1.8.11-docker-8d637ae

Adding Worker Nodes

To add worker nodes, go to the Add Nodes section under the Docker Enterprise UI and click on “Add a Node”. It will display a command which needs to be executed on the worker nodes. This should be good enough to build a multi-node Docker EE Swarm & Kubernetes cluster.

m@master01:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
av668en5dinpin5jpi6ro0yfs     worker01            Ready               Active              
k4grcnyl6vbf0z17bh67cz9l5     worker02            Ready               Active              
slsvy00m1khejbo5itmupk034 *   master01            Ready               Active              Leader

@master01:~$ kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
master01   Ready      master    1h        v1.8.11-docker-8d637ae
worker01   NotReady   <none>    28s       v1.8.11-docker-8d637ae
worker02   Ready      <none>    3m        v1.8.11-docker-8d637ae
openusm@master01:~$ 

Downloading Client Bundle

In order to manage services using the Docker CLI, one needs to install the client bundle, and Docker Inc. provides an easy way to install it. I put it under the script “install-client-bundle”. All you need is to supply the correct username and password plus the UCP URL to make it work.
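For reference, fetching a client bundle by hand against the UCP API looks roughly like this. A sketch; substitute your own UCP URL and credentials, and note it needs curl, jq and unzip installed:

# Obtain an auth token, then download and source the bundle
AUTHTOKEN=$(curl -sk -d '{"username":"collabnix","password":"<password>"}' https://10.140.0.2/auth/login | jq -r .auth_token)
curl -sk -H "Authorization: Bearer $AUTHTOKEN" https://10.140.0.2/api/clientbundle -o bundle.zip
unzip bundle.zip
eval "$(<env.sh)"

Once env.sh is sourced, docker and kubectl commands are routed through UCP with your user’s permissions.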

Cloning the docker-app Repository

@master01:~/$ git clone https://github.com/ajeetraina/app
@master01:~/$ cd app
@master01:~/app$ sudo sh install-dockerapp

Verifying Docker-app version

openusm@master01:~/app$ docker-app version
Version:      v0.6.0
Git commit:   9f9c6680
Built:        Thu Oct  4 13:30:33 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

 

Create WordPress Helm Package

Change directory to app/examples/wordpress and run the below command to create the Helm chart.

openusm@master01:~/app/examples/wordpress$ docker-app helm --stack-version=v1beta1

 

docker-app helm wordpress will output a Helm package in the ./wordpress.helm folder. The --compose-file (or -c), --set (or -e) and --settings-files (or -s) flags apply the same way they do for the render subcommand.
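With Helm installed and Tiller configured (covered in the next post), the generated package can be inspected and deployed like any other chart. A sketch, assuming the default output location; the exact file layout produced by docker-app 0.6.0 may differ slightly:

ls -R wordpress.helm          # typically a Chart.yaml plus a templates/ directory
helm install ./wordpress.helm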

Deploy WordPress Application on Kubernetes Cluster

We are now all set to deploy the WordPress application on the Kubernetes cluster using the docker-app deploy command.

openusm@master01:~/app/examples/wordpress$ docker-app deploy -o kubernetes
top-level network "overlay" is ignored
service "mysql": network "overlay" is ignored
service "wordpress": network "overlay" is ignored
service "wordpress": depends_on are ignored
Waiting for the stack to be stable and running...
wordpress: Ready                [pod status: 1/1 ready, 0/1 pending, 0/1 failed]

 

 

 

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, check out https://trial.docker.com/.

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter: @ajeetsraina

If you are looking to contribute, join me on the Docker Community Slack channel.

Installing Helm to deploy Kubernetes Applications on Docker Enterprise 2.0 Made Easy


Let’s talk about RBAC under Docker EE 2.0…

Kubernetes RBAC (Role-Based Access Control) is a fundamental part of Kubernetes security best practices, along with rolling out TLS certificates / PKI authentication for connecting to the Kubernetes API server and between its components. Kubernetes RBAC is essentially an authorization and access control specification where you define the actions (GET, UPDATE, DELETE, etc.) that Kubernetes subjects (i.e. human users, software, kubelets) are allowed to perform over Kubernetes entities (i.e. pods, secrets, nodes).

 

 

 

Early this year, Docker EE 2.0 introduced an enhanced RBAC solution for the first time that provided flexible and granular access controls across multiple teams and users. Docker EE leverages the Kubernetes webhook authentication model. This feature enables the validation of all requests by an outside source. With Docker EE we use the control plane’s RBAC controller, eNZi. Each Kubernetes request, whether issued via the CLI or the GUI, is validated against Docker EE’s authN/authZ database, and then rejected or accepted as appropriate.

 

With Docker EE 2.0, UCP now includes an upstream distribution of Kubernetes. From a security point of view this is the best of both worlds. Out of the box Docker EE 2.0 provides user authentication and RBAC on top of Kubernetes. To ensure the Kubernetes orchestrator follows all the security best practices UCP utilizes TLS for the Kubernetes API port. When combined with UCP’s auth model, this allows for the same client bundle to talk to the Swarm or Kubernetes API.

Early this year, I wrote a blog post which deep-dives into the Docker EE 2.0 architecture. Check it out if you are new and want to get into the nitty-gritty of the Docker Enterprise product.

Under the Hood: Demystifying Docker Enterprise Edition 2.0 Architecture

Under this blog post, I will show you an insanely easy way to deploy the Helm package manager on a 3-node Kubernetes cluster running Docker Enterprise 2.0.

Tested Infrastructure

The tested infrastructure and cluster setup for this post are identical to the previous one. Follow the same steps shown above, from “Tested Infrastructure” through “Downloading Client Bundle”: clone the ajeetraina/docker101 repository, install Docker EE using your trial subscription URL, provision UCP with bootstrap.sh, upload the license, install kubectl, add the worker nodes, and download the client bundle.

Installing Helm

openusm@master01:~$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7230  100  7230    0     0  17173      0 --:--:-- --:--:-- --:--:-- 17132
openusm@master01:~$ chmod u+x install-helm.sh
openusm@master01:~$ ./install-helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
openusm@master01:~$ helm init
Creating /home/openusm/.helm 
Creating /home/openusm/.helm/repository 
Creating /home/openusm/.helm/repository/cache 
Creating /home/openusm/.helm/repository/local 
Creating /home/openusm/.helm/plugins 
Creating /home/openusm/.helm/starters 
Creating /home/openusm/.helm/cache/archive 
Creating /home/openusm/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/openusm/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
openusm@master01:~$

Verifying Helm Version

helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Installing MySQL using Helm

helm install --name mysql stable/mysql
Error: release mysql failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default": access denied

You will encounter the above error message, and it is expected. This is caused by Tiller not having the correct service account / permissions to install applications into the cluster.

Creating Service Account for Tiller

To create a service account for tiller and apply the correct grants, follow these steps:

Create the tiller service account:

openusm@master01:~$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount "tiller" created
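On a plain kubeadm cluster, this grant would be a single kubectl one-liner, shown below for contrast (it is the same binding used in the Play with Kubernetes lab above). UCP 3.0, however, manages authorization through its own RBAC controller (eNZi), so the UI-driven grant described next is the way to go here:

kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller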

 

In the UCP UI, navigate to the User Management > Grants menu and click the Create button. In the Subject area, choose Service Account, select the kube-system Namespace and the tiller Service Account, then hit Next:

 

In the Role area, select Full Control. Then hit Next. In the Resource area, select Namespaces, flip the switch to enable Apply grant to all existing and new namespaces, then hit Create:

 

 

Verify the final grant. It should look similar to what is shown below:

 

At this stage, if you have tiller installed in the cluster already, you will need to patch the tiller deployment to use the tiller service account just created:

$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

So before we can use Helm with a Kubernetes cluster, you need to install Tiller on it. It’s as easy as running the below command:

openusm@master01:~$ helm init --service-account tiller
$HELM_HOME has been configured at /home/openusm/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

The above command installs Tiller (the Helm server-side component) onto your Kubernetes cluster and sets up local configuration in $HELM_HOME (default ~/.helm/). As with the rest of the Helm commands, helm init discovers Kubernetes clusters by reading $KUBECONFIG (default ~/.kube/config) and using the default context.
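Before installing charts, a quick sanity check that the client can reach the upgraded Tiller and that its pod is running; the app=helm,name=tiller labels are the Helm v2 defaults:

helm version
kubectl --namespace kube-system get pods -l app=helm,name=tiller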

We are good to go ahead and install a chart via helm install:

openusm@master01:~$ helm install --name mysql stable/mysql
NAME:   mysql
LAST DEPLOYED: Thu Oct 11 13:37:52 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
mysql-694696f6d5-w4j5n  0/1    Pending  0         1s
==> v1/Secret
NAME   AGE
mysql  1s
==> v1/ConfigMap
mysql-test  1s
==> v1/PersistentVolumeClaim
mysql  1s
==> v1/Service
mysql  1s
==> v1beta1/Deployment
mysql  1s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local
To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql -p
To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql 3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
    

In my next blog post, I will showcase how to build an Istio mesh on Docker Enterprise 2.0. Stay tuned!

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, check out https://trial.docker.com/.

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter: @ajeetsraina

If you are looking to contribute, join me on the Docker Community Slack channel.