Building Helm Chart for Kubernetes Cluster running on Docker Enterprise 2.0 using Docker-app 0.6.0

Estimated Reading Time: 8 minutes

 

Docker-app allows you to share your applications on Docker Hub directly. The tool not only makes Compose files shareable but also gives us a simplified way to share a multi-service application (not just a Docker image) directly on Docker Hub.

 

Docker-app 0.6.0 was released two weeks back. Some of the notable features and fixes in this release include:

  • Support for external files (extra configuration files): when pushing and pulling all files present in the application folder are included.
  • Render can now produce output in multiple formats (YAML or JSON).
  • split and merge now work properly when specifying an image as input and no output.
  • Command line accepts a trailing slash in the application path.

But one of the most important fixes that arrived with this release was related to Helm charts: it resolved multiple issues in Helm chart generation.

How I built Elastic Stack for Docker Swarm using Docker Application Packages(docker-app)

In this blog post, I will showcase how docker-app can help you build a Helm chart for your 3-node Kubernetes cluster running on Docker Enterprise 2.0.

Tested Infrastructure

Platform: Google Cloud Platform
OS Instance: Ubuntu 18.04
Machine Type: n1-standard-4 (4 vCPUs, 15 GB memory)
No. of Nodes: 3

I assume that you have 3 Ubuntu 18.04 instances having the above minimal configurations. Make sure all the hosts you want to manage with Docker EE have a minimum of:

  • Docker Enterprise Edition 17.06.2-ee-8 or higher (the value of n in the -ee-<n> suffix must be 8 or higher)
  • Linux kernel version 3.10 or higher
  • 4.00 GB of RAM
  • 3.00 GB of available disk space

 


Cloning the Repository

Log in to the first Ubuntu 18.04 instance and run the commands below to clone the repository.

git clone https://github.com/ajeetraina/docker101
cd docker101/docker-ee/ubuntu

 

Installing Docker EE

The best way to try Docker Enterprise Edition for yourself is to get the 30-day trial available at the Docker Store. Once you get your trial license, you can install Docker EE on your Linux servers.

As soon as you get the 1-month trial version of Docker EE, you will be provided with a URL. Copy the section of the URL starting with sub:

https://storebits.docker.com/ee/ubuntu/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Exporting URL

export eeid=sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Installing Docker EE

I have built a script for you to get Docker EE 2.0 up and running flawlessly. Just one command and Docker EE is all set.

sh bootstrap.sh provision_dockeree
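Under the hood, provisioning Docker EE on Ubuntu follows Docker's standard EE installation flow. The snippet below is only a rough sketch of what such a script typically does, reusing the eeid value exported earlier (the actual bootstrap.sh in the repository may differ):

export DOCKER_EE_URL="https://storebits.docker.com/ee/ubuntu/$eeid"
export DOCKER_EE_VERSION=17.06
# Add the Docker EE package repository for this subscription and install the engine
curl -fsSL "$DOCKER_EE_URL/ubuntu/gpg" | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] $DOCKER_EE_URL/ubuntu $(lsb_release -cs) stable-$DOCKER_EE_VERSION"
sudo apt-get update && sudo apt-get install -y docker-ee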

Setting up UCP

Run the command below to launch the container that sets up UCP for you.

sudo sh bootstrap.sh provision_ucp
openusm@master01:~/test/docker101/docker-ee/ubuntu$ sudo sh bootstrap.sh provision_ucp
Unable to find image 'docker/ucp:3.0.5' locally
3.0.5: Pulling from docker/ucp
ff3a5c916c92: Pull complete 
a52011fa0ead: Pull complete 
87e35eb74a08: Pull complete 
Digest: sha256:c8a609209183561de7779e5d32abc5fd9125944c67e8daf262dcbb9f2b1e44ff
Status: Downloaded newer image for docker/ucp:3.0.5
INFO[0000] Your engine version 17.06.2-ee-16, build 9ef4f0a (4.15.0-1021-gcp) is compatible with UCP 3.0.5 (f588f8a) 
Admin Username: collabnix
Admin Password: 
Confirm Admin Password: 
INFO[0043] Pulling required images... (this may take a while) 
INFO[0043] Pulling docker/ucp-auth:3.0.5                
INFO[0049] Pulling docker/ucp-hyperkube:3.0.5           
INFO[0064] Pulling docker/ucp-etcd:3.0.5                
INFO[0070] Pulling docker/ucp-interlock-proxy:3.0.5     
INFO[0086] Pulling docker/ucp-agent:3.0.5               
INFO[0092] Pulling docker/ucp-kube-compose:3.0.5        
INFO[0097] Pulling docker/ucp-dsinfo:3.0.5              
INFO[0104] Pulling docker/ucp-cfssl:3.0.5               
INFO[0107] Pulling docker/ucp-kube-dns-sidecar:3.0.5    
INFO[0112] Pulling docker/ucp-interlock:3.0.5           
INFO[0115] Pulling docker/ucp-kube-dns:3.0.5            
INFO[0120] Pulling docker/ucp-controller:3.0.5          
INFO[0128] Pulling docker/ucp-pause:3.0.5               
INFO[0132] Pulling docker/ucp-calico-kube-controllers:3.0.5 
INFO[0136] Pulling docker/ucp-auth-store:3.0.5          
INFO[0142] Pulling docker/ucp-calico-cni:3.0.5          
INFO[0149] Pulling docker/ucp-calico-node:3.0.5         
INFO[0158] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.0.5 
INFO[0163] Pulling docker/ucp-compose:3.0.5             
INFO[0167] Pulling docker/ucp-swarm:3.0.5               
INFO[0173] Pulling docker/ucp-metrics:3.0.5             
INFO[0179] Pulling docker/ucp-interlock-extension:3.0.5 
WARN[0183] None of the hostnames we'll be using in the UCP certificates [master01 127.0.0.1 172.17.0.1 10.140.0.2] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases 

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2 
INFO[0009] Installing UCP with host address 10.140.0.2 - If this is incorrect, please specify an alternative address with the '--host-address' flag 
INFO[0009] Deploying UCP Service...                     
INFO[0068] Installation completed on master01 (node slsvy00m1khejbo5itmupk034) 
INFO[0068] UCP Instance ID: omz7lso0zpeyzk17gxubvz72r   
INFO[0068] UCP Server SSL: SHA-256 Fingerprint=24:9B:51:4E:E2:F1:CD:1B:DE:E0:86:0F:DC:E7:29:B5:1E:0E:6B:0C:BF:24:CC:27:85:91:35:A1:6A:39:37:C6 
INFO[0068] Login to UCP at https://10.140.0.2:443       
INFO[0068] Username: collabnix                          
INFO[0068] Password: (your admin password) 

Logging in Docker EE

By now, you should be able to log in to the Docker EE window using a browser. Upload the license and you should see the UCP console.

Installing Kubectl

Undoubtedly, kubectl has been the favourite command-line tool for K8s users. It works great, but it can be painful because you use it to manually run a command for each resource in your Kubernetes application. This is prone to error, because we might forget to deploy one resource, or introduce a typo when writing our kubectl commands. As we add more parts to our application, the probability of these problems occurring increases.

Still, here's a bonus: execute the script below if you really want to use kubectl on the Docker EE platform.

sudo sh bootstrap.sh install_kubectl
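For reference, installing kubectl manually boils down to downloading the binary that matches the Kubernetes version shipped with UCP (v1.8.11 here) and putting it on your PATH. A minimal sketch of those steps (not the exact script contents):

# Download the kubectl binary matching the UCP Kubernetes version and install it
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.11/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl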

Verify Kubectl Version

@master01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.11", GitCommit:"1df6a8381669a6c753f79cb31ca2e3d57ee7c8a3", GitTreeState:"clean", BuildDate:"2018-04-05T17:24:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying Docker Version

Here you go: Docker EE 2.0 is all set up with Swarm and Kubernetes running side by side.

 docker version
Client: Docker Enterprise Edition (EE) 2.0
 Version:      17.06.2-ee-16
 API version:  1.30
 Go version:   go1.8.7
 Git commit:   9ef4f0a
 Built:        Thu Jul 26 16:41:28 2018
 OS/Arch:      linux/amd64
Server: Docker Enterprise Edition (EE) 2.0
 Engine:
  Version:      17.06.2-ee-16
  API version:  1.30 (minimum version 1.12)
  Go version:   go1.8.7
  Git commit:   9ef4f0a
  Built:        Thu Jul 26 16:40:18 2018
  OS/Arch:      linux/amd64
  Experimental: false
 Universal Control Plane:
  Version:       3.0.5
  ApiVersion:                   1.30
  Arch:                         amd64
  BuildTime:                    Thu Aug 30 17:47:03 UTC 2018
  GitCommit:                    f588f8a
  GoVersion:                    go1.9.4
  MinApiVersion:                1.20
  Os:                           linux
 Kubernetes:
  Version:      1.8+
  buildDate:                   2018-04-26T16:51:21Z
  compiler:                    gc
  gitCommit:                   8d637aedf46b9c21dde723e29c645b9f27106fa5
  gitTreeState:                clean
  gitVersion:                  v1.8.11-docker-8d637ae
  goVersion:                   go1.8.3
  major:                       1
  minor:                       8+
  platform:                    linux/amd64
 Calico:
  Version:          v3.0.8
  cni:                             v2.0.6
  kube-controllers:                v2.0.5
  node:                            v3.0.8

Verifying the Kubernetes Nodes

@master01:~/test/docker101/docker-ee/ubuntu$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
master01   Ready     master    20m       v1.8.11-docker-8d637ae

Adding Worker Nodes

To add worker nodes, go to the Add Nodes section under the Docker Enterprise UI and click on “Add a Node”. It will display a command which needs to be executed on the worker nodes. This is all you need to build a multi-node Docker EE Swarm & Kubernetes cluster.

m@master01:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
av668en5dinpin5jpi6ro0yfs     worker01            Ready               Active              
k4grcnyl6vbf0z17bh67cz9l5     worker02            Ready               Active              
slsvy00m1khejbo5itmupk034 *   master01            Ready               Active              Leader

@master01:~$ kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
master01   Ready      master    1h        v1.8.11-docker-8d637ae
worker01   NotReady   <none>    28s       v1.8.11-docker-8d637ae
worker02   Ready      <none>    3m        v1.8.11-docker-8d637ae
openusm@master01:~$ 

Downloading Client Bundle

In order to manage services using the Docker CLI, one needs to install the client bundle, and Docker Inc. provides an easy way to install it. I put it under the script “install-client-bundle”. All you need is to supply the correct username and password plus the UCP URL to make it work.
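If you prefer the command line over the UCP UI, the client bundle can also be fetched through the UCP API. A minimal sketch, assuming jq is installed and using placeholder credentials (the install-client-bundle script wraps similar steps):

UCP_URL=https://10.140.0.2
# Obtain an auth token, then download and load the client bundle
AUTHTOKEN=$(curl -sk -d '{"username":"collabnix","password":"<your-password>"}' $UCP_URL/auth/login | jq -r .auth_token)
curl -sk -H "Authorization: Bearer $AUTHTOKEN" $UCP_URL/api/clientbundle -o bundle.zip
unzip bundle.zip -d ucp-bundle
cd ucp-bundle && eval "$(<env.sh)"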

Cloning the docker-app Repository

@master01:~/$ git clone https://github.com/ajeetraina/app
@master01:~/$ cd app
@master01:~/app$ sudo sh install-dockerapp

Verifying Docker-app version

openusm@master01:~/app$ docker-app version
Version:      v0.6.0
Git commit:   9f9c6680
Built:        Thu Oct  4 13:30:33 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

 

Create WordPress Helm Package

Change directory to app/examples/wordpress and run the command below to create the Helm chart.

openusm@master01:~/app/examples/wordpress$ docker-app helm --stack-version=v1beta1

 

docker-app helm wordpress will output a Helm package in the ./wordpress.helm folder. The --compose-file (or -c), --set (or -e) and --settings-files (or -s) flags apply the same way they do for the render subcommand.
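For example, to generate the chart with a specific settings file and then look at what was produced, you could run something along these lines (prod-settings.yml is a hypothetical settings file used purely for illustration):

docker-app helm wordpress --stack-version=v1beta1 --settings-files prod-settings.yml
tree wordpress.helm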

Deploy WordPress Application on Kubernetes Cluster

We are now all set to deploy the WordPress application to the Kubernetes cluster using the docker-app deploy command.

openusm@master01:~/app/examples/wordpress$ docker-app deploy -o kubernetes
top-level network "overlay" is ignored
service "mysql": network "overlay" is ignored
service "wordpress": network "overlay" is ignored
service "wordpress": depends_on are ignored
Waiting for the stack to be stable and running...
wordpress: Ready                [pod status: 1/1 ready, 0/1 pending, 0/1 failed]

 

 

 

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, head over to https://trial.docker.com/.

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

If you are looking out for contribution, join me at Docker Community Slack Channel.

Installing Helm to deploy Kubernetes Applications on Docker Enterprise 2.0 Made Easy

Estimated Reading Time: 11 minutes

 

 

Let’s talk about RBAC under Docker EE 2.0…

Kubernetes RBAC (Role-Based Access Control) is a fundamental part of Kubernetes security best practices, along with rolling out TLS certificates / PKI authentication for connecting to the Kubernetes API server and between its components. Kubernetes RBAC is essentially an authorization and access control specification where you define the actions (GET, UPDATE, DELETE, etc.) that Kubernetes subjects (i.e. human users, software, kubelets) are allowed to perform over Kubernetes entities (i.e. pods, secrets, nodes).
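As a quick illustration of the model, a Role grants verbs on resources within a namespace and a RoleBinding ties that Role to a subject. A minimal sketch using kubectl (the names pod-reader, read-pods and jane are made up for the example):

# Allow reading pods in the default namespace, and bind that role to user "jane"
kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace=default
kubectl create rolebinding read-pods --role=pod-reader --user=jane --namespace=default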

 

 

 

Early this year, Docker EE 2.0 introduced an enhanced RBAC solution for the first time that provided flexible and granular access controls across multiple teams and users. Docker EE leverages the Kubernetes webhook authentication model. This feature enables the validation of all requests by an outside source. With Docker EE we use the control plane’s RBAC controller, eNZi. Each Kubernetes request, whether issued via the CLI or the GUI, is validated against Docker EE’s authN/authZ database, and then rejected or accepted as appropriate.

 

With Docker EE 2.0, UCP now includes an upstream distribution of Kubernetes. From a security point of view this is the best of both worlds. Out of the box Docker EE 2.0 provides user authentication and RBAC on top of Kubernetes. To ensure the Kubernetes orchestrator follows all the security best practices UCP utilizes TLS for the Kubernetes API port. When combined with UCP’s auth model, this allows for the same client bundle to talk to the Swarm or Kubernetes API.

Early this year, I wrote a blog post which deep-dive into Docker EE 2.0 Architecture. Check it out if you are new and want to get into nitty-gritty of Docker Enterprise product.

Under the Hood: Demystifying Docker Enterprise Edition 2.0 Architecture

In this blog post, I will show you an insanely easy way to deploy the Helm package manager on a 3-node Kubernetes cluster running Docker Enterprise 2.0.

Tested Infrastructure

Platform: Google Cloud Platform
OS Instance: Ubuntu 18.04
Machine Type: n1-standard-4 (4 vCPUs, 15 GB memory)
No. of Nodes: 3

I assume that you have 3 Ubuntu 18.04 instances having the above minimal configurations. Make sure all the hosts you want to manage with Docker EE have a minimum of:

  • Docker Enterprise Edition 17.06.2-ee-8 or higher (the value of n in the -ee-<n> suffix must be 8 or higher)
  • Linux kernel version 3.10 or higher
  • 4.00 GB of RAM
  • 3.00 GB of available disk space

 


Cloning the Repository

Log in to the first Ubuntu 18.04 instance and run the commands below to clone the repository.

git clone https://github.com/ajeetraina/docker101
cd docker101/docker-ee/ubuntu

 

Installing Docker EE

The best way to try Docker Enterprise Edition for yourself is to get the 30-day trial available at the Docker Store. Once you get your trial license, you can install Docker EE on your Linux servers.

As soon as you get the 1-month trial version of Docker EE, you will be provided with a URL. Copy the section of the URL starting with sub:

https://storebits.docker.com/ee/ubuntu/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Exporting URL

export eeid=sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Installing Docker EE

I have built a script for you to get Docker EE 2.0 up and running flawlessly. Just one command and Docker EE is all set.

sh bootstrap.sh provision_dockeree

Setting up UCP

Run the command below to launch the container that sets up UCP for you.

sudo sh bootstrap.sh provision_ucp
openusm@master01:~/test/docker101/docker-ee/ubuntu$ sudo sh bootstrap.sh provision_ucp
Unable to find image 'docker/ucp:3.0.5' locally
3.0.5: Pulling from docker/ucp
ff3a5c916c92: Pull complete 
a52011fa0ead: Pull complete 
87e35eb74a08: Pull complete 
Digest: sha256:c8a609209183561de7779e5d32abc5fd9125944c67e8daf262dcbb9f2b1e44ff
Status: Downloaded newer image for docker/ucp:3.0.5
INFO[0000] Your engine version 17.06.2-ee-16, build 9ef4f0a (4.15.0-1021-gcp) is compatible with UCP 3.0.5 (f588f8a) 
Admin Username: collabnix
Admin Password: 
Confirm Admin Password: 
INFO[0043] Pulling required images... (this may take a while) 
INFO[0043] Pulling docker/ucp-auth:3.0.5                
INFO[0049] Pulling docker/ucp-hyperkube:3.0.5           
INFO[0064] Pulling docker/ucp-etcd:3.0.5                
INFO[0070] Pulling docker/ucp-interlock-proxy:3.0.5     
INFO[0086] Pulling docker/ucp-agent:3.0.5               
INFO[0092] Pulling docker/ucp-kube-compose:3.0.5        
INFO[0097] Pulling docker/ucp-dsinfo:3.0.5              
INFO[0104] Pulling docker/ucp-cfssl:3.0.5               
INFO[0107] Pulling docker/ucp-kube-dns-sidecar:3.0.5    
INFO[0112] Pulling docker/ucp-interlock:3.0.5           
INFO[0115] Pulling docker/ucp-kube-dns:3.0.5            
INFO[0120] Pulling docker/ucp-controller:3.0.5          
INFO[0128] Pulling docker/ucp-pause:3.0.5               
INFO[0132] Pulling docker/ucp-calico-kube-controllers:3.0.5 
INFO[0136] Pulling docker/ucp-auth-store:3.0.5          
INFO[0142] Pulling docker/ucp-calico-cni:3.0.5          
INFO[0149] Pulling docker/ucp-calico-node:3.0.5         
INFO[0158] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.0.5 
INFO[0163] Pulling docker/ucp-compose:3.0.5             
INFO[0167] Pulling docker/ucp-swarm:3.0.5               
INFO[0173] Pulling docker/ucp-metrics:3.0.5             
INFO[0179] Pulling docker/ucp-interlock-extension:3.0.5 
WARN[0183] None of the hostnames we'll be using in the UCP certificates [master01 127.0.0.1 172.17.0.1 10.140.0.2] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases 

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2 
INFO[0009] Installing UCP with host address 10.140.0.2 - If this is incorrect, please specify an alternative address with the '--host-address' flag 
INFO[0009] Deploying UCP Service...                     
INFO[0068] Installation completed on master01 (node slsvy00m1khejbo5itmupk034) 
INFO[0068] UCP Instance ID: omz7lso0zpeyzk17gxubvz72r   
INFO[0068] UCP Server SSL: SHA-256 Fingerprint=24:9B:51:4E:E2:F1:CD:1B:DE:E0:86:0F:DC:E7:29:B5:1E:0E:6B:0C:BF:24:CC:27:85:91:35:A1:6A:39:37:C6 
INFO[0068] Login to UCP at https://10.140.0.2:443       
INFO[0068] Username: collabnix                          
INFO[0068] Password: (your admin password) 

Logging in Docker EE

By now, you should be able to log in to the Docker EE window using a browser. Upload the license and you should see the UCP console.

Installing Kubectl

Undoubtedly, kubectl has been the favourite command-line tool for K8s users. It works great, but it can be painful because you use it to manually run a command for each resource in your Kubernetes application. This is prone to error, because we might forget to deploy one resource, or introduce a typo when writing our kubectl commands. As we add more parts to our application, the probability of these problems occurring increases.

Still, here's a bonus: execute the script below if you really want to use kubectl on the Docker EE platform.

sudo sh bootstrap.sh install_kubectl

Verify Kubectl Version

@master01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.11", GitCommit:"1df6a8381669a6c753f79cb31ca2e3d57ee7c8a3", GitTreeState:"clean", BuildDate:"2018-04-05T17:24:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying Docker Version

Here you go: Docker EE 2.0 is all set up with Swarm and Kubernetes running side by side.

 docker version
Client: Docker Enterprise Edition (EE) 2.0
 Version:      17.06.2-ee-16
 API version:  1.30
 Go version:   go1.8.7
 Git commit:   9ef4f0a
 Built:        Thu Jul 26 16:41:28 2018
 OS/Arch:      linux/amd64
Server: Docker Enterprise Edition (EE) 2.0
 Engine:
  Version:      17.06.2-ee-16
  API version:  1.30 (minimum version 1.12)
  Go version:   go1.8.7
  Git commit:   9ef4f0a
  Built:        Thu Jul 26 16:40:18 2018
  OS/Arch:      linux/amd64
  Experimental: false
 Universal Control Plane:
  Version:       3.0.5
  ApiVersion:                   1.30
  Arch:                         amd64
  BuildTime:                    Thu Aug 30 17:47:03 UTC 2018
  GitCommit:                    f588f8a
  GoVersion:                    go1.9.4
  MinApiVersion:                1.20
  Os:                           linux
 Kubernetes:
  Version:      1.8+
  buildDate:                   2018-04-26T16:51:21Z
  compiler:                    gc
  gitCommit:                   8d637aedf46b9c21dde723e29c645b9f27106fa5
  gitTreeState:                clean
  gitVersion:                  v1.8.11-docker-8d637ae
  goVersion:                   go1.8.3
  major:                       1
  minor:                       8+
  platform:                    linux/amd64
 Calico:
  Version:          v3.0.8
  cni:                             v2.0.6
  kube-controllers:                v2.0.5
  node:                            v3.0.8

Verifying the Kubernetes Nodes

@master01:~/test/docker101/docker-ee/ubuntu$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
master01   Ready     master    20m       v1.8.11-docker-8d637ae

Adding Worker Nodes

To add worker nodes, go to the Add Nodes section under the Docker Enterprise UI and click on “Add a Node”. It will display a command which needs to be executed on the worker nodes. This is all you need to build a multi-node Docker EE Swarm & Kubernetes cluster.

m@master01:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
av668en5dinpin5jpi6ro0yfs     worker01            Ready               Active              
k4grcnyl6vbf0z17bh67cz9l5     worker02            Ready               Active              
slsvy00m1khejbo5itmupk034 *   master01            Ready               Active              Leader

@master01:~$ kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
master01   Ready      master    1h        v1.8.11-docker-8d637ae
worker01   NotReady   <none>    28s       v1.8.11-docker-8d637ae
worker02   Ready      <none>    3m        v1.8.11-docker-8d637ae
openusm@master01:~$ 

Downloading Client Bundle

In order to manage services using the Docker CLI, one needs to install the client bundle, and Docker Inc. provides an easy way to install it. I put it under the script “install-client-bundle”. All you need is to supply the correct username and password plus the UCP URL to make it work.

 

Installing Helm

openusm@master01:~$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7230  100  7230    0     0  17173      0 --:--:-- --:--:-- --:--:-- 17132
openusm@master01:~$ chmod u+x install-helm.sh
openusm@master01:~$ ./install-helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
openusm@master01:~$ helm init
Creating /home/openusm/.helm 
Creating /home/openusm/.helm/repository 
Creating /home/openusm/.helm/repository/cache 
Creating /home/openusm/.helm/repository/local 
Creating /home/openusm/.helm/plugins 
Creating /home/openusm/.helm/starters 
Creating /home/openusm/.helm/cache/archive 
Creating /home/openusm/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/openusm/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
openusm@master01:~$

Verifying Helm Version

helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Installing MySQL using Helm

helm install --name mysql stable/mysql
Error: release mysql failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default": access denied

You will encounter the above error message and it is expected. It is caused by Tiller not having the correct service account / permissions to install applications into the cluster.

Creating Service Account for Tiller

To create a service account for tiller and apply the correct grants, follow these steps:

Create the tiller service account:

openusm@master01:~$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount "tiller" created

 

In the UCP UI, navigate to the User Management > Grants menu and click the Create button. In the Subject area, choose Service Account, select the kube-system Namespace and the tiller Service Account, then hit Next:

 

In the Role area, select Full Control. Then hit Next. In the Resource area, select Namespaces, flip the switch to enable Apply grant to all existing and new namespaces, then hit Create:

 

 

Verify the final grant. It should look similar to what is shown below:

 

At this stage, if you have tiller installed in the cluster already, you will need to patch the tiller deployment to use the tiller service account just created:

$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Before we can use Helm with a Kubernetes cluster, you need to install Tiller on it. It's as easy as running the command below:

openusm@master01:~$ helm init --service-account tiller
$HELM_HOME has been configured at /home/openusm/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

The above command installs Tiller (the Helm server-side component) onto your Kubernetes cluster and sets up local configuration in $HELM_HOME (default ~/.helm/). As with the rest of the Helm commands, helm init discovers Kubernetes clusters by reading $KUBECONFIG (default ~/.kube/config) and using the default context.

We are good to go ahead and install a chart via helm install:

openusm@master01:~$ helm install --name mysql stable/mysql
NAME:   mysql
LAST DEPLOYED: Thu Oct 11 13:37:52 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
mysql-694696f6d5-w4j5n  0/1    Pending  0         1s
==> v1/Secret
NAME   AGE
mysql  1s
==> v1/ConfigMap
mysql-test  1s
==> v1/PersistentVolumeClaim
mysql  1s
==> v1/Service
mysql  1s
==> v1beta1/Deployment
mysql  1s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local
To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql -p
To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql 3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
    

In my next blog post, I will showcase how to build Istio Mesh on Docker Enterprise 2.0. Stay tuned!

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, head over to https://trial.docker.com/.

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

If you are looking out for contribution, join me at Docker Community Slack Channel.

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Estimated Reading Time: 9 minutes

 

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go and is a fully grown platform which provides APIs that let it integrate into any logging, telemetry or policy system. The project adds only a very tiny overhead to your system. It is hosted on GitHub under this link. Istio's diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Istio is composed of these components:

  • Envoy – Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Mixer – Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.
  • Pilot – A component responsible for configuring the proxies at runtime.
  • Citadel – A centralized component responsible for certificate issuance and rotation.
  • Node Agent – A per-node component responsible for certificate issuance and rotation.
  • Galley – Central component for validating, ingesting, aggregating, transforming and distributing config within Istio.

What benefits does Istio bring?

Figure: The sidecar incepts all the network traffic

  • Istio lets you connect, secure, control, and observe services.
  • It helps to reduce the complexity of service deployments and eases the strain on your development teams.
  • It provides developers and DevOps fine-grained visibility and control over traffic without requiring any changes to application code.
  • It provides CIOs with the necessary tools needed to help enforce security and compliance requirements across the enterprise.
  • It provides behavioral insights & operational control over the service mesh as a whole.
  • Istio makes it easy to create a network of deployed services with automatic Load Balancing for HTTP, gRPC, Web Socket & TCP Traffic.
  • It provides fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • It enables a pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Istio provides automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • It provides secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

Istio currently supports Kubernetes. In this blog post, I will showcase how one can bring up Istio on the Play with Kubernetes platform.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.
Click on the Login button to authenticate with Docker Hub or GitHub ID.

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes Cluster node. It automatically names it as “node1”. Each instance has Docker Community Edition (CE) and Kubeadm already pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

You can bootstrap the Kubernetes cluster by initializing the master (node1) node with the below script. Copy this script content into bootstrap.sh file and make it executable using “chmod +x bootstrap.sh” command.
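A rough sketch of what such a bootstrap script typically looks like on Play with Kubernetes (initialize the control plane with kubeadm, configure kubectl, and install a pod network, Weave Net in this sketch; the exact script may differ):

# Initialize the master and point kubectl at the new cluster
kubeadm init --apiserver-advertise-address $(hostname -i)
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
# Install a pod network add-on so nodes become Ready
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"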

When you execute this script, as part of the initialization, kubeadm writes the several configuration files needed, sets up RBAC and deploys the Kubernetes control plane components (like kube-apiserver, kube-dns, kube-proxy, etcd, etc.). The control plane components are deployed as Docker containers.

Copy the above kubeadm join token command and save it for the next step. This command will be used to join other nodes to your cluster.

Adding Worker Nodes

Click on “Add New Node” to add a new worker node.

Checking the Cluster Status

 

 

Verifying the running Pods


Installing Istio 1.0.0

Istio is deployed in a separate Kubernetes namespace, istio-system. We will verify this later. For now, copy the below content into a file called install_istio.sh and save it. Make it executable and run it to install Istio and the related tools.
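A rough sketch of what such a script might look like for Istio 1.0.0 (download the release, put istioctl on the PATH, and apply the demo manifest; the exact script may differ):

# Download and unpack the Istio 1.0.0 release
curl -L https://github.com/istio/istio/releases/download/1.0.0/istio-1.0.0-linux.tar.gz | tar xz
cd istio-1.0.0
export PATH=$PWD/bin:$PATH
# Install Istio along with the demo add-ons (Prometheus, Grafana, Jaeger, ServiceGraph, Zipkin)
kubectl apply -f install/kubernetes/istio-demo.yaml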

You should be able to see screen flooding with the below output.

 

 

As shown above, it enables Prometheus, ServiceGraph, Jaeger, Grafana, and Zipkin by default.

Please note – While executing this script, it might end up with the below error message –

unable to recognize "install/kubernetes/istio-demo.yaml": no matches for admissionregistration.k8s.io/, Kind=MutatingWebhookConfiguration

The error message is expected.

As soon as the command gets executed completely, you should be able to see a long list of ports which gets displayed at the top center of the page.

Verifying the Services


Exposing the Services

To expose the Prometheus, Grafana & ServiceGraph services, you will need to delete the existing ClusterIP services and recreate them as NodePort services, so that you can access them using the ports displayed at the top of the instance page (as shown below).
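For Grafana and Prometheus, for example, this could look like the commands below (the target ports 3000 and 9090, the node ports and the service/label names are assumptions based on the default istio-demo add-ons):

# Replace the ClusterIP services with NodePort services on fixed node ports
kubectl -n istio-system delete svc grafana prometheus
kubectl -n istio-system create service nodeport grafana --tcp=3000:3000 --node-port=30004
kubectl -n istio-system create service nodeport prometheus --tcp=9090:9090 --node-port=30003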

You should be able to access Grafana page by clicking on “30004” port and Prometheus page by clicking on “30003”.

You can check Prometheus metrics by selecting the necessary option as shown below:

Under Grafana Page, you can add “Data Source” for Prometheus and ensure that the dashboard is up and running:

Congratulations! You have installed Istio on the Kubernetes cluster. The components listed below have been installed on the K8s playground:

  • Istio Controllers and related RBAC rules
  • Istio Custom Resource Definitions
  • Prometheus and Grafana for Monitoring
  • Jaeger for Distributed Tracing
  • Istio Sidecar Injector (we'll take a look at it in the next section)

Installing Istioctl

Istioctl is the configuration command-line utility of Istio. It helps you create, list, modify and delete configuration resources in the Istio system.
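If istioctl is not already on your PATH from the installation step, you can add it from the extracted release directory and check that it responds:

cd istio-1.0.0
export PATH=$PWD/bin:$PATH
istioctl version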

Deploying the Sample BookInfo Application

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo. This is a simple mock bookstore application made up of four services that provide a web product page, book details, reviews (with several versions of the review service), and ratings, all managed using Istio.

Deploying BookInfo Services

 

[node1 istio-1.0.0]$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

service "details" created
deployment "details-v1" created
service "ratings" created
deployment "ratings-v1" created
service "reviews" created
deployment "reviews-v1" created
deployment "reviews-v2" created
deployment "reviews-v3" created
service "productpage" created
deployment "productpage-v1" created

[node1 istio-1.0.0]$ istioctl create -f samples/bookinfo/networking/bookinfo-gateway.yaml

Created config gateway/default/bookinfo-gateway at revision 13436
Created config virtual-service/default/bookinfo at revision 13438

Verifying BookInfo Application

[node1 istio-1.0.0]$ kubectl get services

NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.29.111     <none>        9080/TCP   1m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    1h
productpage   ClusterIP   10.106.144.171   <none>        9080/TCP   1m
ratings       ClusterIP   10.111.164.221   <none>        9080/TCP   1m
reviews       ClusterIP   10.99.195.21     <none>        9080/TCP   1m

[node1 istio-1.0.0]$ curl 10.106.144.171:9080

<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">

<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">

</head>
<body>

<p><h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3></p>

<table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><td>details</td></tr><tr><th>name</th><td>http://details:9080</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><th>name</th><th>children</th></tr><tr><td>details</td><td>http://details:9080</td><td></td></tr><tr><td>reviews</td><td>http://reviews:9080</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><th>name</th><th>children</th></tr><tr><td>ratings</td><td>http://ratings:9080</td><td></td></tr></table></td></tr></table></td></tr></table>

<p><h4>Click on one of the links below to auto generate a request to the backend as a real user or a tester
</h4></p>

<p><a href="/productpage?u=normal">Normal user</a></p>

<p><a href="/productpage?u=test">Test user</a></p>

<!-- Latest compiled and minified JavaScript -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>

<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>

</body>
</html>

 

Accessing it via Web URL

[node1 istio-1.0.0]$ kubectl get services

NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.29.111     <none>        9080/TCP   2m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    1h
productpage   ClusterIP   10.106.144.171   <none>        9080/TCP   2m
ratings       ClusterIP   10.111.164.221   <none>        9080/TCP   2m
reviews       ClusterIP   10.99.195.21     <none>        9080/TCP   2m

[node1 istio-1.0.0]$ kubectl delete svc productpage
service "productpage" deleted

[node1 istio-1.0.0]$ kubectl create service nodeport productpage --tcp=9080 --node-port=30010
service "productpage" created

 

You should now be able to access the BookInfo sample as shown below:

In my next blog post, I will showcase how to bring up Kubernetes Dashboard under Play with Kubernetes Platform.

 

Kubernetes Hands-on Lab #2 – Running Our First Nginx Cluster

Estimated Reading Time: 2 minutes

 

 

Nginx (pronounced “engine-x”) is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The nginx project started with a strong focus on high concurrency, high performance and low memory usage. It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows.

In my last blog post, I showcased how to build a 5-node Kubernetes cluster. In this blog post, we will see how to run our first Nginx application on that cluster.

Verifying 5-Node K8s Cluster

[node1 ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    1h        v1.10.2
node2     Ready     <none>    1h        v1.10.2
node3     Ready     <none>    1h        v1.10.2
node4     Ready     <none>    1h        v1.10.2
node5     Ready     <none>    14m       v1.10.2
[node1 ~]$

Running Nginx with 4 Replicas

kubectl run nginx --image=nginx:latest --replicas=4

Verifying K8s Pods Up and Running

[node1 ~]$ kubectl get po
NAME                     READY     STATUS    RESTARTS   AGE
nginx-5db977d67c-6sdfd   1/1       Running   0          2m
nginx-5db977d67c-jfq9h   1/1       Running   0          2m
nginx-5db977d67c-vs925   1/1       Running   0          2m
nginx-5db977d67c-z5r45   1/1       Running   0          2m
[node1 ~]$

Watch the pods

kubectl get pods -w

Expose the Nginx HTTP port:


kubectl expose deploy/nginx --port 80

Testing the Nginx Service


IP=$(kubectl get svc nginx -o go-template --template '{{ .spec.clusterIP }}')

Send a few requests:

[node1 ~]$ curl $IP:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[node1 ~]$

In my next blog post, I will showcase how to build Istio Application on Play with Kubernetes Platform.

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Kubernetes Hands-on Lab #1 – Setting up 5-Node K8s Cluster

Estimated Reading Time: 4 minutes

 

Are you new to Kubernetes? Want to build your career in Kubernetes? Then welcome! You are at the right place. This blog post series brings you tutorials that help you get hands-on experience using Kubernetes. Here you will find a mix of labs and tutorials that will help you, no matter if you are a beginner, SysAdmin, IT Pro or Developer. Yes, you read it correctly: it's a $0 learning platform. You don't need any infrastructure. Most of the tutorials run on the Play with K8s platform, a free browser-based learning environment. Kubernetes tools like kubeadm, kompose & kubectl are already installed for you. All you need is to get started.

Kubernetes (often abbreviated to K8S), is a container orchestration platform for applications that run on containers. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

Kubernetes can speed up the development process by making easy, automated deployments, updates (rolling-update) and by managing our apps and services with almost zero downtime. It also provides self-healing. Kubernetes can detect and restart services when a process crashes inside the container. Any developer can package up applications and deploy them on Kubernetes with basic Docker knowledge.

At a minimum, Kubernetes can schedule and run application containers on clusters of physical or virtual machines. However, Kubernetes also allows developers to ‘cut the cord’ to physical and virtual machines, moving from a host-centric infrastructure to a container-centric infrastructure, which provides the full advantages and benefits inherent to containers. Kubernetes provides the infrastructure to build a truly container-centric development environment. K8s provides a rich set of features for container grouping, container orchestration, health checking, service discovery, load balancing, horizontal autoscaling, secrets & configuration management, storage orchestration, resource usage monitoring, CLI, and dashboard.

This is the first blog post in the series, targeted at setting up a 5-node Kubernetes cluster. To get started with Kubernetes, follow the steps below:

Click on “Start” button to get access to PWK instances as shown below:

Click on Add Instances to set up the first k8s node.

Cloning the Repository

git clone https://github.com/ajeetraina/kubernetes101/
cd kubernetes101/install

Bootstrapping the First Node Cluster

sh bootstrap.sh

Adding New K8s Cluster Node

Click on Add Instances to set up the first k8s cluster node.

Wait for about a minute until it completes.

Copy the command starting with kubeadm join ..... We will need to run it on the worker node.

Setting up Worker Node

Click on “Add New Instance” and paste the last kubeadm command on this fresh new worker node.

[node2 ~]$ kubeadm join --token 4f924f.14eb7618a20d2ece 192.168.0.8:6443 --discovery-token-ca-cert-hash  sha256:a5c25aa4573e06a0c11b11df23c8f85c95bae36cbb07d5e7879d9341a3ec67b3

You will see the below output:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.8:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.8:6443"
[discovery] Requesting info from "https://192.168.0.8:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.8:6443"
[discovery] Successfully established connection with API Server "192.168.0.8:6443"
[bootstrap] Detected server version: v1.8.15
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
[node2 ~]$

Verifying Kubernetes Cluster

Run the below command on master node

[node1 ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    15m       v1.10.2
node2     Ready     <none>    1m        v1.10.2
[node1 ~]$

Adding Worker Nodes

[node1 ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    58m       v1.10.2
node2     Ready     <none>    57m       v1.10.2
node3     Ready     <none>    57m       v1.10.2
node4     Ready     <none>    57m       v1.10.2
node5     Ready     <none>    54s       v1.10.2
[node1 ]$ kubectl get po
No resources found.
[node1 ]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1h
[node1 ]$

In the next blog series, I will showcase how to build a simple Nginx application on top of 5-Node Kubernetes cluster.

Kubernetes Hands-on Lab #2 – Running Our First Nginx Cluster

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

How I built Elastic Stack for Docker Swarm using Docker Application Packages(docker-app)

Estimated Reading Time: 7 minutes

 

Let's begin with the problem statement!

DockerHub is a cloud-based registry service which allows you to link to code repositories, build your images, test them, store manually pushed images so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management as well as workflow automation throughout the development pipeline. We share Docker images all the time, but let’s agree to the fact that we don’t have a good way of sharing the multi-service applications that use them.

Let us take the example of the Elastic Stack. Built on an open source foundation, the Elastic Stack lets you reliably and securely take data from any source, in any format, and search, analyze, and visualize it in real time with the help of Elasticsearch, Logstash, Kibana and multiple other tools and techniques. In order to build these tools in the form of containers, one needs to start by building a Docker image for each of them. The recommended way is constructing a Dockerfile for each of these tools. In turn, Docker Compose uses these images to build the required services. Whenever the docker stack deploy CLI is used to deploy the application stack, these Docker images are pulled from Docker Hub for the first time and then picked up locally once downloaded to your system. What if you could upload your whole application stack to Docker Hub? Yes, it's possible today, and docker-app is the tool which can make Compose-based applications shareable on Docker Hub and DTR.

Docker-app v0.5.0 is now available!

Docker Application Package v0.5.0 is the latest offering from  Docker, Inc. You can download it from this link. The binaries are available for Linux, Windows and MacOS Platform. If you are looking out for source code,  this is the direct link.

   

The docker-app v0.5.0 comes with notable features and improvements which are listed below:

  • The improved docker-app inspect command now shows a summary of services, networks, volumes and secrets.

  • The docker-app push CLI now works on Windows and bypasses the local docker daemon by talking directly to the registry.
  • The docker-app save and docker-app ls have been obsoleted.
  • All commands now accept an application package as a URL.
  • The docker-app push command now accepts a custom repository name.
  • The docker-app completion command can generate zsh completion in addition to bash.

In my last blog post, I talked about docker-app for the first time and showcased its usage soon after I returned from DockerCon. In this post, I will show how I built the Elastic Stack using docker-app for a 5-node Docker Swarm cluster.

Prerequisite:

  • Click on the icon next to Instances to choose 3 manager & 2 worker nodes

Deploy 5 Node Docker Swarm Cluster

$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
iy9mbeduxd4mmjxoikbn5ulds *   manager1   Ready    Active         Reachable        18.03.1-ce
mx916kgqg6gfgqdr2gn1eksxy     manager2   Ready    Active         Leader           18.03.1-ce
xaeq943o84g9spy6mebj64tw3     manager3   Ready    Active         Reachable        18.03.1-ce
8umdv6m82nrpevuris1e45wnq     worker1    Ready    Active                          18.03.1-ce
o3yobqgg7wjvjw2ec5ythszgw     worker2    Ready    Active                          18.03.1-ce

Cloning the Repository

$ git clone https://github.com/ajeetraina/app
Cloning into 'app'...
remote: Enumerating objects: 134, done.
remote: Counting objects: 100% (134/134), done.
remote: Compressing objects: 100% (134/134), done.
remote: Total 14511 (delta 95), reused 0 (delta 0), pack-reused 14377
Receiving objects: 100% (14511/14511), 17.37 MiB | 13.35 MiB/s, done.
Resolving deltas: 100% (5391/5391), done.

Install Docker-app

$ cd app/examples/elk/
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ ls
README.md          devel              elk.dockerapp      install-dockerapp  prod
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ chmod +x install-dockerapp
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ sh install-dockerapp
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.131.187:443)
docker-app-linux.tar 100% |*************************************************************|  8895k  0:00:00 ETA
[manager1] (local) root@192.168.0.30 ~/app/examples/elk

Verifying Docker-app Version

$ docker-app version
Version:      v0.4.0
Git commit:   525d93bc
Built:        Tue Aug 21 13:02:46 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

I assume you already have a Docker Compose file for the ELK stack application available with you. If not, you can download a sample file from this link. Place this YAML file under the same directory (app/examples/elk/). Now, with docker-app installed, let's create an application package based on this Compose file:

$ docker-app init elk

Once you run the above command, it creates a new directory elk.dockerapp/ that contains three different YAML files:

docker-compose.yml  elk.dockerapp
[manager1] (local) root@192.168.0.30 ~/myelk
$ tree elk.dockerapp/
elk.dockerapp/
├── docker-compose.yml
├── metadata.yml
└── settings.yml

0 directories, 3 files

Edit each of these files so that they look similar to what is placed under this link.
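For reference, a minimal settings.yml along those lines could look like the sketch below; the values are assumptions that match the defaults shown by docker-app render and docker-app inspect later in this post:

# Sketch of a settings.yml for the elk application package
cat > elk.dockerapp/settings.yml <<'EOF'
elasticsearch:
  image:
    name: elasticsearch:5
  deploy:
    mode: replicated
    replicas: 2
kibana:
  image:
    name: kibana:latest
  deploy:
    mode: replicated
    replicas: 2
  port: 5601
logstash:
  image:
    name: logstash:alpine
  deploy:
    mode: replicated
    replicas: 2
EOF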

Rendering Docker Compose file

$ docker-app render elk
version: "3.4"
services:
  elasticsearch:
    command:
    - elasticsearch
    - -Enetwork.host=0.0.0.0
    - -Ediscovery.zen.ping.unicast.hosts=elasticsearch
    deploy:
      mode: replicated
      replicas: 2
    environment:
      ES_JAVA_OPTS: -Xms2g -Xmx2g
    image: elasticsearch:5
    networks:
      elk: null
    volumes:
    - type: volume
      target: /usr/share/elasticsearch/data
  kibana:
    deploy:
      mode: replicated
      replicas: 2
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    healthcheck:
      test:
      - CMD-SHELL
      - wget -qO- http://localhost:5601 > /dev/null
      interval: 30s
      retries: 3
    image: kibana:latest
    networks:
      elk: null
    ports:
    - mode: ingress
      target: 5601
      published: 5601
      protocol: tcp
  logstash:
    command:
    - sh
    - -c
    - logstash -e 'input { syslog  { type => syslog port => 10514   } gelf { } } output
      { stdout { codec => rubydebug } elasticsearch { hosts => [ "elasticsearch" ]
      } }'
    deploy:
      mode: replicated
      replicas: 2
    hostname: logstash
    image: logstash:alpine
    networks:
      elk: null
    ports:
    - mode: ingress
      target: 10514
      published: 10514
      protocol: tcp
    - mode: ingress
      target: 10514
      published: 10514
      protocol: udp
    - mode: ingress
      target: 12201
      published: 12201
      protocol: udp
networks:
  elk: {}

Setting the kernel parameter for ELK stack

sysctl -w vm.max_map_count=262144

Deploying the Application Stack


[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app deploy elk --settings-files elk.dockerapp/settings.yml
Creating network elk_elk
Creating service elk_kibana
Creating service elk_logstash
Creating service elk_elasticsearch
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$

Inspecting ELK Stack

[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app inspect elk
myelk 0.1.0
Maintained by: Ajeet_Raina <ajeetraina@gmail.com>

ELK using Dockerapp

Setting                       Default
-------                       -------
elasticsearch.deploy.mode     replicated
elasticsearch.deploy.replicas 2
elasticsearch.image.name      elasticsearch:5
kibana.deploy.mode            replicated
kibana.deploy.replicas        2
kibana.image.name             kibana:latest
kibana.port                   5601
logstash.deploy.mode          replicated

Verifying Stack services are up & running

[manager1] (local) root@192.168.0.30 ~/app/examples/elk/docker101/play-with-docker/visualizer
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
uk2whax6f3jq        elk_elasticsearch   replicated          2/2                 elasticsearch:5
nm4p3yswvh5y        elk_kibana          replicated          2/2                 kibana:latest       *:5601->5601/tcp
g5ubng6rhcyp        elk_logstash        replicated          2/2                 logstash:alpine     *:10514->10514/tcp, *:10514->10514/udp, *:12201->12201/udp
[manager1] (local) root@192


Pushing the App Package to Dockerhub

[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded

Pushing the App package to DockerHub


[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app push --namespace ajeetraina --tag 1.0.2
The push refers to repository [docker.io/ajeetraina/elk.dockerapp]
15e73d68a400: Pushed
1.0.2: digest: sha256:c5a8e3b7e2c7a5566a3e4247f8171516033e7e9791dfdb6ebe622d3830884d9b size: 524
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$

Important Note: If you are using Docker-app v0.5.0, you might face an issue pulling the image from Dockerhub, as it reports an unsupported OS error message. Here’s a link to this open issue.

Testing the Application Package

Open up a new PWD window. Install docker-app as shown above and try to run the below command:

docker-app deploy ajeetraina/elk.dockerapp:1.0.2

This should bring up your complete Elastic Stack Platform.
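Assuming the stack comes up under the name elk (as it did in the earlier deployment), you can confirm that all three services have converged with:

docker service ls
docker stack ps elk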

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me on Twitter at @ajeetsraina.

If you want to keep track of latest Docker related information, follow me at https://www.linkedin.com/in/ajeetsraina/.

Getting Started with OpenUSM on Docker for Windows Platform

Estimated Reading Time: 6 minutes

 

OpenUSM is a modern approach to DellEMC Server Management, Insight Log Analytics and Machine Learning, integrated with a monitoring & logging pipeline using Docker containers & Redfish. It is a 100% container-based, platform-agnostic solution which can be run from a laptop, server or cloud and works seamlessly on any Linux or Windows platform with Docker Engine running on top of it. It follows a “Container-Per-Server (CPS)” model. For each server management task there is a Python script which, when executed, builds and runs Docker containers, uses the Redfish API to communicate directly with Dell iDRAC, collects iDRAC/LC logs and pushes them to the ELK (Elasticsearch, Logstash & Kibana) stack for further log analytics and Machine Learning. OpenUSM is currently hosted at https://github.com/openusm/openusm

OpenUSM today supports both Linux and Windows platforms. It has already been validated on Linux distributions like Debian, Ubuntu and CentOS. You can find extensive documentation here.

Under this blog post, I will showcase how to get started with OpenUSM on Docker for Windows Platform.

Tested Platform:

  • Microsoft Windows 10 Enterprise
  • X64 based PC

Pre-requisite:

  • Installing Python 2.7
  • Installing Winsyslog
  • Installing Docker for Windows
  • Configuring Docker for Windows

Installing Python 2.7

To install Python 2.7, the simplest way is to use Chocolatey. Chocolatey is a software management automation tool for Windows. It works with over 20 installer technologies, but it can also manage things you would normally xcopy-deploy (like runtime binaries and zip files), as well as registry settings, files and configurations, or any combination of these.

Run the below command to install Chocolatey on your Windows 10 laptop:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

If you face any issue, do refer this link.

Now install Python 2.7 using choco as shown below:

choco install python2

You can verify if python is installed or not using the below command:

PS C:\> C:\Python27\python.exe -V
Python 2.7.15

Installing WinSyslog

For OpenUSM to work, a syslog server is required. You can use any syslog server available on the internet. For this demo, I will use WinSyslog, which is easy to set up. Once you install it, it will open the window shown below:

Click on Options section under File Menu and you shall see the below window:

As shown above, you will need to enter your local laptop's IP address and port so that OpenUSM can send logs to it. You can test it by clicking on "Send"; if everything goes well, you should see a message like "Syslog Messages send successfully to 192.168.1.6".

Installing Docker for Windows:

Open https://docs.docker.com/docker-for-windows/install/ and click on “Install from Docker Store” to open up the below page to download Docker for Windows CE Edition.

Docker CE for Windows is Docker designed to run on Windows 10. It is a native Windows application that provides an easy-to-use development environment for building, shipping, and running dockerized apps. Docker CE for Windows uses Windows-native Hyper-V virtualization and networking and is the fastest and most reliable way to develop Docker apps on Windows. It supports running both Linux and Windows Docker containers. You can install either the Stable or the Edge release from the link below.

 

Double-click the Docker for Windows Installer to run the installer. When the installation finishes, Docker starts automatically. The whale icon in the notification area indicates that Docker is running and accessible from a terminal.

You can verify Docker version either by visiting “About Docker” in the top menu:

Or you can open a command-line terminal like PowerShell and try out the Docker command below to check the version –

Configuring Docker for Windows for OpenUSM

We need to perform a few configuration changes to Docker for Windows before we proceed with setting up the ELK stack. First, we need to enable shared drives so that the ELK stack can access local storage. Docker for Windows provides a simplified way to enable this feature: click on the whale icon > Shared Drives > select the “C:” local drive, which will then be made available to the Docker containers that run the ELK stack.

Once you select it and click on “Apply”, Docker will restart, along with Kubernetes (if it was enabled earlier). This should be good enough for OpenUSM to work smoothly.

Cloning the OpenUSM Repository

git clone https://github.com/openusm/openusm

cd openusm/logging/

Setting up ELK Stack

Docker for Windows is a development platform and comes with docker-compose installed by default. All you need to do is run the command below to bring up the ELK stack… Awesome, isn’t it?

docker-compose up -d

You can verify whether ELK has come up by running the command below:
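For instance, docker-compose ps (run from the same openusm/logging directory) should list the Elasticsearch, Logstash and Kibana containers in the Up state, and if the Compose file publishes Elasticsearch on port 9200 (the sensor exporter script later assumes it is reachable on 127.0.0.1), a quick curl should answer too:

docker-compose ps
curl http://127.0.0.1:9200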

Open up http://127.0.0.1:5601 to access Kibana UI as shown below:

Sending DellEMC HW Sensor Logs to ELK Stack

It’s time for some action now. I assume you are connected to your lab infrastructure over VPN so that you can reach the DellEMC PowerEdge servers. It takes just a few seconds to send sensor logs (fan, temperature etc.) from a DellEMC PowerEdge server sitting in your datacenter to the ELK stack using the Python script below.

 

PS C:\Users\Ajeet_Raina\openusm\logging> C:\Python27\python.exe .\sensorexporter.py -i <idrac-ip> -ei 127.0.0.1 -eu elastic -ep <password>

This script uses Redfish to talk to the remote DellEMC PowerEdge server, fetches the logs and sends them to the syslog server we configured earlier, pushes them through Logstash into Elasticsearch and gets them displayed via the Kibana UI – all in just a few seconds. Isn’t it cool?

Visualizing the logs under Kibana UI

When you open the Kibana UI for the first time, the index pattern might not come up. Click on “Index Pattern” under the Management tab on the left-hand side. Next, click on “Create Index Pattern” and search for Fan* and temp*. Once the index patterns are created, you should be able to see temperature and fan speed logs under the Discover tab.

 

Click on “Discover” tab to see the overall logs fetched directly from iDRAC IPs.

 

Click on the “Visualize” tab to add a filter. In the example below, I have chosen iDRAC IP along with Minimum and Maximum Reading:

Click on “Dashboard” to add specific filters for fan speed, choose your type of visualization (I selected the “Pie Chart” option) and select the metrics to display, as shown below:

 

In my next blog post, I will talk about Elastic’s Machine Learning “anomaly score” and how the various scores presented in the dashboards relate to the “unusualness” of individual occurrences within the data set of fan speed and temperature as a whole. Stay tuned !

2 Minutes to Docker MacVLAN Networking – A Beginners Guide

Estimated Reading Time: 3 minutes

 

Scenario:  Say you have built Docker applications (legacy in nature, like network traffic monitoring, system management etc.) which are expected to be directly connected to the underlying physical network. In this type of situation, you can use the macvlan network driver to assign a MAC address to each container’s virtual network interface, making it appear to be a physical network interface directly connected to the physical network.

Last year, I wrote a blog post on “How does MacVLAN work under Docker Swarm?” for users who want to assign addresses from the underlying physical network to Docker containers running Swarm services. Do check it out.

Docker 17.06 Swarm Mode: Now with built-in MacVLAN & Node-Local Networks support

In case you’re completely new to Docker networking: when Docker is installed, a default bridge network named docker0 is created. Each new Docker container is automatically attached to this network. Besides docker0, two other networks get created automatically by Docker: host (no isolation between host and containers on this network; to the outside world they are on the same network) and none (attached containers run on a container-specific network stack).

Assume you have a clean Docker host system with just the 3 default networks available – bridge, host and none:

root@ubuntu:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
871f1f745cc4        bridge              bridge              local
113bf063604d        host                host                local
2c510f91a22d        none                null                local
root@ubuntu:~#

My Network Configuration is quite simple. It has eth0 and eth1 interface. I will just use eth0.

root@ubuntu:~# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:7d:83:13:8e
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fe:05:ce:a1:2d:5d
          inet addr:100.98.26.43  Bcast:100.98.26.255  Mask:255.255.255.0
          inet6 addr: fe80::fc05:ceff:fea1:2d5d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:923700 errors:0 dropped:367 overruns:0 frame:0
          TX packets:56656 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:150640769 (150.6 MB)  TX bytes:5125449 (5.1 MB)
          Interrupt:31 Memory:ac000000-ac7fffff

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:45 errors:0 dropped:0 overruns:0 frame:0
          TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3816 (3.8 KB)  TX bytes:3816 (3.8 KB)

Creating MacVLAN network on top of eth0.

docker network create -d macvlan --subnet=100.98.26.43/24 --gateway=100.98.26.1  -o parent=eth0 pub_net

Verifying MacVLAN network

root@ubuntu:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
871f1f745cc4        bridge              bridge              local
113bf063604d        host                host                local
2c510f91a22d        none                null                local
bed75b16aab8        pub_net             macvlan             local
root@ubuntu:~#
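If you want to double-check the subnet, gateway and parent interface that the new network picked up, inspect it:

docker network inspect pub_net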

Let us run a sample Docker container and assign it a static IP (ensure that it is from the free pool):

root@ubuntu:~# docker  run --net=pub_net --ip=100.98.26.47 -itd alpine /bin/sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f
Status: Downloaded newer image for alpine:latest
493a9566c31c15b1a19855f44ef914e7979b46defde55ac6ee9d7db6c9b620e0
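As a quick sanity check, you can confirm that the container actually picked up the static address (the container ID below is the one returned above; Alpine's busybox includes the ip applet):

docker exec -it 493a9566c31c ip addr show eth0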

Important Point: When using macvlan, you cannot ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host’s eth0, it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.

Enabling Container to Host Communication

It’s simple. Just run the below command:

Example: ip link add mac0 link $PARENTDEV type macvlan mode bridge

So, in our case, it will be:

ip link add mac0 link eth0 type macvlan mode bridge
ip addr add 100.98.26.38/24 dev mac0
ifconfig mac0 up

Let us try creating a container and pinging it from the host:

root@ubuntu:~# docker run --net=pub_net -d --ip=100.98.26.53 -p 81:80 nginx
10146a39d7d8839b670fc5666950c0e265037105e61b0382575466cc62d34824
root@ubuntu:~# ping 100.98.26.53
PING 100.98.26.53 (100.98.26.53) 56(84) bytes of data.
64 bytes from 100.98.26.53: icmp_seq=1 ttl=64 time=1.00 ms
64 bytes from 100.98.26.53: icmp_seq=2 ttl=64 time=0.501 ms

Wow ! It just worked.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Test Drive Your First Istio Deployment using Play with Kubernetes Platform

Estimated Reading Time: 3 minutes

If you’re a developer and have been spending a lot of time developing apps recently, you already understand a whole new set of challenges related to Microservice architecture. Although there has been a shift from bloated monolithic apps to small, focused Microservices to speed up implementation and to improve resiliency, the fact is that developers now have to worry about the challenges of integrating services in distributed systems, which includes accountability for service discovery, load balancing, registration, fault tolerance, monitoring, routing, compliance and security.

Let us understand the challenges which Microservices bring to developers and operators in detail. Consider a simple 1st generation Service Mesh scenario. As shown below, Service (A) talks to Service (B). Instead of talking directly, the request gets routed through Nginx. Nginx finds the route in Consul (a service discovery tool) and automatically retries on HTTP 502 errors.

 

But as the number of microservices grows, the challenges listed below arise for both developers and operations teams –

  • How to enable these growing number of microservices to talk to each other?
  • How to enable these growing number of microservices to load-balance?
  • How to enable these growing number of microservices to provide role-based routing?
  • How to implement outgoing traffic on these microservices and test canary deployment?
  • How to manage complexity around these growing pieces of microservices?
  • How can operator implement fine-grained control of traffic behavior with rich-routing rules?
  • How shall one implement Traffic encryption, service-to-service authentication and strong identity assertions?

 

In a nutshell, although you could put service discovery and retry logic into the application or networking middleware, the fact is that service discovery is tricky to get right.

Enter Istio’s Service Mesh

“Service Mesh” is one of the hottest buzzwords of 2018. As its name suggests, it is a configurable infrastructure layer for a microservices app. It describes the network of microservices that make up such applications and the interactions between them. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit breaker pattern, and other capabilities.

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written completely in Go and is actually a platform, including APIs that let it integrate into any logging, telemetry or policy system. The project adds very little overhead to your system and is hosted on GitHub. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Read the full story at Knowledgehut.

What’s New in Docker Enterprise Edition 18.03 Engine Release?

Estimated Reading Time: 6 minutes

 

A New Docker Enterprise Engine 18.03.0-ee-1 has been released. This is a stand-alone engine release intended for EE Basic customers. Docker EE Basic includes the Docker EE Engine and does not include UCP or DTR.

In case you’re new, Docker is available in two editions:

  • Community Edition (CE)
  • Enterprise Edition (EE)

Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experiment with container-based apps. Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production at scale. Below is the list of capabilities across the various Docker editions:

[Please Note – Since UCP and DTR cannot be run on the 18.03 EE Engine, it is intended for EE Basic customers. It is not intended, qualified, or supported for use in an EE Standard or EE Advanced environment. If you’re deploying UCP or DTR, use Docker EE Engine 17.06.]

In my recent blog, I talked about “Docker EE 2.0 – Under the Hood”, where I covered the 3 major components of EE 2.0 which together enable a full software supply chain, from image creation, to secure image storage, to secure image deployment. Under this blog post, I am going to talk about what new features have been included in the newly introduced Docker EE 18.03 Engine.

 

Let us talk about each of these features in detail.

Containerd 1.1 Merged under 18.03 EE Engine

containerd is an industry-standard core container runtime. It is based on the Docker Engine’s core container runtime to benefit from its maturity and existing contributors. It provides a daemon for managing running containers. It is available today as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments.

1.1 is the second major release of containerd, with added support for CRI, the Kubernetes Container Runtime Interface. CRI is a new plugin which allows connecting the containerd daemon directly to a Kubernetes kubelet so it can be used as the container runtime. The CRI GRPC interface listens on the same socket as the containerd GRPC interface and runs in the same process.

containerd 1.1, announced in April, implements the Kubernetes Container Runtime Interface (CRI), so it can be used directly by Kubernetes as well as Docker Engine. It implements namespaces so that clients from different container systems (e.g. Docker Engine and DC/OS) can leverage a single containerd instance on one system while being logically separated. This release allows you to plug in any OCI-compliant runtime, such as Kata Containers or gVisor. It also includes many performance improvements, such as significantly better pod start latency and lower CPU/memory usage of the CRI plugin. And for additional performance benefits, it replaces graphdrivers with more efficient snapshotters, which BuildKit leverages for faster builds.
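One way to see this namespace separation in practice is to list the namespaces and the containers owned by Docker Engine, which lives in the moby namespace. This is a quick sketch, assuming containerd's ctr client is available on the host (on Docker 18.03 engines it typically ships as docker-containerd-ctr):

ctr namespaces ls
ctr -n moby containers ls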

 

 

The new containerd 1.1 includes –

  • CRI plugin
  • ZFS, AUFS, and native snapshotter
  • Improvements to the ctr tool
  • Better support for multiple platforms
  • Cross namespace content sharing
  • Better mount cleanup
  • Support for disabling plugins
  • TCP debug address for remote debugging
  • Update to Go 1.10
  • Improvements to the garbage collector

FIPS 140-2 Compliant Engine

FIPS stands for Federal Information Processing Standard (FIPS) Publication. It is a U.S. government computer security standard used to approve cryptographic modules. It is important to note that the Docker EE cryptography libraries are at the “In-process (Co-ordination)” phase of the FIPS 140-2 Level 1 Cryptographic Module Validation Program. Validating the Docker platform against widely-accepted standards and best practices is a critical aspect of product development, as this enables companies and agencies across all industries to adopt Docker containers. FIPS is a notable standard which validates and approves the use of various security encryption modules within a software system.

For further detail: https://blog.docker.com/2017/06/docker-ee-is-now-in-process-for-fips-140-2/

 

 

Source: https://csrc.nist.gov/Projects/Cryptographic-Module-Validation-Program/Modules-In-Process/Modules-In-Process-List

Support for Windows Server 1709 (RS3) / 1803 (RS4)

Docker EE 18.03 Engine added support for Microsoft Windows Server 1709 & Windows Server 1803. With this release, there have been subtle improvements in density and performance through smaller image sizes than Server 2016. A single deployment architecture for Windows and Linux has also been introduced, with parity for ingress networking and VIP service discovery. Reduced operational requirements via relaxed image compatibility is another notable feature introduced with this release.

Below is the list of customer pain points which Docker, Inc. has focused on over the last couple of years:

 

Refer https://docs.docker.com/ee/engine/release-notes/#runtime for more information.

The --chown support in COPY/ADD Commands in Docker EE

With the Docker EE 18.03 release, there is support for --chown with COPY and ADD in Dockerfiles. This improves security by enabling run-time file operations to use developer-specific users instead of root. It provides parity with functionality added in Docker CE. At this point in time, this has been introduced for Linux only.
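A minimal illustration of the flag in a Dockerfile (hypothetical user, group and file names):

FROM ubuntu:16.04
RUN groupadd -r app && useradd -r -g app app
# The copied file is owned by app:app inside the image instead of root
COPY --chown=app:app entrypoint.sh /usr/local/bin/entrypoint.sh
USER app
CMD ["/usr/local/bin/entrypoint.sh"]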

Support for Compose on Kubernetes CLI

Under the Docker EE 18.03 release, you can now use the same Compose files to deploy apps to Kubernetes on Enterprise Edition. This feature is not available under any of the Desktop or Community Edition releases.

You should be able to pass the --orchestrator parameter to specify the orchestrator.

 

This means that it should be easy to move applications between Kubernetes and Swarm, and it simplifies application configuration. Under this release, one can create native Kubernetes Stack objects and interact with Stacks via the Kubernetes API. This release improves the UX and moves the feature out of the experimental phase; the functionality is now available in the main, supported Docker CLI.
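For example (a sketch with placeholder stack and file names; the exact flag placement may vary slightly between CLI versions), the same Compose file can be deployed to either orchestrator:

# Deploy to Swarm (the default)
docker stack deploy -c docker-compose.yml mystack

# Deploy the same file to Kubernetes
docker stack deploy --orchestrator=kubernetes -c docker-compose.yml mystack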

Easy setup of Docker Content Trust

Docker Content Trust was first introduced in Docker Engine 1.8 and Docker CS Engine 1.9.0, and is available in Docker EE. It allows image operations with a remote Docker registry to enforce client-side signing and verification of image tags. It enables digital signatures for data sent to, and received from, remote Docker registries. These signatures allow Docker client-side interfaces to verify the integrity and publisher of specific image tags.

Docker Content Trust works with Docker Hub as well as Docker Trusted Registry (Notary integration is experimental in version 1.4 of Docker Trusted Registry). It provides strong cryptographic guarantees over what code and what versions of software are being run in your infrastructure. Docker Content Trust integrates The Update Framework (TUF) into Docker using Notary, an open source tool that provides trust over any content.
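The simplest way to try content trust from the CLI (a sketch using a hypothetical image name; it assumes you are pushing to a registry backed by a Notary service, such as Docker Hub) is to enable it for the current shell and push a tag, which then gets signed:

export DOCKER_CONTENT_TRUST=1
docker push ajeetraina/myimage:1.0

# With content trust still enabled, pulls only accept signed tags
docker pull ajeetraina/myimage:1.0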

Below is the list of CLI options available under the current release –

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me on Twitter at @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.