Sysctl Support Comes to Docker Swarm Clusters for the First Time in the Docker 19.03.0 Pre-Release

Estimated Reading Time: 7 minutes

Docker CE 19.03.0 Beta 1 went public two weeks ago. It is the first release to arrive with sysctl support for Docker Swarm Mode. This is definitely great news for popular communities like Elastic Stack, Redis etc., as they rely on tuning kernel parameters to get rid of memory exceptions. For example, Elasticsearch uses a mmapfs directory by default to store its indices, and the default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions; hence one needs to increase the limit using the sysctl tool. Great to see that Docker Inc. acknowledged the fact that kernel tuning is sometimes required and now provides explicit support for it under the Docker 19.03.0 Pre-Release. Great job!

Wait… do I really need sysctl?

Say you have deployed your application on Docker Swarm. It’s pretty simple and it’s working great. Your application is growing day by day, and now you just need to scale it. How are you going to do it? The simple answer is: docker service scale app=<number of tasks>. Surely, that is possible today, but your containers can quickly hit kernel limits. One of the most popular kernel parameters is net.core.somaxconn. This parameter represents the maximum number of connections that can be queued for acceptance. The default value on Linux is 128, which is rather low.
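
You can check the default for yourself on any Linux host (the exact value may vary by distribution):

$ sysctl net.core.somaxconn
net.core.somaxconn = 128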

The Linux kernel is flexible, and you can even modify the way it works on the fly by dynamically changing some of its parameters, thanks to the sysctl command. The sysctl program also allows you to limit system-wide resource use, which can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users. Sysctl basically provides an interface that allows you to examine and change several hundred kernel parameters in Linux or BSD. Changes take effect immediately, and there’s even a way to make them persist after a reboot. By using sysctl judiciously, you can optimize your box without having to recompile the kernel, and get the results immediately.
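
For instance, examining a parameter, changing it on the fly and making the change persistent looks like this (a generic sketch; the writes require root):

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0
$ sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
$ echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p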

Please note that not all sysctls are namespaced as of the Docker 19.03.0 CE Pre-Release. Docker does not support changing sysctls inside a container that would also modify the host system.

Docker does support setting namespaced kernel parameters at runtime, and runc honors this. Have a look:

$ docker run --runtime=runc --sysctl net.ipv4.ip_forward=1 -it alpine sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
bdf0201b3a05: Pull complete 
Digest: sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913
Status: Downloaded newer image for alpine:latest
/ # sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
/ # 

It is important to note that sysctl support is not new to Docker. Support for sysctls in Docker Compose first arrived with compose file format v2.1.

For example, to set kernel parameters in the container, you can use either a dictionary or an array:

sysctls:
  net.core.somaxconn: 1024
  net.ipv4.tcp_syncookies: 0

sysctls:
  - net.core.somaxconn=1024
  - net.ipv4.tcp_syncookies=0
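
What is new in 19.03 is that the same kind of tuning can be applied to a Swarm service straight from the CLI. A minimal sketch (the service name and image are just placeholders):

$ docker service create --name web --sysctl net.core.somaxconn=1024 nginx:alpine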

In this blog post, I will showcase how to use sysctl on a 2-node Docker Swarm cluster. Let us get started.

Installing a Node with Docker 19.03.0 Beta 1 Test Build on Ubuntu 18.10

Method I

Download the static binary archive. Go to https://download.docker.com/linux/static/stable/ (or change stable to nightly or test), choose your hardware platform, and download the .tgz file relating to the version of Docker CE you want to install.

Captain'sBay==>wget https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
--2019-04-10 09:20:01--  https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
Resolving download.docker.com (download.docker.com)... 54.230.75.15, 54.230.75.117, 54.230.75.202, ...
Connecting to download.docker.com (download.docker.com)|54.230.75.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62701372 (60M) [application/x-tar]
Saving to: ‘docker-19.03.0-beta1.tgz’
docker-19.03.0-beta1.tgz  100%[=====================================>]  59.80M  10.7MB/s    in 7.1s    
2019-04-10 09:20:09 (8.38 MB/s) - ‘docker-19.03.0-beta1.tgz’ saved [62701372/62701372]

Extract the archive

You can use the tar utility; the dockerd and docker binaries are extracted:

Captain'sBay==>tar xzvf docker-19.03.0-beta1.tgz 
docker/
docker/ctr
docker/containerd-shim
docker/dockerd
docker/docker-proxy
docker/runc
docker/containerd
docker/docker-init
docker/docker
Captain'sBay==>

Move the binaries to executable path

Move the binaries to a directory on your executable path, such as /usr/local/bin/. If you skip this step, you must provide the path to the executable when you invoke docker or dockerd commands.

Captain'sBay==>sudo cp -rf docker/* /usr/local/bin/

Start the Docker daemon:

$ sudo dockerd &

Then verify the installed version:

$ sudo docker version
Client: Docker Engine - Community
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.12.1
 Git commit:        62240a9
 Built:             Thu Apr  4 19:15:07 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.1
  Git commit:       62240a9
  Built:            Thu Apr  4 19:22:34 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Captain'sBay==>

Testing with hello-world

Captain'sBay==>sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
INFO[2019-04-10T09:26:23.338596029Z] shim containerd-shim started                  address="/containerd-shim/moby/5b23a7045ca683d888c9d1026451af743b7bf4005c6b8dd92b9e95e125e68134/shim.sock" debug=false pid=2953
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/
For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Creating a 2-Node Docker Swarm Mode Cluster

swarm-node-1:~$ sudo docker swarm init --advertise-addr 10.140.0.6 --listen-addr 10.140.0.6:2377
Swarm initialized: current node (c78wm1g99q1a1g2sxiuawqyps) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Run the below command on worker node:

swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
This node joined a swarm as a worker.

Listing the Swarm Mode Cluster

$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
rf3xns913p4tlprmu98z2o8hi     swarm-node2         Ready               Active                                  19.03.0-beta1
isbcijzlrft3ahpbzhgipwr9a *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

Running Multi-service Docker Compose for Redis

Redis is an open source, in-memory data structure store, used as a database, cache and message broker. Redis Commander is an application that allows users to explore a Redis instance through a browser. Let us look at the Docker Compose file below, which brings up both Redis and Redis Commander:

version: '3'
services:
  redis:
    hostname: redis
    image: redis

  redis-commander:
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
    - REDIS_HOSTS=local:redis:6379
    ports:
    - "8081:8081"

Ensure that Docker Compose is installed on your system; if not, install it using the commands below:

curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Run the command below to bring up the Redis application on Docker Swarm Mode:

$ sudo docker stack deploy -c docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating service myapp_redis-commander
Creating service myapp_redis

Verifying if the services are up and running:

~$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
myapp               2                   Swarm

~$ sudo docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                    PORTS
ucakpqi7ozg1        myapp_redis             replicated          1/1                 redis:latest
fxor8v90a4m0        myapp_redis-commander   replicated          0/1                 rediscommander/redis-commander:latest    *:8081->8081/tcp

Checking the service logs:

$ docker service logs -f myapp3_redis
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # Warning: no config file specified, using the default config. In order to specify a configfile use redis-server /path/to/redis.conf
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:M 17 Apr 2019 06:26:08.009 * Running mode=standalone, port=6379.
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:M 17 Apr 2019 06:26:08.009 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

As you can see above, there is a warning that /proc/sys/net/core/somaxconn is set to the lower value of 128.

Building a Docker Compose file using the sysctls parameter

Let us try to build a new Docker Compose file with the sysctls parameter specified:

Copy the below content and save it as a docker-compose.yml file.

version: '3'
services:
  redis:
    hostname: redis
    image: redis
    sysctls:
      net.core.somaxconn: 1024
  redis-commander:
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
    - REDIS_HOSTS=local:redis:6379
    ports:
    - "8081:8081"

Running Your Redis application

$ sudo docker stack deploy -c docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating service myapp_redis
Creating service myapp_redis-commander

$ sudo docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                    PORTS
2oxhaychob7s        myapp_redis             replicated          1/1                 redis:latest
pjdwti7hkg1q        myapp_redis-commander   replicated          1/1                 rediscommander/redis-commander:latest    *:80->8081/tcp

Verifying the Redis service logs

$ sudo docker service logs -f myapp_redis
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:C 17 Apr 2019 06:59:44.510 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:C 17 Apr 2019 06:59:44.510 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:M 17 Apr 2019 06:59:44.511 * Running mode=standalone, port=6379.

You can see that the warning around /proc/sys/net/core/somaxconn is no longer displayed, which shows that the sysctls parameter has taken effect.
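
If you want to double-check from inside the running task, you can exec into the container and read the value directly (a quick sanity check; the name filter assumes the stack name used above):

$ sudo docker exec $(sudo docker ps -qf name=myapp_redis.1) cat /proc/sys/net/core/somaxconn
1024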

In my next blog post, I will talk about rootless Docker and how to get it tested. Stay tuned!

Docker 19.03.0 Pre-Release: Fast Context Switching, Rootless Docker, Sysctl support for Swarm Services

Estimated Reading Time: 9 minutes

Last week Docker Community Edition 19.03.0 Beta 1 was announced, and the release notes went public here. Under this release, numerous exciting features were introduced for the first time. Some of the notable features include fast context switching, rootless Docker, sysctl support for Swarm services, and device support for Microsoft Windows.

Not only this, there were numerous enhancements around Docker Swarm, the Docker Engine API, networking, the Docker client, security & BuildKit.

Let’s talk about Context Switching…

A context is essentially the configuration that you use to access a particular cluster. Say, for example, in my particular case, I have 4 different clusters – a mix of Swarm and Kubernetes, running locally and remotely: a default cluster running on my desktop machine, a 2-node Swarm cluster running on Google Cloud Platform, a 5-node cluster running on the Play with Docker playground, and a single-node Kubernetes cluster running on Minikube – all of which I need to access pretty regularly. Using the docker context CLI, I can easily switch from one cluster (which could be my development cluster) to the test or production cluster in seconds.

In this blog post, I will focus on the fast context switching feature, which was introduced for the first time. Let’s get started:

Tested Infrastructure:

  • A node with Docker 19.03.0 Beta 1 installed on Ubuntu 18.10
  • A 2-node Docker Swarm cluster (swarm-node1 and swarm-node2) set up on Ubuntu 18.10
  • A 5-node Swarm cluster running on the Play with Docker platform
  • A 3-node Kubernetes cluster created using GKE

Installing a Node with Docker 19.03.0 Beta 1 Test Build on Ubuntu 18.10

Method I

Download the static binary archive. Go to https://download.docker.com/linux/static/stable/ (or change stable to nightly or test), choose your hardware platform, and download the .tgz file relating to the version of Docker CE you want to install.

Captain'sBay==>wget https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
--2019-04-10 09:20:01--  https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
Resolving download.docker.com (download.docker.com)... 54.230.75.15, 54.230.75.117, 54.230.75.202, ...
Connecting to download.docker.com (download.docker.com)|54.230.75.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62701372 (60M) [application/x-tar]
Saving to: ‘docker-19.03.0-beta1.tgz’
docker-19.03.0-beta1.tgz  100%[=====================================>]  59.80M  10.7MB/s    in 7.1s    
2019-04-10 09:20:09 (8.38 MB/s) - ‘docker-19.03.0-beta1.tgz’ saved [62701372/62701372]

Extract the archive

You can use the tar utility; the dockerd and docker binaries are extracted:

Captain'sBay==>tar xzvf docker-19.03.0-beta1.tgz 
docker/
docker/ctr
docker/containerd-shim
docker/dockerd
docker/docker-proxy
docker/runc
docker/containerd
docker/docker-init
docker/docker
Captain'sBay==>

Move the binaries to executable path

Move the binaries to a directory on your executable path, such as /usr/local/bin/. If you skip this step, you must provide the path to the executable when you invoke docker or dockerd commands.

Captain'sBay==>sudo cp -rf docker/* /usr/local/bin/

Start the Docker daemon:

$ sudo dockerd &

Then verify the installed version:

$ sudo docker version
Client: Docker Engine - Community
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.12.1
 Git commit:        62240a9
 Built:             Thu Apr  4 19:15:07 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.1
  Git commit:       62240a9
  Built:            Thu Apr  4 19:22:34 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Captain'sBay==>

Testing with hello-world

Captain'sBay==>sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
INFO[2019-04-10T09:26:23.338596029Z] shim containerd-shim started                  address="/containerd-shim/moby/5b23a7045ca683d888c9d1026451af743b7bf4005c6b8dd92b9e95e125e68134/shim.sock" debug=false pid=2953
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/
For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Method II

If you have less time and want a one-liner command to handle this, check this out:


root@DebianBuster:~# curl https://get.docker.com | CHANNEL=test sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13063  100 13063    0     0   2305      0  0:00:05  0:00:05 --:--:--  2971
# Executing docker install script, commit: 2f4ae48
+ sh -c apt-get update -qq >/dev/null
+ sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sh -c curl -fsSL "https://download.docker.com/linux/debian/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c echo "deb [arch=amd64] https://download.docker.com/linux/debian buster test" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ [ -n  ]
+ sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
+ sh -c docker version
Client:
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.11.5
 Git commit:        62240a9
 Built:             Thu Apr  4 19:18:53 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.11.5
  Git commit:       62240a9
  Built:            Thu Apr  4 19:17:35 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
root@DebianBuster:~#

Verifying Docker version

root@DebianBuster:~# docker version
Client:
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.11.5
 Git commit:        62240a9
 Built:             Thu Apr  4 19:18:53 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.11.5
  Git commit:       62240a9
  Built:            Thu Apr  4 19:17:35 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
root@DebianBuster:~#

Verifying docker context CLI

$ sudo docker context --help
Usage:  docker context COMMAND
Manage contexts
Commands:
  create      Create a context
  export      Export a context to a tar or kubeconfig file
  import      Import a context from a tar file
  inspect     Display detailed information on one or more contexts
  ls          List contexts
  rm          Remove one or more contexts
  update      Update a context
  use         Set the current docker context
Run 'docker context COMMAND --help' for more information on a command.

Creating a 2 Node Swarm Cluster

Install Docker 19.03.0 Beta 1 on both the nodes (using the same method discussed above). You can use a GCP Free Tier account to create the 2-node Swarm cluster.

Configuring remote access with systemd unit file

Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor.

Add or modify the following lines, substituting your own values.

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://10.140.0.6:2375

Save the file.

Reload the systemctl configuration:

$ sudo systemctl daemon-reload

Restart Docker:

$ sudo systemctl restart docker.service
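
At this point you can sanity-check that the daemon is actually answering on the TCP socket (substituting your own manager IP):

$ sudo docker -H tcp://10.140.0.6:2375 version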

Repeat these steps on the other nodes which you are planning to include in the Swarm Mode cluster.

swarm-node-1:~$ sudo docker swarm init --advertise-addr 10.140.0.6 --listen-addr 10.140.0.6:2377
Swarm initialized: current node (c78wm1g99q1a1g2sxiuawqyps) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Run the below command on worker node:

swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
This node joined a swarm as a worker.

Listing the Swarm Mode Cluster

root@swarm-node-1:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
0v5r9xmpbxzqpy72u41ihfck0     swarm-node2         Ready               Active                                  19.03.0-beta1
xwmay5i48xxbzlp7is7a3uord *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

Switching the Context

Listing the Context

node-1:~$ sudo docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm

Adding the new Context

docker context create --docker host=tcp://10.140.0.6:2375 swarm-context1
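
You can confirm that the endpoint was recorded correctly with docker context inspect (a quick check):

$ sudo docker context inspect swarm-context1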

Using the new context for Swarm

docker context use swarm-context1

Listing the contexts and the Swarm Mode cluster

$ sudo docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default             Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
swarm-context1 *                                              tcp://10.140.0.6:2375

$ sudo docker context ls --format '{{json .}}' | jq .
{
  "Current": true,
  "Description": "Current DOCKER_HOST based configuration",
  "DockerEndpoint": "unix:///var/run/docker.sock",
  "KubernetesEndpoint": "",
  "Name": "default",
  "StackOrchestrator": "swarm"
}
{
  "Current": false,
  "Description": "",
  "DockerEndpoint": "tcp://10.140.0.6:2375",
  "KubernetesEndpoint": "",
  "Name": "swarm-context1",
  "StackOrchestrator": ""
}
$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
xwmay5i48xxbzlp7is7a3uord *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
0v5r9xmpbxzqpy72u41ihfck0     swarm-node2         Ready               Active                                  19.03.0-beta1
xwmay5i48xxbzlp7is7a3uord *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

Context Switching to the remotely running Play with Docker (PWD) Platform

This is one of the most exciting parts of this blog. I simply love the PWD platform, as I find it a perfect playground for test-driving a Docker Swarm cluster. Just a click, and you get a 5-node Docker Swarm cluster in just 5 seconds.

Just click on the '3 Managers and 2 Workers' template and you get a 5-node Docker Swarm cluster for FREE.

Let us try to access this PWD cluster using docker context CLI.

Say, by default, we have the following contexts – the local Docker host, plus the swarm-context1 created earlier:

[:)Captain'sBay=>sudo docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                 ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://127.0.0.1:16443 (default)   swarm
swarm-context1                                                tcp://10.140.0.6:2375                                             

Let us go ahead and add the PWD context. As shown below, you need to pick up the FQDN name, but remember to remove the “@” and replace it with a “.” (dot).

[:)Captain'sBay=>sudo docker context create --docker host=tcp://ip172-18-0-5-biosq9o6chi000as1470.direct.labs.play-with-docker.com:2375 pwd-cluster1
pwd-cluster1
Successfully created context "pwd-cluster1"

This creates a context by name “pwd-cluster1”. You can verify it by listing out the current contexts available.

[:)Captain'sBay=>sudo docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT                                                                  KUBERNETES ENDPOINT                 ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                                                      https://127.0.0.1:16443 (default)   swarm
pwd-cluster1                                                  tcp://ip172-18-0-5-biosq9o6chi000as1470.direct.labs.play-with-docker.com:2375
swarm-context1                                                tcp://10.140.0.6:2375
[:)Captain'sBay=>

Let us switch to pwd-cluster1 by simply using the docker context use command.

[:)Captain'sBay=>sudo docker context use pwd-cluster1
pwd-cluster1
Current context is now "pwd-cluster1"

Listing the nodes and verifying that we are indeed pointing to the right PWD cluster:

[:)Captain'sBay=>sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
wnrz5fks5drzs9agkyl8z3ffi *   manager1            Ready               Active              Leader              18.09.4
dcweon0icoolfs3kirj0p3qgg     manager2            Ready               Active              Reachable           18.09.4
f78bkvfbzot2jkr2n6cen7240     manager3            Ready               Active              Reachable           18.09.4
xla6nb5ql5i6pkjruyxpc1hzk     worker1             Ready               Active                                  18.09.4
45nk1t94ympplgaasiryunwvk     worker2             Ready               Active                                  18.09.4
[:)Captain'sBay=>

In my next blog post, I will showcase how to switch to Kubernetes cluster running on Minikube and GKE using compose on Kubernetes.

If you are a beginner and looking out to build your career in Docker | Kubernetes | Cloud, do visit DockerLabs. Join our community by clicking here. Thank You.

How I built Elastic Stack for Docker Swarm using Docker Application Packages (docker-app)

Estimated Reading Time: 6 minutes 

Let’s begin with the problem statement!

DockerHub is a cloud-based registry service which allows you to link to code repositories, build your images, test them, and store manually pushed images so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, as well as workflow automation throughout the development pipeline. We share Docker images all the time, but let’s agree to the fact that we don’t have a good way of sharing the multi-service applications that use them.

Let us take an example of Elastic Stack. Built on an open source foundation, the Elastic Stack lets you reliably and securely take data from any source, in any format, and search, analyze, and visualize it in real time with the help of Elasticsearch, Logstash, Kibana and multiple other tools and techniques. To run these tools as containers, one needs to build a Docker image for each of them. The recommended way is to construct a Dockerfile for each tool. In turn, Docker Compose uses these images to build the required services. Whenever the docker stack deploy CLI is used to deploy the application stack, these Docker images are pulled from DockerHub the first time, and then picked up locally once downloaded to your system. What if you could upload your whole application stack to DockerHub? Yes, it’s possible today, and docker-app is the tool which can make Compose-based applications shareable on Docker Hub and DTR.

Docker-app v0.5.0 is now Available !

Docker Application Package v0.5.0 is the latest offering from Docker, Inc. You can download it from this link. The binaries are available for the Linux, Windows and macOS platforms. If you are looking for the source code, this is the direct link.

The docker-app v0.5.0 comes with notable features and improvements which are listed below:

  • The improved docker-app inspect command now shows a summary of services, networks, volumes and secrets.

  • The docker-app push CLI now works on Windows and bypasses the local docker daemon by talking directly to the registry.
  • The docker-app save and docker-app ls have been obsoleted.
  • All commands now accept an application package as a URL.
  • The docker-app push command now accepts a custom repository name.
  • The docker-app completion command can generate zsh completion in addition to bash.

In my last blog post, I talked about docker-app for the first time and showcased its usage soon after I returned from DockerCon. In this post, I will show how I built the Elastic Stack using docker-app for a 5-node Docker Swarm cluster.

Prerequisite:

  • Click on the icon near the instance to choose 3 Managers & 2 Worker Nodes

Deploy 5 Node Docker Swarm Cluster

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
iy9mbeduxd4mmjxoikbn5ulds *   manager1            Ready               Active              Reachable           18.03.1-ce
mx916kgqg6gfgqdr2gn1eksxy     manager2            Ready               Active              Leader              18.03.1-ce
xaeq943o84g9spy6mebj64tw3     manager3            Ready               Active              Reachable           18.03.1-ce
8umdv6m82nrpevuris1e45wnq     worker1             Ready               Active                                  18.03.1-ce
o3yobqgg7wjvjw2ec5ythszgw     worker2             Ready               Active                                  18.03.1-ce

Cloning the Repository

$ git clone https://github.com/ajeetraina/app
Cloning into 'app'...
remote: Enumerating objects: 134, done.
remote: Counting objects: 100% (134/134), done.
remote: Compressing objects: 100% (134/134), done.
remote: Total 14511 (delta 95), reused 0 (delta 0), pack-reused 14377
Receiving objects: 100% (14511/14511), 17.37 MiB | 13.35 MiB/s, done.
Resolving deltas: 100% (5391/5391), done.

Install Docker-app

$ cd app/examples/elk/
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ ls
README.md          devel              elk.dockerapp      install-dockerapp  prod
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ chmod +x install-dockerapp
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ sh install-dockerapp
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.131.187:443)
docker-app-linux.tar 100% |*************************************************************|  8895k  0:00:00 ETA
[manager1] (local) root@192.168.0.30 ~/app/examples/elk

Verifying Docker-app Version

$ docker-app version
Version:      v0.4.0
Git commit:   525d93bc
Built:        Tue Aug 21 13:02:46 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

I assume you already have a Docker Compose file for the ELK stack application available with you. If not, you can download a sample file from this link. Place this YAML file under the same directory (app/examples/elk/). Now, with docker-app installed, let’s create an Application Package based on this Compose file:

$ docker-app init elk

Once you run the above command, it creates a new directory elk.dockerapp/ that contains three different YAML files:

docker-compose.yml  elk.dockerapp
[manager1] (local) root@192.168.0.30 ~/myelk
$ tree elk.dockerapp/
elk.dockerapp/
├── docker-compose.yml
├── metadata.yml
└── settings.yml

0 directories, 3 files

Edit each of these files to look similar to what is placed under this link.

Rendering Docker Compose file

$ docker-app render elk
version: "3.4"
services:
  elasticsearch:
    command:
    - elasticsearch
    - -Enetwork.host=0.0.0.0
    - -Ediscovery.zen.ping.unicast.hosts=elasticsearch
    deploy:
      mode: replicated
      replicas: 2
    environment:
      ES_JAVA_OPTS: -Xms2g -Xmx2g
    image: elasticsearch:5
    networks:
      elk: null
    volumes:
    - type: volume
      target: /usr/share/elasticsearch/data
  kibana:
    deploy:
      mode: replicated
      replicas: 2
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    healthcheck:
      test:
      - CMD-SHELL
      - wget -qO- http://localhost:5601 > /dev/null
      interval: 30s
      retries: 3
    image: kibana:latest
    networks:
      elk: null
    ports:
    - mode: ingress
      target: 5601
      published: 5601
      protocol: tcp
  logstash:
    command:
    - sh
    - -c
    - logstash -e 'input { syslog  { type => syslog port => 10514   } gelf { } } output
      { stdout { codec => rubydebug } elasticsearch { hosts => [ "elasticsearch" ]
      } }'
    deploy:
      mode: replicated
      replicas: 2
    hostname: logstash
    image: logstash:alpine
    networks:
      elk: null
    ports:
    - mode: ingress
      target: 10514
      published: 10514
      protocol: tcp
    - mode: ingress
      target: 10514
      published: 10514
      protocol: udp
    - mode: ingress
      target: 12201
      published: 12201
      protocol: udp
networks:
  elk: {}

Setting the kernel parameter for ELK stack

sysctl -w vm.max_map_count=262144
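
Keep in mind that sysctl -w only lasts until the next reboot and has to be applied on every node that may schedule an Elasticsearch task. A minimal way to persist it (assuming root access on each node):

$ echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p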

Deploying the Application Stack


[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app deploy elk --settings-files elk.dockerapp/settings.yml
Creating network elk_elk
Creating service elk_kibana
Creating service elk_logstash
Creating service elk_elasticsearch
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$

Inspecting ELK Stack

[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app inspect elk
myelk 0.1.0
Maintained by: Ajeet_Raina <ajeetraina@gmail.com>

ELK using Dockerapp

Setting                       Default
-------                       -------
elasticsearch.deploy.mode     replicated
elasticsearch.deploy.replicas 2
elasticsearch.image.name      elasticsearch:5
kibana.deploy.mode            replicated
kibana.deploy.replicas        2
kibana.image.name             kibana:latest
kibana.port                   5601
logstash.deploy.mode          replicated

Verifying Stack services are up & running

[manager1] (local) root@192.168.0.30 ~/app/examples/elk/docker101/play-with-docker/visualizer
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
uk2whax6f3jq        elk_elasticsearch   replicated          2/2                 elasticsearch:5
nm4p3yswvh5y        elk_kibana          replicated          2/2                 kibana:latest       *:5601->5601/tcp
g5ubng6rhcyp        elk_logstash        replicated          2/2                 logstash:alpine     *:10514->10514/tcp, *:10514->10514/udp, *:12201->12201/udp
[manager1] (local) root@192


Logging in to DockerHub

[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to
 https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded

Pushing the App package to DockerHub


[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app push --namespace ajeetraina --tag 1.0.2
The push refers to repository [docker.io/ajeetraina/elk.dockerapp]
15e73d68a400: Pushed
1.0.2: digest: sha256:c5a8e3b7e2c7a5566a3e4247f8171516033e7e9791dfdb6ebe622d3830884d9b size: 524
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$

Important Note: If you are using docker-app v0.5.0, you might face an issue related to pulling the image from DockerHub, as it reports an unsupported OS error message. Here’s a link to this open issue.

Testing the Application Package

Open up a new PWD window. Install docker-app as shown above and try to run the below command:

docker-app deploy ajeetraina/elk.dockerapp:1.0.2

This should bring up your complete Elastic Stack Platform.

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you want to keep track of latest Docker related information, follow me at https://www.linkedin.com/in/ajeetsraina/.

A First Look at Docker Application Package “docker-app”

Estimated Reading Time: 9 minutes 

Did you know? There are more than 300,000 Docker Compose files on GitHub.

Docker Compose is a tool for defining and running multi-container Docker applications. It is an amazing developer tool for creating the development environment for your application stack. It allows you to define each component of your application following a clear and simple syntax in YAML files. It works in all environments: production, staging, development, testing, as well as CI workflows. Though Compose files make it easy to describe a set of related services, a couple of problems have emerged in the past. One of the major concerns has been deploying the application to multiple environments with small configuration differences.

Consider a scenario where you have separate development, test, and production environments for your web application. In the development environment, your team might be spending time building up the web application (say, WordPress), developing WP plugins and templates, debugging issues etc. When you are in development, you’ll probably want to check your code changes in real time. The usual way to do this is mounting a volume with your source code in the container that has the runtime of your application. But for production this works differently. Before you host your web application in the production environment, you might want to turn off the debug mode and host it under the right port so as to test your application’s usability and accessibility. In production you have a cluster with multiple nodes, and in most cases a volume is local to the node where your container (or service) is running, so you cannot mount the source code without complex setups that involve code synchronization, signals, etc. In a nutshell, this might require multiple Docker Compose files for each environment, and as your number of service applications increases, it becomes more cumbersome to manage those pieces of Compose files. Hence, we need a tool which can ease the way Compose files are shared across different environments seamlessly.

To solve this problem, Docker, Inc. recently announced a new tool called “docker-app” (Application Packages) which makes “Compose files more reusable and shareable”. This tool not only makes Compose files shareable but provides a simplified approach to sharing multi-service applications (not just Docker images) directly on DockerHub.

In this blog post, I will showcase how the docker-app tool makes it easier to use Docker Compose for sharing and collaboration, and how to push an application package directly to DockerHub. Let us get started.

Prerequisite:

  • Click on the icon near the instance to choose 3 Managers & 2 Worker Nodes

Deploy 5 Node Swarm Mode Cluster

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
juld0kwbajyn11gx3bon9bsct *   manager1            Ready               Active              Leader              18.03.1-ce
uu675q2209xotom4vys0el5jw     manager2            Ready               Active              Reachable           18.03.1-ce
05jewa2brfkvgzklpvlze01rr     manager3            Ready               Active              Reachable           18.03.1-ce
n3frm1rv4gn93his3511llm6r     worker1             Ready               Active                                  18.03.1-ce
50vsx5nvwx5rbkxob2ua1c6dr     worker2             Ready               Active                                  18.03.1-ce

Cloning the Repository

$ git clone https://github.com/ajeetraina/app
Cloning into 'app'...
remote: Counting objects: 14147, done.
remote: Total 14147 (delta 0), reused 0 (delta 0), pack-reused 14147
Receiving objects: 100% (14147/14147), 17.32 MiB | 18.43 MiB/s, done.
Resolving deltas: 100% (5152/5152), done.

Installing docker-app

wget https://github.com/docker/app/releases/download/v0.3.0/docker-app-linux.tar.gz
tar xf docker-app-linux.tar.gz
cp docker-app-linux /usr/local/bin/docker-app

OR

$ ./install.sh
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.227.152:443)
docker-app-linux.tar 100% |**************************************************************|  8780k  0:00:00 ETA
[manager1] (local) root@192.168.0.13 ~/app
$ 

Verify docker-app version

$ docker-app version
Version:      v0.3.0
Git commit:   fba6a09
Built:        Fri Jun 29 13:09:30 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

The docker-app tool comes with various options as shown below:

$ docker-app
Build and deploy Docker applications.

Usage:
  docker-app [command]

Available Commands:
  deploy      Deploy or update an application
  helm        Generate a Helm chart
  help        Help about any command
  init        Start building a Docker application
  inspect     Shows metadata and settings for a given application
  ls          List applications.
  merge       Merge the application as a single file multi-document YAML
  push        Push the application to a registry
  render      Render the Compose file for the application
  save        Save the application as an image to the docker daemon(in preparation for push)
  split       Split a single-file application into multiple files
  version     Print version information

Flags:
      --debug   Enable debug mode
  -h, --help    help for docker-app

Use "docker-app [command] --help" for more information about a command.
[manager1] (local) root@192.168.0.48 ~/app

WordPress Application under dev & Prod environment

If you browse to app/examples/wordpress directory under GitHub Repo, you will see that there is a folder called wordpress.dockerapp that contains three YAML documents:

  • metadata
  • the Compose file
  • settings for your application

Okay, fine! But how were those files created?

The docker-app tool comes with an “init” option which initializes an application with the above 3 YAML files; the directory structure can be initialized with the command below:

docker-app init --single-file wordpress

I have already created a directory structure for my environments, and you can find a few examples under this directory.

Listing the WordPress Application package related files/directories

$ ls
README.md            install-wp           with-secrets.yml
devel                prod                 wordpress.dockerapp

As you can see above, I have created a folder for each environment – dev and prod. Under these directories, I have created prod-settings.yml and dev-settings.yml. You can view their content via this link.
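
For reference, based on the rendered output shown in the next section, creating the dev settings file could look roughly like this (an illustrative sketch, not the exact file from the repository):

$ cat > devel/dev-settings.yml <<'EOF'
debug: true
wordpress:
  port: 8082
EOF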

WordPress Application Package for the Dev Environment

I can pass the “-f” <YAML> parameter to the docker-app tool to render the respective environment settings seamlessly, as shown below:

$ docker-app render wordpress -f devel/dev-settings.yml
version: "3.6"
services:
  mysql:
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: dnsrr
    environment:
      MYSQL_DATABASE: wordpressdata
      MYSQL_PASSWORD: wordpress
      MYSQL_ROOT_PASSWORD: wordpress101
      MYSQL_USER: wordpress
    image: mysql:5.6
    networks:
      overlay: null
    volumes:
    - type: volume
      source: db_data
      target: /var/lib/mysql
  wordpress:
    depends_on:
    - mysql
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: vip
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_NAME: wordpressdata
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DEBUG: "true"
    image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 8082
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

WordPress Application Package for Prod

Under the prod environment, I have the following content in prod/prod-settings.yml:

debug: false
wordpress:
  port: 80

For the production environment, it is obvious that I want my application to be exposed on the standard port 80. After rendering, you should be able to see port 80 exposed, as shown in the snippet below:

 image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 80
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

Inspect the WordPress App

$ docker-app inspect wordpress
wordpress 1.0.0
Maintained by: ajeetraina <ajeetraina@gmail.com>

Welcome to Collabnix

Setting                       Default
-------                       -------
debug                         true
mysql.database                wordpressdata
mysql.image.version           5.6
mysql.rootpass                wordpress101
mysql.scale.endpoint_mode     dnsrr
mysql.scale.mode              replicated
mysql.scale.replicas          1
mysql.user.name               wordpress
mysql.user.password           wordpress
volumes.db_data.name          db_data
wordpress.port                8081
wordpress.scale.endpoint_mode vip
wordpress.scale.mode          replicated
wordpress.scale.replicas      1
[manager1] (local) root@192.168.0.13 ~/app/examples/wordpress
$

Deploying the WordPress App

$ docker-app deploy wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress

Switching to the Dev Environment

If I want to switch back to the dev environment, all I need to do is pass the dev-specific YAML file using the “-f” parameter, as shown below:

$ docker-app deploy wordpress -f devel/dev-settings.yml

Switching to the Prod Environment

$ docker-app deploy wordpress -f prod/prod-settings.yml

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f devel/dev-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f prod/prod-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Pushing Application Package to Dockerhub

So I have my application ready to be pushed to DockerHub. Yes, you heard it right: application packages, NOT Docker images.

Let me first authenticate myself before I push it to the DockerHub registry:

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to
 https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Saving this Application Package as Docker Image

The docker-app CLI is feature-rich and allows you to save the entire application as a Docker image. Let’s try it out:

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app save wordpress
Saved application as image: wordpress.dockerapp:1.0.0
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the images

$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
wordpress.dockerapp   1.0.0               c1ec4d18c16c        47 seconds ago      1.62kB
mysql                 5.6                 97fdbdd65c6a        3 days ago          256MB
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the services

$ docker stack services wordpress
ID                  NAME                  MODE                REPLICAS            IMAGE               PORTS
l95b4s6xi7q5        wordpress_wordpress   replicated          1/1                 wordpress:latest    *:80->80/tcp
lhr4h2uaer86        wordpress_mysql       replicated          1/1                 mysql:5.6
[manager1] (local) root@192.168.0.48 ~/docker101/play-with-docker/visualizer

Using docker-app ls command to list out the application packages

The ls command was recently introduced in v0.3.0. Let us try it out:

$ docker-app ls
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
wordpress.dockerapp   1.0.1               299fb78857cb        About a minute ago   1.62kB
wordpress.dockerapp   1.0.0               c1ec4d18c16c        16 minutes ago       1.62kB

Pushing it to Dockerhub

$ docker-app push --namespace ajeetraina --tag 1.0.1
The push refers to repository [docker.io/ajeetraina/wordpress.dockerapp]
51cfe2cfc2a8: Pushed
1.0.1: digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9 size: 524

Say you built the WordPress application package and pushed it to Dockerhub. Now one of your colleagues wants to pull it onto his development system and deploy it in his environment.

Pulling it from Dockerhub

$ docker pull ajeetraina/wordpress.dockerapp:1.0.1
1.0.1: Pulling from ajeetraina/wordpress.dockerapp
a59931d48895: Pull complete
Digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9
Status: Downloaded newer image for ajeetraina/wordpress.dockerapp:1.0.1
[manager3] (local) root@192.168.0.24 ~/app
$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        8 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$

Deploying the Application

$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        9 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$ docker-app deploy ajeetraina/wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress
[manager3] (local) root@192.168.0.24 ~/app
$

Using the docker-app merge option

The Docker team has introduced the docker-app merge option under the new 0.3.0 release.

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app merge -o mywordpress
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ ls
README.md            install-wp           prod                 wordpress.dockerapp
devel                mywordpress          with-secrets.yml
$ cat mywordpress
version: 1.0.1
name: wordpress
description: "Welcome to Collabnix"
maintainers:
  - name: ajeetraina
    email: ajeetraina@gmail.com
targets:
  swarm: true
  kubernetes: true

---
version: "3.6"

services:

  mysql:
    image: mysql:${mysql.image.version}
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.rootpass}
      MYSQL_DATABASE: ${mysql.database}
      MYSQL_USER: ${mysql.user.name}
      MYSQL_PASSWORD: ${mysql.user.password}
    volumes:
       - source: db_data
         target: /var/lib/mysql
         type: volume
    networks:
       - overlay
    deploy:
      mode: ${mysql.scale.mode}
      replicas: ${mysql.scale.replicas}
      endpoint_mode: ${mysql.scale.endpoint_mode}

  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_USER: ${mysql.user.name}
      WORDPRESS_DB_PASSWORD: ${mysql.user.password}
      WORDPRESS_DB_NAME: ${mysql.database}
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DEBUG: ${debug}
    ports:
      - "${wordpress.port}:80"
    networks:
      - overlay
    deploy:
      mode: ${wordpress.scale.mode}
      replicas: ${wordpress.scale.replicas}
      endpoint_mode: ${wordpress.scale.endpoint_mode}
    depends_on:
      - mysql

volumes:
  db_data:
     name: ${volumes.db_data.name}

networks:
  overlay:

---
debug: true
mysql:
  image:
    version: 5.6
  rootpass: wordpress101
  database: wordpressdata
  user:
    name: wordpress
    password: wordpress
  scale:
    endpoint_mode: dnsrr
    mode: replicated
    replicas: 1
wordpress:
  scale:
    mode: replicated
    replicas: 1
    endpoint_mode: vip
  port: 8081
volumes:
  db_data:
    name: db_data

docker-app comes with a few other helpful commands as well, in particular the ability to create Helm charts from your Docker applications. This can be useful if you’re adopting Kubernetes and standardising on Helm to manage the lifecycle of your application components, but want to maintain the simplicity of Compose when writing your applications. This also makes it easy to run the same applications locally just using Docker, if you don’t want to run a full Kubernetes cluster.
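
For example, generating a chart from the WordPress application above is a one-liner (a sketch; the helm subcommand appears in the CLI help shown earlier, and I’m assuming the chart is written alongside the application package):

$ docker-app helm wordpress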

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you want to keep track of latest Docker related information, follow me at https://www.linkedin.com/in/ajeetsraina/.

Test Drive Elastic stack on PWD platform running Docker 17.06 CE Swarm Mode in 5 minutes

Estimated Reading Time: 6 minutes

Let’s talk about Dockerized Elastic Stack…

Elastic Stack is an open source solution that lets you reliably and securely take data from any source, in any format, and search, analyze, and visualize it in real time. It is a collection of open source products – Elasticsearch, Logstash, Kibana & the recently added fourth product called Beats. Elastic Stack can be deployed on premises or made available as Software as a Service.

Brief about Elastic Stack Components:

Elasticsearch:

Elasticsearch is a RESTful, distributed, highly scalable, JSON-based search and analytics engine built on top of Apache Lucene and released under Apache license. It is Java-based and designed for horizontal scalability, maximum reliability, and easy management. It is basically an open-source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.

~ Source: https://www.elastic.co

Logstash:

Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash” (Elasticsearch). Logstash is a dynamic data collection pipeline with an extensible plugin ecosystem and strong Elasticsearch synergy. The product was originally optimized for log data but has expanded its scope to take data from all sources. Data is often scattered or siloed across many systems in many formats. Logstash supports a variety of inputs that pull in events from a multitude of common sources, all at the same time. Easily ingest from your logs, metrics, web applications, data stores, and various AWS services, all in a continuous, streaming fashion. As data travels from source to store, Logstash filters parse each event, identify named fields to build structure, and transform them to converge on a common format for easier, accelerated analysis and business value.

Logstash dynamically transforms and prepares your data regardless of format or complexity:

  • Derive structure from unstructured data with grok
  • Decipher geo coordinates from IP addresses
  • Anonymize PII data, exclude sensitive fields completely
  • Ease overall processing independent of the data source, format, or schema.

Logstash has a pluggable framework featuring over 200 plugins. Mix, match, and orchestrate different inputs, filters, and outputs to work in pipeline harmony.

Kibana:

Lastly, Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. It gives you the freedom to select the way you give shape to your data. And you don’t always have to know what you’re looking for. With its interactive visualizations, start with one question and see where it leads you. Kibana developer tools offer powerful ways to help developers interact with the Elastic Stack. With Console, you can bypass using curl from the terminal and tinker with your Elasticsearch data directly. The Search Profiler lets you easily see where time is spent during search requests. And authoring complex grok patterns in your Logstash configuration becomes a breeze with the Grok Debugger.

In the next 5 minutes, we are going to test drive the ELK stack on the PWD playground.

Let’s get started –

Open up https://play-with-docker.com

 

Click on the icon next to Instances to open up ready-made templates for Docker Swarm Mode:

 

Choose the first template (as highlighted in the above figure) to select 3 Managers and 2 Workers. It will bring up Docker 17.06 Swarm Mode cluster in just 10 seconds.

Run the below command to show up the cluster nodes:

docker node ls

Run the necessary commands on the node which will run elasticsearch:

sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf

Clone the GitHub repository:

git clone https://github.com/ajeetraina/docker101
cd docker101/play-with-docker/visualizer

Run the below command to bring up the visualiser tool:
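
The visualizer directory ships with a docker-compose file, so this is presumably just (the same step is used later in this guide):

docker-compose up -d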

Soon you will notice port 8080 displayed on the top of the page, which when clicked will open up the visualiser tool.

It’s time to clone ELK stack and execute the below command to bring up ELK stack across Docker 17.06 Swarm Mode cluster:

git clone https://github.com/ajeetraina/swarm-elk
cd swarm-elk
docker stack deploy -c docker-compose.yml myself

 

[Credits to Andrew Hromis for building this docker-compose file. I leveraged his project repository to bring up the ELK stack in the first try]

You will soon see the below list of containers appearing on the nodes:

Run the below command to see the list of services running across the cluster:

docker service ls

Click on port 5601 displayed on the top of the PWD page:

Please Note: Kibana needs data in Elasticsearch to work with. The .kibana index holds Kibana-related data, and if that is the only index you have, there is no data available that Kibana can visualise. Before you can use Kibana, you will therefore need to index some data into Elasticsearch. This can be done e.g. using Logstash or directly through the REST interface using curl.

Soon you will see the below Kibana page:

 

Enabling High Availability for Elastic Stack through scaling

Let us scale the elasticsearch service out to more replicas:
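
Assuming the stack was deployed as myself above and the service follows the usual <stack>_<service> naming, this would look something like (replica count illustrative):

docker service scale myself_elasticsearch=3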

Pushing data into Logstash:

Example #1:

Let us push NGINX web server logs into logstash and see if Kibana is able to detect it:

docker run -d --name nginx-with-syslog --log-driver=syslog --log-opt syslog-address=udp://10.0.173.7:12201 -p 80:80 nginx:alpine

Now if you open up Kibana UI, you should be able to see logs being displayed for Nginx:

Example #2:

We can also push logs to logstash using the below command:

docker run --rm -it --log-driver=gelf --log-opt gelf-address=udp://10.0.173.7:12201 alpine ping 8.8.8.8

Open up Kibana and now you will see the below GREEN status:

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

Docker 17.06 Swarm Mode: Now with built-in MacVLAN & Node-Local Networks support

Estimated Reading Time: 4 minutes

Docker 17.06.0-ce-RC5 got announced 5 days back and is available for testing. It brings numerous new features & enablements under this upcoming release. A few of my favourites include support for secrets on Windows, the ability to specify a secret location within the container, a --format option for the docker system df command, support for placement preferences in docker stack deploy, monitored resource type metadata for the GCP logging driver, and new build & engine info Prometheus metrics, to list a few. But one of the most notable and awaited features is support for swarm-mode services with node-local networks such as macvlan, ipvlan, bridge and host.

Under the upcoming 17.06 release, Docker provides support for local scope networks in Swarm. This includes any local scope network driver. Some examples of these are bridge, host, and macvlan, though any local scope network driver, built-in or plug-in, will work with Swarm. Previously only swarm scope networks like overlay were supported. This is great news for all Docker networking enthusiasts.

A Brief Intro to MacVLAN:


In case you’re new to it, the MACVLAN driver provides direct access between containers and the physical network. It also allows containers to receive routable IP addresses that are on the subnet of the physical network.

MACVLAN offers a number of unique features and capabilities. It has positive performance implications by virtue of having a very simple and lightweight architecture. Its use cases include very low latency applications and networking designs that require containers to sit on the same subnet as, and use IPs from, the external host network. The macvlan driver uses the concept of a parent interface. This interface can be a physical interface such as eth0, a sub-interface for 802.1q VLAN tagging like eth0.10 (.10 representing VLAN 10), or even a bonded host adaptor which bundles two Ethernet interfaces into a single logical interface.

To test-drive MacVLAN under Swarm Mode, I will leverage my existing Swarm Mode cluster running on a VMware ESXi system. I have tested it on bare metal systems and VirtualBox and it works equally well.

[Updated: 9/27/2017 – I have added docker-stack.yml at the end of this guide to show you how to build services out of docker-compose.yml file. DO NOT FORGET TO CHECK IT OUT]

Installing Docker 17.06 on all the Nodes:

curl -fsSL https://test.docker.com > install-docker.sh
sh install-docker.sh

 

Verifying the latest Docker version:


 

Setting up 2 Node Swarm Mode Cluster:
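
The standard two-node bring-up looks like this (placeholder token and IP shown):

manager1==>sudo docker swarm init --advertise-addr <manager-IP>
worker1==>sudo docker swarm join --token <worker-token> <manager-IP>:2377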

 

 

Attention VirtualBox Users: In case you are using VirtualBox, the MACVLAN driver requires the network and interfaces to be in promiscuous mode.

A local network config is created on each host. The config holds host-specific information, such as the subnet allocated for this host’s containers. --ip-range is used to specify a pool of IP addresses that is a subset of IPs from the subnet. This is one method of IPAM to guarantee unique IP allocations.

Manager:

manager1==>sudo docker network create --config-only --subnet 100.98.26.0/24 -o parent=ens160.60 --ip-range 100.98.26.100/24 collabnet

 

Worker-1:

worker1==>sudo docker network create --config-only --subnet 100.98.26.0/24 -o parent=ens160.60 --ip-range 100.98.26.100/24 collabnet

 

 

Instantiating the macvlan network globally

Manager:

manager1==> $sudo docker network create -d macvlan --scope swarm --config-from collabnet swarm-macvlan

 

Deploying a service to the swarm-macvlan network:

Let us go ahead and deploy the WordPress application. We will be creating 2 services – wordpressapp and wordpressdb1 – and attaching them to the “swarm-macvlan” network as shown below:

Creating Backend Service:

docker service create --replicas 1 --name wordpressdb1 --network swarm-macvlan --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql

Let us verify if MacVLAN network scope holds this container:
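
One way to check is to inspect the network and look for the task under its Containers section:

docker network inspect swarm-macvlan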

 

Creating Frontend Service

Next, it’s time to create the wordpress application, i.e. wordpressapp:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network swarm-macvlan --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Verify if both the services are up and running:
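
The usual check:

docker service ls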

 

Verifying that all the containers on the master node pick up the desired IP address from the subnet:

 

Docker Compose File showcasing MacVLAN Configuration

Ensure that you run the below commands to set up the MacVLAN configuration for your services before you execute the docker stack deploy CLI shown further down:

root@ubuntu-1610:~# docker network create --config-only --subnet 100.98.26.0/24 --gateway 100.98.26.1 -o parent=ens160.60 --ip-range 100.98.26.120/24 collabnet
da2912d762cbf5f5ea412e6e4d69352a3285f720e23740529af9e533c7168729
 
root@ubuntu-1610:~#docker network create -d macvlan --scope swarm --config-from collabnet swarm-macvlan
jp76lts6hbbheqlbbhggumujd

 

Verify that network inspection shows the correct information:

root@ubuntu-1610:~/docker101/play-with-docker/wordpress/example1# docker network inspect swarm-macvlan
[
    {
        "Name": "swarm-macvlan",
        "Id": "jp76lts6hbbheqlbbhggumujd",
        "Created": "2017-09-27T02:12:00.827562388-04:00",
        "Scope": "swarm",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "100.98.26.0/24",
                    "IPRange": "100.98.26.120/24",
                    "Gateway": "100.98.26.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": "collabnet"
        },
        "ConfigOnly": false,
        "Containers": {
            "3c3f1ec48225ef18e8879f3ebea37c2d0c1b139df131b87adf05dc4d0f4d8e3f": {
                "Name": "myapp2_wordpress.1.nd2m62alxmpo2lyn079x0w9yv",
                "EndpointID": "a15e96456870590588b3a2764da02b7f69a4e63c061dda2798abb7edfc5e5060",
                "MacAddress": "02:42:64:62:1a:02",
                "IPv4Address": "100.98.26.2/24",
                "IPv6Address": ""
            },
            "d47d9ebc94b1aa378e73bb58b32707643eb7f1fff836ab0d290c8b4f024cee73": {
                "Name": "myapp2_db.1.cxz3y1cg1m6urdwo1ixc4zin7",
                "EndpointID": "201163c233fe385aa9bd8b84c9d6a263b18e42893176271c585df4772b0a2f8b",
                "MacAddress": "02:42:64:62:1a:03",
                "IPv4Address": "100.98.26.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "parent": "ens160"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "ubuntu-1610-1633ea48e392",
                "IP": "100.98.26.60"
            }
        ]
    }
]

Docker Stack Deploy CLI:

docker stack deploy -c docker-stack.yml myapp2
Ignoring unsupported options: restart
Creating service myapp2_db
Creating service myapp2_wordpress

Verifying if the services are up and running:

root@ubuntu-1610:~# docker stack ls
NAME    SERVICES
myapp2  2
root@ubuntu-1610:~# docker stack ps myapp2
ID            NAME                IMAGE             NODE         DESIRED STATE  CURRENT STATE           ERROR  PORTS
nd2m62alxmpo  myapp2_wordpress.1  wordpress:latest  ubuntu-1610  Running        Running 15 minutes ago
cxz3y1cg1m6u  myapp2_db.1         mysql:5.7         ubuntu-1610  Running        Running 15 minutes ago

Looking for Docker Compose file for Single Node?
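
For a single Docker host you don't need the config-only/swarm-scope steps; a compose file along these lines should do (a minimal sketch assuming the same subnet and parent interface as above; the web service is illustrative):

version: "3"

services:
  web:
    image: nginx:alpine
    networks:
      - collabnet

networks:
  collabnet:
    driver: macvlan
    driver_opts:
      parent: ens160.60
    ipam:
      config:
        - subnet: 100.98.26.0/24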

 

Cool..I am going to leverage this for my Apache JMeter Setup so that I can push loads from different IPs using Docker containers.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more what’s new upcoming under Docker 17.06 CE release by clicking on this link.

Topology Aware Scheduling under Docker v17.05.0 Swarm Mode Cluster

Estimated Reading Time: 5 minutes 

Docker 17.05.0 Final release went public exactly 2 weeks back. This community release was the first release as part of the new Moby project. With this release, numerous interesting features like Multi-Stage Build support in the builder, using build-time ARG in FROM, and DEB packaging for Ubuntu 17.04 (Zesty Zapus) have been included. With this latest release, Docker Team brought numerous new features and improvements in terms of Swarm Mode, for example – synchronous service commands, automatic service rollback on failure, improvements to the raft transport package, service logs formatting etc.


 

Placement Preference under Swarm Mode:

One of the prominent new features introduced is placement preference under 17.05.0-CE Swarm Mode. The placement preference feature allows you to divide tasks evenly over different categories of nodes, for example to balance tasks between multiple datacenters or availability zones. One can use a placement preference to spread out tasks to multiple datacenters and make the service more resilient in the face of a localized outage. You can use additional placement preferences to further divide tasks over groups of nodes. Under this blog, we will set up a 5-node Swarm Mode cluster on the play-with-docker platform and see how to balance tasks over multiple racks within each datacenter. (Note – this is not a real-world scenario but is based on the assumption that the nodes are placed in 3 different racks.)

Assumption:  There are 3 datacenter Racks holding respective nodes as shown:

{Rack-1=> Node1, Node2 and Node3},

{Rack-2=> Node4}  &

{Rack-3=> Node5}

 

Creating Swarm Master Node:

Open up Docker Playground to build up Swarm Cluster.

docker swarm init --advertise-addr 10.0.116.3


 

Adding Worker Nodes to Swarm Cluster

docker swarm join --token <token-id> 10.0.116.3:2377


Create 3 more instances and add those nodes as worker nodes. This should build up 5 node Swarm Mode cluster.


 

Setting up Visualizer Tool 

To showcase this demo, I will leverage a fancy popular Visualizer tool.

 

git clone https://github.com/ajeetraina/docker101
cd docker101/play-with-docker/visualizer

 

All you need is to execute docker-compose command to bring up visualizer container:

 

docker-compose up -d

 


 

Click on port “8080” which gets displayed on top centre of this page and it should display a fancy visualiser depicting Swarm Mode cluster nodes.


Creating an Overlay Network:

 $docker network create -d overlay collabnet


 

Let us first try to create a service with no placement preference and no node labels.

Setting up WordPress DB service:

docker service create --replicas 10 --name wordpressdb1 --network collabnet --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest


When you run the above command, the swarm will spread the containers evenly node-by-node. Hence, you will see 2 containers per node as shown below:


Setting up WordPress Web Application:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest


Visualizer:


As per the visualizer, you might end up with an uneven distribution of services. For example, Rack-1, holding node-1, node-2 and node-3, looks to have an almost equal distribution of services, while Rack-2, which holds node4, lacks the WordPress frontend application.

Here Comes Placement Preference to the rescue…

Under the latest release, Docker team has introduced a new feature called “Placement Preference Scheduling”. Let us spend some time to understand what it actually means. You can set up the service to divide tasks evenly over different categories of nodes. One example of where this can be useful is to balance tasks over a set of datacenters or availability zones. 

This uses --placement-pref with a spread strategy (currently the only supported strategy) to spread tasks evenly over the values of the datacenter node label. In this example, we assume that every node has a datacenter node label attached to it. If there are three different values of this label among nodes in the swarm, one third of the tasks will be placed on the nodes associated with each value. This is true even if there are more nodes with one value than another. For example, consider the following set of nodes:

  • Three nodes with node.labels.datacenter=india
  • One node with node.labels.datacenter=uk
  • One node with node.labels.datacenter=us

Considering the last example, since we are spreading over the values of the datacenter label and the service has 5 replicas, the scheduler divides the tasks as evenly as possible across the three values, so at least 1 replica lands in each datacenter. The three nodes associated with the value “india” share the replicas reserved for that value. The single node with the value “uk” receives all replicas reserved for “uk”, and likewise the single “us” node receives all replicas reserved for “us”.

To understand more clearly, let us assign node labels to Rack nodes as shown below:

Rack-1:

docker node update --label-add datacenter=india node1

docker node update --label-add datacenter=india node2

docker node update --label-add datacenter=india node3

Rack-2

docker node update --label-add datacenter=uk node4

Rack-3

docker node update --label-add datacenter=us node5


 

Removing both the services:

docker service rm wordpressdb1 wordpressapp

Let us now pass placement preference parameter to the docker service command:

docker service create --replicas 10 --name wordpressdb1 --network collabnet --placement-pref "spread=node.labels.datacenter" --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest


 

Visualizer:


Rack-1 (node1+node2+node3) has 4 copies, Rack-2 (node4) has 3 copies and Rack-3 (node5) has 3 copies.

Let us run WordPress Application service likewise:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --placement-pref "spread=node.labels.datacenter" --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest


Visualizer: As shown below, we have used placement preference feature to ensure that the service containers get distributed across the swarm cluster on both the racks.


 

As shown above, --placement-pref ensures that the task is spread evenly over the values of the datacenter node label. Currently, spread is the only supported strategy. Both engine labels and node labels are supported by placement preferences.

Please Note: If you want to try this feature with Docker Compose, you will need Compose v3.3, which is slated to arrive under the 17.06 release.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about the latest Docker releases by clicking on this link.

 

Docker Service Inspection Filtering & Template Engine under Swarm Mode

Estimated Reading Time: 4 minutes

The Go programming language has really helped in shaping Docker as a powerful software and enabling fast development for distributed systems. It has been helping developers and operations teams quickly construct programs and tools for cloud computing environments. Go offers built-in support for JSON (JavaScript Object Notation) encoding and decoding, including to and from built-in and custom data types.


 

In the last 1 year, Docker Swarm Mode has matured enough to become production-ready. The orchestration platform is quite stable, with numerous features like logging, secrets management, security scanning, improvements to scheduling, networking etc., making it simple to use and scale out in just a few commands. With the introduction of new APIs like swarm, node, volume plugins, services etc., Swarm Mode brings dozens of features to control every aspect of the swarm cluster sitting on the master node. But when you start building services in the range of 100s & 1000s, distributed across another 100s and 1000s of nodes, there arises a need for a quick and handy way of filtering the output, especially when you are interested in capturing one specific piece of data out of the whole cluster-wide configuration. Here comes ‘the filtering flag’ to the rescue.


The filtering flag (-f or --filter) format is a key=value pair, which is actually a very powerful weapon for developers & system administrators. If you have ever played around with the Docker CLI, you must have used the docker inspect command to get metadata on a container/image. Using it with -f provides you more specific information, like the IP address or network details. There are numerous guides on how to use filters on a standalone host holding Docker images, but I found a lack of guides talking about Swarm Mode filters.

Under this blog post, I have prepared a quick list of consolidated filtering commands and outputs in tabular format for Swarm Mode Cluster(shown below).

I have a 3-node Swarm Mode cluster running on Google Cloud Engine. Let us first create a simple wordpress application as shown below:

 


 

We will create a simple wordpress service as shown:

master==>docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Below is a consolidated list of filtering tasks for the docker service inspect command under a Swarm Mode cluster, each followed by a representative command of the kind the original tables captured (reconstructed along standard Go-template lines, assuming the wordpressapp service created above):

 

Inspecting the “wordpress” service:

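$ docker service inspect wordpressapp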

To retrieve the network information for a specific service:

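$ docker service inspect --format '{{json .Spec.TaskTemplate.Networks}}' wordpressapp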

To list out the port which the wordpress service is publishing:

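$ docker service inspect --format '{{range .Spec.EndpointSpec.Ports}}{{.PublishedPort}}{{end}}' wordpressapp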

To list out the protocol detail for a specific service:

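$ docker service inspect --format '{{range .Spec.EndpointSpec.Ports}}{{.Protocol}}{{end}}' wordpressapp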

To list out the type of Published port:

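$ docker service inspect --format '{{range .Spec.EndpointSpec.Ports}}{{.PublishMode}}{{end}}' wordpressapp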

To verify whether a specific service is DNS RR or Virtual IP (VIP) based:

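$ docker service inspect --format '{{.Spec.EndpointSpec.Mode}}' wordpressapp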

 

To list out the complete published-port details for a specific service:

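$ docker service inspect --format '{{json .Spec.EndpointSpec.Ports}}' wordpressapp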

To list out the replication factor for a specific service:

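$ docker service inspect --format '{{.Spec.Mode.Replicated.Replicas}}' wordpressapp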

To list out the environment variables passed to a specific service:

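$ docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Env}}' wordpressapp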

To list out the image detail for a specific service:

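$ docker service inspect --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}' wordpressapp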

To list out the complete networking details for a specific service:

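$ docker service inspect --format '{{json .Endpoint}}' wordpressapp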

To list out detailed consolidated metadata information for a specific service:

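$ docker service inspect --format '{{json .Spec}}' wordpressapp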
 
To list out further detailed information of the service:

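$ docker service ps wordpressapp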

Let us create another service called “wordpressdb1” which is actually a database service under Docker Swarm Mode:

$docker service create --replicas 1 --name wordpressdb1 --network collabnet --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest

Inspecting the service “wordpressdb1”:

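$ docker service inspect wordpressdb1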

Displaying the output in a human-friendly way:

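$ docker service inspect --pretty wordpressdb1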

To list out the VIP details for a specific service:

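$ docker service inspect --format '{{json .Endpoint.VirtualIPs}}' wordpressdb1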

Retrieving the last StatusUpdate details:

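$ docker service inspect --format '{{json .UpdateStatus}}' wordpressdb1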

I have plans to add more examples during the course of time based on experience and exploration. I hope you will find it handy.

 

 

Running Apache JMeter 3.1 Distributed Load Testing Tool using Docker Compose v3.1 on Swarm Mode Cluster

Estimated Reading Time: 6 minutes

Apache JMeter is popular open source software used as a load testing tool for analyzing and measuring the performance of web applications or a multitude of services. It is a 100% pure Java application, designed to test performance of both static and dynamic resources and dynamic web applications. It simulates a heavy load on one or a multitude of servers, networks or objects to test their strength/capability or to analyze overall performance under different load types. The various applications/servers/protocols include – HTTP, HTTPS (Java, NodeJS, PHP, ASP.NET), SOAP/REST web services, FTP, databases via JDBC, LDAP, message-oriented middleware (MOM) via JMS, mail – SMTP(S), POP3(S) and IMAP(S), native commands or shell scripts, TCP, Java Objects and many more.


JMeter is extremely easy to use and doesn’t take time to get familiar with. It allows concurrent and simultaneous sampling of different functions by separate thread groups. JMeter is highly extensible, which means you can write your own tests. JMeter also supports visualization plugins that allow you to extend your testing. JMeter supports many testing strategies such as Load Testing, Distributed Testing, and Functional Testing. JMeter can simulate multiple users with concurrent threads and create a heavy load against the web application under test.
 
 


 
 
Architecture of JMeter Distributed Load Testing:
 
JMeter Distributed Load Testing uses a client-server model. It is a kind of testing which uses multiple systems to perform stress testing. Distributed testing is applied for testing web sites and server applications when they are working with multiple clients simultaneously.
It includes:
 
1. Master – the system running the JMeter GUI, which controls the test.
2. Slave – the system running jmeter-server, which takes commands from the GUI and sends requests to the target system(s).
3. Target (System Under Test) – the web server which undergoes the stress test.

 

What Docker has to offer for Apache JMeter?

Good question! With every new installation of Apache JMeter, you need to download/install JMeter on every new node participating in Distributed Load Testing. Installing JMeter requires dependencies to be installed first, like Java, default-jre-headless, iputils etc. The complexity begins when you have a multitude of OS distributions running on your infrastructure. You can always use automation tools or scripts to enable this solution, but again there is an extra cost of maintenance and the skills required to troubleshoot with the logs when something goes wrong in the middle of testing.

With Docker, it is just a matter of one Docker Compose file and a one-liner command to get the entire infrastructure ready. Under this blog, I am going to show how a single Docker Compose v3.1 file format can help you set up the entire JMeter Distributed Load Testing tool – all working on a Docker 17.03 Swarm Mode cluster.

I will leverage a 4-node Docker 17.03 Swarm cluster running on my Google Cloud Engine platform to showcase this solution as shown below:


Under this setup, I will be using instance “master101” as master/client node while the rest of worker nodes as “server/slaves” nodes. All of these instances are running Ubuntu 17.04 with Docker 17.03 installed. You are free to choose any latest Linux distributions which supports Docker executables.

First, let us ensure that you have the latest Docker 17.03 running on your machine:

$curl -sSL https://get.docker.com/ | sh


 

Ensure that the latest Docker Compose is installed, which supports the v3.x file format:

 $curl -L https://github.com/docker/compose/releases/download/1.11.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
 $chmod +x /usr/local/bin/docker-compose


Docker Compose for Apache JMeter Distributed Load Testing

One can use the docker stack deploy command to quickly set up the JMeter Distributed Testing environment. This command requires a docker-compose.yml file which holds the declaration of the services (apache-jmeter-master and apache-jmeter-server) respectively. Let us clone the repository as shown below to get started –

Run the below commands on the Swarm Master node –

$git clone https://github.com/ajeetraina/jmeter-docker/

$cd jmeter-docker

Under this directory, you will find docker-compose.yml file which is shown below:

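The file is roughly of the following shape (a sketch based on the description below; image names and exact keys are assumptions – refer to the cloned repository for the authoritative file):

version: "3.1"

services:
  master:
    image: ajeetraina/jmeter-master      # assumed image name; see the repository
    networks:
      - jm-network
    deploy:
      placement:
        constraints:
          - node.role == manager

  server:
    image: ajeetraina/jmeter-server      # assumed image name; see the repository
    networks:
      - jm-network
    deploy:
      mode: global                       # one slave per worker is one common choice
      placement:
        constraints:
          - node.role == worker

networks:
  jm-network:
    driver: overlay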

 

In the above docker-compose.yml, there are two service definitions – master and server. Through this compose file format, we can pin the master/client service definition to the master node and the server-specific service definition to all the slave nodes. The constraints section takes care of this functionality. This compose file will automatically create an overlay network called jm-network across the cluster.

Let us start the required services out of the docker-compose file as shown below:

$sudo docker stack deploy -c docker-compose.yml myjmeter


Checking if the services are up and running through docker service ls command:


Let us verify if constraints worked correctly or not as shown below:


As shown above, the constraints worked well pushing the containers to the right node.

It’s time to enter into one of the containers running on the master node as shown below:


Using docker exec command, one can enter into the container and browse to /jmeter/apache-jmeter-3.1/bin directory.


Pushing the JMX file into the container

   $docker exec -i <container-running-on-master-node> sh -c 'cat > /jmeter/apache-jmeter-3.1/bin/jmeter-docker.jmx' < jmeter-docker.jmx

Starting the Load testing

  $docker exec -it <container-on-master-node> bash
  root@daf39e596b93:/#cd /jmeter/apache-jmeter-3.1/bin
  $./jmeter -n -t jmeter-docker.jmx -R <comma-separated list of slave container IPs>

Planning to move from Apache JMeter running on VMs to Containers??

This is a very important topic which I don’t want to miss out on, even though we are at the end of this blog post. You might be using virtual infrastructure for setting up the Apache JMeter Distributed Load tool. Generally, each VM is assigned a static IP, and it does make sense as your System Under Test will see load coming from different static IPs. The above docker-compose file uses an overlay network, and IPs in the range of 172.16.x.x get assigned automatically and are taken care of by Docker Swarm networking. In case you are looking for a static IP which should get assigned to each container automatically, then you surely need macvlan to be implemented. Follow https://github.com/ajeetraina/jmeter-docker/blob/master/static-README.md to achieve this.

In my next blog post, I will cover further interesting stuffs related to JMeter Distributed Load Testing tool. Drop me a message through tweet (@ajeetsraina) if you face any issue with the above solution.

Loved reading it? Feel free to share it.


References:

https://github.com/ajeetraina/jmeter-docker

http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf

https://docs.docker.com/compose/compose-file/

 

Introducing new RexRay 0.8 with Docker 17.03 Managed Plugin System for Persistent Storage on Cloud Platforms

Estimated Reading Time: 5 minutes

The DellEMC Rex-Ray 0.8 final release was announced last week. Graduated as a top-level project within the {code} community, the RexRay 0.8 release has been considered one of the largest releases till date. The new release introduced support for a long list of new storage platforms like S3FS, EBS, EFS, GCEPD & ScaleIO shown below:


Public cloud storage is one of the fastest growing sectors in storage, with leaders like Amazon AWS, Google Cloud Storage and Microsoft Azure. With the release of RexRay 0.8, the {code} community took the right approach in targeting community-contributed drivers, starting with the Amazon EFS driver and then quickly adding the Digital Ocean, FittedCloud, Google Cloud Engine (GCEPD) & Microsoft Azure Unmanaged Disk drivers.

Introducing New Docker 17.03 Volume Plugin System

With the Docker 17.03 release, a new managed plugin system has been introduced. This is quite different from the old Docker plugin system. Plugins are now distributed as Docker images and can be hosted on Docker Hub or on a private registry. A volume plugin enables Docker volumes to persist across multiple Docker hosts.


In case you are very new to Docker plugins, they basically extend Docker’s functionality. A plugin is a process running on the same or a different host as the docker daemon, which registers itself by placing a file on the docker host in one of the plugin directories: .sock files (UNIX domain sockets placed under /run/docker/plugins), .spec files (text files containing a URL, such as unix:///other.sock or tcp://localhost:8080, placed under /etc/docker/plugins or /usr/lib/docker/plugins) or .json files (text files containing a full JSON specification for the plugin, placed under /etc/docker/plugins). You can refer to this in case you want to develop your own Docker Volume Plugin.
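
For instance, a minimal .spec file is nothing more than the plugin's address (file name illustrative):

$ cat /etc/docker/plugins/myplugin.spec
tcp://localhost:8080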

Running RexRay inside Docker container

Yes, you read it correctly! With the introduction of the Docker 17.03 managed plugin system, you can now run RexRay inside a Docker container flawlessly. The Rex-Ray Volume Plugin is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

You can list out the available Docker Volume Plugins for various storage platforms using docker search rexray as shown below:


Let us test-drive the RexRay Volume Plugin for a Swarm Mode cluster for the first time. I have a 4-node Swarm Mode cluster running on Google Cloud Platform as shown below:


Verify that all the cluster nodes are running the latest 17.03.0-ce (Community Edition).

Installing the RexRay Volume Plugin is just a one-liner command:
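
Mirroring the swarm-exec invocation shown later in this post, it should be:

$ docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray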


 

You can inspect the Rex-Ray Volume plugin using docker plugin inspect command:
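
For example:

$ docker plugin inspect rexray/gcepd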


It’s time to create a volume using the docker volume create utility:

$sudo docker volume create --driver rexray/gcepd --name storage1 --opt=size=32


You can verify if it is visible under GCE Console window:


Let us try running a few applications which use the RexRay volume plugin, as shown:

{code}==>docker run -dit --name mydb -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --volume-driver=rexray/gcepd -v dbdata:/var/lib/mysql mysql:5.7

Verify that MySQL service is up and running using docker logs <container-id> command as shown below:


By now, we should be able to see the new volume called “dbdata” created and shown under the docker volume ls command:


Under GCE console, it should get displayed too:


Using Rex-Ray Volume Plugin under Docker 17.03 Swarm Mode

This is the most interesting section of this blog post. The RexRay volume plugin worked great for us till now, especially for a single Docker host running a number of services. But what if I want to enable a RexRay volume to persist across multiple Docker hosts (a Swarm Mode cluster)? Yes, there is one possible way to achieve this – using swarm-exec. It executes a docker command across the swarm cluster. Credits to Madhu Venugopal @ Docker Team for assisting me in testing this tool.


Please remember that this is an UNOFFICIAL way of achieving volume plugin implementation across the swarm cluster. I found this tool really cool and hope that it gets integrated within the Docker official repository.

First, we need to clone this repository:

$git clone https://github.com/mavenugo/swarm-exec

Run the below command to push the plugin across the swarm cluster:

$cd swarm-exec

$./swarm-exec.sh docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray


First, let’s quickly verify the plugin on the master node as shown below:


While verifying it on the worker nodes:


The docker volume inspect <volname> command should display this particular volume with rexray as the volume driver, as shown below:


Creating a MySQL service which uses Rex-Ray volume under Swarm Mode cluster:
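
A sketch of such a service (names and size are illustrative; the volume-driver mount option carries the plugin name):

$ docker service create --replicas 1 --name mydb \
    --mount type=volume,source=dbdata,target=/var/lib/mysql,volume-driver=rexray/gcepd \
    -e MYSQL_ROOT_PASSWORD=wordpress mysql:5.7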


Verifying that the service is up and running:


To conclude, the new version of RexRay looks promising and brings support for various cloud storage platforms. It continues to be a leading open source storage orchestration engine, and with the inclusion of the Docker 17.03 managed plugin architecture, it will definitely reduce the pain of implementing persistent storage solutions.