Sysctl Support for Docker Swarm Arrives for the First Time in the Docker 19.03.0 Pre-Release

Estimated Reading Time: 7 minutes

Docker CE 19.03.0 Beta 1 went public two weeks back. It is the first release to arrive with sysctl support for Docker Swarm Mode. This is definitely great news for popular communities like Elastic Stack, Redis, etc., as they rely on tuning kernel parameters to get rid of memory exceptions. For example, Elasticsearch uses an mmapfs directory by default to store its indices. The default operating system limits on mmap counts are likely to be too low, which may result in out-of-memory exceptions, so one needs to increase the limits using the sysctl tool. It is great to see that Docker Inc. acknowledged that kernel tuning is sometimes required and now provides explicit support for it in the Docker 19.03.0 Pre-Release. Great job!

Wait… Do I really need sysctl?

Say you have deployed your application on Docker Swarm. It’s pretty simple and it’s working great. Your application is growing day by day and now you need to scale it. How are you going to do it? The simple answer is: docker service scale app=<number of tasks>. Surely this is possible today, but your containers can quickly hit kernel limits. One of the most popular kernel parameters is net.core.somaxconn. This parameter represents the maximum number of connections that can be queued for acceptance. The default value on Linux is 128, which is rather low.

The Linux kernel is flexible, and you can even modify the way it works on the fly by dynamically changing some of its parameters, thanks to the sysctl command. The sysctl program allows you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users. Sysctl basically provides an interface that allows you to examine and change several hundred kernel parameters on Linux or BSD. Changes take effect immediately, and there’s even a way to make them persist after a reboot. By using sysctl judiciously, you can optimize your box without having to recompile the kernel and get the results immediately.
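If you have not used it before, here is a minimal sketch of typical sysctl usage on the host itself (the parameter and values below are just examples):

$ sysctl net.core.somaxconn                                       # read the current value
net.core.somaxconn = 128
$ sudo sysctl -w net.core.somaxconn=1024                          # change it on the fly (lost on reboot)
net.core.somaxconn = 1024
$ echo "net.core.somaxconn=1024" | sudo tee -a /etc/sysctl.conf   # persist it across reboots
$ sudo sysctl -p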

Please note that not all sysctls are namespaced as of the Docker 19.03.0 CE Pre-Release. Docker does not support changing sysctls inside a container that would also modify the host system.

Docker does support setting namespaced kernel parameters at runtime, and runc honors this. Have a look:

$ docker run --runtime=runc --sysctl net.ipv4.ip_forward=1 -it alpine sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
bdf0201b3a05: Pull complete 
Digest: sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913
Status: Downloaded newer image for alpine:latest
/ # sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
/ # 

It is important to note that sysctl support is not new to Docker. Support for sysctls in Docker Compose started with Compose file format v2.1.

For example, to set kernel parameters in the container, you can use either a dictionary or an array.

sysctls:
  net.core.somaxconn: 1024
  net.ipv4.tcp_syncookies: 0

sysctls:
  - net.core.somaxconn=1024
  - net.ipv4.tcp_syncookies=0
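
With the 19.03.0 pre-release, the same idea extends to Swarm services. As a minimal sketch, assuming the new --sysctl flag that the pre-release adds to docker service create (the service name and value here are only examples):

$ docker service create \
    --name redis \
    --sysctl net.core.somaxconn=1024 \
    redis:latest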

In this blog post, I will showcase how to use sysctl on a 2-node Docker Swarm cluster. Let us get started.

Installing a Node with Docker 19.03.0 Beta 1 Test Build on Ubuntu 18.10

Method 1: Using the static binaries

Download the static binary archive. Go to https://download.docker.com/linux/static/stable/ (or change stable to nightly or test), choose your hardware platform, and download the .tgz file relating to the version of Docker CE you want to install.

Captain'sBay==>wget https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
--2019-04-10 09:20:01--  https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
Resolving download.docker.com (download.docker.com)... 54.230.75.15, 54.230.75.117, 54.230.75.202, ...
Connecting to download.docker.com (download.docker.com)|54.230.75.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62701372 (60M) [application/x-tar]
Saving to: ‘docker-19.03.0-beta1.tgz’
docker-19.03.0-beta1.tgz  100%[=====================================>]  59.80M  10.7MB/s    in 7.1s    
2019-04-10 09:20:09 (8.38 MB/s) - ‘docker-19.03.0-beta1.tgz’ saved [62701372/62701372]

Extract the archive

You can use the tar utility; the dockerd and docker binaries are extracted.

Captain'sBay==>tar xzvf docker-19.03.0-beta1.tgz 
docker/
docker/ctr
docker/containerd-shim
docker/dockerd
docker/docker-proxy
docker/runc
docker/containerd
docker/docker-init
docker/docker
Captain'sBay==>

Move the binaries to your executable path

Move the binaries to a directory on your executable path, such as /usr/local/bin/. If you skip this step, you must provide the path to the executable when you invoke docker or dockerd commands.

Captain'sBay==>sudo cp -rf docker/* /usr/local/bin/

Start the Docker daemon and check the version:

$ sudo dockerd &
$ docker version
Client: Docker Engine - Community
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.12.1
 Git commit:        62240a9
 Built:             Thu Apr  4 19:15:07 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.1
  Git commit:       62240a9
  Built:            Thu Apr  4 19:22:34 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Captain'sBay==>

Testing with hello-world

Captain'sBay==>sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
INFO[2019-04-10T09:26:23.338596029Z] shim containerd-shim started                  address="/containerd-shim/moby/5b23a7045ca683d888c9d1026451af743b7bf4005c6b8dd92b9e95e125e68134/shim.sock" debug=false pid=2953
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/
For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Verifying Docker version

root@DebianBuster:~# docker version
Client:
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.11.5
 Git commit:        62240a9
 Built:             Thu Apr  4 19:18:53 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.11.5
  Git commit:       62240a9
  Built:            Thu Apr  4 19:17:35 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
root@DebianBuster:~#

Creating a 2-Node Docker Swarm Mode Cluster

swarm-node-1:~$ sudo docker swarm init --advertise-addr 10.140.0.6 --listen-addr 10.140.0.6:2377
Swarm initialized: current node (c78wm1g99q1a1g2sxiuawqyps) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Run the below command on the worker node:

swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
This node joined a swarm as a worker.

Listing the Swarm Mode Cluster

$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
rf3xns913p4tlprmu98z2o8hi     swarm-node2         Ready               Active                                  19.03.0-beta1
isbcijzlrft3ahpbzhgipwr9a *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

Running Multi-service Docker Compose for Redis

Redis is an open source, in-memory data structure store, used as a database, cache and message broker. Redis Commander is an application that allows users to explore a Redis instance through a browser. Let us look at the Docker Compose file for Redis and Redis Commander shown below:

version: '3'
services:
  redis:
    hostname: redis
    image: redis

  redis-commander:
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
    - REDIS_HOSTS=local:redis:6379
    ports:
    - "8081:8081"

Ensure that Docker Compose is installed on your system using the below commands:

curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
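
As a quick sanity check before moving on, you can confirm the binary is on your PATH (the reported version will simply be whichever release you downloaded above):

$ docker-compose --version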

Run the below command to bring up Redis application running on Docker Swarm Mode:

sudo docker stack deploy -c docker-compose.yml myapp

$ sudo docker stack deploy -c docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating service myapp_redis-commander
Creating service myapp_redis

Verifying if the services are up and running:

~$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
myapp               2                   Swarm

~$ sudo docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                   PORTS
ucakpqi7ozg1        myapp_redis             replicated          1/1                 redis:latest
fxor8v90a4m0        myapp_redis-commander   replicated          0/1                 rediscommander/redis-commander:latest   *:8081->8081/tcp

Checking the service logs (the stack below was deployed as myapp3 in my environment; substitute your own stack name):


$ docker service logs -f myapp3_redis
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # Warning: no config file specified, using the default config. In order to specify a configfile use redis-server /path/to/redis.conf
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:M 17 Apr 2019 06:26:08.009 * Running mode=standalone, port=6379.
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:M 17 Apr 2019 06:26:08.009 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

As you can see above, there is a warning that the TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

Building a Docker Compose file using the sysctls parameter

Let us try to build a new Docker Compose file with the sysctls parameter specified:

Copy the below content and save it as a docker-compose.yml file.

version: '3'
services:
  redis:
    hostname: redis
    image: redis
    sysctls:
      net.core.somaxconn: 1024
  redis-commander:
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
    - REDIS_HOSTS=local:redis:6379
    ports:
    - "8081:8081"

Running Your Redis application

$ sudo docker stack deploy -c docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating service myapp_redis
Creating service myapp_redis-commander

$ sudo docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                   PORTS
2oxhaychob7s        myapp_redis             replicated          1/1                 redis:latest
pjdwti7hkg1q        myapp_redis-commander   replicated          1/1                 rediscommander/redis-commander:latest   *:80->8081/tcp

Verifying the Redis service logs

$ sudo docker service logs -f myapp_redis
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:C 17 Apr 2019 06:59:44.510 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:C 17 Apr 2019 06:59:44.510 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:M 17 Apr 2019 06:59:44.511 * Running mode=standalone, port=6379.

You can see that the warning about /proc/sys/net/core/somaxconn is no longer displayed, which shows that the sysctls parameter really worked.
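
If you want to double-check the value from inside the running task itself, here is a small sketch (it assumes the stack was deployed as myapp and that you run it on the node hosting the Redis task):

$ sudo docker exec $(sudo docker ps -q -f name=myapp_redis) cat /proc/sys/net/core/somaxconn
1024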

In my next blog post, I will talk about rootless Docker and how to test it. Stay tuned!

Top 5 Most Exciting Dockercon EU 2018 Announcements

Estimated Reading Time: 9 minutes

Last week I attended Dockercon 2018 EU, which took place at the Centre de Convencions Internacional de Barcelona (CCIB) in Barcelona, Spain. With over 3,000 attendees from around the globe, 52 breakout sessions, 11 community theatres, 12 workshops, over 100 total sessions, exciting hallway tracks, hands-on labs/trainings, paid trainings, a women’s networking event, Docker Pals and so on, Dockercon allowed developers, sysadmins, product managers and industry evangelists to come together and share their wealth of experience around container technology. This time I was lucky enough to get the chance to emcee the Docker for Developers track for the first time. Not only that, I ran a hallway track for the OpenUSM project and the DockerLabs community contribution effort. Around 20-30 participants showed interest in learning more about this system management, monitoring and log analytics tooling.

This Dockercon we had the Docker Captains Summit for the first time, where an entire day was dedicated to Captains. On December 3 (10:00 AM till 3:00 PM), we got the chance to interact with Docker staff and put forward all our queries around the Docker future roadmap. It was amazing to meet all the young Captains who joined us this year, as well as to get familiar with what they have been contributing to during the initial introductory rounds.

This Dockercon, there were a couple of exciting announcements. Three of the new features were targeted at Docker Community Edition, while two were for Docker Enterprise customers. Here’s a rundown of what I think are the 5 most exciting announcements made last week:

#1. Announcement of Cloud Native Application Bundles (CNAB)

Microsoft and Docker have captured a great deal of attention with the announcement of CNAB – Cloud Native Application Bundles.

What is CNAB? 

Cloud Native Application Bundles (CNAB) are a standard packaging format for multi-component distributed applications. It allows packages to target different runtimes and architectures. It empowers application distributors to package applications for deployment on a wide variety of cloud platforms, cloud providers, and cloud services. It also provides the capabilities necessary for delivering multi-container applications in disconnected environments.

Is it a platform-specific tool?

CNAB is not a platform-specific tool. While it uses containers for encapsulating installation logic, it remains un-opinionated about which cloud environment it runs in. CNAB developers can bundle applications targeting environments spanning IaaS (like OpenStack or Azure), container orchestrators (like Kubernetes or Nomad), container runtimes (like local Docker or ACI), and cloud platform services (like object storage or Database as a Service). CNAB can also be used for packaging other distributed applications, such as IoT or edge computing. In a nutshell, CNAB is a package format specification that describes a technology for bundling, installing, and managing distributed applications that are, by design, cloud agnostic.

Why do we need CNAB?

The current distributed computing landscape involves a combination of executable units and supporting API-based services. Executable units include Virtual Machines (VMs), containers (e.g. Docker and OCI) and Functions-as-a-Service (FaaS), as well as higher-level PaaS services. Along with these executable units, many managed cloud services (from load balancers to databases) are provisioned and interconnected via REST (and similar network-accessible) APIs. The overall goal of CNAB is to provide a packaging format that gives application providers and developers a way of installing a multi-component application into a distributed computing environment, supporting all of the above types.


Is it open source? Tell me more about the CNAB format.

It is an open source, cloud-agnostic specification for packaging and running distributed applications. It is a nascent specification that offers a way to repackage distributed computing apps.

The CNAB format is a packaging format for a broad range of distributed applications. It specifies a pairing of a bundle definition (bundle.json) to define the app, and an invocation image to install the app.

The bundle definition is a single file that contains the following information:

  • Information about the bundle, such as name, bundle version, description, and keywords
  • Information about locating and running the invocation image (the installer program)
  • A list of user-overridable parameters that this package recognizes
  • The list of executable images that this bundle will install
  • A list of credential paths or environment variables that this bundle requires to execute

What’s Docker’s future plan for CNAB?

This project was incubated by Microsoft and Docker a year back. The first implementation of the spec is an experimental utility called Docker App, which Docker officially rolled out this Dockercon and which is expected to be integrated with Docker Enterprise in the near future. Microsoft and Docker plan to donate CNAB to an open source foundation, which is expected to happen early next year.

If you have no patience, head over to the Docker App CNAB examples recently posted by Gareth Rushgrove, a Docker employee, accessible via https://github.com/garethr/docker-app-cnab-examples

The examples in this repository show some basic usage of docker-app, in particular some of the CNAB integration details. Check it out.

#2. Support for using Docker Compose on Kubernetes

On the 2nd day of Dockercon, Docker Inc. open sourced the Compose on Kubernetes project. Docker Enterprise Edition already had this capability, starting with Compose file format 3.3, where one can use the same docker-compose.yml file for Swarm deployment as well as specify Kubernetes workloads whenever the stack is deployed.

What benefit does this bring to Community Developers?

By making it open source, Docker, Inc. has really paved the way to infinite possibilities around a simplified approach to deploying Kubernetes applications. Docker Swarm gained popularity because of its simplified approach to application deployment using a docker-compose.yml file. Now community developers can use the same YAML file to deploy their K8s applications.

Imagine you are using Docker Desktop on your MacBook. Docker Desktop provides the capability of running both Swarm and Kubernetes. You have the context set to a GKE cluster which is running on Google Cloud Platform. You just deployed your app using docker-compose.yml on your local MacBook. Now you want to deploy it in the same way, but this time on your GKE cluster. Just use the docker stack deploy command to deploy it to the GKE cluster. Interesting, isn’t it?

How does Compose on Kubernetes architecture look like?

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

Compose on Kubernetes architecture

If you’re interested in learning more, I would suggest visiting this link.

How can I test it now?

First we need to install the Compose on Kubernetes controller into your Kubernetes cluster (which could be GKE/AKS). You can download the latest binary (as of 12/13/2018) via https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.16.

This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don’t already have one available then remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.
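
Once the controller is installed, deploying a Compose file to Kubernetes looks very much like a Swarm deployment. A minimal sketch, assuming your kubectl context already points at the target cluster (the stack name and file name are just examples):

$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml hellokube
$ kubectl get stacks
$ kubectl get pods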

Check out the latest doc which shows how to make it work with AKS here.

#3. Introducing Docker Desktop Enterprise

The 3rd big announcement was the introduction of Docker Desktop Enterprise. With this, Docker Inc. made a new addition to its desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, with a set of features now available for the enterprise.

Desktop Comparison Table

How will Docker Desktop Enterprise be different from Docker Desktop Community Edition?

Good question. Docker Desktop has Docker Engine and Kubernetes built-in and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Not only this, with Docker Desktop Enterprise you get access to the Application Designer, a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards.

source ~ Docker, Inc


For those who are interested in Docker Desktop Enterprise: please note that it is expected to be available for preview in January, with General Availability slated for the first half of 2019.

#4. From Zero to Docker in Seconds with “docker assemble” CLI

This time, the Docker team announced a very interesting docker subcommand, aptly named “assemble”, to the public. Ann Rahma and Gareth Rushgrove from Docker, Inc. announced assemble, a new command that generates optimized images from non-dockerized apps. It will get you from source to an optimized Docker image in seconds.

Here are few of interesting facts around docker assemble utility:

  • Docker assemble has the capability to build an image without a Dockerfile, by auto-detecting the code framework.
  • It generates Docker images (and a lot more) from your code with a single command and zero effort, which means no Dockerfile is needed for your app as long as you have a config file (a .pom file, for example).
  • It can analyze your applications, dependencies, and caches, and give you a sweet Docker image without having to author your own Dockerfiles.
  • It is built on top of BuildKit and will auto-detect the framework, versions, etc. from a config file (a .pom file), automatically add dependencies to the image labels, optimize the image size and push it.
  • Docker Assemble can also figure out what ports need to be published and what healthchecks are relevant.
  • docker assemble builds the app without configuration files and without a Dockerfile – just a git repository to deploy.

Is it an open source project?

It’s an enterprise feature for now – not in the community version. It is available for a couple of languages and frameworks (like Java, as demonstrated on the Dockercon stage).

How is it different from buildpack?

Reading through its features, docker assemble might look very similar to buildpacks, as buildpacks overlap with some of the things docker assemble does. But the huge benefit with assemble is that it produces more than just an image (also ports, healthchecks, volume mounts, etc.), and it’s integrated into the enterprise toolchain. docker assemble is sort of an enterprise-grade buildpack to help with digitalization.

Keep an eye on my next blog post to get more details around the fancy docker assemble command.

#5. Docker-app & CNAB together for the first time

On the 2nd day of Dockercon, Docker confirmed that they are the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially, CNAB support will be released as part of the docker-app experimental tool for building, packaging and managing cloud-native applications. With this, Docker now lets you package CNAB bundles as Docker images, so you can distribute and share them through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.

Can I test the preview binaries of docker-app which comes with CNAB support?

Yes, you can find some preview binaries of docker-app with CNAB support here. The latest release of Docker App is one such tool that implements the current CNAB spec. It can be used both to build CNAB bundles for Compose (which can then be used with any other CNAB client), and to install, upgrade and uninstall any other CNAB bundle.

In case you have no patience, head over to this recently added example of how to deploy a Helm chart.

You can visit https://github.com/docker/app/releases/tag/cnab-dockercon-preview to access the preview build.

I hope you found this blog helpful. In my next blog series, I will deep-dive into each of these announcements in terms of implementation and enablement.

A First Look at Docker Application Package “docker-app”

Estimated Reading Time: 9 minutes 

Did you know? There are more than 300,000 Docker Compose files on GitHub.

Docker Compose is a tool for defining and running multi-container Docker applications. It is an amazing developer tool for creating the development environment for your application stack. It allows you to define each component of your application following a clear and simple syntax in YAML files. It works in all environments: production, staging, development, testing, as well as CI workflows. Though Compose files make it easy to describe a set of related services, a couple of problems have emerged over time. One of the major concerns has been around deploying the same application to multiple environments with small configuration differences.

Consider a scenario where you have separate development, test, and production environments for your web application. In the development environment, your team might be spending time building up the web application (say, WordPress), developing WP plugins and templates, debugging issues, etc. When you are in development you’ll probably want to check your code changes in real time. The usual way to do this is to mount a volume with your source code in the container that has the runtime of your application. But production works differently. Before you host your web application in the production environment, you might want to turn off debug mode and host it on the right port so as to test your application’s usability and accessibility. In production you have a cluster with multiple nodes, and in most cases a volume is local to the node where your container (or service) is running, so you cannot mount the source code without complex machinery involving code synchronization, signals, etc. In a nutshell, this might require a separate Docker Compose file for each environment, and as the number of services in your application increases, it becomes more cumbersome to manage those pieces of Compose files. Hence, we need a tool which can make Compose files shareable across different environments seamlessly.
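
To make the dev-versus-prod gap concrete, here is a minimal sketch of the kind of development-only tweaks described above (the service name, paths and port are hypothetical and not taken from the example repository used later):

# docker-compose.dev.yml – a hypothetical development override
version: '3.6'
services:
  wordpress:
    environment:
      WORDPRESS_DEBUG: "true"      # debug mode on, in dev only
    ports:
      - "8082:80"                  # non-standard port for local testing
    volumes:
      - ./src:/var/www/html        # live-mount the source code while developing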

To solve this problem, Docker, Inc. recently announced a new tool called “docker-app” (Application Packages) which makes Compose files more reusable and shareable. This tool not only makes Compose files shareable but also provides a simplified approach to sharing a multi-service application (not just a Docker image) directly on Dockerhub.

In this blog post, I will showcase how the docker-app tool makes it easier to use Docker Compose for sharing and collaboration, and how to then push the application package directly to Dockerhub. Let us get started:

Prerequisite:

  • Click on the icon next to the instance to choose 3 manager and 2 worker nodes

Deploy a 5-Node Swarm Mode Cluster

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
juld0kwbajyn11gx3bon9bsct *   manager1            Ready               Active              Leader              18.03.1-ce
uu675q2209xotom4vys0el5jw     manager2            Ready               Active              Reachable           18.03.1-ce
05jewa2brfkvgzklpvlze01rr     manager3            Ready               Active              Reachable           18.03.1-ce
n3frm1rv4gn93his3511llm6r     worker1             Ready               Active                                  18.03.1-ce
50vsx5nvwx5rbkxob2ua1c6dr     worker2             Ready               Active                                  18.03.1-ce

Cloning the Repository

$ git clone https://github.com/ajeetraina/app
Cloning into 'app'...
remote: Counting objects: 14147, done.
remote: Total 14147 (delta 0), reused 0 (delta 0), pack-reused 14147
Receiving objects: 100% (14147/14147), 17.32 MiB | 18.43 MiB/s, done.
Resolving deltas: 100% (5152/5152), done.

Installing docker-app

wget https://github.com/docker/app/releases/download/v0.3.0/docker-app-linux.tar.gz
tar xf docker-app-linux.tar.gz
cp docker-app-linux /usr/local/bin/docker-app

OR

$ ./install.sh
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.227.152:443)
docker-app-linux.tar 100% |**************************************************************|  8780k  0:00:00 ETA
[manager1] (local) root@192.168.0.13 ~/app
$ 

Verify docker-app version

$ docker-app version
Version:      v0.3.0
Git commit:   fba6a09
Built:        Fri Jun 29 13:09:30 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

The docker-app tool comes with various options, as shown below:

$ docker-app
Build and deploy Docker applications.

Usage:
  docker-app [command]

Available Commands:
  deploy      Deploy or update an application
  helm        Generate a Helm chart
  help        Help about any command
  init        Start building a Docker application
  inspect     Shows metadata and settings for a given application
  ls          List applications.
  merge       Merge the application as a single file multi-document YAML
  push        Push the application to a registry
  render      Render the Compose file for the application
  save        Save the application as an image to the docker daemon(in preparation for push)
  split       Split a single-file application into multiple files
  version     Print version information

Flags:
      --debug   Enable debug mode
  -h, --help    help for docker-app

Use "docker-app [command] --help" for more information about a command.
[manager1] (local) root@192.168.0.48 ~/app

WordPress Application under Dev & Prod Environments

If you browse to the app/examples/wordpress directory under the GitHub repo, you will see that there is a folder called wordpress.dockerapp that contains three YAML documents:

  • metadata
  • the Compose file
  • settings for your application

Okay, fine! But how were those files created?

The docker-app tool comes with an “init” option which initializes an application with the above 3 YAML documents; the directory structure can be initialized with the below command:

docker-app init --single-file wordpress

I have already created a directory structure for my environment, and you can find a few examples under this directory.

Listing the WordPress Application package related files/directories

$ ls
README.md            install-wp           with-secrets.yml
devel                prod                 wordpress.dockerapp

As you can see above, I have created a folder for each environment: devel and prod. Under these directories, I have created dev-settings.yml and prod-settings.yml respectively. You can view their content via this link.
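
For reference, judging from the rendered output in the next section, devel/dev-settings.yml would contain something like the following (an inferred sketch, not copied verbatim from the repository):

debug: true
wordpress:
  port: 8082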

WordPress Application Package for Dev Environ

I can pass the “-f” <YAML> parameter to the docker-app tool to render the settings for the respective environment seamlessly, as shown below:

$ docker-app render wordpress -f devel/dev-settings.yml
version: "3.6"
services:
  mysql:
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: dnsrr
    environment:
      MYSQL_DATABASE: wordpressdata
      MYSQL_PASSWORD: wordpress
      MYSQL_ROOT_PASSWORD: wordpress101
      MYSQL_USER: wordpress
    image: mysql:5.6
    networks:
      overlay: null
    volumes:
    - type: volume
      source: db_data
      target: /var/lib/mysql
  wordpress:
    depends_on:
    - mysql
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: vip
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_NAME: wordpressdata
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DEBUG: "true"
    image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 8082
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

WordPress Application Package for Prod

Under the prod environment, I have the following content in prod/prod-settings.yml:

debug: false
wordpress:
  port: 80

For the production environment, it is obvious that I want my application to be exposed on the standard port 80. Post rendering, you should be able to see port 80 exposed, as shown in the snippet below:

 image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 80
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

 

Inspect the WordPress App

$ docker-app inspect wordpress
wordpress 1.0.0
Maintained by: ajeetraina <ajeetraina@gmail.com>

Welcome to Collabnix

Setting                       Default
-------                       -------
debug                         true
mysql.database                wordpressdata
mysql.image.version           5.6
mysql.rootpass                wordpress101
mysql.scale.endpoint_mode     dnsrr
mysql.scale.mode              replicated
mysql.scale.replicas          1
mysql.user.name               wordpress
mysql.user.password           wordpress
volumes.db_data.name          db_data
wordpress.port                8081
wordpress.scale.endpoint_mode vip
wordpress.scale.mode          replicated
wordpress.scale.replicas      1
[manager1] (local) root@192.168.0.13 ~/app/examples/wordpress
$

Deploying the WordPress App

$ docker-app deploy wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress

Switching to Dev Environ

If I want to switch back to the dev environment, all I need to do is pass the dev-specific YAML file using the “-f” parameter, as shown below:

$ docker-app deploy wordpress -f devel/dev-settings.yml

docker-app

Switching to Prod Environ

$ docker-app deploy wordpress -f prod/prod-settings.yml

docker-app

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f devel/dev-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f prod/prod-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Pushing Application Package to Dockerhub

So I have my application ready to be pushed to Dockerhub. Yes, you heard it right: I said application packages, NOT Docker images.

Let me first authenticate myself before I push it to Dockerhub registry:

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to
 https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Saving this Application Package as Docker Image

The docker-app CLI is feature-rich and allows you to save the entire application as a Docker image. Let’s try it out:

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app save wordpress
Saved application as image: wordpress.dockerapp:1.0.0
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the images

$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
wordpress.dockerapp   1.0.0               c1ec4d18c16c        47 seconds ago      1.62kB
mysql                 5.6                 97fdbdd65c6a        3 days ago          256MB
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the services

$ docker stack services wordpress
ID                  NAME                  MODE                REPLICAS            IMAGE               PORTS
l95b4s6xi7q5        wordpress_wordpress   replicated          1/1                 wordpress:latest    *:80->80/tcp
lhr4h2uaer86        wordpress_mysql       replicated          1/1                 mysql:5.6
[manager1] (local) root@192.168.0.48 ~/docker101/play-with-docker/visualizer

Using the docker-app ls command to list out the application packages

The ‘ls’ command was recently introduced in v0.3.0. Let us try it out:

$ docker-app ls
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
wordpress.dockerapp   1.0.1               299fb78857cb        About a minute ago   1.62kB
wordpress.dockerapp   1.0.0               c1ec4d18c16c        16 minutes ago       1.62kB

Pushing it to Dockerhub

$ docker-app push --namespace ajeetraina --tag 1.0.1
The push refers to repository [docker.io/ajeetraina/wordpress.dockerapp]
51cfe2cfc2a8: Pushed
1.0.1: digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9 size: 524

Say you built the WordPress application package and pushed it to Dockerhub. Now one of your colleagues wants to pull it onto their development system and deploy it in their environment.

Pulling it from Dockerhub

$ docker pull ajeetraina/wordpress.dockerapp:1.0.1
1.0.1: Pulling from ajeetraina/wordpress.dockerapp
a59931d48895: Pull complete
Digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9
Status: Downloaded newer image for ajeetraina/wordpress.dockerapp:1.0.1
[manager3] (local) root@192.168.0.24 ~/app
$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        8 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$

Deploying the Application

$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        9 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$ docker-app deploy ajeetraina/wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress
[manager3] (local) root@192.168.0.24 ~/app
$

Using docker-app merge option

The Docker team introduced the docker-app merge option in the new 0.3.0 release.

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app merge -o mywordpress
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ ls
README.md            install-wp           prod                 wordpress.dockerapp
devel                mywordpress          with-secrets.yml
$ cat mywordpress
version: 1.0.1
name: wordpress
description: "Welcome to Collabnix"
maintainers:
  - name: ajeetraina
    email: ajeetraina@gmail.com
targets:
  swarm: true
  kubernetes: true

--
version: "3.6"

services:

  mysql:
    image: mysql:${mysql.image.version}
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.rootpass}
      MYSQL_DATABASE: ${mysql.database}
      MYSQL_USER: ${mysql.user.name}
      MYSQL_PASSWORD: ${mysql.user.password}
    volumes:
       - source: db_data
         target: /var/lib/mysql
         type: volume
    networks:
       - overlay
    deploy:
      mode: ${mysql.scale.mode}
      replicas: ${mysql.scale.replicas}
      endpoint_mode: ${mysql.scale.endpoint_mode}

  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_USER: ${mysql.user.name}
      WORDPRESS_DB_PASSWORD: ${mysql.user.password}
      WORDPRESS_DB_NAME: ${mysql.database}
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DEBUG: ${debug}
    ports:
      - "${wordpress.port}:80"
    networks:
      - overlay
    deploy:
      mode: ${wordpress.scale.mode}
      replicas: ${wordpress.scale.replicas}
      endpoint_mode: ${wordpress.scale.endpoint_mode}
    depends_on:
      - mysql

volumes:
  db_data:
     name: ${volumes.db_data.name}

networks:
  overlay:

--
debug: true
mysql:
  image:
    version: 5.6
  rootpass: wordpress101
  database: wordpressdata
  user:
    name: wordpress
    password: wordpress
  scale:
    endpoint_mode: dnsrr
    mode: replicated
    replicas: 1
wordpress:
  scale:
    mode: replicated
    replicas: 1
    endpoint_mode: vip
  port: 8081
volumes:
  db_data:
    name: db_data

docker-app comes with a few other helpful commands as well, in particular the ability to create Helm charts from your Docker applications. This can be useful if you’re adopting Kubernetes and standardising on Helm to manage the lifecycle of your application components, but want to maintain the simplicity of Compose when writing your applications. This also makes it easy to run the same applications locally just using Docker, if you don’t want to run a full Kubernetes cluster.
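
As a minimal sketch of that workflow for this example, assuming the same wordpress.dockerapp package used throughout this post (check docker-app helm --help for the exact flags and output location in your release):

$ docker-app helm wordpress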

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter: @ajeetsraina.

If you want to keep track of the latest Docker-related information, follow me at https://www.linkedin.com/in/ajeetsraina/.