New Docker CLI API Support for NVIDIA GPUs under Docker Engine 19.03.0 Pre-Release

Estimated Reading Time: 7 minutes

Let’s talk about Docker in a GPU-Accelerated Data Center…

Docker is the leading container platform. It provides both hardware and software encapsulation by allowing multiple containers to run on the same system at the same time, each with its own set of resources (CPU, memory, etc.) and its own dedicated set of dependencies (library versions, environment variables, etc.). Docker can now be used to containerize GPU-accelerated applications. In case you’re new to GPU-accelerated computing, it is essentially the use of a graphics processing unit (GPU) to accelerate high performance computing workloads and applications. This means you can containerize an accelerated application without any modifications, isolate it, and deploy it on any supported GPU-enabled infrastructure.

Yes, you heard it right. Docker now natively supports NVIDIA GPUs within containers. This is made possible by Docker 19.03.0 Beta 3, the latest pre-release, which is available for download here. With this release, Docker can be used seamlessly to containerize GPU-accelerated applications.

Let’s go back to 2017…

Two years back, I wrote a blog post titled “Running NVIDIA Docker in a GPU Accelerated Data Center”. nvidia-docker is an open source project hosted on GitHub. It provides driver-agnostic CUDA images and a docker command-line wrapper that mounts the user-mode components of the driver and the GPUs (character devices) into the container at launch. With this enablement, the NVIDIA Docker plugin allowed GPU-accelerated applications to be deployed across any Linux GPU server with NVIDIA Docker support. In that blog post, I showcased how to get started with nvidia-docker to interact with NVIDIA GPUs and then looked at a few interesting applications that can be built for a GPU-accelerated data center.

With the recent 19.03.0 Beta Release, you no longer need to spend time downloading the nvidia-docker plugin or rely on the nvidia wrapper to launch GPU containers. You can now simply use the --gpus option with the docker run CLI to allow containers to use GPU devices seamlessly.

In this blog post, I will showcase how to get started with this new CLI API for NVIDIA GPUs.

Prerequisite:

  • Ubuntu 18.04 instance running on Google Cloud Platform
  • Verify that the NVIDIA card is detected
$ lspci -vv | grep -i nvidia
00:04.0 3D controller: NVIDIA Corporation GP100GL [Tesla P100 PCIe 16GB] (rev a1)
        Subsystem: NVIDIA Corporation GP100GL [Tesla P100 PCIe 16GB]
        Kernel modules: nvidiafb

Installing NVIDIA drivers first

$ sudo apt-get install ubuntu-drivers-common \
	&& sudo ubuntu-drivers autoinstall
- Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.18.0-1009-gcp/updates/dkms/

nvidia-uvm.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.18.0-1009-gcp/updates/dkms/

depmod...

DKMS: install completed.
Setting up xserver-xorg-video-nvidia-390 (390.116-0ubuntu0.18.10.1) ...
Processing triggers for libc-bin (2.28-0ubuntu1) ...
Processing triggers for systemd (239-7ubuntu10.13) ...
Setting up nvidia-driver-390 (390.116-0ubuntu0.18.10.1) ...
Setting up adwaita-icon-theme (3.30.0-0ubuntu1) ...
update-alternatives: using /usr/share/icons/Adwaita/cursor.theme to provide /usr/share/icons/default/index.theme (x-cursor-theme) in auto mode
Setting up humanity-icon-theme (0.6.15) ...
Setting up libgtk-3-0:amd64 (3.24.4-0ubuntu1.1) ...
Setting up libgtk-3-bin (3.24.4-0ubuntu1.1) ...
Setting up policykit-1-gnome (0.105-6ubuntu2) ...
Setting up screen-resolution-extra (0.17.3build1) ...
Setting up ubuntu-mono (16.10+18.10.20181005-0ubuntu1) ...
Setting up nvidia-settings (390.77-0ubuntu1) ...
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.38.0+dfsg-6) ...
Processing triggers for initramfs-tools (0.131ubuntu15.1) ...
update-initramfs: Generating /boot/initrd.img-4.18.0-1009-gcp
cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries 
    nor crypto modules. If that's on purpose, you may want to uninstall the 
    'cryptsetup-initramfs' package in order to disable the cryptsetup initramfs 
    integration and avoid this warning.
Processing triggers for libc-bin (2.28-0ubuntu1) ...
Processing triggers for dbus (1.12.10-1ubuntu2) ...

Go ahead and reboot the system

$ reboot

Once the Ubuntu instance comes back up, follow the steps below.
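
Before installing the container runtime, it’s worth confirming on the host that the freshly installed driver actually loaded; the exact GPU model and driver version in the output will depend on your instance:

$ nvidia-smi            # should list the Tesla GPU and the driver version
$ lsmod | grep nvidia   # confirms the kernel modules are loaded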

Installing NVIDIA Container Runtime

Create a file named nvidia-container-runtime-script.sh with the following content and save it:

$ cat nvidia-container-runtime-script.sh
 
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update

Execute the script

sh nvidia-container-runtime-script.sh

OK
deb https://nvidia.github.io/libnvidia-container/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/$(ARCH) /
Hit:1 http://archive.canonical.com/ubuntu bionic InRelease
Get:2 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  InRelease [1139 B]                
Get:3 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  InRelease [1136 B]           
Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease                                       
Get:5 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  Packages [4076 B]                 
Get:6 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  Packages [3084 B]            
Hit:7 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Hit:8 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:9 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic-backports InRelease
Fetched 9435 B in 1s (17.8 kB/s)                   
Reading package lists... Done
$ apt-get install nvidia-container-runtime
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  grub-pc-bin libnuma1
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
Get:1 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  libnvidia-container1 1.0.2-1 [59.1 kB]
Get:2 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  libnvidia-container-tools 1.0.2-1 [15.4 kB]
Get:3 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  nvidia-container-runtime-hook 1.4.0-1 [575 kB]

...
Unpacking nvidia-container-runtime (2.0.0+docker18.09.6-3) ...
Setting up libnvidia-container1:amd64 (1.0.2-1) ...
Setting up libnvidia-container-tools (1.0.2-1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Setting up nvidia-container-runtime-hook (1.4.0-1) ...
Setting up nvidia-container-runtime (2.0.0+docker18.09.6-3) ...
which nvidia-container-runtime-hook
/usr/bin/nvidia-container-runtime-hook

Installing Docker 19.03 Beta 3 Test Build

curl -fsSL https://test.docker.com -o test-docker.sh 

Execute the script

sh test-docker.sh

Verifying Docker Installation

$ docker version
Client:
 Version:           19.03.0-beta3
 API version:       1.40
 Go version:        go1.12.4
 Git commit:        c55e026
 Built:             Thu Apr 25 02:58:59 2019
 OS/Arch:           linux/amd64
 Experimental:      false
Server:
 Engine:
  Version:          19.03.0-beta3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.4
  Git commit:       c55e026
  Built:            Thu Apr 25 02:57:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Verifying the --gpus option under docker run

$ docker run --help | grep -i gpus
      --gpus gpu-request               GPU devices to add to the container ('all' to pass all GPUs)
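
Besides 'all', the --gpus flag also accepts a device count and a comma-separated list of options. The invocations below are purely illustrative; on this single-GPU instance they behave just like --gpus all:

$ docker run -it --rm --gpus 1 ubuntu nvidia-smi
$ docker run -it --rm --gpus 'all,capabilities=utility' ubuntu nvidia-smi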

Running an Ubuntu container which leverages GPUs

 $ docker run -it --rm --gpus all ubuntu nvidia-smi
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
f476d66f5408: Pull complete 
8882c27f669e: Pull complete 
d9af21273955: Pull complete 
f5029279ec12: Pull complete 
Digest: sha256:d26d529daa4d8567167181d9d569f2a85da3c5ecaf539cace2c6223355d69981
Status: Downloaded newer image for ubuntu:latest
Tue May  7 15:52:15 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   39C    P0    22W /  75W |      0MiB /  7611MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Troubleshooting:

Did you encounter the below error message?

$ docker run -it --rm --gpus all debian
docker: Error response from daemon: linux runtime spec devices: could not select device driver "" with capabilities: [[gpu]].

The above error means that NVIDIA could not properly register with Docker. In most cases it means the drivers are not properly installed on the host. It could also mean that the NVIDIA container tools were installed without restarting the Docker daemon afterwards: you need to restart the Docker daemon.

I suggest you go back and verify that nvidia-container-runtime is installed, or simply restart the Docker daemon.
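
A quick way to check both on a systemd-based host before retrying:

$ which nvidia-container-runtime-hook   # should print /usr/bin/nvidia-container-runtime-hook
$ sudo systemctl restart docker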

Listing out GPU devices

$ docker run -it --rm --gpus all ubuntu nvidia-smi -L
GPU 0: Tesla P4 (UUID: GPU-fa974b1d-3c17-ed92-28d0-805c6d089601)
$ docker run -it --rm --gpus all ubuntu nvidia-smi --query-gpu=index,name,uuid,serial --format=csv
index, name, uuid, serial
0, Tesla P4, GPU-fa974b1d-3c17-ed92-28d0-805c6d089601, 0325017070224
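
You can also expose only specific GPUs to a container, by index or by UUID. These are just sketches based on the device listed above; the multi-device form only makes sense on a host with more than one GPU:

$ docker run -it --rm --gpus device=0 ubuntu nvidia-smi -L
$ docker run -it --rm --gpus device=GPU-fa974b1d-3c17-ed92-28d0-805c6d089601 ubuntu nvidia-smi -L
$ docker run -it --rm --gpus '"device=0,1"' ubuntu nvidia-smi -L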

A Quick Look at NVIDIA Deep Learning…

The NVIDIA Deep Learning GPU Training System, a.k.a. DIGITS, is a web app for training deep learning models. It puts the power of deep learning into the hands of engineers and data scientists. It can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. The currently supported frameworks are Caffe, Torch, and TensorFlow.

DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment. DIGITS is completely interactive so that data scientists can focus on designing and training networks rather than programming and debugging.

To test-drive DIGITS, you can get it up and running in a single Docker container:

$ docker run -itd --gpus all -p 5000:5000 nvidia/digits

You can open up a web browser and verify that it’s running at the below address:

w3m http://<dockerhostip>:5000
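
If you prefer to verify from the terminal on the Docker host itself, a simple reachability check against the published port works too (DIGITS may take a little while to come up):

$ curl -I http://localhost:5000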

Verifying again with nvidia-smi

$ docker run -it --rm --gpus all ubuntu nvidia-smi
Tue May  7 16:27:37 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   51C    P0    24W /  75W |    129MiB /  7611MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

If you want to see it in action, here’s a quick video

Can I use Docker Compose with NVIDIA GPUs?

Not yet. I raised a feature request under this link a few minutes back.

Special Thanks to Tibor Vass, Docker Staff Engineer for reviewing this blog.

Sysctl Support for Docker Swarm Cluster for the first time in Docker 19.03.0 Pre-Release

Estimated Reading Time: 7 minutes

Docker CE 19.03.0 Beta 1 went public two weeks back. It is the first release to arrive with sysctl support for Docker Swarm Mode. This is definitely great news for popular communities like Elastic Stack, Redis, etc., as they rely on tuning kernel parameters to get rid of memory exceptions. For example, Elasticsearch uses an mmapfs directory by default to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions, hence one needs to increase the limit using the sysctl tool. It is great to see that Docker Inc. acknowledged that kernel tuning is sometimes required and provides explicit support for it in the Docker 19.03.0 Pre-Release. Great job!

Wait… do I really need sysctl?

Say you have deployed your application on Docker Swarm. It’s pretty simple and it’s working great. Your application is growing day by day, and now you need to scale it. How are you going to do it? The simple answer is: docker service scale app=<number of tasks>. Surely this is possible today, but your containers can quickly hit kernel limits. One of the most popular kernel parameters is net.core.somaxconn. This parameter represents the maximum number of connections that can be queued for acceptance. The default value on Linux is 128, which is rather low.

The Linux kernel is flexible, and you can even modify the way it works on the fly by dynamically changing some of its parameters, thanks to the sysctl command. The sysctl program allows you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users. Sysctl basically provides an interface that allows you to examine and change several hundred kernel parameters on Linux or BSD. Changes take effect immediately, and there’s even a way to make them persist after a reboot. By using sysctl judiciously, you can optimize your box without having to recompile the kernel, and get the results immediately.
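
For example, on the host you can inspect and tweak net.core.somaxconn like this (the second command is only illustrative, and the change lasts until the next reboot unless you persist it in /etc/sysctl.conf):

$ sysctl net.core.somaxconn                 # prints the current value, 128 by default on most distributions
$ sudo sysctl -w net.core.somaxconn=1024    # raises the limit until the next reboot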

Please note that not all sysctls are namespaced as of the Docker 19.03.0 CE Pre-Release. Docker does not support changing sysctls inside a container that would also modify the host system.

Docker does support setting namespaced kernel parameters at runtime & runc honors this. Have a look:

$ docker run --runtime=runc --sysctl net.ipv4.ip_forward=1 -it alpine sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
bdf0201b3a05: Pull complete 
Digest: sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913
Status: Downloaded newer image for alpine:latest
/ # sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
/ # 

It is important to note that sysctl support is not new to Docker. Support for sysctl in Docker Compose started with compose file format v2.1.

For example, to set kernel parameters in the container, you can use either a dictionary or an array, as shown below.

sysctls:
  net.core.somaxconn: 1024
  net.ipv4.tcp_syncookies: 0

sysctls:
  - net.core.somaxconn=1024
  - net.ipv4.tcp_syncookies=0
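
With the 19.03 pre-release, the same parameter can also be set directly on a Swarm service from the CLI; a minimal sketch, assuming the new --sysctl flag on docker service create:

$ docker service create --name redis --sysctl net.core.somaxconn=1024 redis:latest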

In this blog post, I will showcase how to use sysctl on a 2-node Docker Swarm cluster. Let us get started.

Installing a Node with Docker 19.03.0 Beta 1 Test Build on Ubuntu 18.10

Method I

Download the static binary archive. Go to https://download.docker.com/linux/static/stable/ (or change stable to nightly or test), choose your hardware platform, and download the .tgz file relating to the version of Docker CE you want to install.

Captain'sBay==>wget https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
--2019-04-10 09:20:01--  https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
Resolving download.docker.com (download.docker.com)... 54.230.75.15, 54.230.75.117, 54.230.75.202, ...
Connecting to download.docker.com (download.docker.com)|54.230.75.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62701372 (60M) [application/x-tar]
Saving to: ‘docker-19.03.0-beta1.tgz’
docker-19.03.0-beta1.tgz  100%[=====================================>]  59.80M  10.7MB/s    in 7.1s    
2019-04-10 09:20:09 (8.38 MB/s) - ‘docker-19.03.0-beta1.tgz’ saved [62701372/62701372]

Extract the archive

You can use the tar utility. The dockerd and docker binaries are extracted.

Captain'sBay==>tar xzvf docker-19.03.0-beta1.tgz 
docker/
docker/ctr
docker/containerd-shim
docker/dockerd
docker/docker-proxy
docker/runc
docker/containerd
docker/docker-init
docker/docker
Captain'sBay==>

Move the binaries to the executable path

Move the binaries to a directory on your executable path, such as /usr/local/bin/. If you skip this step, you must provide the path to the executable when you invoke docker or dockerd commands.

Captain'sBay==>sudo cp -rf docker/* /usr/local/bin/

Start the Docker daemon in the background, then verify the version:

$ sudo dockerd &
$ sudo docker version
Client: Docker Engine - Community
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.12.1
 Git commit:        62240a9
 Built:             Thu Apr  4 19:15:07 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.1
  Git commit:       62240a9
  Built:            Thu Apr  4 19:22:34 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Captain'sBay==>

Testing with hello-world

Captain'sBay==>sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
INFO[2019-04-10T09:26:23.338596029Z] shim containerd-shim started                  address="/containerd-shim/moby/5b23a7045ca683d888c9d1026451af743b7bf4005c6b8dd92b9e95e125e68134/shim.sock" debug=false pid=2953
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/
For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Verifying Docker version

root@DebianBuster:~# docker version
Client:
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.11.5
 Git commit:        62240a9
 Built:             Thu Apr  4 19:18:53 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.11.5
  Git commit:       62240a9
  Built:            Thu Apr  4 19:17:35 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
root@DebianBuster:~#

Creating a 2-Node Docker Swarm Mode Cluster

swarm-node-1:~$ sudo docker swarm init --advertise-addr 10.140.0.6 --listen-addr 10.140.0.6:2377
Swarm initialized: current node (c78wm1g99q1a1g2sxiuawqyps) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Run the below command on worker node:

swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
This node joined a swarm as a worker.

Listing the Swarm Mode Cluster

$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
rf3xns913p4tlprmu98z2o8hi     swarm-node2         Ready               Active                                  19.03.0-beta1
isbcijzlrft3ahpbzhgipwr9a *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

Running Multi-service Docker Compose for Redis

Redis is an open source, in-memory data structure store, used as a database, cache and message broker. Redis Commander is an application that allows users to explore a Redis instance through a browser. Let us look at the Docker Compose file for Redis and Redis Commander shown below:

version: '3'
services:
  redis:
    hostname: redis
    image: redis

  redis-commander:
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
    - REDIS_HOSTS=local:redis:6379
    ports:
    - "8081:8081"

Ensure that Docker Compose is installed on your system using the below commands:

sudo curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
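
You can quickly confirm that the binary is on your PATH and executable:

$ docker-compose version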

Run the below command to bring up the Redis application on Docker Swarm Mode:

sudo docker stack deploy -c docker-compose.yml myapp

$ sudo docker stack deploy -c docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating service myapp_redis-commander
Creating service myapp_redis

Verifying if the services are up and running:

~$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
myapp               2                   Swarm

~$ sudo docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                    PORTS
ucakpqi7ozg1        myapp_redis             replicated          1/1                 redis:latest
fxor8v90a4m0        myapp_redis-commander   replicated          0/1                 rediscommander/redis-commander:latest    *:8081->8081/tcp

Checking the service logs:


$ docker service logs -f myapp3_redis
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # Warning: no config file specified, using the default config. In order to specify a configfile use redis-server /path/to/redis.conf
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:M 17 Apr 2019 06:26:08.009 * Running mode=standalone, port=6379.
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:M 17 Apr 2019 06:26:08.009 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

As you can see above, there is a warning that /proc/sys/net/core/somaxconn is set to the lower value of 128.

Building a Docker Compose file with the sysctls parameter

Let us try to build a new Docker Compose file with the sysctls parameter specified.

Copy the below content and save it as a docker-compose.yml file.

version: '3'
services:
  redis:
    hostname: redis
    image: redis
    sysctls:
      net.core.somaxconn: 1024
  redis-commander:
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
    - REDIS_HOSTS=local:redis:6379
    ports:
    - "8081:8081"

Running Your Redis application

$ sudo docker stack deploy -c docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating service myapp_redis
Creating service myapp_redis-commander

$ sudo docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                    PORTS
2oxhaychob7s        myapp_redis             replicated          1/1                 redis:latest
pjdwti7hkg1q        myapp_redis-commander   replicated          1/1                 rediscommander/redis-commander:latest    *:80->8081/tcp

Verifying the Redis service logs

$ sudo docker service logs -f myapp_redis
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:C 17 Apr 2019 06:59:44.510 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:C 17 Apr 2019 06:59:44.510 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:M 17 Apr 2019 06:59:44.511 * Running mode=standalone, port=6379.

You can see that the warning around /proc/sys/net/core/somaxconn is no longer displayed, which shows that the sysctls parameter has actually taken effect.
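
If you want to double-check, you can read the value from inside the running Redis task on the node where it is scheduled; the container name is generated by Swarm, so look it up first:

$ sudo docker ps --filter name=myapp_redis --format '{{.Names}}'
$ sudo docker exec <container-name-from-above> cat /proc/sys/net/core/somaxconn   # should print 1024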

In my next blog post, I will talk about rootless Docker and how to get it tested. Stay tuned!

Switching Docker 18.09 Community Edition to Enterprise Engine with no downtime

Estimated Reading Time: 3 minutes 

With the newer Docker Engine 18.09 release, a new feature called CE-EE Node Activate has been introduced. It allows a user to perform an in-place, seamless activation of the Enterprise Engine feature set on an existing Community Edition (CE) node through the Docker command line. CE-EE Node Activate applies a license and switches the Docker engine to the Enterprise engine binary.

Pre-requisite:

  • Docker Community Edition (CE) version must be 18.09 or higher.
  • All of the Docker packages must be installed: docker-cli, docker-server, and containerd.
  • Node-level engine activation between CE and EE is only supported within the same version of Docker Enterprise Engine.

Tested Infrastructure

  • Platform: Google Cloud Platform
  • Number of Instances: 1
  • Reading Time: 5 min

Pre-requisite

  • Create an account with Google Cloud Engine (Free Tier)
  • Pick up Ubuntu 18.10 as OS instance

Installing Docker Community Edition 18.09

Verifying Ubuntu 18.10 release

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.10 (Cosmic Cuttlefish)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.10"
VERSION_ID="18.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=cosmic
UBUNTU_CODENAME=cosmic

Installing Docker 18.09 Release

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic test"
sudo apt install docker-ce
~$ sudo docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:49:01 2018
 OS/Arch:           linux/amd64
 Experimental:      false
Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:16:44 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Running Nginx Docker container

$ sudo docker run -d -p 80:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
a5a6f2f73cd8: Pull complete 
67da5fbcb7a0: Pull complete 
e82455fa5628: Pull complete 
Digest: sha256:31b8e90a349d1fce7621f5a5a08e4fc519b634f7d3feb09d53fac9b12aa4d991
Status: Downloaded newer image for nginx:latest
ba4a5822d7c991c04418b2fbbcadb86057eef4d98ba3f930bff569ac8058468e

$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
ba4a5822d7c9        nginx               "nginx -g 'daemon of…"   5 seconds ago       Up 3 seconds        0.0.0.0:80->80/tcp   peaceful_swanson

Verifying the Nginx Docker container is up and running

~$ sudo curl localhost:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Connect your system to your Docker Hub account

$sudo docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password: 
WARNING! Your password will be stored unencrypted in /home/joginderkour1950/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
 

Downloading Your Docker Enterprise License

  • Go to https://store.docker.com/my-content site.
  • Login with your Docker ID.
  • Under your profile page, click on “My Content”
  • Click on “Setup” to get your Docker Enterprise License
  • Download your Docker Enterprise License in your system
  • Copy the content of the .lic file
  • Create a file called mylicense.lic on your Ubuntu system and save it in some location.

Activate the EE license. You must use sudo even if your user is part of the docker group.

$ sudo docker engine activate --license mylicense.lic
License: Quantity: 10 Nodes     Expiration date: 2018-12-10     License is currently active
18.09.0: resolved 
267a9a121ee1: done 
4365cd59d876: done [==================================================>]  1.161kB/1.161kB
7ec4ee35c404: done [==================================================>]   4.55MB/4.55MB
3c60d2c9ddf3: done [==================================================>]  25.71MB/25.71MB
55fa4079a8ab: done [==================================================>]  1.122MB/1.122MB
c5a93cbd4679: done [==================================================>]  333.9kB/333.9kB
e661b0f8ba29: done [==================================================>]   4.82kB/4.82kB
Successfully activated engine.
Restart docker with 'systemctl restart docker' to complete the activation.

Restarting the Docker service

$ sudo systemctl restart docker

Verifying Docker Enterprise Version

$ sudo docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:49:01 2018
 OS/Arch:           linux/amd64
 Experimental:      false
Server: Docker Engine - Enterprise
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       33a45cd
  Built:            Wed Nov  7 00:17:07 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Verifying that the Nginx container is still running

$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
ba4a5822d7c9        nginx               "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes        0.0.0.0:80->80/tcp   peaceful_swanson