Getting Started with Multi-Node Kubernetes Cluster using LinuxKit

Here’s BIG news for the entire container community – “Kubernetes support is coming to the Docker Platform”. What does this mean? It means that developers and operators can now build apps with Docker and seamlessly test and deploy them using both Docker Swarm and Kubernetes. The announcement was made on the first day of DockerCon EU 2017 in Copenhagen, where Solomon Hykes, Founder of Docker, invited Kubernetes co-founder Tim Hockin on stage for the first time. Tim welcomed the Docker family to the vibrant Kubernetes world.

In case you missed the news, here are the important takeaways from the Docker–Kubernetes announcement –

  • Kubernetes support will be added to Docker Enterprise Edition for the first time.
  • Kubernetes support will be added optionally to Docker Community Edition for Mac and Windows.
  • Kubernetes and Docker Swarm both orchestrators will be available under Docker Platform.
  • Kubernetes integration will be made available under the Moby project too.

Docker Inc. firmly believes that bringing Kubernetes to Docker will simplify and advance the management of Kubernetes for developers, enterprise IT and hackers, and deliver the advanced capabilities of Docker to a broader set of applications.



In case you want to be first to test drive, click here to sign up and be notified when the beta is ready.

Please be aware that the beta program is expected to be ready by the end of 2017.

Can’t wait for the beta program? Here’s a quick guide on how to get a multi-node Kubernetes cluster up and running using LinuxKit.

Under this blog post, we will see how to build a 5-node Kubernetes cluster using LinuxKit. Here’s the bonus – we will also deploy a WordPress application on top of the Kubernetes cluster. I will be building it on my macOS Sierra 10.12.6 system running the latest Docker for Mac 17.09.0 Community Edition.


  1. Ensure that the latest Docker release is up and running.

  2. Clone the LinuxKit repository:

sudo git clone https://github.com/linuxkit/linuxkit.git
cd linuxkit/projects/kubernetes/

  3. Execute the below script to build the Kubernetes OS images using LinuxKit & Moby:

sudo chmod +x

All I have done is put all the manual steps under a script as shown below:



This should build up the Kubernetes OS image.

Run the below command to find the IP address of the kubelet container:

ip addr show dev eth0


You can check the list of tasks running as shown below:

Once the kubelet is ready, it’s time to log into it directly from a new terminal:

sudo ./

You have now SSHed into the kubelet container, aptly named “LinuxKit Kubernetes Project”.

We are going to mark it as “Master Node”.

Initializing the Kubernetes Master Node

Now it’s time to manually initialize the master node with the kubeadm command as shown below:

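At its simplest, the initialization boils down to the standard kubeadm bootstrap. A minimal sketch (the project’s helper script wraps this step, so the exact flags may differ):

kubeadm init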

Wait a few minutes till the kubeadm command completes. Once done, note the kubeadm join arguments shown below:


Your Kubernetes master node gets initialized successfully as shown below:



It’s time to run the below commands to join the worker nodes to the cluster:

sudo ./ 1 --token 4cec03.1a7ccb44115f427a
sudo ./ 2 --token 4cec03.1a7ccb44115f427a
sudo ./ 3 --token 4cec03.1a7ccb44115f427a
sudo ./ 4 --token 4cec03.1a7ccb44115f427a


This brings up a 5-node Kubernetes cluster as shown below:


Deploying WordPress Application on Kubernetes Cluster:

Let us try to deploy a WordPress application on top of this Kubernetes Cluster.

Head over to the below directory:

cd linuxkit/projects/kubernetes/wordpress


Creating a Persistent Volume

kubectl create -f local-volumes.yaml

The content of local-volumes.yaml looks like this:

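For reference, a hostPath-backed PersistentVolume entry in that file looks roughly like the sketch below (the file defines two such volumes; the name, capacity and path here are illustrative, not the exact values from the repository):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1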
This creates 2 persistent volumes:

Verifying the volumes:


Creating a Secret for MySQL Password

kubectl create secret generic mysql-pass --from-literal=password=mysql123

Verifying the secrets using the below command:
kubectl get secrets


Deploying MySQL:

kubectl create -f mysql-deployment.yaml

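The key detail in mysql-deployment.yaml is how the Pod consumes the mysql-pass secret created earlier. A hedged sketch of the relevant part (2017-era API version; not the exact manifest from the repository):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass      # the secret created with kubectl create secret
              key: password
        ports:
        - containerPort: 3306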

Verify that the Pod is running by executing the following command:

kubectl get pods


Deploying WordPress:

kubectl create -f wordpress-deployment.yaml

By now, you should be able to access WordPress UI using the web browser.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.



Docker, Prometheus & Pushgateway for NVIDIA GPU Metrics & Monitoring

In my last blog post, I talked about how to get started with NVIDIA Docker & interaction with the NVIDIA GPU system. I demonstrated the NVIDIA Deep Learning GPU Training System, a.k.a. DIGITS, by running it inside a Docker container. ICYMI – DIGITS is essentially a webapp for training deep learning models and is used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. The currently supported frameworks are Caffe, Torch, and TensorFlow. It simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment.


In a typical HPC environment where you run hundreds of NVIDIA GPU-equipped cluster nodes, it becomes important to monitor those systems to gain insight into performance metrics, memory usage, temperature and utilization. Tools like Ganglia & Nagios are very popular due to their scalable & distributed monitoring architecture for high-performance computing systems such as clusters and grids. Ganglia leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. But with the advent of container technology, there is a need for modern monitoring tools and solutions which work well with Docker & microservices.

It’s the modern world of the Prometheus Stack…

Prometheus is a 100% open-source service monitoring system and time series database written in Go. It is a full monitoring and trending system that includes built-in and active scraping, storing, querying, graphing, and alerting based on time series data. It has knowledge about what the world should look like (which endpoints should exist, what time series patterns mean trouble, etc.), and actively tries to find faults.

How is it different from Nagios?

Though both serve the purpose of monitoring, Prometheus wins this debate with the below major points –

  • Nagios is host-based. Each host can have one or more services, and each service has one check. There is no notion of labels or a query language. Prometheus, on the other hand, comes with its robust query language called “PromQL”. Prometheus provides a functional expression language that lets the user select and aggregate time series data in real time. The result of an expression can either be shown as a graph, viewed as tabular data in Prometheus’s expression browser, or consumed by external systems via the HTTP API.
  • Nagios is suitable for basic monitoring of small and/or static systems where blackbox probing is sufficient. But if you want to do whitebox monitoring, or have a dynamic or cloud-based environment, then Prometheus is a good choice.
  • Nagios is primarily just about alerting based on the exit codes of scripts. These are called “checks”. There is silencing of individual alerts, but no grouping, routing or deduplication.

Let’s talk about Prometheus Pushgateway..

Occasionally you will need to monitor components which cannot be scraped. They might live behind a firewall, or they might be too short-lived to expose data reliably via the pull model. The Prometheus Pushgateway allows you to push time series from these components to an intermediary job which Prometheus can scrape. Combined with Prometheus’s simple text-based exposition format, this makes it easy to instrument even shell scripts without a client library.

The Prometheus Pushgateway allows ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus. It is important to understand that the Pushgateway is explicitly not an aggregator or distributed counter but rather a metrics cache. It does not have statsd-like semantics. The metrics pushed are exactly the same as you would present for scraping in a permanently running program. For machine-level metrics, the textfile collector of the Node exporter is usually more appropriate. The Pushgateway is intended for service-level metrics; it is not an event store.
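For instance, pushing an ad-hoc metric from a shell script is a one-liner against the Pushgateway’s HTTP API (assuming the default port 9091; the metric name and job label below are made up for illustration):

echo "gpu_batch_duration_seconds 42" | curl --data-binary @- http://localhost:9091/metrics/job/gpu_batch_job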

Under this blog post, I will showcase how NVIDIA Docker, Prometheus & Pushgateway come together to push NVIDIA GPU metrics to the Prometheus Stack.

Infrastructure Setup:

  • Docker Version: 17.06
  • OS: Ubuntu 16.04 LTS
  • Environment : Managed Server Instance with GPU
  • GPU: GeForce GTX 1080 Graphics card

Cloning the GitHub Repository

Run the below command to clone the repository to your Ubuntu 16.04 system equipped with a GPU card:

git clone https://github.com/ajeetraina/nvidia-prometheus-stats

Script to bring up the Prometheus Stack (includes Grafana)

Change to the nvidia-prometheus-stats directory, set execute permission, and then run the script as shown below:

cd nvidia-prometheus-stats
sudo chmod +x
sudo sh

This script will bring up 3 containers in sequence – Pushgateway, Prometheus & Grafana.

Executing GPU Metrics Script:

NVIDIA provides a Python module for monitoring NVIDIA GPUs using the newly released Python bindings for NVML (NVIDIA Management Library). These bindings are under BSD license and allow simplified access to GPU metrics like temperature, memory usage, and utilization.

Next, under the same directory, you will find a Python script which collects the GPU metrics.

Execute the script (after updating the IP address on line 124 to match your host machine) as shown below:

sudo python

That’s it. It is time to open up the Prometheus UI at http://<IP-address>:9090 (we will get to Grafana in a moment).

Just type gpu under the Expression section and you will see the list of GPU metrics automatically turn up as shown below:

Accessing the targets

Go to Status > Targets to see which targets are accessible. The status should show UP.

Click on the Pushgateway endpoint to access the GPU metrics in detail as shown:

Accessing Grafana

 You can access Grafana through the below link:



Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.



Building a secure Docker Host VM on VMware ESXi using LinuxKit & Moby

Post DockerCon 2017 @ Austin, TX, I raised a feature request titled “LinuxKit command to push vmware.vmdk to remote ESXi datastore”. Within a few weeks, the feature was introduced by the LinuxKit team. Special thanks go to Daniel Finneran, who worked hard to get this feature merged into the LinuxKit main branch.


The LinuxKit project is 5 months old now. It has already bagged 3100+ stars, added 69 contributors and seen 350+ forks till date. If you are pretty new to it, LinuxKit is not a full host operating system; it primarily has two jobs: run containerd containers, and be secure. It uses modern kernels, and updates frequently following new releases. As such, the system does not contain extraneous packages or drivers by default. Because LinuxKit is customizable, it is up to individual operators to include any additional bits they may require.

LinuxKit is undoubtedly Secure

The core system components included in LinuxKit userspace are key to security, are written in type safe languages such as Rust, Go and OCaml, and run with maximum privilege separation and isolation. The project is currently leveraging MirageOS to construct unikernels to achieve this, and that progress can be tracked here: as of this writing, dhcp is the first such type safe program. There is ongoing work to remove more C components, and to improve, fuzz test and isolate the base daemons. Further rationale about the decision to rewrite system daemons in MirageOS is explained at length in this document. I am planning to come up with a blog post on the “LinuxKit Security” aspect. Keep an eye on this space.

Let’s talk about building a secure Docker Host VM…

I am a great fan of VMware PowerCLI. I have been using it since the time I was working full time at VMware Inc. (during the 2010–2011 timeframe). Today the quickest way to get VMware PowerCLI up and running is by using the PhotonOS based Docker image. Just one Docker CLI command and you are already inside Photon OS, running PowerShell & PowerCLI to connect to a remote ESXi host and build up VMware infrastructure. Still, this might not give you a secure Docker host environment. If you are really interested in building a secure, portable and lean Docker host operating system, LinuxKit is the right tool. But how?

Under this blog post, I am going to show how Moby & LinuxKit can help you build a secure Docker 17.07 host VM on top of VMware ESXi.

Pre-requisites:

  • VMware vSphere ESXi 6.x
  • Linux or macOS with Go packages installed
  • Docker 17.06/17.07 installed on the system

The below commands have been executed on one of my local Ubuntu 16.04 LTS systems, which can reach the ESXi system flawlessly.

Cloning the LinuxKit Repository:

git clone https://github.com/linuxkit/linuxkit.git

Building Moby & LinuxKit:

cd linuxkit
make

Configuring the right PATH for Moby & LinuxKit

cp bin/moby /usr/local/bin
cp bin/linuxkit /usr/local/bin

A Peep into vmware.yml File

The first 3 lines show a modern, securely configured kernel. The init section spins up containerd to run services. The onboot section runs dhcpcd for networking. The services section includes a getty service container for the shell and runs an nginx service container. The trust section indicates that all images are signed and verified.
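For reference, such a file is shaped roughly like the sketch below (the image tags are placeholders – check the LinuxKit repository for the exact pinned versions):

kernel:
  image: "linuxkit/kernel:4.9.x"
  cmdline: "console=tty0"
init:
  - linuxkit/init:<hash>
  - linuxkit/runc:<hash>
  - linuxkit/containerd:<hash>
onboot:
  - name: dhcpcd
    image: "linuxkit/dhcpcd:<hash>"
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty
    image: "linuxkit/getty:<hash>"
    env:
      - INSECURE=true
  - name: nginx
    image: "nginx:alpine"
    capabilities:
      - CAP_NET_BIND_SERVICE
trust:
  org:
    - linuxkit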

Building VMware ISO Image using Moby

moby build -output iso-bios -name vmware vmware.yml


Pushing VMware ISO Image to remote ESXi datastore

linuxkit push vcenter -datastore=datastore1 -url https://root:xxx@100.98.x.x/sdk -folder=linuxkit vmware.iso

Usage of linuxkit push vcenter:


Running a secure VM directly from the ESXi datastore using LinuxKit

dell@redfish-ubuntu:~/linuxkit/examples$ sudo linuxkit run vcenter -cpus 8 -datastore datastore1 -mem 2048 -network 'VM Network' -hostname -powerOn -url https://root:xxx@100.98.x.x/sdk vmware.iso
Creating new LinuxKit Virtual Machine
Adding ISO to the Virtual Machine
Adding VM Networking
Powering on LinuxKit VM

Now let us verify that VM is up and running using either VMware vSphere Client or SDK URL.

You will find that the VM has already booted up with the latest Docker 17.07 platform up and running.

Building a Docker Host VM using Moby & LinuxKit

In case you want to build a Docker Host VM, you can refer to the below docker-vmware.yml file:

Just re-run the below command to get the new VM image:

moby build -output iso-bios -name vmware docker-vmware.yml

Follow the above steps to push it to the remote datastore and run it using LinuxKit. Hence, you have a secured Docker 17.07 host ready to build Docker images and bring up your application stack.

How about building a Photon OS based Docker image using Moby & LinuxKit? Once you build it and push it to the VM, it’s all ready to build virtual infrastructure. Interesting, isn’t it?

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.




Running NVIDIA Docker in the GPU-Accelerated Data Center

It’s time to look at GPUs inside Docker container..

Docker is the leading container platform which provides both hardware and software encapsulation by allowing multiple containers to run on the same system at the same time, each with their own set of resources (CPU, memory, etc.) and their own dedicated set of dependencies (library version, environment variables, etc.). Docker can now be used to containerize GPU-accelerated applications. In case you’re new to GPU-accelerated computing, it is basically the use of a graphics processing unit to accelerate high performance computing workloads and applications. This means you can easily containerize and isolate accelerated applications without any modifications and deploy them on any supported GPU-enabled infrastructure.

Docker does not natively support NVIDIA GPUs within containers. There are workarounds, like fully installing the NVIDIA drivers inside the container and mapping in the character devices corresponding to the NVIDIA GPUs (e.g. /dev/nvidia0) on launch, but they are not recommended.

Here comes the nvidia-docker plugin to the rescue…

nvidia-docker is an open source project hosted on GitHub which provides driver-agnostic CUDA images & a docker command line wrapper that mounts the user mode components of the driver and the GPUs (character devices) into the container at launch. With this enablement, the NVIDIA Docker plugin enables deployment of GPU-accelerated applications across any Linux GPU server with NVIDIA Docker support. What does this mean? Using Docker, we can develop and prototype GPU applications on a workstation, and then ship and run those applications anywhere that supports GPU containers. Earlier this year, the nvidia-docker 1.0.1 release announced support for both Docker 17.03 Community and Enterprise Edition.

Some of the key notable benefits include –

  • Legacy accelerated compute apps can be containerized and deployed on newer systems, on premise, or in the cloud.
  • Ease of Deployment
  • Isolation of Resource
  • Bare Metal Performance
  • Facilitate Collaboration
  • Access to heterogeneous CUDA toolkit environments (sharing the host driver)
  • Specific GPU resources can be allocated to container for better isolation and performance.
  • You can easily share, collaborate, and test applications across different environments.
  • Portable and reproducible builds

                                                                                                                                                               ~source: Nvidia


Let’s talk about libnvidia-container a bit..

libnvidia-container is the NVIDIA container runtime library. The repository provides a library and a simple CLI utility to automatically configure GNU/Linux containers leveraging NVIDIA hardware. The implementation relies on kernel primitives and is designed to be agnostic of the container runtime. Basic features include –

  • Integrates with the container internals
  • Agnostic of the container runtime
  • Drop-in GPU support for runtime developers
  • Better stability, follows driver releases
  • Brings features seamlessly (Graphics, Display, Exclusive mode, VM, etc.)

                                                                                                                                                                             ~ source: NVIDIA

Under this blog post, I will show you how to get started with nvidia-docker to interact with the NVIDIA GPU system, and then look at a few interesting applications which can be built for the GPU-accelerated data center. Let us get started –

Infrastructure Setup:

Docker Version: 17.06

OS: Ubuntu 16.04 LTS

Environment: Managed Server Instance with GPU

GPU: GeForce GTX 1080 Graphics card


  • Verify that the GPU card is equipped in your hardware:


  • Install nvidia-docker & nvidia-docker-plugin under Ubuntu 16.04 using wget as shown below:
sudo wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

Initializing nvidia-docker service:

ajit@Ubuntu-1604-xenial-64-minimal:~$ systemctl status nvidia-docker
nvidia-docker.service - NVIDIA Docker plugin
   Loaded: loaded (/lib/systemd/system/nvidia-docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2017-08-20 10:52:43 CEST; 6 days ago
     Docs: https://github.com/NVIDIA/nvidia-docker/wiki
 Main PID: 19921 (nvidia-docker-p)
    Tasks: 13
   Memory: 12.3M
      CPU: 5.046s
   CGroup: /system.slice/nvidia-docker.service
           └─19921 /usr/bin/nvidia-docker-plugin -s /var/lib/nvidia-docker

Whenever nvidia-docker is installed, it creates a Docker volume and mounts the devices into a docker container automatically.

Did you know?

It is possible to avoid relying on the nvidia-docker wrapper to launch GPU containers using ONLY docker, and that can be done by using the REST API directly as shown below:

docker run -ti --rm `curl -s http://localhost:3476/docker/cli` nvidia/cuda nvidia-smi

NVIDIA’s System Management Interface

If you want to know the status of your NVIDIA GPU, then nvidia-smi is the handy command, which can be run using the nvidia/cuda container. This is generally useful when you’re having trouble getting your NVIDIA GPUs to run GPGPU code.

nvidia-docker run --rm nvidia/cuda nvidia-smi


Listing all NVIDIA Devices:

nvidia-docker run --rm nvidia/cuda nvidia-smi -L
GPU 0: GeForce GTX 1080 (UUID: GPU-70ecf884-c4fb-159b-a67e-26b4ce96681d)

Listing all available data on the particular GPU:

nvidia-docker run --rm nvidia/cuda nvidia-smi -i 0 -q

Listing details for each GPU:

nvidia-docker run --rm nvidia/cuda nvidia-smi --query-gpu=index,name,uuid,serial --format=csv
index, name, uuid, serial
0, GeForce GTX 1080, GPU-70ecf884-c4fb-159b-a67e-26b4ce96681d, [Not Supported]

Listing the available clock speeds:

nvidia-docker run --rm nvidia/cuda nvidia-smi -q -d SUPPORTED_CLOCKS

Building & Testing NVIDIA-Docker Images

If you look at the samples/ folder under the nvidia-docker repository, there are a couple of images that can be used to quickly test nvidia-docker on your machine. Unfortunately, the samples are not available on Docker Hub, hence you will need to build the images locally. I have built a few of them which I am going to showcase:

cd /nvidia-docker/samples/ubuntu-16.04/deviceQuery/
docker build -t ajeetraina/nvidia-devicequery .


Running the DeviceQuery container

You can leverage ajeetraina/nvidia-devicequery container directly as shown below:



Listing the current GPU clock speed, default clock speed & maximum possible clock speed:

nvidia-docker run --rm nvidia/cuda nvidia-smi -q -d CLOCK

Retrieving the System Topology:

The topology refers to how the PCI-Express devices (GPUs, InfiniBand HCAs, storage controllers, etc.) connect to each other and to the system’s CPUs. This can be retrieved as follows:


A Quick Look at NVIDIA Deep Learning..

The NVIDIA Deep Learning GPU Training System, a.k.a. DIGITS, is a webapp for training deep learning models. It puts the power of deep learning into the hands of engineers & data scientists. It can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. The currently supported frameworks are Caffe, Torch, and TensorFlow.

DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment. DIGITS is completely interactive so that data scientists can focus on designing and training networks rather than programming and debugging.

To test-drive DIGITS, you can get it up and running in a single Docker container:

ajit@Ubuntu-1604-xenial-64-minimal:~$ NV_GPU=0 nvidia-docker run --name digits -d -p 5000:5000 nvidia/digits

In the above command, NV_GPU is a method of assigning GPU resources to a container, which is critical for leveraging Docker on a multi-GPU system. This passes GPU ID 0 from the host system to the container as a resource. Note that if you passed GPU IDs 2,3 for example, the container would still see the GPUs as IDs 0,1 inside the container, with the PCI IDs of 2,3 from the host system. As I have a single GPU card, I just passed it as NV_GPU=0.
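For example, on a multi-GPU box you could hand two specific devices to a container like this (the GPU IDs are illustrative):

NV_GPU=2,3 nvidia-docker run --rm nvidia/cuda nvidia-smi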

You can open up a web browser and verify if it’s running at the below address:

w3m http://<dockerhostip>:5000

The below is the snippet from my w3m text browser:


How about Docker Compose? Is it supported?

Yes, of course.  

Let us see how Docker Compose works with nvidia-docker.

  • First we need to figure out the nvidia driver version 


As shown above, the nvidia driver version displays 375.66.

  • Create a docker volume that uses the nvidia-docker plugin.
docker volume create --name=nvidia_driver_375.66 -d nvidia-docker
nvidia_driver_375.66

Verify it with the below command:

sudo docker volume ls
DRIVER              VOLUME NAME
local               15dd59ba1017ca5b822473fb3ed8a341b4e29e59f3725565c00810ddd5619878
nvidia-docker       nvidia_driver_375.66

Now let us look at the docker-compose YAML file, sketched below:
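For reference, the typical nvidia-docker v1 compose pattern looks like the sketch below (the service name is illustrative, and the devices and volume name must match your host):

version: '2'

services:
  gpu-test:
    image: nvidia/cuda
    command: nvidia-smi
    devices:
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0
    volumes:
      - nvidia_driver_375.66:/usr/local/nvidia:ro

volumes:
  nvidia_driver_375.66:
    external: true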

If you have ever worked with docker-compose, you can easily understand what each line specifies. I specified /dev/nvidia0 as I had a single GPU card, and captured the correct volume driver name which we created in the last step.

Just initiate the docker-compose as shown below:

docker-compose up

This will start a container which runs nvidia-smi and then exits immediately. To keep it running, one can add tty: true inside the docker-compose file.

Let us see another interesting topic…TensorFlow

I have a sample TensorFlow based docker-compose file as shown below:
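It follows the same device/volume wiring, roughly like this (a sketch, not the exact file – the image tag and port mapping are illustrative):

version: '2'

services:
  tensorflow:
    image: tensorflow/tensorflow:latest-gpu
    ports:
      - "8888:8888"
    devices:
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0
    volumes:
      - nvidia_driver_375.66:/usr/local/nvidia:ro

volumes:
  nvidia_driver_375.66:
    external: true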

Verify if the container is up and running –


In the next blog post, I will showcase how to push the GPU metrics to Prometheus & cAdvisor.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel



Hybrid Docker Swarm Mode Cluster with Multi-OS Application Deployment

Here comes the most awaited feature of 2017 – building a Docker Swarm cluster that is all-Windows, a hybrid cluster of Linux and Windows machines, or all-Linux. With the inclusion of the recent Docker 17.03 overlay networking feature for Windows Server 2016, it’s now possible to create swarm clusters that include both Windows and Linux environments across both public and private cloud infrastructure.

Today most enterprises manage a diverse set of applications or workloads that might include both traditional applications & microservices, built on either or both of the Linux and Windows platforms. Docker now provides a way to modernize all these different applications by packaging them in a standard format which does not require software development teams to change their code. Organizations can containerize traditional apps and microservices and deploy them in the same cluster, which includes both Linux & Windows environments.

Building Hybrid Docker Swarm Mode Cluster Environment

Under this blog post, I will showcase how to build a hybrid Docker Swarm Mode cluster. I will be leveraging Google Cloud Platform, where I have 2 Windows Server 2016 instances & 1 Ubuntu 16.04 instance up and running.

Setting up Windows Server 2016 with Docker 17.03 Enterprise Edition

  1. Bring up a Windows Server 2016 instance on Google Cloud Platform.
  2. Set up the Windows password using the below command and then proceed to download the RDP file:

gcloud beta compute --project "psychic-cascade-175904" reset-windows-password "windows2016" --zone "asia-east1-a"
ip_address: 35.194.X.X
password: XXXXXXX
username: <yourusername>

  3. Open up the RDP file downloaded under step 2 and connect to the instance remotely.
  4. Just 3 commands and Docker should be up and running on Windows Server 2016:

Install-Module -Name DockerMsftProvider -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force


This should restart the Windows instance, and you should be able to see Docker up and running once it comes back.

One can easily verify if Docker 17.03.2 EE is up and running using the below command:


Configuring Windows Server 2016 as Docker Swarm Manager Node:

Run the below command to make this node the Swarm manager node:

docker swarm init --listen-addr --advertise-addr

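For reference, with real values filled in it looks like this (the address is illustrative – use the Windows node’s private IP):

docker swarm init --listen-addr 10.140.0.2:2377 --advertise-addr 10.140.0.2:2377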

Listing the Swarm Mode cluster nodes:

As of now, only 1 manager node gets listed.

Joining Windows Server 2016 as the first Worker Node

Install Docker 17.03 EE on the 2nd Windows Server 2016 instance and issue the below command to join it to the cluster as a worker node:

docker swarm join --token SWMTKN-1-4ia5wbutzfoimx5xm7eujadpa6vuivksmgijk4dm56ppw5u3ib-6hvdrvee3vlnlg8oftnnj80dw


Listing out the Swarm Mode cluster:


Adding Linux System to the Cluster

Log in to the Ubuntu 16.04 instance, install Docker 17.06 and then issue the below command to join it as the 3rd node of the existing cluster:

worker@ubuntu1604:~$ sudo docker swarm join --token SWMTKN-1-4ia5wbutzfoimx5xm7eujadpa6vuivksmgijk4dm56ppw5u3ib-6hvdrvee3vlnlg8oftnnj80dw
This node joined a swarm as a worker.

Wow! This builds up our first hybrid Swarm Mode cluster, which includes 2 Windows nodes & 1 Linux node.

A Quick way of verifying the OS type:

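One quick way is to ask the engine for each node’s platform (the node names here are illustrative – use the names reported by docker node ls):

docker node inspect --format '{{ .Description.Platform.OS }}' windows2016
docker node inspect --format '{{ .Description.Platform.OS }}' ubuntu1604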

Create an Overlay Network For Swarm Cluster

Before we start deploying applications, we need to create an overlay network which spans the cluster nodes. By default, the below listed network drivers show up:

As shown above, it displays 3 different networks – nat, none and ingress. Let us create a new overlay network 'collabnet' using the below command:

docker network create -d overlay collabnet


Creating our First Windows-based Service container

It’s time to create our first service. I will pick the MSSQL container, which will use the 'collabnet' overlay network and should get deployed only on the Windows platform, based on the applied constraint as shown below:

docker service create \
--network collabnet --endpoint-mode dnsrr \
--constraint 'node.platform.os == windows' \
--env-file db-credentials.env \
--name db \

Ensure that you have db-credentials.env under the same directory with the below content:

DB_CONNECTION_STRING=Server=db;Database=SignUp;User Id=sa;Password=collabnix123

Once you create the service, you can verify it is up and running with the below command:

docker service ps db

Scaling the DB service 

Let us scale the DB service and see if it still applies only to the Windows platform:

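Scaling is the standard one-liner (the replica count is up to you; 3 matches the output described below):

docker service scale db=3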
After scaling, there are now 3 instances of the same application up and running. MS SQL is running only on Windows systems, based on the constraint specified.

Creating Linux-specific applications

Let us create a web application with WordPress as the frontend and MySQL as the backend container, with a constraint that it should get deployed only on Linux platform nodes.

docker service create \
--replicas 1 \
--name wordpressdb1 \
--network collabnet \
--constraint 'node.platform.os == linux' \
--env MYSQL_ROOT_PASSWORD=collab123 \
--env MYSQL_DATABASE=wordpress

docker service create \
--env WORDPRESS_DB_HOST=wordpressdb1 \
--env WORDPRESS_DB_PASSWORD=collab123 \
--network collabnet --constraint 'node.platform.os == linux' \
--replicas 4 \
--name wordpressapp \
--publish 80:80/tcp \


Hence, we saw that the MS SQL service runs on Windows host instances while the Linux-specific WordPress application is successfully up and running on Linux nodes, which proves that the newer Swarm Mode comes with the ability to intelligently orchestrate across mixed clusters of Windows & Linux worker nodes.

Adding Linux Worker Node and promoting to Manager

You should be able to view the list of nodes from Ubuntu manager node too:

In case you are looking for an application whose frontend runs on the Windows platform while the backend runs on Linux, here is a quick example.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


Walkthrough: Enabling IPv6 Functionality for Docker & Docker Compose

By default, Docker assigns IPv4 addresses to containers. Does Docker support the IPv6 protocol too? If yes, how complicated is it to get it enabled? Can I use docker-compose to build microservices which use IPv6 addresses? What if I work for a company where our services run natively under an IPv6-only environment? How shall I build a multi-node cluster setup using IPv6? Does Docker 17.06 Swarm Mode support IPv6?

I have been reading numerous queries and GitHub issues around breaking IPv6 configuration while upgrading Docker versions, issues related to IPv6 changes with host configuration etc., and thought to share a few findings around the ongoing IPv6 effort in upcoming Docker releases.

Does Docker support IPv6 Protocol?

Yes. Support for IPv6 addresses has been there since the Docker Engine 1.5 release. As of Docker 17.06 (the latest stable release as of August 2017), the Docker server by default configures the container network for IPv4 only. You can enable IPv4/IPv6 dual-stack support by adding the below entry under the daemon.json file as shown below:


File: /etc/docker/daemon.json

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}

This is very similar to the old way of running the Docker daemon with the --ipv6 flag. Docker will set up the bridge docker0 with the IPv6 link-local address fe80::1.

Why did we add the "fixed-cidr-v6": "2001:db8:1::/64" entry?

By default, containers that are created will only get a link-local IPv6 address. To assign globally routable IPv6 addresses to your containers, you have to specify an IPv6 subnet to pick the addresses from. Setting the IPv6 subnet via the --fixed-cidr-v6 parameter when starting the Docker daemon will help us achieve globally routable IPv6 addresses.

The subnet for Docker containers should at least have a size of /80. This way an IPv6 address can end with the container’s MAC address and you prevent NDP neighbor cache invalidation issues in the Docker layer.

With the --fixed-cidr-v6 parameter set, Docker will add a new route to the routing table. Further, IPv6 routing will be enabled (you may prevent this by starting dockerd with --ip-forward=false).
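A quick way to double-check on the host (the route shown depends on your chosen subnet):

ip -6 route show dev docker0
# expect an entry resembling: 2001:db8:1::/64 dev docker0 metric 1024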

Let us closely examine the changes which Docker Host undergoes before & after IPv6 Enablement:

A Typical Host Network Configuration – Before IPv6 Enablement 


As shown above, before IPv6 protocol is enabled, the docker0 bridge network shows IPv4 address only.

Let us enable IPv6 on the host system. In case you find daemon.json already created under the /etc/docker directory, don’t delete the old entries; rather, just add the two below entries into the file as shown:

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}



Restarting the docker daemon to reflect the changes:

sudo systemctl restart docker


A Typical Host Network Configuration – After IPv6 Enablement 

Did you see anything new? Yes, docker0 now gets populated with the IPv6 configuration (shown below):

docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:06:62:82:4d brd ff:ff:ff:ff:ff:ff
    inet scope global docker0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:1::1/64 scope global tentative
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link tentative
       valid_lft forever preferred_lft forever

Not only this, the docker_gwbridge network interface too received IPv6 changes:

docker_gwbridge: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:bc:0b:2a:84 brd ff:ff:ff:ff:ff:ff
    inet scope global docker_gwbridge
       valid_lft forever preferred_lft forever
    inet6 fe80::42:bcff:fe0b:2a84/64 scope link
       valid_lft forever preferred_lft forever


PING TEST – Verifying IPv6 Functionalities For Docker Host containers

Let us try bringing up two containers on the same host and see if they can ping each other using their IPv6 addresses:


Setting up Ubuntu Container:

mymanager1==>sudo docker run -itd ajeetraina/ubuntu-iproute bash

Setting up CentOS Container:

mymanager1==>sudo docker run -itd ajeetraina/centos-iproute bash

[Please note: If you are using the default Ubuntu or CentOS Docker image, you will be surprised to find that the ip or ifconfig command doesn’t work. You might need to install the iproute package for the ip command to work & the net-tools package for ifconfig to work. If you want to save time, use ajeetraina/ubuntu-iproute for Ubuntu OR ajeetraina/centos-iproute for CentOS directly.]

Now let us initiate the quick ping test:


In this example the Docker container is assigned a link-local address with the network suffix /64 (here: fe80::42:acff:fe11:3/64) and a globally routable IPv6 address (here: 2001:db8:1:0:0:242:ac11:3/64). The container will create connections to addresses outside of the 2001:db8:1::/64 network via the link-local gateway at fe80::1 on eth0.

mymanager1==>sudo docker exec -it 907 ping6 fe80::42:acff:fe11:2
PING fe80::42:acff:fe11:2(fe80::42:acff:fe11:2) 56 data bytes
64 bytes from fe80::42:acff:fe11:2%eth0: icmp_seq=1 ttl=64 time=0.153 ms
64 bytes from fe80::42:acff:fe11:2%eth0: icmp_seq=2 ttl=64 time=0.100 ms
--- fe80::42:acff:fe11:2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.100/0.126/0.153/0.028 ms

So the two containers are able to reach each other using IPv6 addresses.

Does Docker Compose support IPv6 protocol?

The answer is yes. Let us verify it using docker-compose version 1.15.0 and compose file format 2.1. I faced an issue while using the latest 3.3 file format: as Docker Swarm Mode doesn’t support IPv6, it is not included under the 3.3 file format. Till then, let us try to bring up a container with an IPv6 address using the 2.1 file format:

docker-compose version
docker-compose version 1.15.0, build e12f3b9
docker-py version: 2.4.2
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016

Let us first verify the networks available on the host machine:


File: docker-compose.yml

version: '2.1'

services:
  app:
    image: busybox
    command: ping
    networks:
      app_net:
        ipv6_address: 2001:3200:3200::20

networks:
  app_net:
    enable_ipv6: true
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 2001:3200:3200::/64
          gateway: 2001:3200:3200::1


The above docker-compose file will create a new network called testping_app_net based on the IPv6 subnet 2001:3200:3200::/64, and the container should get an IPv6 address automatically assigned.

Let us bring up the services using docker-compose up and see if they communicate over the IPv6 protocol:


Verifying the IPv6 address for each container:

As shown above, the new container gets the IPv6 address 2001:3200:3200::20, and hence the containers are able to reach each other flawlessly.

What’s next? Under the next blog post, I am going to showcase how IPv6 works across multiple host machines and will talk about the ongoing effort to bring IPv6 support to Swarm Mode.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


Building a Secure VM based on LinuxKit on Microsoft Azure Platform

The LinuxKit GitHub repository recently crossed 3000 stars, has been forked around 300+ times and has added 60+ contributors. Just a 5-month-old project, it has already gained a lot of momentum across the Docker community. Built with a purpose that enables the community to create secure, immutable, and minimal Linux distributions, LinuxKit is mature enough to support a number of cloud platforms like Azure, AWS, Google Cloud Platform, VMware, and many more.


In my recent blogs, I showcased how to get LinuxKit OS built for Google Cloud Platform, Amazon Web Services and VirtualBox. ICYMI, I recently published a few videos on LinuxKit too. Check them out.


Under this blog post, I will walk through how to build a secure and portable VM based on a LinuxKit image on the Microsoft Azure Platform.


I will be leveraging macOS Sierra running Docker 17.06.1-ce-rc1-mac20. I tested it on Ubuntu 16.04 LTS too, running on one of the Azure VMs, and it went fine. Prior knowledge of Microsoft Azure / Azure CLI 2.0 will be required to configure the Service Principal for the VHD image to get uploaded to Azure smoothly.


Step-1: Pulling the latest LinuxKit repository

Pull the LinuxKit repository using the below command:

git clone https://github.com/linuxkit/linuxkit.git


Step-2: Build Moby & LinuxKit tool

cd linuxkit
make


Step-3: Copying the tools into the right PATH

cp -rf bin/moby /usr/local/bin/
cp -rf bin/linuxkit /usr/local/bin/


Step-4: Preparing Azure CLI tool

curl -L https://aka.ms/InstallAzureCli | bash


Step-5: Run the below command to restart your shell

exec -l $SHELL


Step-6: Building LinuxKit OS for Azure Platform

cd linuxkit/examples/
moby build -output vhd azure.yml

This will build a VHD image which now has to be pushed to the Azure Platform.

In order to push the VHD image to Azure, you need to authenticate LinuxKit with your Azure subscription, hence you will need to set up the following environment variables:

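Going by the LinuxKit Azure docs, these are the usual service principal credentials (the values below are placeholders):

export AZURE_SUBSCRIPTION_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export AZURE_TENANT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export AZURE_CLIENT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export AZURE_CLIENT_SECRET="your-client-secret"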

Alternatively, the easy way to get all the above details is through the below command:

az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code XXXXXX to authenticate.

The above command lists out the Subscription ID and Tenant ID, which can be exported therein.

Next, follow this link to create an Azure Active Directory application and service principal that can access resources. If you want to stick to CLI rather than UI, you can follow the below steps:

Step-7: Pushing the VHD image to Azure Platform

linuxkit run azure --resourceGroupName mylinuxkit --accountName mylinuxkitstore -location eastasia azure.vhd
Creating resource group in eastasia
Creating storage account in eastasia, resource group mylinuxkit

The command will end with the below messages:


Completed: 100% [     68.00 MB] RemainingTime: 00h:00m:00s Throughput: 0 Mb/sec
Creating virtual network in resource group mylinuxkitresource, in eastasia
Creating subnet linuxkitsubnet468 in resource group mylinuxkitresource, within virtual network linuxkitvirtualnetwork702
Creating public IP Address in resource group mylinuxkitresource, with name publicip159
Started deployment of virtual machine linuxkitvm941 in resource group mylinuxkitresource
Creating virtual machine in resource group mylinuxkitresource, with name linuxkitvm941, in location eastasia
NOTE: Since you created a minimal VM without the Azure Linux Agent, the portal will notify you that the deployment failed. After around 50 seconds try connecting to the VM

ssh -i path-to-key


By this time, you should be able to see LinuxKit VM coming up under Azure Platform as shown below:

Wait another 2–3 minutes before SSHing into this Azure instance, and it’s all set, up and running smoothly.

Known Issues:

  • Since the image currently does not contain the Azure Linux Agent, the Azure Portal will report the creation as failed.
  • The main workaround is the way the VHD is uploaded, specifically by using a Docker container based on Azure VHD Utils. This is mainly because the tool manages fast and efficient uploads, leveraging parallelism.
  • There is work in progress to specify what ports to open on the VM (more specifically on a network security group)
  • The metadata package does not yet support the Azure metadata.


Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Further Reference:


Test Drive Elastic stack on PWD platform running Docker 17.06 CE Swarm Mode in 5 minutes

Let’s talk about Dockerized Elastic Stack…

Elastic Stack is an open source solution that reliably and securely takes data from any source, in any format, and searches, analyzes, and visualizes it in real time. It is a collection of open source products – Elasticsearch, Logstash, Kibana & a recently added fourth product called Beats. Elastic Stack can be deployed on premises or made available as Software as a Service.

Brief about Elastic Stack Components:


Elasticsearch is a RESTful, distributed, highly scalable, JSON-based search and analytics engine built on top of Apache Lucene and released under Apache license. It is Java-based and designed for horizontal scalability, maximum reliability, and easy management. It is basically an open-source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.



Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash” (Elasticsearch). Logstash is a dynamic data collection pipeline with an extensible plugin ecosystem and strong Elasticsearch synergy. The product was originally optimized for log data but has expanded the scope to take data from all sources. Data is often scattered or siloed across many systems in many formats. Logstash supports a variety of inputs that pull in events from a multitude of common sources, all at the same time. Easily ingest from your logs, metrics, web applications, data stores, and various AWS services, all in a continuous, streaming fashion. As data travels from source to store, Logstash filters parse each event, identify named fields to build structure, and transform them to converge on a common format for easier, accelerated analysis and business value.

Logstash dynamically transforms and prepares your data regardless of format or complexity:

  • Derive structure from unstructured data with grok
  • Decipher geo coordinates from IP addresses
  • Anonymize PII data, exclude sensitive fields completely
  • Ease overall processing independent of the data source, format, or schema.

Logstash has a pluggable framework featuring over 200 plugins. Mix, match, and orchestrate different inputs, filters, and outputs to work in pipeline harmony.


Lastly, Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. It gives you the freedom to select the way you give shape to your data. And you don’t always have to know what you’re looking for. With its interactive visualizations, start with one question and see where it leads you. Kibana developer tools offer powerful ways to help developers interact with the Elastic Stack. With Console, you can bypass using curl from the terminal and tinker with your Elasticsearch data directly. The Search Profiler lets you easily see where time is spent during search requests. And authoring complex grok patterns in your Logstash configuration becomes a breeze with the Grok Debugger.

In the next 5 minutes, we are going to test drive the ELK stack on the PWD playground.

Let’s get started –

Open up the Play with Docker (PWD) platform at https://play-with-docker.com


Click on the icon next to Instances to open up ready-made templates for Docker Swarm Mode:


Choose the first template (as highlighted in the above figure) to select 3 managers and 2 workers. It will bring up a Docker 17.06 Swarm Mode cluster in just 10 seconds.

Run the below command to show up the cluster nodes:

docker node ls

Run the necessary commands on the node which will run Elasticsearch:

sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf

Clone the GitHub repository:

git clone https://github.com/ajeetraina/docker101
cd docker101/play-with-docker/visualizer

Run the below command to bring up the visualizer tool:
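A typical way to bring up the swarm visualizer looks like this (a sketch – the repository’s own compose file may differ slightly):

docker service create \
  --name visualizer \
  --publish 8080:8080 \
  --constraint node.role==manager \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer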

Soon you will notice port 8080 displayed on the top of the page, which when clicked will open up the visualizer tool.

It’s time to clone the ELK stack repository and execute the below commands to bring up the ELK stack across the Docker 17.06 Swarm Mode cluster:

git clone https://github.com/ahromis/swarm-elk
cd swarm-elk
docker stack deploy -c docker-compose.yml myself


[Credits to Andrew Hromis for building this docker-compose file. I leveraged his project repository to bring up the ELK stack on the first try.]

You will soon see the below list of containers appearing on the nodes:

Run the below command to see the list of services running across the cluster:

docker service ls

Click on port 5601 displayed on the top of the PWD page:

Please note: Kibana needs data in Elasticsearch to work with. The .kibana index holds Kibana-related data, and if that is the only index you have, there is no data available that Kibana can visualize. Before you can use Kibana you will therefore need to index some data into Elasticsearch. This can be done e.g. using Logstash or directly through the REST interface using curl.

Soon you will see the below Kibana page:


Enabling High Availability for Elastic Stack through scaling

Let us scale out the number of Elasticsearch replicas:
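Assuming the stack was deployed as myself above, scaling is a one-liner (the service name is illustrative – check docker service ls for the exact name):

docker service scale myself_elasticsearch=2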

Pushing data into Logstash:

Example #1:

Let us push NGINX web server logs into Logstash and see if Kibana is able to detect them:

docker run -d --name nginx-with-syslog --log-driver=syslog --log-opt syslog-address=udp:// -p 80:80 nginx:alpine

Now if you open up Kibana UI, you should be able to see logs being displayed for Nginx:

Example #2:

We can also push logs to logstash using the below command:

docker run --rm -it --log-driver=gelf --log-opt gelf-address=udp:// alpine ping

Open up Kibana and now you will see the below GREEN status:

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.



Building Docker For Mac 17.06 Community Edition using Moby & LinuxKit

Docker For Mac 17.06 CE edition is the first Docker version built entirely on the Moby Project. In case you’re new to it, Moby is an open framework created by Docker, Inc. to assemble specialised container systems. It comprises 3 basic elements: a library of containerised backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit); a framework for assembling the components into a standalone container platform, plus tooling to build, test and deploy artifacts for these assemblies; and a reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.

Docker for Mac is a Docker Community Edition (CE) app and aims for a native macOS experience that works with existing developer workflows. The Docker for Mac install package includes everything you need to run Docker on a Mac. A few of the attractive features it includes:

  • Easy drag and drop installation, and auto-updates to get latest Docker.
  • Secure, sandboxed virtualisation architecture without elevated privileges. 
  • Native networking support, with VPN and network sharing compatibility. 
  • File sharing between container and host: uid mapping, inotify events, etc

The core building blocks for Docker for Mac includes –

  • Virtualisation
  • Networking
  • Filesystem

Some notable components include:

  • HyperKit, a toolkit for embedding hypervisor capabilities in your application
  • DataKit, a tool to orchestrate applications using a 9P dataflow
  • VPNKit, a set of tools and services for helping HyperKit VMs interoperate with host VPN configurations


If you want to learn more details about these components, this should be the perfect guide.

LinuxKit today supports multiple cloud platforms like AWS, Google Cloud Platform, Microsoft Azure, VMware etc. In terms of local hypervisors, it supports HyperKit, VMware, KVM and Microsoft Hyper-V too.




If you have closely watched the LinuxKit repository, a new directory called blueprints has been introduced, which will contain the blueprints for base systems on the platforms that will be supported with LinuxKit. These are targeted to include all the platforms that Docker has editions on, and all platforms that the Docker community supports. All the initial testing work is done under examples/ and then pushed to the blueprints/ directory as shown.

Currently, the blueprints/ directory holds essential files for Docker For Mac 17.06 CE –

  • base.yml => which contains the open source components for Docker for Mac.
  • docker-17.06.ce.yml => necessary YAML file to build up VM Image

The blueprint has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit. It requires HyperKit, VPNKit and a Docker client on the host to run.


File: docker-17.06-ce.yml


The VPNKIT specific enablement comes from the below YAML code:


File: base.yml


Use the Moby tool to build it with Docker 17.06:

moby build -name docker4mac base.yml docker-17.06-ce.yml



This will produce a couple of files under the docker4mac-state directory as shown below:




Next, we can run the LinuxKit command to boot the VM with a 1024M disk:

linuxkit run hyperkit -networking=vpnkit -vsock-ports=2376 -disk size=1024M docker4mac

By now, you should be able to see docker4mac VM booting up smoothly:


You can open up a new terminal to see the overall directory/files tree structure:



Let us try listing the service containers using the ctr containers ls command. It should show the Docker For Mac 17.06 service container as shown below:


Run the ctr tasks ls command to get the list of service containers:


Now it’s easy to enter the docker-dfm service container with the below command:

ctr exec -t --exec-id 861 docker-dfm sh


You can verify further information with docker info command:



How to connect to docker-dfm from another terminal?

Using another terminal, it is pretty easy to access docker via the socket guest.00000948 in the state directory (docker4mac-state/ by default) with the below command:

docker -H unix://docker4mac-state/guest.00000948 images


Let us create an Nginx docker container and see if it is accessible from the Safari browser:
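Sticking with the same socket, a quick smoke test might look like this (the port mapping is illustrative):

docker -H unix://docker4mac-state/guest.00000948 run -d -p 80:80 nginx:alpine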

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more what’s happening in LinuxKit project by visiting this link.


Walkthrough: How to build your own customised LinuxKit kernel?


“..Its Time to Talk about Bring Your Own Components (BYOC) now..”

The Moby Project is gaining momentum day by day. If you are a system builder looking to build your own container-based systems, Moby is for you. You have the freedom to choose from the library of components derived from Docker, or you can elect to “bring your own components” packaged as containers, with mix-and-match options among all of the components to create your own customised container system. Moby as a tool promises to convert Docker from a monolithic engine into a toolkit of reusable components, and these individual components result in the building blocks for custom solutions.


I was a speaker at the Docker Bangalore Meetup last weekend and talked about “Introduction to LinuxKit”. I walked through the need for LinuxKit for immutable infrastructure, the platforms it supports, and finally demoed PWM (Play with Moby). I spent a considerable amount of time talking about “How to Build Your Own LinuxKit OS” and showcased how to build it using a single YAML file. One of the interesting questions raised was – “How shall I build a Ubuntu or Debian based OS, as our infrastructure runs mostly these distributions?” I promised to write a blog post which is easy to follow and helps them build their own customised kernel with LinuxKit.

A Brief about LinuxKit Kernel..

LinuxKit Kernel images are distributed as hub images as shown below:



It contains the kernel, kernel modules, kernel config file, and optionally, kernel headers to compile kernel modules against. It is important to note that LinuxKit kernels are based on the latest stable releases. Each kernel image is tagged with the full kernel version (e.g., linuxkit/kernel:4.10.x) and with the full kernel version plus the hash of the files it was created from.

LinuxKit offers the ability to build bootable Linux images with kernels from various distributions like CentOS, Debian, Ubuntu. As of today, LinuxKit offers a choice of the following kernels:

Moby uses a YAML file as input to build the LinuxKit OS image. By default, the YAML file uses the official linuxkit/kernel:<version> image, as shown below in the first 3 lines.

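For reference, the kernel stanza at the top of a LinuxKit YAML typically looks like this (the version and cmdline are illustrative):

kernel:
  image: "linuxkit/kernel:4.9.x"
  cmdline: "console=tty0 console=ttyS0 console=ttyAMA0"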


Let us see how to build our own distribution Linux Kernel(Ubuntu) through the below steps:

Under the LinuxKit project repository, there is a kernel build directory which holds the essential Dockerfile, configuration files and patches available to build your own kernel for the LinuxKit OS.


If you want to include your own new patch, you can fork the repository and add the patch under the right patch directory (shown above). For example, if you need to add a patch to the 4.11.x release, you can add it under the patch-4.11.x directory as shown below:


Below is a snippet of the kernel_config-4.11.x file. The few lines below show that the newly built kernel will be pushed to Docker Hub under ORG/kernel:<version>.


Next, all you need to do is run the below command to build your own customised kernel with required patches and modules loaded:

make build_4.11.x ORG=ajeetraina

PLEASE NOTE – ORG should be replaced by your Docker Hub ORG (it’s basically your Docker Hub ID). If you are planning to raise a PR for your patch to be included, you can use the linuxkit ORG and then get it approved.



This will take some time and creates a local kernel image called ajeetraina/kernel:4.11.8-8bcfec8e1f86bab7a642082aa383696c182732f5, assuming you haven’t committed your local changes.



You can then use this kernel in your YAML file as:

   image: "ajeetraina/kernel:4.11.9-2fd6982c78b66dbdef84ca55935f8f2ab3d2d3e6"



Run the Moby tool to build up this new kernel:

moby build linuxkit.yml



How to Build Ubuntu based Kernel Image

If you are looking to build distribution-specific kernel images, then this is the right section for you. The Docker team has done a great job in putting it all together under the linuxkit/scripts/kernel directory as shown below:


Let us look inside Dockerfile.deb:


The script picks up Dockerfile.deb to build and push the kernel-ubuntu image, as shown below in the snippet:


All you need to do is execute the script:


It will take some time and you should see the below output:


Now you can leverage this kernel into YAML file as shown below in the output:


Let us pick up linuxkit/linuxkit.yml and add the above kernel entry to build LinuxKit OS running Docker containers as a service.



Use the moby build command to build LinuxKit OS image based on Ubuntu kernel:

moby build linuxkit.yml




Next, execute the linuxkit run command to boot up LinuxKit.

linuxkit run linuxkit



Let us verify if the services are up and running:


It’s easy to enter into one of the service containers as shown below:


Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more what’s happening in LinuxKit project by visiting this link.