Test-Drive Docker “Infrakit” using Docker container

Estimated Reading Time: 7 minutes

Last week I read a research paper called “Computer Immunology”, written by the computer scientist Mark Burgess (founder of CFEngine). Thanks to Solomon Hykes for referring to it in one of the YC (start-up incubator) forums. I was heavily inspired by its central idea: as an analogy to the human immune system, it asks how systems, not just servers but whole systems, could be self-healing, at every scale from the smallest parts to the largest. Though the paper was published around 20 years ago, the concept has interestingly been rediscovered in the recent past in discussions of the cloud immutable-infrastructure pattern. In a recent interview, Mark made an interesting point –

“..Self-healing of machines was a good strategy at that time. In the future though, if you look at biology, there was also another strategy which is you have sufficient redundancy if you scratch a few skin cells off your arm, you don’t bleed, you don’t die. But in this world of hundreds of servers, if you lose a server it still means a lot.

Today, we’re actually approaching a sort of biological scale which we can argue in this way and this of course is what is happening in cloud with the immutable infrastructure pattern. Don’t repair, simply cast aside and build a new one and replace it.”

Inspired by this research paper, a number of products and tools have arisen for creating and managing declarative, self-healing infrastructure. “Infrakit” is a new toolkit which was recently open-sourced by Docker Inc. at ContainerCon Europe 2016, held in Berlin, Germany. Last year, “Swarmkit” was introduced as a declarative management toolkit for orchestrating containers, and it was later integrated into Docker 1.12 as “Swarm Mode”. This year, the Docker team focused on solving another difficult problem: managing Docker on diverse cloud platforms like AWS and Azure.

Why Infrakit?

In the recent past, the Docker team came up with various Docker editions, like Docker for Mac and Docker for AWS or Azure, trying to make Docker work out of the box and to make updates easier; Docker for Mac, in particular, just works. But infrastructure management on AWS and Azure is still difficult. The reason: AWS is inherently a distributed system with numerous subsystems and lots of moving parts, like provisioning VPCs, setting up EC2 instances, configuring IAM roles, integrating with CloudFormation and so on. Before you run Docker containers, you need to set up all of this infrastructure in order to provide services to your end users.


To solve the above problem, the Docker team came up with a solution in the form of a toolkit called “Infrakit”. As its name suggests, Infrakit is a toolkit for creating and managing infrastructure that is scalable, self-healing and declarative in nature. Please remember that it is not a standalone tool; it is a component designed to be embedded in a higher-level tool. What does this mean? One can use Infrakit to provision any kind of infrastructure using a custom “flavor”. It is expected to be integrated into Docker 1.13 for built-in infrastructure management, just like the story of “SwarmKit”.

Defining Infrakit


Infrakit is described as “declarative”, “self-healing” and “scalable”. Let us spend some time understanding what that actually means. “Declarative” means that the toolkit relies on a JSON configuration through which the user describes the desired infrastructure state. JSON is used for configuration because it is composable and a widely accepted format for many infrastructure SDKs and tools. Since the system is highly component-driven, the JSON format follows simple patterns to support the composition of components.

As for the “self-healing” aspect, Infrakit differs from traditional configuration management tools (like Puppet or CFEngine). Infrakit comprises a network of active processes (called plugins) which collaborate with each other to analyze the infrastructure and take action to bring it to the desired state. These processes continuously monitor the infrastructure for any state divergence and, based on that divergence, decide on the necessary actions, reconciling the actual state against the specification. The same mechanism enables seamless rolling updates without disturbing running services.

How does Infrakit Plugins Work?

InfraKit uses plugins, which are active processes and the only building block for managing infrastructure. Technically, a plugin is an HTTP server with a well-defined API, listening on a UNIX socket. Currently there are three kinds of plugins (also called utilities) supported by Infrakit, each representing a different layer of abstraction in the infrastructure stack: Group, Flavor and Instance.

Group:  InfraKit provides primitives to manage Group plugins, which manage clusters of machines. These clusters comprise either cattle (machines with identical configs) or pets (machines with different configs).

Flavors: A Flavor Plugin can be thought of as defining what runs on an Instance. It basically helps in distinguishing members of one group from another by describing how these members should be treated. Flavors allow a group of instances to have different characteristics. In a group of cattle, all members are treated identically and individual members do not have strong identity. In a group of pets, however, the members may require special handling and demand stronger notions of identity and state.

Instance: Instances are members of a group. An Instance plugin manages some physical resource instances. It knows only about individual instances and nothing about groups. What counts as an instance is technically defined by the plugin and need not be a physical machine at all. For compute, for example, instances can be VM instances of identical spec.

To deep dive into Infrakit architecture, I recommend reading this.

Test-Driving the Infrakit:

To test-drive Infrakit in just 5 minutes, I created a Docker image, “ajeetraina/infrakit”, an Ubuntu 16.04 based image with the required Go packages and build system ready to use. Let’s use this Docker image to illustrate how Infrakit performs a self-healing operation.

Log into one of your Docker host machines and run the container as shown below:
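The original screenshot with the exact command is not available here, so the following is only a sketch, assuming the ajeetraina/infrakit image mentioned above and a detached container we can shell into (the container name is my own choice):

```shell
# Pull the image and start a long-running container in the background
docker run -itd --name infrakit ajeetraina/infrakit /bin/bash

# Attach a shell inside it to run the InfraKit plugins
docker exec -it infrakit /bin/bash
```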


First, we will execute the below command to start the File instance plugin:
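The command itself was shown in a screenshot; based on the InfraKit tutorial of that era, starting the default File Instance plugin from the InfraKit source tree looked roughly like this (the binary path, the --dir flag and the tutorial directory are assumptions):

```shell
# from the InfraKit source tree, after `make binaries`
mkdir -p tutorial
build/infrakit-instance-file --dir ./tutorial/ &
```

On startup the plugin announces the UNIX socket it listens on, typically under ~/.infrakit/plugins/.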


Next, we will start a group plugin in the new terminal as shown below:
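Again the screenshot is missing; a sketch of the Group plugin invocation, with the binary name and poll interval as assumptions:

```shell
# default Group plugin; the poll interval controls how often it reconciles state
build/infrakit-group-default --poll-interval 500ms &
```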


Then, it’s time to start one of the provided flavor plugins called ‘vanilla’, which will be used to create a group.
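A sketch of the corresponding command (binary name is an assumption); the vanilla flavor simply passes user-supplied Init commands and Tags through to the instances:

```shell
build/infrakit-flavor-vanilla &
```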


In addition to the default File Instance plugin, let’s start another File Instance plugin as shown:
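Since the JSON config referenced later names a plugin “collabnix-instance-file”, the second File plugin was presumably started under that name (the flags and directory are assumptions):

```shell
mkdir -p instances
build/infrakit-instance-file --name collabnix-instance-file --dir ./instances/ &
```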


We need to create a JSON configuration file to hand to the Group plugin:

cat << EOF > collabnix.json
{
  "ID": "collabnix",
  "Properties": {
    "Allocation": {
      "Size": 5
    },
    "Instance": {
      "Plugin": "collabnix-instance-file",
      "Properties": {
        "Note": "Collabnix Infrakit 101"
      }
    },
    "Flavor": {
      "Plugin": "flavor-vanilla",
      "Properties": {
        "Init": [
          "docker pull dockercloud/hello-world",
          "docker run -d -p 8080:80 dockercloud/hello-world"
        ],
        "Tags": {
          "tier": "web",
          "project": "infrakit"
        }
      }
    }
  }
}
EOF
This Docker image already ships with this file, under the collabnix directory inside the Infrakit base directory. Hence, the below commands should work as expected.


By now, the below command should show the right output:


Let’s execute the below command to keep a close watch over the collabnix group.
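The watch command itself was captured in a screenshot; in the InfraKit CLI of that time the equivalent was roughly the following (subcommand names changed across InfraKit versions, so treat this as a sketch):

```shell
# hand the config to the Group plugin and start watching the group
build/infrakit group watch collabnix.json
```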


Under the Infrakit group plugin, you will see the actions taken for the group “collabnix”, where 5 instances of dockercloud/hello-world are ready.


Also, you can inspect “collabnix” group as shown below:
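The inspect output was shown as an image; the invocation was presumably along these lines (again a sketch of that era’s CLI):

```shell
build/infrakit group inspect collabnix
```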


Cool, isn’t it?

Let’s test the “self-healing” aspect now. As we see below, there are a number of instances running.


Let us remove a few of those instances to see whether replacements actually get spun up:
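With the File Instance plugin, each “instance” is just a file in the plugin’s --dir, so instance failure can be simulated by deleting a couple of those files (the instances/ directory name here is a hypothetical choice):

```shell
# each file here represents one provisioned "instance"
mkdir -p instances
ls instances/
# delete two of them to simulate instance loss
ls instances/ | head -2 | xargs -r -I{} rm "instances/{}"
# after the Group plugin's next poll, replacement files reappear
sleep 2
ls instances/
```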


Wow! The self-healing aspect just worked. Meanwhile, we can check the output at the group plugin:


This is just one operation with Infrakit. Infrakit is currently under active development and there is a lot of further exciting stuff coming up. In a future post, I am going to look at how it integrates with Swarm Mode. Keep reading!


Docker 1.12 Swarm Mode & Persistent Storage using NFS

Estimated Reading Time: 5 minutes

Containers are stateless by nature and likely to be short-lived. They are far more ephemeral than VMs. What does that actually mean? Say you have data or logs generated inside a container and you don’t really care about losing them no matter how many times you spin the container up and down, as with plain HTTP requests; then the default stateless behavior is good enough. But if you are looking for a solution for “stateful” applications, such as databases or persistent logs, you definitely need persistent storage. This is achieved by leveraging Docker’s volume mounts to create either a data volume or a data volume container that can be used and shared by other containers.


In case you’re new to Docker storage, Docker volumes are the logical building blocks for shared storage when combined with plugins. They let us store state from the application in locations outside the Docker image layers. Whenever you run a Docker command with -v and provide a name or path, the data gets managed within /var/lib/docker or, in case you’re using a host mount, it lives at that path on the host. The problem with this implementation is that the data is pretty inflexible: whatever you write to that location will survive the container’s life cycle, but only on that host. If you lose the host, the data is gone with it, which makes the setup very prone to data loss. Let’s talk about how to manage data with external storage instead. This could be anything from NFS to distributed file systems to block storage.

In my previous blog, we discussed a persistent storage implementation with Dell EMC RexRay for Docker 1.12 Swarm Mode. In this blog, we will look at how NFS works with Swarm Mode. I assume that you have an existing NFS server running in your infrastructure; if not, you can set one up in just a few minutes. I have Docker 1.12 Swarm Mode initialized with 1 master node and 4 worker nodes. As an example, I am going to use an Ubuntu 16.04 node (outside the Swarm Mode cluster) as the NFS server and the rest of the nodes (1 master and 4 workers) as NFS clients.
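For completeness, a minimal NFS server setup on Ubuntu 16.04 would look something like this (a sketch; package name per Ubuntu’s repositories):

```shell
# install the kernel NFS server
sudo apt-get update
sudo apt-get install -y nfs-kernel-server
```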

Setting up NFS environment:

There are two ways to set up an NFS server: either using an available Docker image or manually setting up the NFS server on the Docker host machine. As I already have an NFS server working on one of the Docker hosts running Ubuntu 16.04, I will just verify that the configuration looks good.

Let us ensure that NFS server packages are properly installed as shown below:

raina_ajeet@master1:~$ sudo dpkg -l | grep nfs-kernel-server

ii  nfs-kernel-server    1:1.2.8-9ubuntu12    amd64    support for NFS kernel server


I created the following NFS directory, which I want to share across the containers running in the swarm cluster.

$sudo mkdir /var/nfs

$sudo chown nobody:nogroup /var/nfs

It’s time to configure the NFS shares. For this, let’s edit the exports file to look as shown below:

raina_ajeet@master1:~$ cat /etc/exports

# /etc/exports: the access control list for filesystems which may be exported

#               to NFS clients.  See exports(5).#

# Example for NFSv2 and NFSv3:

# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2


# Example for NFSv4:

# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)

# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)

/var/nfs    *(rw,sync,no_subtree_check)



As shown above, we will be sharing the /var/nfs directory with all the worker nodes in the Swarm cluster.

Let’s not forget to run the below commands to provide the proper permission:

$sudo chown nobody:nogroup /var/nfs

$sudo exportfs -a

$sudo service nfs-kernel-server start

Great! Let us cross-check that the configuration holds good.

raina_ajeet@master:~$ sudo df -h

Filesystem           Size  Used Avail Use% Mounted on

udev                 1.9G     0  1.9G   0% /dev

tmpfs                370M   43M  328M  12% /run

/dev/sda1             20G  6.6G   13G  35% /

tmpfs                1.9G     0  1.9G   0% /dev/shm

tmpfs                5.0M     0  5.0M   0% /run/lock

tmpfs                1.9G     0  1.9G   0% /sys/fs/cgroup

tmpfs                100K     0  100K   0% /run/lxcfs/controllers

tmpfs                370M     0  370M   0% /run/user/1001

                      20G   19G  1.3G  94% /mnt/nfs/var/nfs

As shown above, we have the NFS server ready to share the /var/nfs volume with all the worker nodes.

Running NFS service on Swarm Mode


In case you are new to the --mount option introduced in Docker 1.12 Swarm Mode, here is an easy explanation:

$ sudo docker service create \
    --mount type=volume,volume-opt=o=addr=<master-node-IP>,volume-opt=device=:<NFS-directory>,volume-opt=type=nfs,source=<volume-name>,target=/<path-inside-container> \
    --replicas 3 --name <service-name> <docker-image> <command>
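Filling in the placeholders with hypothetical values (the 10.140.0.2 address, volume name and service name are my own examples, not the actual lab values), a concrete invocation might look like:

```shell
sudo docker service create \
  --mount type=volume,volume-opt=o=addr=10.140.0.2,volume-opt=device=:/var/nfs,volume-opt=type=nfs,source=nfsvolume,target=/nfsdata \
  --replicas 3 \
  --name nfs-demo \
  alpine sleep 1d
```

Each task of the service then sees the NFS export mounted at /nfsdata inside its container.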

Let’s verify that the NFS service is up and running across the swarm cluster –


We can verify the NFS volume driver through the ‘docker volume’ utility as shown below –


Inspecting the NFS Volume on the nodes:


The docker volume inspect <vol_name> rightly displays the Mountpoint, Labels and Scope of the NFS volume driver.


Verifying if worker nodes are able to see the Volumes

Let’s verify by logging into one of the worker nodes and checking whether the volume is shared across the swarm cluster:


Let us create a file under the /var/nfs directory to see whether persistent storage is actually working and data is shared across the swarm cluster:
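The screenshot is missing, but the test boils down to writing a file on the NFS server side (the file name is arbitrary):

```shell
# on the NFS server node: write into the exported directory
echo "persistence works" | sudo tee /var/nfs/test.txt
```

On any worker, exec into one of the service’s containers and list the mount target; the file should be visible there.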


Let’s verify that one of the worker nodes can see the created file.


Hence, we implemented persistent storage for Docker 1.12 Swarm Mode using NFS. In a future post, we will talk further about the available storage plugins and their implementations.


A Comparative Study of Docker Engine on Windows Server Vs Linux Platform

Estimated Reading Time: 8 minutes

September 26, 2016 was an important day for both Docker Inc. and Microsoft at the Ignite conference in Atlanta. Two weeks ago, Microsoft finally unveiled the GA release of Windows Server 2016, which holds plenty of new features such as improved security, productivity, intelligence, cloud and networking tools and, not to be missed, better support for clustering. The major points of attraction were the addition of the Nano Server option, a stripped-down version of the OS for use in the cloud, and Microsoft’s System Center 2016 announcement. A new preview of Azure Stack, targeted to be available in 2017, will allow enterprises to run the core Azure services inside their own data centers. But the biggest news was Docker’s commercial partnership with Microsoft, extending Docker Engine support to the Windows Server 2016 platform. As part of the partnership, Microsoft will make the commercially supported Docker Engine available to Windows Server 2016 customers at no additional charge. Essentially, Microsoft will handle most of the basic support and then pass more complicated issues on to Docker Inc.



What does it mean to Windows community?

It means that Windows Server 2016 natively supports Docker containers from now on, offering two deployment options: Windows Server Containers and Hyper-V Containers, the latter providing an additional level of isolation for multi-tenant environments. The extensive partnership integrates across the Microsoft portfolio of developer tools, operating systems and cloud infrastructure, including:

  • Windows Server 2016
  • Hyper-V
  • Visual Studio
  • Microsoft Azure

What does it mean to Linux enthusiasts?

In case you are a Linux enthusiast like me, you must be curious to know how differently Docker Engine works on the Windows Server platform compared to the Linux platform. In this post, I am going to spend a considerable amount of time talking about the architectural differences, the CLI that works on both platforms, and further details about Dockerfiles, Docker Compose and the state of Docker Swarm on the Windows platform.

Let us first talk about the architectural differences between Windows containers and Linux containers.

Looking at the Docker Engine on Linux architecture: sitting on top are CLI tools like Docker Compose, the Docker client CLI, Docker Registry etc., which talk to the Docker REST API. Users communicate and interact with the Docker Engine and, in turn, the engine communicates with containerd. Containerd spins up runC or another OCI-compliant runtime to run containers. At the bottom of the architecture sit the underlying kernel features: namespaces, which provide isolation, and control groups, which implement resource accounting and limiting. Control groups provide many useful metrics, but they also help ensure that each container gets its fair share of memory, CPU and disk I/O and, more importantly, that a single container cannot bring the system down by exhausting one of those resources.


     Docker Engine on Linux Platform



On Windows, it’s a slightly different story. The architecture looks the same for most of the top-level components, with the same Remote API and the same tooling (Docker Compose, Swarm), but as we move down, it differs. In case you are new to the Windows kernel: it is somewhat different from Linux because Microsoft takes a different approach to the kernel’s design. The term “kernel mode” in Microsoft language refers not only to the kernel itself but also to the HAL (hal.dll) and various system services. Various managers for objects, processes, memory, security, cache, Plug and Play (PnP), power, configuration and I/O, collectively called the Windows Executive (ntoskrnl.exe), are available. There is no kernel feature specifically called a namespace or a cgroup on Windows. Instead, Microsoft introduced in Windows Server 2016 a “Compute Service Layer” at the OS level which provides namespace, resource control and union-filesystem-like capabilities. Also, there is no containerd or runC on the Windows platform. The Compute Service Layer provides the public interface to containers and takes on the responsibility of managing them, such as starting and stopping containers, but it doesn’t maintain their state as such. In short, it replaces containerd on Windows and abstracts the low-level capabilities which the kernel provides.


   Docker Engine on Windows Platform


The below picture depicts the underlying Windows kernel features built to support containers. At the bottom there is a shared kernel, just like the one we saw on Linux. The host user mode contains the Windows host operating system, primarily the system processes. The most important components are on the left-hand side of the picture: system processes and application processes, which work differently from the Linux perspective. Under Linux, the system call interface is documented and guaranteed to be stable across different kernel versions. Windows does not document or even guarantee consistency of its system call mechanism; the only way to make a system call on Windows is to call into ntdll.dll. The reason for the large container size is the DLLs, which are interlinked processes calling each other.

~ Source: DockerCon 2016

It is important to note that there is no “FROM scratch” in a Dockerfile for Windows, due to the large number of DLL-interlinked system processes needed to provide the base functionality. Instead, Microsoft settled on base images with the following two options:

  1. microsoft/windowsservercore – basically Windows Server with .NET 4.5; 9.3 GB; large; fully compatible; supports existing Windows apps
  2. microsoft/nanoserver – much smaller (~600 MB); no graphics stack; fast; smaller API surface; existing applications might not be compatible; uses less memory

A Brief about Windows Namespace:

Under Windows, there is no concept primarily called “namespaces” as there is on Linux. But very similar to what a namespace does, there is a concept called “silos”, an extension of so-called Windows job objects: sets of processes to which you can assign or limit resource controls. With this come a process namespace, user namespace, object namespace, network namespace and so on. The object namespace is a system-level namespace hidden from users. Just as Linux has / (slash root), Windows has \ at the NT level for all devices; for example, C:\Windows maps to \DosDevices\C:\Windows, and in the case of networking there is \Device\Tcp.

Getting Started with Docker on Windows 2016 Server

Important: You need Windows Server 2016 Evaluation build 14393 or later to taste the newer Docker Engine on Win2k16. If you try to follow the usual Docker installation process on an old Windows Server 2016 TP5 system, you will get the following error:


Please note that you won’t be able to update a TP5 system to the Evaluation version to try the newer Docker 1.12.2. One needs to install the newer Windows Server 2016 Evaluation version, which you can download directly using this link.

Once you have the Windows Server 2016 Evaluation version installed, just run the below commands in sequence:

Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing
Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

dockerd --register-service

Start-Service Docker

The above commands should be enough to get Docker installed on your system without any issue.

Update: 10/11/2016 – Good news! The Windows Server 2016 final release and Nano Server are available on the Azure platform. In case you choose Windows Server 2016 with Containers, you need to ensure that the following service is started:

> Start-Service Docker


Now you can search for plenty of Dockerized Windows applications using the below command:
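The command in the missing screenshot is presumably a plain `docker search`, e.g.:

```shell
# query Docker Hub for images published by Microsoft
docker search microsoft
```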



Important Points:

1. Linux containers don’t work on the Windows platform (see below).


2. DTR is still not supported on Windows Platform

3. You can’t commit a running container and build an image out of it. (This is very much possible on the Linux platform.)

Using Dockerfile for MySQL

Building containers using a Dockerfile is supported on the Windows Server platform. Let’s pick a sample MySQL Dockerfile to build a MySQL container. I found one in a GitHub repository and wanted to see whether the Dockerfile would work. The sample Dockerfile looks as shown below:

FROM microsoft/windowsservercore

LABEL Description="MySql" Vendor="Oracle" Version="5.6.29"

RUN powershell -Command \
    $ErrorActionPreference = 'Stop'; \
    Invoke-WebRequest -Method Get -Uri https://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.29-winx64.zip -OutFile c:\mysql.zip ; \
    Expand-Archive -Path c:\mysql.zip -DestinationPath c:\ ; \
    Remove-Item c:\mysql.zip -Force

RUN SETX /M Path %path%;C:\mysql-5.6.29-winx64\bin

RUN powershell -Command \
    $ErrorActionPreference = 'Stop'; \
    mysqld.exe --install ; \
    Start-Service mysql ; \
    Stop-Service mysql ; \
    Start-Service mysql

RUN type NUL > C:\mysql-5.6.29-winx64\bin\foo.mysql

RUN echo UPDATE user SET Password=PASSWORD('mysql123') WHERE User='root'; FLUSH PRIVILEGES; > C:\mysql-5.6.29-winx64\bin\foo.mysql

RUN mysql -u root mysql < C:\mysql-5.6.29-winx64\bin\foo.mysql

This brings up the MySQL image perfectly. I also have my own version of a Dockerized MySQL image, which is still a work in progress; I still need to populate the Docker image details.


Does Docker Engine on Windows support Swarm Mode?

Not yet. Docker Engine on the Windows platform is still young. There have been a number of contributions flowing in from Windows and Linux enthusiasts to build out Windows containers.

In a future post, I will talk about how to get a simple application like WordPress up and running using Docker Compose.
