What’s New in Docker Enterprise Edition 18.03 Engine Release?

Estimated Reading Time: 6 minutes


Docker Enterprise Engine 18.03.0-ee-1 has been released. This is a stand-alone engine release intended for EE Basic customers; Docker EE Basic includes the Docker EE Engine but does not include UCP or DTR.

In case you’re new, Docker is available in two editions:

  • Community Edition (CE)
  • Enterprise Edition (EE)

Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experiment with container-based apps. Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production at scale. Below is a list of the capabilities offered across the various Docker editions:

[Please note: since UCP and DTR cannot be run on the 18.03 EE Engine, it is intended for EE Basic customers. It is not intended, qualified, or supported for use in an EE Standard or EE Advanced environment. If you’re deploying UCP or DTR, use Docker EE Engine 17.06.]

In my recent blog post, “Docker EE 2.0 – Under the Hood”, I covered the three major components of EE 2.0 which together enable a full software supply chain, from image creation, to secure image storage, to secure image deployment. In this post, I am going to talk about the new features included in the newly introduced Docker EE 18.03 Engine.


Let us talk about each of these features in detail.

Containerd 1.1 Merged under 18.03 EE Engine

containerd is an industry-standard core container runtime. It is based on the Docker Engine’s core container runtime to benefit from its maturity and existing contributors. It provides a daemon for managing running containers. It is available today as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments.

1.1 is the second major release of containerd and adds support for CRI, the Kubernetes Container Runtime Interface. CRI is a new plugin which allows connecting the containerd daemon directly to a Kubernetes kubelet to be used as the container runtime. The CRI gRPC interface listens on the same socket as the containerd gRPC interface and runs in the same process.

containerd 1.1, announced in April, implements the Kubernetes Container Runtime Interface (CRI), so it can be used directly by Kubernetes as well as by Docker Engine. It implements namespaces so that clients from different container systems (e.g. Docker Engine and DC/OS) can share a single containerd instance on one system while remaining logically separated. This release allows you to plug in any OCI-compliant runtime, such as Kata Containers or gVisor, and includes many performance improvements, such as significantly better pod start latency and lower CPU/memory usage in the CRI plugin. For additional performance benefits, it replaces graphdrivers with more efficient snapshotters, which BuildKit leverages for faster builds.
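To make the namespace separation concrete, here is a sketch of how two clients’ containers can be listed side by side with the ctr tool (the namespace names moby and k8s.io are the ones conventionally used by Docker Engine and the CRI plugin; a running containerd 1.1 daemon is assumed):

```shell
# Containers created by Docker Engine live in the "moby" namespace
ctr --namespace moby containers ls

# Containers created by Kubernetes via the CRI plugin live in "k8s.io"
ctr --namespace k8s.io containers ls
```

Because each client works in its own namespace, neither sees or disturbs the other’s containers even though both talk to the same daemon.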



The new containerd 1.1 includes:

  • CRI plugin
  • ZFS, AUFS, and native snapshotter
  • Improvements to the ctr tool
  • Better support for multiple platforms
  • Cross namespace content sharing
  • Better mount cleanup
  • Support for disabling plugins
  • TCP debug address for remote debugging
  • Update to Go 1.10
  • Improvements to the garbage collector

FIPS 140-2 Compliant Engine

FIPS stands for Federal Information Processing Standard (FIPS) Publication. It is a U.S. government computer security standard used to approve cryptographic modules. It is important to note that the Docker EE cryptography libraries are at the “In Process (Coordination)” phase of the FIPS 140-2 Level 1 Cryptographic Module Validation Program. Validating the Docker platform against widely accepted standards and best practices is a critical aspect of product development, as it enables companies and agencies across all industries to adopt Docker containers. FIPS is a notable standard which validates and approves the use of various security encryption modules within a software system.

For further detail: https://blog.docker.com/2017/06/docker-ee-is-now-in-process-for-fips-140-2/



Source: https://csrc.nist.gov/Projects/Cryptographic-Module-Validation-Program/Modules-In-Process/Modules-In-Process-List

Support for Windows Server 1709 (RS3) / 1803 (RS4)

Docker EE 18.03 Engine adds support for Microsoft Windows Server 1709 and Windows Server 1803. This release brings improvements in density and performance via smaller image sizes than Server 2016. It also introduces a single deployment architecture for Windows and Linux via parity with ingress networking and VIP service discovery. Reduced operational requirements via relaxed image compatibility is another notable feature introduced with this release.

Below is a list of the customer pain points which Docker, Inc. has focused on over the last couple of years:


Refer https://docs.docker.com/ee/engine/release-notes/#runtime for more information.

The --chown Support in COPY/ADD Commands in Docker EE

The Docker EE 18.03 release adds support for --chown with COPY and ADD in the Dockerfile. This improves security by allowing run-time file operations to use developer-specified users instead of root. It also provides parity with functionality added in Docker CE. At this point in time, this has been introduced for Linux only.
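As a quick sketch, a hypothetical Dockerfile using the new flag might look like this (the app user and group are illustrative and must already exist in the image):

```dockerfile
FROM ubuntu:16.04

# Create a non-root user to own the application files
RUN groupadd -r app && useradd -r -g app app

# Without --chown, copied files would be owned by root
COPY --chown=app:app src/ /home/app/src/
ADD  --chown=app:app app-data.tar.gz /home/app/data/

USER app
```

The files land in the image already owned by app:app, so no extra RUN chown layer is needed.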

Support for Compose on Kubernetes CLI

With the Docker EE 18.03 release, you can now use the same Compose files to deploy apps to Kubernetes on Enterprise Edition. This feature is not available in the Desktop or Community Edition releases.

You can pass the --orchestrator parameter to specify the orchestrator.


This means that it is easy to move applications between Kubernetes and Swarm, and it simplifies application configuration. With this release, one can create native Kubernetes Stack objects and interact with Stacks via the Kubernetes API. The feature has an improved UX, has moved out of the experimental phase, and is now available in the main, supported Docker CLI.
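As an illustrative sketch (assuming an existing docker-compose.yml and a CLI pointed at an EE cluster; the stack name mystack is hypothetical), switching orchestrators is a one-flag change:

```shell
# Deploy the Compose file as a Swarm stack
docker stack deploy --orchestrator=swarm -c docker-compose.yml mystack

# Deploy the very same file as a Kubernetes Stack object
docker stack deploy --orchestrator=kubernetes -c docker-compose.yml mystack

# The Stack is a native Kubernetes object, visible through the Kubernetes API
kubectl get stacks
```

The same Compose file drives both deployments, which is what makes moving an app between the two orchestrators low-friction.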

Easy setup of Docker Content Trust

Docker Content Trust was first introduced in Docker Engine 1.8 and Docker CS Engine 1.9.0 and is available in Docker EE. It allows image operations with a remote Docker registry to enforce client-side signing and verification of image tags. It enables digital signatures for data sent to, and received from, remote Docker registries. These signatures allow Docker client-side interfaces to verify the integrity and publisher of specific image tags.

Docker Content Trust works with Docker Hub as well as Docker Trusted Registry (Notary integration is experimental in version 1.4 of Docker Trusted Registry). It provides strong cryptographic guarantees over what code and what versions of software are being run in your infrastructure. Docker Content Trust integrates The Update Framework (TUF) into Docker using Notary, an open source tool that provides trust over any content.

Below is a list of the CLI options available in the current release:
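The simplest switch is the DOCKER_CONTENT_TRUST environment variable, which turns signing and verification on for the current shell session; a minimal sketch (the image names in the comments are illustrative):

```shell
# Opt in to content trust for this shell session
export DOCKER_CONTENT_TRUST=1

# With the variable set, pulls verify signatures and pushes sign tags:
#   docker pull alpine:latest          # fails if the tag is unsigned
#   docker push myregistry/app:1.0     # prompts for signing keys

# Confirm the setting
echo "$DOCKER_CONTENT_TRUST"
```

An individual command can opt back out with the --disable-content-trust flag when you need to pull an unsigned image.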

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter at @ajeetsraina.

If you are looking for contribution or discussion, join me on the Docker Community Slack channel.

Under the Hood: Demystifying Docker For Mac CE Edition

Estimated Reading Time: 6 minutes


Docker is a full development platform for creating containerized apps, and Docker for Mac is the most efficient way to start and run Docker on your MacBook. It runs on a LinuxKit VM, NOT on VirtualBox or VMware Fusion. It embeds a hypervisor (based on xhyve), a Linux distribution built with LinuxKit, and filesystem & network sharing that is much more Mac-native. It is a native Mac application that you install in /Applications. At installation time, it creates symlinks in /usr/local/bin for docker, docker-compose and others, pointing to the commands in the application bundle in /Applications/Docker.app/Contents/Resources/bin.

One of the most amazing features of Docker for Mac is that you can simply drag and drop the Mac application into /Applications, run the Docker CLI, and it just works flawlessly. The way the filesystem sharing maps macOS volumes seamlessly into Linux containers, remapping macOS UIDs into Linux, is one of its most anticipated features.

A Few Notable Features of Docker for Mac:

  • Docker for Mac runs in a LinuxKit VM.
  • Docker for Mac uses HyperKit instead of Virtual Box. Hyperkit is a lightweight macOS virtualization solution built on top of Hypervisor.framework in macOS 10.10 Yosemite and higher.
  • Docker for Mac does not use docker-machine to provision its VM. The Docker Engine API is exposed on a socket available to the Mac host at /var/run/docker.sock. This is the default location Docker and Docker Compose clients use to connect to the Docker daemon, so you can use docker and docker-compose CLI commands on your Mac.
  • When you install Docker for Mac, machines created with Docker Machine are not affected.
  • There is no docker0 bridge on macOS. Because of the way networking is implemented in Docker for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
  • Docker for Mac has now Multi-Architectural support. It provides binfmt_misc multi architecture support, so you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.
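The last bullet is easy to verify; assuming Docker for Mac is running, executing an image built for a foreign architecture should report the emulated machine type rather than x86_64 (the image name below is illustrative):

```shell
# Runs an ARM image on an x86_64 Mac via binfmt_misc emulation;
# uname -m reports the emulated architecture
docker run --rm arm32v7/alpine uname -m
```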

Top 5 Exclusive Features of Docker For Mac That You Can’t Afford to Ignore

In this blog post, I will take a deep dive into the Docker for Mac architecture and show how to access the service containers running on top of the LinuxKit VM.

At the base of the architecture, we have a hypervisor called HyperKit, which is derived from xhyve. The xhyve hypervisor is a port of bhyve to OS X. It is built on top of Hypervisor.framework in OS X 10.10 Yosemite and higher, runs entirely in userspace, and has no other dependencies. HyperKit is basically a toolkit for embedding hypervisor capabilities in your application. It includes a complete hypervisor optimized for lightweight virtual machines and container deployment. It is designed to be interfaced with higher-level components such as VPNKit and DataKit.

Sitting next to HyperKit is the filesystem sharing solution. osxfs is a shared file system solution exclusive to Docker for Mac. It provides a close-to-native user experience for bind-mounting macOS file system trees into Docker containers. To this end, osxfs features a number of unique capabilities as well as differences from a classical Linux file system. On macOS Sierra and lower, the default file system is HFS+; on macOS High Sierra, the default file system is APFS. With the recent release, NFS volume sharing has been enabled for both Swarm and Kubernetes.

There is one more important component sitting next to HyperKit, aptly called VPNKit. VPNKit, a part of HyperKit, attempts to work nicely with VPN software by intercepting the VM traffic at the Ethernet level, parsing and understanding protocols like NTP, DNS, UDP and TCP, and doing the “right thing” with respect to the host’s VPN configuration. VPNKit operates by reconstructing Ethernet traffic from the VM and translating it into the relevant socket API calls on macOS. This allows the host application to generate traffic without requiring low-level Ethernet bridging support.

On top of these open source components, we have the LinuxKit VM, which runs containerd and service containers, including the Docker Engine that runs your containers. The LinuxKit VM is built from a YAML file. The docker-for-mac.yml file contains an example use of the open source components of Docker for Mac. The example has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit. It requires HyperKit, VPNKit and a Docker client on the host to run.

Sitting next to the Docker CE service container, we have the kubelet binaries running inside the LinuxKit VM. If you are new to K8s, the kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a pod. It basically takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes. On top of the kubelet, we have the Kubernetes services running. We can run either a Swarm cluster or a Kubernetes cluster, and we can use the same Compose YAML file to bring up both clusters side by side.
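For readers new to Kubernetes, a PodSpec is just declarative YAML; a minimal, illustrative example of what the kubelet is asked to keep running looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
```

The kubelet’s job is simply to make sure the nginx container described here exists, is running, and passes its health checks.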

Peeping into LinuxKit VM

Curious about the VM and what Docker for Mac CE Edition actually looks like?

Below is a list of commands which you can use to get into the LinuxKit VM and see the Kubernetes services up and running. Here you go.

How to enter the LinuxKit VM?

Open the macOS terminal and run the command below to enter the LinuxKit VM:

$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty

Listing out the service containers:

Earlier, ctr tasks ls used to list the service containers running inside the LinuxKit VM, but in the recent release the namespace concept has been introduced, hence you might need to run the command below to list the service containers:

$ ctr -n services.linuxkit tasks ls
TASK                    PID     STATUS
acpid                   854     RUNNING
diagnose                898     RUNNING
docker-ce               936     RUNNING
host-timesync-daemon    984     RUNNING
ntpd                    1025    RUNNING
trim-after-delete       1106    RUNNING
vpnkit-forwarder        1157    RUNNING
vsudd                   1198    RUNNING

How to display the containerd version?

Under Docker for Mac 18.05 RC1, containerd version 1.0.1 is available as shown below:

linuxkit-025000000001:~# ctr version
Client:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55

Server:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55

How do I enter the docker-ce service container using containerd?

ctr -n services.linuxkit tasks exec -t --exec-id 936 docker-ce sh
/ # docker version
Client:
 Version:      18.05.0-ce-rc1
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   33f00ce
 Built:        Thu Apr 26 00:58:14 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce-rc1
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   33f00ce
  Built:        Thu Apr 26 01:06:49 2018
  OS/Arch:      linux/amd64
  Experimental: true
/ #

How to verify Kubernetes Single Node Cluster?

/ # kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-23T09:38:59Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
/ # kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    26d       v1.9.6
/ #


Interested in reading further? Check out my curated list of blog posts:

Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes



New Docker Engine 1.11 integrates runC and containerd

Estimated Reading Time: 5 minutes

Docker 1.11 is two weeks old now. Popular for its Software Infrastructure Plumbing (SIP) efforts, Docker Inc. first focused on the adoption of libnetwork, followed by the Notary and unikernel projects, and has now integrated the Docker Engine with runC and containerd. What this really means is that the management of containers is now split into a separate piece of infrastructure plumbing called containerd, a daemon for managing runC. With this new release, the Docker Engine is more loosely coupled to its containers.


This new architecture looks promising in the sense that it is expected to bring a significant performance boost when handling a large number of containers. The new Docker Engine 1.11 execution layer relies entirely on well-delimited tools that can be used independently, with the drawback that it no longer ships as a single binary. It also makes it possible, in the future, to upgrade the daemon without shutting down all running containers. As stated by Docker Inc., “..this new Engine architecture will use containerd for container supervision. Because containerd ultimately relies on runC and the OCI specification for container execution, this will open the door for the Engine to be able to use any OCI compliant runtime..”


Let’s have a quick look at the new Docker 1.11 release components. I had an Ubuntu 14.04.3 system running on an AWS cloud instance; I downloaded the latest release and looked into it as shown below:

Downloading Docker 1.11:

root@ubuntu:~# cat /etc/issue
Ubuntu 14.04.3 LTS \n \l

root@ubuntu:~# wget https://get.docker.com/builds/Linux/x86_64/docker-1.11.0.tgz
--2016-04-23 13:05:16--  https://get.docker.com/builds/Linux/x86_64/docker-1.11.0.tgz
Connecting to… connected.
Proxy request sent, awaiting response… 200 OK
Length: 20520756 (20M) [application/x-tar]
Saving to: ‘docker-1.11.0.tgz’

100%[======================================>] 20,520,756 329KB/s in 62s

2016-04-23 13:06:20 (325 KB/s) – ‘docker-1.11.0.tgz’ saved [20520756/20520756]

Checking docker version:

user@ubuntu:~$ sudo docker version
Client:
 Version: 1.11.0
 API version: 1.23
 Go version: go1.5.4
 Git commit: 4dc5990
 Built: Wed Apr 13 19:36:04 2016
 OS/Arch: linux/amd64

Server:
 Version: 1.11.0
 API version: 1.23
 Go version: go1.5.4
 Git commit: 4dc5990
 Built: Wed Apr 13 19:36:04 2016
 OS/Arch: linux/amd64


Extracting Docker 1.11:

root@ubuntu:~# tar xvzf docker-1.11.0.tgz

As shown above, a Linux Docker installation is now made up of four binaries (docker, docker-containerd, docker-containerd-shim and docker-runc). Here is a brief explanation in case you are completely new to runC.


runC is a lightweight, universal container runtime. It is built on libcontainer, the same container technology powering millions of Docker Engine installations. It is a CLI tool for spawning and running containers according to the OCP specification. The Open Container Project is an open governance structure for the express purpose of creating open industry standards around container formats and runtime. Projects associated with the Open Container Project can be found at https://github.com/opencontainers.

runc integrates well with existing process supervisors to provide a production container runtime environment for applications. It can be used with your existing process monitoring tools and the container will be spawned as a direct child of the process supervisor.

Containers are configured using bundles. A bundle for a container is a directory that includes a specification file named “config.json” and a root filesystem. The root filesystem contains the contents of the container.


Docker Inc. built containerd as a separate daemon to move container supervision out of the core Docker Engine. It is firmly believed that containerd improves on parallel container start times, which means that if you need to launch multiple containers as fast as possible, you should see improvements with this release.

containerd has full support for starting OCI bundles and managing their lifecycle. This allows users to replace the runC binary on their system with an alternate runtime and still get the benefits of using Docker’s API. When starting a container, most of the time is spent within syscalls and system-level operations. It does not make sense to launch all 100 containers concurrently, since the majority of the startup time is mostly spent waiting on the hardware / kernel to complete the operations. containerd uses events to schedule container start requests and various other operations lock-free. It has a configurable limit on how many containers it will start concurrently; by default it is set to 10 workers. This allows you to make as many API requests as you want, and containerd will start containers as fast as it can without totally overwhelming the system.

Let’s look at what the newly added docker-runc and docker-containerd look like in this new release.

Checking docker-containerd version:

$ sudo /usr/bin/docker-containerd -version
containerd version 0.1.0 commit: d2f03861c91edaafdcb3961461bf82ae83785ed7

Checking docker-containerd-ctr version:

$ sudo /usr/bin/docker-containerd-ctr -v
ctr version 0.1.0 commit: d2f03861c91edaafdcb3961461bf82ae83785ed7

Checking docker-runc version:

$ sudo docker-runc --version
runc version 0.0.9
commit: e87436998478d222be209707503c27f6f91be0c5
spec: 0.5.0-dev

Looking at the process states:

root@ubuntu:~# ps ax | grep docker
1091 pts/0 T 0:00 sudo docker daemon
1095 pts/0 T 0:00 sudo docker daemon
1107 ? Ssl 0:00 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc

I have one Nagios container running as shown below:

userchef@ubuntu:~$ sudo docker-runc list
e938b4aeeed2484258aeb56f7ceb987f6226450c8c5bdc3c0261cedb1b49dd6e 3583 running /run/docker/libcontainerd/e938b4aeeed2484258aeb56f7ceb987f6226450c8c5bdc3c0261cedb1b49dd6e 2016-04-24T18:49:29.32912562Z

Normally, docker-runc needs three things to start a container:

  1. rootfs dir with the whole filesystem
  2. config.json
  3. runtime.json
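Put together, a minimal bundle directory for the spec version used here (which still split the configuration across config.json and runtime.json; later OCI spec versions merged them into a single config.json) looks roughly like this:

```
mybundle/
├── config.json    # process, args, env, mounts
├── runtime.json   # host-specific runtime settings
└── rootfs/        # the container's root filesystem
    ├── bin/
    ├── etc/
    └── ...
```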

We will create the rootfs by exporting a Docker container, as shown below:

Exporting a container:

To be able to export a container, you first have to create one:

userchef@ubuntu:~$ sudo docker create --name container1 ajeetraina/testnagios sh
b64ed4ab4b872a6ff0f27825d4142e5f4c809aebe209591c56055f1d03a53a3b

Now you are ready to export it into a tar file: docker-nagios.tar

userchef@ubuntu:~$ sudo docker export container1 > docker-nagios.tar

To create the rootfs dir, just untar docker-nagios.tar into a directory named rootfs:

userchef@ubuntu:~$ sudo mkdir rootfs
userchef@ubuntu:~$ sudo tar -xf docker-nagios.tar -C rootfs/

userchef@ubuntu:~/rootfs$ ls
anaconda-post.log dev lib media opt run sys var
bin etc lib64 mnt proc sbin tmp
boot home lost+found nagios-installer.sh root srv usr

Creating specification files:

With the rootfs ready, you need to generate the two spec files. As shown below, docker-runc can create default files based on the rootfs directory:

userchef@ubuntu:~$ sudo docker-runc spec
userchef@ubuntu:~$ ls
docker-nagios.tar rootfs
config.json myapp.tar shim-log.json
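For reference, the generated config.json is a plain JSON document; trimmed to its key fields, it looks roughly like this for the 0.5.0-dev spec (exact fields vary by spec version, so treat this as an illustrative excerpt):

```json
{
    "version": "0.5.0-dev",
    "platform": { "os": "linux", "arch": "amd64" },
    "process": {
        "terminal": true,
        "user": { "uid": 0, "gid": 0 },
        "args": [ "sh" ],
        "cwd": "/"
    },
    "root": {
        "path": "rootfs",
        "readonly": true
    }
}
```

Editing the args array is how you change what the container runs when started; the default spec simply launches sh.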

userchef@ubuntu:~$ sudo docker-runc start e938
sh-4.2# ls

Hence, we saw how to get started with docker-runc. In the next post, we will look at the containerd daemon in more detail.