How I Built ARM-Based Docker Images for Raspberry Pi Using the buildx CLI Plugin on Docker Desktop

Estimated Reading Time: 11 minutes


Two weeks back at Dockercon 2019 in San Francisco, Docker and ARM demonstrated the integration of ARM capabilities into Docker Desktop Community for the first time. The two companies also unveiled a go-to-market strategy to accelerate cloud, edge, and IoT development: they plan to streamline app-development tools for cloud, edge, and Internet of Things environments built on the ARM platform. The tools include support for AWS EC2 A1 instances based on AWS' Graviton processors (which feature 64-bit Arm Neoverse cores). Docker, in collaboration with ARM, will make new Docker-based solutions available to the Arm ecosystem as an extension of Arm's server-tailored Neoverse platform, which they say will let developers more easily leverage containers, both remote and on-premises. That is going to be pretty cool.

This integration is available today to the approximately 2 million developers using Docker Desktop Community Edition. As part of the Docker Captains programme, we were lucky to get early access to this build during the Docker Captains Summit, which took place on the first day of Dockercon 2019.

Introducing buildx

Docker 19.03.0 Beta 3 ships a new experimental CLI plugin called "buildx". It extends the docker build command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build, with many new features such as creating scoped builder instances and building against multiple nodes concurrently. As per discussions with Docker staff, the "x" in buildx might get dropped in the near future, and features and flags are subject to change before the stable release is announced.

Buildx always builds using the BuildKit engine and does not require the DOCKER_BUILDKIT=1 environment variable to start builds. The buildx build command supports the features available for docker build, including the new features in Docker 19.03 such as output configuration, inline build caching, and specifying the target platform. In addition, buildx supports new features not yet available for regular docker build, such as building manifest lists, distributed caching, and exporting build results to OCI image tarballs.
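
For instance, here is a minimal sketch (the image names and output path are placeholders, not part of the build we run later) of exporting a build result as an OCI image tarball or loading a single-platform build into the local image store:

# Export an arm64 build as an OCI image tarball (image name and path are placeholders)
docker buildx build --platform linux/arm64 -t demo/app:latest --output type=oci,dest=./app-arm64.tar .

# Load a single-platform build into the local Docker image store instead of pushing it
docker buildx build --platform linux/amd64 -t demo/app:latest --load .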

How does a builder instance work?

Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.

New instances can be created with the docker buildx create command. This will create a new builder instance with a single node based on your current configuration. To use a remote node you can specify the DOCKER_HOST or a remote context name while creating the new builder. After creating a new instance you can manage its lifecycle with the inspect, stop and rm commands, and list all available builders with ls. After creating a new builder you can also append new nodes to it.
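
A rough sketch of that lifecycle (the builder and context names below are placeholders):

# Create a builder with a single node based on the current Docker endpoint
docker buildx create --name mybuilder

# Append a second node from another context or endpoint (context name is hypothetical)
docker buildx create --name mybuilder --append my-remote-context

# Inspect, list, stop and remove the builder
docker buildx inspect mybuilder
docker buildx ls
docker buildx stop mybuilder
docker buildx rm mybuilder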

To switch between different builders use docker buildx use <name>. After running this command the build commands would automatically keep using this builder.

Docker 19.03 also features a new docker context command that can be used for giving names to remote Docker API endpoints. Buildx integrates with docker context so that all of your contexts automatically get a default builder instance. While creating a new builder instance or when adding a node to it, you can also set the context name as the target.
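
A hedged example of that integration (the context name and host address below are made up for illustration):

# Give a remote Docker API endpoint a name (the SSH address is a placeholder)
docker context create picluster --docker "host=ssh://pi@192.168.1.100"

# Use that context as the target node of a new builder
docker buildx create --name remote-builder picluster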

Enough theory! Do you really want to see it in action? In this blog post, I will showcase how I built ARM-based Docker images for my tiny Raspberry Pi cluster using the `docker buildx` utility, which runs on my Docker Desktop for Mac.

Installing Docker Desktop for Mac 2.0.4.1

Open https://www.docker.com/products/docker-desktop for early access to Docker Desktop. Switch to the Edge release if you have the stable version of Docker Desktop installed.

As of today, Docker Desktop 2.0.4.1 Edge Community Edition comes with Engine 19.03.0 Beta 3, Kubernetes v1.14.1, and Compose 1.24.0.

Verifying the docker buildx CLI

[Captains-Bay]🚩 >  docker buildx --help

Usage:	docker buildx COMMAND

Build with BuildKit

Management Commands:
  imagetools  Commands to work on images in registry

Commands:
  bake        Build from a file
  build       Start a build
  create      Create a new builder instance
  inspect     Inspect current builder instance
  ls          List builder instances
  rm          Remove a builder instance
  stop        Stop builder instance
  use         Set the current builder instance
  version     Show buildx version information 

Run 'docker buildx COMMAND --help' for more information on a command.

Listing all builder instances and the nodes for each instance

[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                  
  default default         running linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6

We are currently using the default builder, which is basically the old builder.

Creating a New Builder Instance

The `docker buildx create` command makes a new builder instance pointing to a Docker context or endpoint, where context is the name of a context from `docker context ls` and endpoint is the address of a Docker socket (e.g. a DOCKER_HOST value).

By default, the current docker configuration is used for determining the context/endpoint value.

Builder instances are isolated environments where builds can be invoked. All docker contexts also get the default builder instance.

Let’s create a new builder, which gives us access to some new multi-arch features.

[Captains-Bay]🚩 >  docker buildx create --help

Usage:	docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]

Create a new builder instance

Options:
      --append                 Append a node to builder instead of changing it
      --driver string          Driver to use (eg. docker-container)
      --leave                  Remove a node from builder instead of changing it
      --name string            Builder instance name
      --node string            Create/modify node with given name
      --platform stringArray   Fixed platforms for current node
      --use                    Set the current builder instance

Creating a new builder called “testbuilder”

[Captains-Bay]🚩 >  docker buildx create --name testbuilder
testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder    docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default *      docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Switching to “testbuilder” builder instance

[Captains-Bay]🚩 >  docker buildx use testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder *  docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default        docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Here I created a new builder instance named testbuilder and switched to it. Next, let's inspect it. Note that --bootstrap isn't strictly needed; it just starts the build container immediately. We will then test the workflow, making sure we can build, push, and run multi-arch images.

What is --bootstrap all about?

The `docker buildx inspect --bootstrap` command ensures that the builder is running before inspecting it. If the driver is docker-container, then --bootstrap starts the BuildKit container and waits until it is operational. Bootstrapping is done automatically during build, so it is not strictly necessary. The same BuildKit container is used during the lifetime of the associated builder node (as displayed in buildx ls).

[Captains-Bay]🚩 >  docker buildx inspect --bootstrap
[+] Building 22.4s (1/1) FINISHED                                                                                        
 => [internal] booting buildkit                                                                                    22.4s
 => => pulling image moby/buildkit:master                                                                          21.5s
 => => creating container buildx_buildkit_testbuilder0                                                              0.9s
Name:   testbuilder
Driver: docker-container

Nodes:
Name:      testbuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 >  

Authenticating with Docker Hub

[Captains-Bay]🚩 >  docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password: 
Login Succeeded

Cloning the Repository

[Captains-Bay]🚩 >  git clone https://github.com/collabnix/docker-cctv-raspbian
Cloning into 'docker-cctv-raspbian'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 20), reused 32 (delta 14), pack-reused 0
Unpacking objects: 100% (47/47), done.

Peeking into the Dockerfile

[Captains-Bay]🚩 >  cat Dockerfile 
FROM resin/rpi-raspbian:latest

RUN apt update && apt upgrade && apt install motion
RUN mkdir /mnt/motion && chown motion /mnt/motion
COPY motion.conf /etc/motion/motion.conf

VOLUME /mnt/motion
EXPOSE 8081
ENTRYPOINT ["motion"]
[Captains-Bay]🚩 >

Building ARM-based Docker Image

[Captains-Bay]🚩 >  docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t ajeetraina/docker-cctv-raspbian --push .
[+] Building 254.2s (12/21)                                                                                              
[+] Building 2174.9s (22/22) FINISHED                                                                                    
 => [internal] load build definition from Dockerfile                                                                0.1s
 => => transferring dockerfile: 268B                                                                                0.0s
 => [internal] load .dockerignore                                                                                   0.1s
 => => transferring context: 2B                                                                                     0.0s
 => [linux/arm/v7 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                   8.8s
 => [linux/arm64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a99  0.1s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [internal] load build context                                                                                   0.1s
 => => transferring context: 27.72kB                                                                                0.0s
 => [linux/arm64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  17.3s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => => sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5f88d055b 2.81kB / 2.81kB                      0.0s
 => => sha256:67a3f53689ffd00b59f9a7b7f8af0da5b36c9e11e12ba43b35969586e0121303 1.56kB / 1.56kB                      2.3s
 => => sha256:ef16d706debb7a56d89bf14958aca2da76bd3d0a42a6762c37302c9527a47d10 305B / 305B                          3.5s
 => => sha256:2734b459704280a5238405384487095392e5414d973554b27545cec0e984a0f2 256B / 256B                          3.1s
 => => sha256:9d5b46a00008070b009b685af5846c225ad170f38768416b63917ea7ac94062d 7.75kB / 7.75kB                      4.4s
 => => sha256:5fb2b6f7daac67a8465e8201c76828550b811619be473ab594972da35b4b7ee7 354B / 354B                          4.4s
 => => sha256:a04aef7b1e2fc4905fea2a685c63bc1eeb94c86149fd1286459206c22794e610 177B / 177B                          4.4s
 => => sha256:3eae71a21b9db8bd54d6a189b7587a10f52d6ffee3d868705a89974fe574c268 234B / 234B                          2.3s
 => => sha256:28f1ee4d4d5aa8bb96b3ba6a5e2177b2b58b595edaf601b9aae69fd02f78a6c6 7.48kB / 7.48kB                      0.0s
 => => sha256:6bddb275e70b0083d76083d01be2c3da11f67f526a123adcc980c5a3260d46e8 51.54MB / 51.54MB                   13.4s
 => => sha256:873755612f304f066db70c4015fdeadc9a81c0e6e25fb1aa833aeba80a7aeffc 229B / 229B                          3.9s
 => => sha256:78ba3f0466312c019467b178339a89120f2dce298d7c3d5e6840b3566749f5c0 250B / 250B                          3.2s
 => => sha256:b98db37cf25231afe68852e2cb21fb8aa465bb7a32ecc9afc6dec100ec0ba9b0 367B / 367B                          3.5s
 => => sha256:6b6c68e7ac8567569cee8da92431637e561e7aef5addb70373401d0887447a00 363B / 363B                          4.4s
 => => unpacking docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15  3.8s
 => [linux/arm/v7 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  0.0s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [linux/amd64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 2/4] RUN cat /.resin/deprecation-warning                                                          0.5s
 => [linux/arm64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 3/4] RUN apt update && apt upgrade && apt install motion                                        481.2s
 => [linux/amd64 3/4] RUN apt update && apt upgrade && apt install motion                                        1431.5s
 => [linux/arm64 3/4] RUN apt update && apt upgrade && apt install motion                                        1665.5s
 => [linux/arm/v7 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                            0.6s
 => [linux/arm/v7 5/4] COPY motion.conf /etc/motion/motion.conf                                                     0.1s
 => [linux/amd64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/amd64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => [linux/arm64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/arm64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => exporting to image                                                                                            481.7s
 => => exporting layers                                                                                            19.6s
 => => exporting manifest sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702                   0.0s
 => => exporting config sha256:ebb6e951b8f206d258e8552313633c35f4fe4c82fe7a7fcc51475022ae089c2d                     0.0s
 => => exporting manifest sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632                   0.0s
 => => exporting config sha256:a83b02bf9cbcb408110c4b773f4e5edde04f35851e2a42d4ff1d947c132bed6d                     0.0s
 => => exporting manifest sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2                   0.0s
 => => exporting config sha256:86c2e637fe39036d51779c3bb5f800d3e8da14122907c5fd03bdace47b03bb38                     0.0s
 => => exporting manifest list sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54              0.0s
 => => pushing layers                                                                                             455.4s
 => => pushing manifest for docker.io/ajeetraina/docker-cctv-raspbian:latest                                        6.4s

Awesome. It worked! The --platform flag enabled buildx to generate Linux images for Intel 64-bit, Arm 32-bit, and Arm 64-bit architectures. The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub. Cool, isn't it?
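
Multi-arch builds like this one can take a while (this run took over half an hour), so it may be worth experimenting with buildx's registry-based build cache. Here is a hedged sketch; the buildcache tag is arbitrary and not something I used in the run above:

docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t ajeetraina/docker-cctv-raspbian \
  --cache-to type=registry,ref=ajeetraina/docker-cctv-raspbian:buildcache,mode=max \
  --cache-from type=registry,ref=ajeetraina/docker-cctv-raspbian:buildcache \
  --push .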

What is this ImageTools all about?

Imagetools contains commands for working with manifest lists in the registry. These commands are useful for inspecting multi-platform build results. The create subcommand builds a new manifest list based on source manifests. The source manifests can be manifest lists or single-platform distribution manifests, and they must already exist in the registry where the new manifest is created. If only one source is specified, create performs a carbon copy.
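
For example, since we already pushed a manifest list above, a single-source carbon copy is effectively a re-tag (the v1 tag here is arbitrary, just for illustration):

docker buildx imagetools create -t ajeetraina/docker-cctv-raspbian:v1 ajeetraina/docker-cctv-raspbian:latest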

Let’s use imagetools to inspect what we did.

[Captains-Bay]🚩 >  docker buildx imagetools inspect docker.io/ajeetraina/docker-cctv-raspbian:latest
Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
           
Manifests: 
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v7
[Captains-Bay]🚩 >

The image is now available on Docker Hub with the tag ajeetraina/docker-cctv-raspbian:latest. You can run a container from that image on Intel laptops, Amazon EC2 A1 instances, Raspberry Pis, and more. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 A1 instances run 64-bit Arm.

Verifying this Image on Raspberry Pi Node

root@node2:/home/pi# cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
root@node2:/home/pi#

Verify the Docker version

root@node2:/home/pi# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:57:21 2018
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:57 2018
  OS/Arch:          linux/arm
  Experimental:     false

Running the Docker Image

root@node2:/home/pi# docker pull ajeetraina/docker-cctv-raspbian:latest
latest: Pulling from ajeetraina/docker-cctv-raspbian
6bddb275e70b: Already exists
ef16d706debb: Already exists
2734b4597042: Already exists
873755612f30: Already exists
67a3f53689ff: Already exists
5fb2b6f7daac: Already exists
a04aef7b1e2f: Already exists
9d5b46a00008: Already exists
b98db37cf252: Already exists
78ba3f046631: Already exists
6b6c68e7ac85: Already exists
3eae71a21b9d: Already exists
db08b1848bc3: Pull complete
9c84b2387994: Pull complete
3579f37869a5: Pull complete
Digest: sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
Status: Downloaded newer image for ajeetraina/docker-cctv-raspbian:latest
root@node2:/home/pi#


Inspecting the Docker Image

 docker image inspect ajeetraina/docker-cctv-raspbian:latest | grep arm
                "QEMU_CPU=arm1176",
        "Architecture": "arm",

Running the Docker Container

 docker run -dit -p 8000:8000 ajeetraina/docker-cctv-raspbian:latest
c43f25cd06672883478908e71ad6f044766270fcbf413e69ad63c8020610816f
root@node2:/home/pi# docker ps
CONTAINER ID        IMAGE                                    COMMAND             CREATED             STATUS              PORTS                              NAMES
c43f25cd0667        ajeetraina/docker-cctv-raspbian:latest   "motion"            7 seconds ago       Up 2 seconds        0.0.0.0:8000->8000/tcp, 8081/tcp   zen_brown

Verifying the Logs

root@node2:/home/pi# docker logs -f c43
[0] [NTC] [ALL] conf_load: Processing thread 0 - config file /etc/motion/motion.conf
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[0] [NTC] [ALL] motion_startup: Logging to syslog
[0] [NTC] [ALL] motion_startup: Using log type (ALL) log level (NTC)
[0] [NTC] [ENC] ffmpeg_init: ffmpeg LIBAVCODEC_BUILD 3670016 LIBAVFORMAT_BUILD 3670272
[0] [NTC] [ALL] main: Thread 1 is from /etc/motion/motion.conf
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video0 input -1
[0] [NTC] [ALL] main: Stream port 8081
[0] [NTC] [ALL] main: Waiting for threads to finish, pid: 1
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[1] [NTC] [VID] vid_v4lx_start: Using videodevice /dev/video0 and input -1
[1] [ALR] [VID] vid_v4lx_start: Failed to open video device /dev/video0:
[1] [WRN] [ALL] motion_init: Could not fetch initial image from camera Motion continues using width and height from config file(s)
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [STR] http_bindsock: motion-stream testing : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [STR] http_bindsock: motion-stream Bound : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [ALL] motion_init: Started motion-stream server in port 8081 auth Disabled
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 3 items
[0] [NTC] [STR] httpd_run: motion-httpd testing : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd Bound : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd/3.2.12+git20140228 running, accepting connections
[0] [NTC] [STR] httpd_run: motion-httpd: waiting for data on 127.0.0.1 port TCP 8080

Hence, it is now really easy to build ARM-based Docker images on Docker Desktop and share them on Docker Hub, so that anyone can pull the image on their Raspberry Pi and run it flawlessly.

Reference:

Top 5 Most Exciting Dockercon EU 2018 Announcements

Estimated Reading Time: 9 minutes

Last week I attended Dockercon 2018 EU, which took place at the Centre de Convencions Internacional de Barcelona (CCIB) in Barcelona, Spain. With over 3,000 attendees from around the globe, 52 breakout sessions, 11 Community Theatres, 12 workshops, over 100 total sessions, exciting Hallway Tracks and Hands-on Labs/Trainings, paid trainings, a women's networking event, DockerPals and more, Dockercon brought developers, sysadmins, product managers and industry evangelists together to share their wealth of experience around container technology. This time I was lucky enough to get the chance to emcee the Docker for Developers track for the first time. Not only that, I also ran a Hallway Track for the OpenUSM project and the DockerLabs community contribution effort. Around 20-30 participants showed interest in learning more about this system management, monitoring and log analytics tool.

This Dockercon we had the Docker Captains Summit for the first time, where an entire day was dedicated to Captains. On Dec 3 (10:00 AM till 3:00 PM), we got the chance to interact with Docker staff and put forward all our queries about Docker's future roadmap. It was amazing to meet the new Captains who joined us this year and, during the introductory rounds, to get familiar with what they have been contributing to.

This Dockercon brought a number of exciting announcements. Three of the new features were targeted at Docker Community Edition, while two were for Docker Enterprise customers. Here's a rundown of what I think are the 5 most exciting announcements made last week:

#1. Announcement of Cloud Native Application Bundles (CNAB)

Microsoft and Docker captured a great deal of attention with their announcement of CNAB, or Cloud Native Application Bundles.

What is CNAB? 

Cloud Native Application Bundles (CNAB) are a standard packaging format for multi-component distributed applications. It allows packages to target different runtimes and architectures. It empowers application distributors to package applications for deployment on a wide variety of cloud platforms, cloud providers, and cloud services. It also provides the capabilities necessary for delivering multi-container applications in disconnected environments.

Is it a platform-specific tool?

CNAB is not a platform-specific tool. While it uses containers for encapsulating installation logic, it remains un-opinionated about what cloud environment it runs in. CNAB developers can bundle applications targeting environments spanning IaaS (like OpenStack or Azure), container orchestrators (like Kubernetes or Nomad), container runtimes (like local Docker or ACI), and cloud platform services (like object storage or Database as a Service). CNAB can also be used for packaging other distributed applications, such as IoT or edge computing. In a nutshell, CNAB is a package format specification that describes a technology for bundling, installing, and managing distributed applications that are, by design, cloud agnostic.

Why do we need CNAB?

The current distributed computing landscape involves a combination of executable units and supporting API-based services. Executable units include Virtual Machines (VMs), Containers (e.g. Docker and OCI) and Functions-as-a-Service (FaaS), as well as higher-level PaaS services. Along with these executable units, many managed cloud services (from load balancers to databases) are provisioned and interconnected via REST (and similar network-accessible) APIs. The overall goal of CNAB is to provide a packaging format that can enable application providers and developers with a way of installing a multi-component application into a distributed computing environment, supporting all of the above types.


Is it open source? Tell me more about the CNAB format.

It is an open source, cloud-agnostic specification for packaging and running distributed applications. It is a nascent specification that offers a way to repackage distributed computing apps.

The CNAB format is a packaging format for a broad range of distributed applications. It specifies a pairing of a bundle definition (bundle.json) to define the app, and an invocation image to install the app.

The bundle definition is a single file that contains the following information (a minimal illustrative sketch follows this list):

  • Information about the bundle, such as name, bundle version, description, and keywords
  • Information about locating and running the invocation image (the installer program)
  • A list of user-overridable parameters that this package recognizes
  • The list of executable images that this bundle will install
  • A list of credential paths or environment variables that this bundle requires to execute
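
Here is a minimal, illustrative bundle.json sketch. The field names follow the CNAB working draft at the time and the values are placeholders, so treat it as a rough shape rather than a canonical example:

{
  "name": "helloworld",
  "version": "0.1.0",
  "description": "A tiny illustrative CNAB bundle",
  "keywords": ["demo"],
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/helloworld-cnab:0.1.0"
    }
  ],
  "images": {},
  "parameters": {},
  "credentials": {}
}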

What does Docker plan to do with CNAB?

This project was incubated by Microsoft and Docker a year back. The first implementation of the spec is an experimental utility called Docker App, which Docker officially rolled out at this Dockercon and which is expected to be integrated with Docker Enterprise in the near future. Microsoft and Docker plan to donate CNAB to an open source foundation, which is expected to happen early next year.

If you can't wait, head over to the Docker App CNAB examples recently posted by Gareth Rushgrove, a Docker employee, accessible via https://github.com/garethr/docker-app-cnab-examples

This repository shows some basic examples of using docker-app, in particular some of the CNAB integration details. Check it out.

#2. Support for using Docker Compose on Kubernetes

On the 2nd day of Dockercon, Docker Inc. open sourced the Compose on Kubernetes project. Docker Enterprise Edition already had this capability starting with Compose file version 3.3, where the same docker-compose.yml file used for Swarm deployments can also be used to deploy Kubernetes workloads whenever a stack is deployed.

What benefit does this bring to Community Developers?

By open sourcing it, Docker, Inc. has paved the way for a much simpler approach to deploying Kubernetes applications. Docker Swarm gained popularity because of its simplified approach to application deployment using a docker-compose.yml file. Now community developers can use the same YAML file to deploy their Kubernetes applications.

Imagine you are using Docker Desktop on your MacBook. Docker Desktop can run both Swarm and Kubernetes. You have your context set to a GKE cluster running on Google Cloud Platform. You just deployed your app using docker-compose.yml on your local MacBook, and now you want to deploy it in the same way, but this time on your GKE cluster. Just use the docker stack deploy command to deploy it to the GKE cluster. Interesting, isn't it?
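
A hedged sketch of that workflow, assuming the Compose controller is already installed in the cluster and your kubeconfig points at it (the stack name is arbitrary, and --orchestrator is the flag shipped with the 18.09/19.03-era Docker CLI):

# Deploy the same Compose file as a Kubernetes stack
docker stack deploy --orchestrator=kubernetes --compose-file docker-compose.yml webapp

# List the deployed stacks on the Kubernetes orchestrator
docker stack ls --orchestrator=kubernetes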

How does Compose on Kubernetes architecture look like?

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

Compose on Kubernetes architecture

If you're interested in learning more, I would suggest visiting this link.

How can I test it now?

First we need to install the Compose on Kubernetes controller into your Kubernetes cluster (which could be GKE/AKS). You can download the latest binary (as of 12/13/2018) via https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.16 .

This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don't already have one available, remember that Docker Desktop comes with Kubernetes and the Compose controller built in, and enabling it is as simple as ticking a box in the settings.
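
A quick, hedged way to check that the controller registered the Stack API (assuming kubectl is configured against the same cluster):

# If the install worked, a compose API group should be listed
kubectl api-versions | grep compose

# Stacks should then show up as regular Kubernetes resources
kubectl get stacks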

Check out the latest doc which shows how to make it work with AKS here.

#3. Introducing Docker Desktop Enterprise

The 3rd big announcement was the introduction of Docker Desktop Enterprise. With this, Docker Inc. made a new addition to its desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, with a set of features now available for the enterprise.

Desktop Comparison Table

How will Docker Desktop Enterprise be different from Docker Desktop Community Edition?

Good question. Docker Desktop has Docker Engine and Kubernetes built-in and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Not only this, with Docker Desktop Enterprise you get access to the Application Designer, a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards.

Source: Docker, Inc.


For those who are interested in Docker Desktop Enterprise: please note that it is expected to be available for preview in January, and general availability is slated for the first half of 2019.

#4. From Zero to Docker in Seconds with “docker assemble” CLI

This time, the Docker team announced a very interesting docker subcommand, aptly named "assemble". Ann Rahma and Gareth Rushgrove from Docker, Inc. announced assemble, a new command that generates optimized images from non-dockerized apps. It gets you from source to an optimized Docker image in seconds.

Here are a few interesting facts about the docker assemble utility:

  • Docker assemble can build an image without a Dockerfile; it is all about auto-detecting the code framework.
  • It generates Docker images (and a lot more) from your code with a single command and zero effort, which means no Dockerfile is needed for your app as long as you have a config file (e.g. a .pom file).
  • It can analyze your applications, dependencies, and caches, and give you a sweet Docker image without having to author your own Dockerfiles.
  • It is built on top of BuildKit; it auto-detects the framework, versions, etc. from a config file (e.g. a .pom file), automatically adds dependencies to the image labels, optimizes the image size, and pushes the image.
  • Docker assemble can also figure out what ports need to be published and what healthchecks are relevant.
  • docker assemble builds the app without configuration files and without a Dockerfile, just a Git repository to deploy.

Is it an open source project?

It's an enterprise feature for now, not in the community version. It is available for a couple of languages and frameworks (like Java, as demonstrated on the Dockercon stage).

How is it different from buildpack?

Reading through its features, docker assemble might look very similar to buildpacks, as buildpacks overlap with some of what docker assemble does. But the big benefit of assemble is that it produces more than just an image (also ports, healthchecks, volume mounts, etc.), and it's integrated into the enterprise toolchain. docker assemble is sort of an enterprise-grade buildpack to help with digitalization.

Keep an eye on my next blog post for more detail on the fancy docker assemble command.

#5. Docker-app & CNAB together for the first time

On the 2nd day of Dockercon, Docker confirmed that they are the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially, CNAB support will be released as part of the docker-app experimental tool for building, packaging and managing cloud-native applications. With this, Docker now lets you package CNAB bundles as Docker images, so you can distribute and share them through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.

Can I test the preview binaries of docker-app which comes with CNAB support?

Yes, you can find some preview binaries of docker-app with CNAB support here. The latest release of Docker App is one such tool that implements the current CNAB spec. It can be used both to build CNAB bundles for Compose (which can then be used with any other CNAB client) and to install, upgrade and uninstall any other CNAB bundle.

In case you have no patience, head over to this recently added example of how to deploy a Helm chart.

You can visit https://github.com/docker/app/releases/tag/cnab-dockercon-preview for accessing preview build.

I hope you found this blog helpful. In my next blog series, I will deep-dive around each of these announcements in terms of implementation and enablements.

6 Things You Should Know before Dockercon 2018 EU

Estimated Reading Time: 6 minutes

 
At Dockercon 2018 this year, you can expect 2,500+ participants, 52 breakout sessions, 11 Community Theatres, 12 workshops, over 100 total sessions, exciting Hallway Tracks and Hands-on Labs/Trainings, paid trainings, a women's networking event, the DockerPals Meetup, the chance to meet Docker community experts (Docker Captains and Community Leaders), the Ecosystem Expo... and only 3 days to accomplish it all. It's easy to get overwhelmed, but you need to arrive with the right information so that you walk out triumphant.
 
Coming Dec 3-5, 2018, I will be attending my 3rd Dockercon conference, slated to happen in the beautiful city of Barcelona, home to the largest football stadium in all of Europe! Based on my past experience, I am here to share the inside scoop on where to go when, what to watch out for, must-see sessions, who to meet, and much more.
 

#1. UnFold & Unlock with Dockercon AB – “Dockercon 2018 Agenda Builder”

Once you get your Dockercon ID from the Registration & Info Desk, just turn it around to unfold and unlock the Dockercon agenda for the next 3 days. Very simple, isn't it? The Dockercon Agenda Builder is right in your hand.

 
 
 
Wait, I want to access it beforehand? Sure. You can find out when and where everything is happening at Dockercon with this simple Agenda Builder
 
If you're a CI-CD enthusiast like me, you can use the filters under Agenda Builder to choose CI-CD keywords. You should then be able to easily find out which sessions (breakout, community theatre, general, or workshops) are scheduled to happen across all 3 days.
 
 
I personally find Session Tracks very useful. This time in Barcelona, I will try my best to attend most of the tracks listed below:
 
– Using Docker for Developers
– Using Docker for IT Infrastructure & Ops
– Customer Stories
– Black Belt
– Docker Technology
– Ecosystem
– Community Theatre

#2. Don’t Miss out HTP – Dockercon 2018 Hallway Track Platform

Trust me, Dockercon is full of lifetime opportunities. If you're looking for a great way to network and get to know your fellow Docker enthusiasts, the answer is Hallway Tracks.

Hallway Track is an easy way to connect and learn from each other through conversations about things you care about. Share your knowledge and book 1-on-1 group conversations with other participants on the Hallway Track Platform. 

Recently I submitted a track around self-initiated DockerLabs community program. You can join me directly via https://hallwaytrack.dockercon.com/topics/29045/

 

Did you know that you can even enroll yourself for a Hallway Track right away, before Dockercon? If interested, head over to https://hallwaytrack.dockercon.com/

#3. Get Your Hands Dirty with Dockercon 2018 Hands-on Labs (HOL)

A Dockercon conference pass comes with access to Docker's Hands-on Labs. These self-paced tutorials allow you to learn at your own pace anytime during the conference, featuring a wide range of topics for developers and sysadmins working with Windows and Linux environments. Docker staff will be available to answer questions and help you along.

What I love about the Docker HOL is that you don't need pre-registration; just stop by during the available hours Monday through Wednesday. All you need to carry is your laptop for the lab sessions.

Further Information: https://europe-2018.dockercon.com/hands-on-labs/

 

#4. Deepen Your Container Knowledge with DW – “Dockercon 2018 Workshops”

The pre-conference Docker workshops are an amazing opportunity to become better acquainted with Docker and take a deep dive into Docker platform features, services and uses before the start of the conference. These two-hour workshops provide technical deep dives, practical guidance and hands-on tutorials. Reserving a space requires just a simple step: RSVP with your Agenda Builder. Please note that this is included under the Full Conference Pass.

Below is the list of workshops you might be interested in attending on Monday, Tuesday and Wednesday (Dec 3 – Dec 5, 2018):

 

252727 – Workshop: Migrating .NET Applications to Docker Containers

252740 – Workshop: Docker Application Package

252734 – Workshop: Container Networking for Swarm and Kubernetes in Docker Enterprise

252737 – Workshop: Container Storage Concepts and How to Use Them

252728 – Workshop: Security Best Practices for Kubernetes

252733 – Workshop: Building a Secure, Automated Software Supply Chain

252720 – Workshop: Using Istio

262804 – Workshop: Container 101 – Getting Up and Running with Docker Containers

252731 – Workshop: Container Monitoring and Logging

252738 – Workshop: Migrating Java Applications to Docker Containers

261088 – Workshop: Container Troubleshooting with Sysdig

262792 – Workshop: Swarm Orchestration – Features and Workflows

Further Information: https://europe-2018.dockercon.com/hands-on-labs/

 

#5. Meet Your Favourite Captains & Community Leaders via Docker Pals

DockerPals is an excellent opportunity to meet Docker Captains and Community Leaders who are open to engaging with container enthusiasts of all skill levels, specialities and backgrounds. By participating in Docker Pals you will be introduced to other conference attendees and connected with a DockerCon veteran, giving you a built-in network before you arrive in Barcelona.

If you’re new to Dockercon, you can sign up as a Docker Pal. Docker Pals are matched with 4-5 other conference attendees and one Guide who knows their way around DockerCon. Pals might be newer to DockerCon, or solo attendees who want a group of new friends. Guides help Pals figure out which sessions and activities to attend and are a familiar face at the after-conference events. Both Guides and Pals benefit from making new connections in the Docker Community. You can sign up for Docker Guide under this link.

 

#6. Don't Miss Out on the Black Belt Sessions

Are you a code- and demo-heavy contributor? If yes, then you are going to love these sessions.

Attendees of this track will learn from technical deep dives that haven’t been presented anywhere else by members of the Docker team and from the Docker community. These sessions are code and demo-heavy and light on the slides. One way to achieve a deep understanding of complex distributed systems is to isolate the various components of that system, as well as those that interact with it, and examine all of them relentlessly. This is what is discussed under the Black Belt track! It features deeply technical talks covering not only container technology but also related projects.

 

Looking for tips on attending the conference?

Earlier this year, I presented a talk around “5 Tips for Making Most out of International Conference” which you might find useful. Do let me know your feedback, if any.

If you still have queries about Dockercon, I would suggest joining us on the Docker Slack. Search for the #dc_pals Slack channel to get connected to the DockerPals program.

To join Docker Slack, you are requested to go through this link first.