Running Docker Containers on EC2 A1 Instances powered by Arm-Based AWS Graviton Processors

Estimated Reading Time: 7 minutes

Two weeks back, I wrote a blog post on how developers can now build Arm containers on Docker Desktop using the docker buildx CLI plugin. Traditionally, developers were restricted to building Arm-based applications right on top of Arm-based systems. Using this plugin, developers can build their applications for the Arm platform right on their x86 laptop and then deploy them onto the cloud flawlessly, without any cross-compilation pain anymore.
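
As a quick taste, a single buildx invocation like the one below (a sketch; the image name is a placeholder) builds an Arm64 image on an x86 laptop and pushes it to Docker Hub in one shot:

docker buildx build --platform linux/arm64 -t <dockerhub-user>/myapp --push .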

Wait…Did you say “ARM containers on Cloud?”

Yes, you heard it right. It is possible to deploy Arm containers on the cloud, thanks to the new Amazon EC2 A1 instances powered by custom AWS Graviton processors based on the Arm architecture, which bring Arm to the public cloud as a first-class citizen. Docker developers can now build Arm containers on the AWS cloud platform.

A Brief Note on AWS Graviton Processors

Amazon announced the availability of EC2 instances on its Arm-based servers during AWS re:Invent (December 2018). AWS Graviton processors are a new line of processors custom-designed by AWS, targeted at building platform solutions for cloud applications running at scale. The Graviton-based instances are known as EC2 A1. These instances are aimed at scale-out workloads and applications such as container-based microservices, web sites, and scripting-language-based applications (e.g., Ruby, Python, etc.).

EC2 A1 instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, networking, and AMIs. Amazon Linux 2, Red Hat Enterprise Linux (RHEL), Ubuntu and ECS-optimized AMIs are available today for A1 instances. Built around Arm cores and making extensive use of custom-built silicon, the A1 instances are optimized for performance and cost.

In this blog post, I will showcase how to deploy containers on an AWS EC2 A1 instance using Docker Machine running on Docker Desktop for Windows.

Pre-requisites:

  • Click on “Register for Public Beta”. This will open up various options to test drive Docker products
  • Don’t forget to select the “Docker Desktop CE with Multi-Arch images (Arm Enabled) – Edge Release” option; Amazon Cloud Credits are available for a limited time.
  • Enter your details to proceed.
  • You will see an option to sign up for credits for Amazon EC2 A1 instances via https://www.surveymonkey.com/r/DockerCon19AWS.
  • Click on Sign Up

Creating AWS Account

  • Go to aws.amazon.com and create a Free Tier account.
  • By now, you should have received an email from Amazon with $50 of free credits.
  • Open up https://aws.amazon.com/amazoncredits and add the promo code.

Creating AWS A1 Instance

We will use Docker Desktop for Windows, which comes with Docker Machine installed, to bring up Arm instances quickly.

Go to My Security Credentials under your account and click “Access Keys” to display your Access Key IDs.


Run the below commands to set PowerShell variables for ACCESS_KEY_ID as well as SECRET_ACCESS_KEY.

PS C:\Users\Ajeet_Raina> $ACCESS_KEY_ID="XXX"
PS C:\Users\Ajeet_Raina> $SECRET_ACCESS_KEY="XX"

Running Docker Machine to bring up our first Docker Node on AWS A1 ARM instance

Docker Desktop for Windows comes with Docker Machine by default, so there is no need to install it separately.

PS C:\Users\Ajeet_Raina> docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-0db180c518750ee4f  --amazonec2-instance-type=a1.medium arm-node1

By now, you should be able to see arm-node1 up and running on your AWS environment.


Listing out the ARM Nodes

PS C:\Users\Ajeet_Raina> docker-machine ls
NAME        ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER     ERRORS
arm-node1   -        amazonec2   Running   tcp://34.218.208.175:2376           v18.09.6
PS C:\Users\Ajeet_Raina>

Logging into the First Node

You can use docker-machine ssh to log into the AWS EC2 A1 instance directly.

PS C:\Users\Ajeet_Raina> docker-machine ssh arm-node1
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1028-aws aarch64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu May 16 04:35:16 UTC 2019

  System load:  0.06              Processes:              116
  Usage of /:   9.1% of 15.34GB   Users logged in:        0
  Memory usage: 10%               IP address for ens5:    172.31.60.52
  Swap usage:   0%                IP address for docker0: 172.17.0.1


  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

178 packages can be updated.
86 updates are security updates.




This node comes with Docker 18.09.6 installed.

ubuntu@arm-node1:~$ sudo docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:40:48 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 02:00:10 2019
  OS/Arch:          linux/arm64
  Experimental:     false
ubuntu@arm-node1:~$




Checking the Node IP

PS C:\Users\Ajeet_Raina> docker-machine ip arm-node1
34.218.208.175

Running ARM-based Portainer v1.20.2 Container

Before we run Portainer, we need to ensure that port 9000 is open for access.

Click on Actions > Inbound Rules and add port 9000 for Portainer. Allowing “All TCP” across ports 0-65535 is for testing purposes only and is not recommended for production environments.
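
If you prefer the CLI over the console, the same rule can be added with the AWS CLI; a minimal sketch, assuming your nodes live in the default docker-machine security group:

PS C:\Users\Ajeet_Raina> aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol tcp --port 9000 --cidr 0.0.0.0/0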


ubuntu@ip-172-31-62-91:~$ sudo docker run --rm mplatform/mquery portainer/portainer
Unable to find image 'mplatform/mquery:latest' locally
latest: Pulling from mplatform/mquery
db6020507de3: Pull complete
713cdc222639: Pull complete
Digest: sha256:e15189e3d6fbcee8a6ad2ef04c1ec80420ab0fdcf0d70408c0e914af80dfb107
Status: Downloaded newer image for mplatform/mquery:latest
Image: portainer/portainer
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm
   - linux/arm64
   - linux/ppc64le
   - windows/amd64:10.0.14393.2551
   - windows/amd64:10.0.16299.967
   - windows/amd64:10.0.17134.590
   - windows/amd64:10.0.17763.253
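
Since the manifest list includes linux/arm64, the usual Portainer run command works untouched on this A1 node. A minimal sketch for standalone mode, with the Docker socket bind-mounted so Portainer can manage the local engine:

ubuntu@arm-node1:~$ sudo docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer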

Initialising Docker Swarm Mode on an Arm-based EC2 A1 Instance

Follow the steps below to set up a 2-node Docker Swarm Mode cluster on the AWS platform using Docker Machine. Run the create command once per node; the listing below creates arm-swarm-node2, and arm-swarm-node1 was created the same way.

PS C:\Users\Ajeet_Raina> docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-0db180c518750ee4f --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 7946/udp --amazonec2-open-port 4789/udp --amazonec2-open-port 8080 --amazonec2-open-port 443 --amazonec2-open-port 80 --amazonec2-subnet-id=subnet-827651c9 --amazonec2-instance-type=a1.medium arm-swarm-node2
Running pre-create checks...
Creating machine...
(arm-swarm-node2) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

You can open all ports on AWS using the below command:

PS C:\Users\Ajeet_Raina> aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol -1 --cidr 0.0.0.0/0
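
For anything beyond a quick demo, consider opening only the ports you actually need instead; for example, just the Swarm management port:

PS C:\Users\Ajeet_Raina> aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol tcp --port 2377 --cidr 0.0.0.0/0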

Initialising Docker Swarm Manager


PS C:\Users\Ajeet_Raina> docker-machine ssh arm-swarm-node1 sudo docker swarm init
Swarm initialized: current node (oqk875mcldbn28ce2rip31fg5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6bw0zfd7vjpXX17usjhccjlg3rs 172.31.50.5:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.


Adding Worker Node

PS C:\Users\Ajeet_Raina> docker-machine ssh arm-swarm-node2 sudo docker swarm join --token SWMTKN-1-6bw0zfXXXhccjlg3rs 172.31.50.5:2377
This node joined a swarm as a worker.

Verifying 2-Node Swarm Cluster

ubuntu@arm-swarm-node1:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
oqk875mcldbn28ce2rip31fg5 *   arm-swarm-node1     Ready               Active              Leader              18.09.6
f3rwuj6f6mghte3630car83ia     arm-swarm-node2     Ready               Active                                  18.09.6
ubuntu@arm-swarm-node1:~$

Building Up Portainer Application Stack
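
The compose file used below isn’t included in this post; it can be fetched from Portainer’s download site first (assuming the v1.x stack file URL is still live):

ubuntu@ip-172-31-62-91:~$ curl -L https://downloads.portainer.io/portainer-agent-stack.yml -o portainer-agent-stack.yml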

ubuntu@ip-172-31-62-91:~$ sudo docker stack deploy --compose-file=portainer-agent-stack.yml portainer
Creating network portainer_agent_network
Creating service portainer_portainer
Creating service portainer_agent
ubuntu@ip-172-31-62-91:~$

Listing out Portainer Stack

ubuntu@arm-node1:~$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
portainer           2                   Swarm
ubuntu@arm-node1:~$ sudo docker service ls
ID                  NAME                  MODE                REPLICAS            IMAGE                        PORTS
k5651aoxgqhk        portainer_agent       global              1/1                 portainer/agent:latest       
yoembxxj25k8        portainer_portainer   replicated          1/1                 portainer/portainer:latest   *:9000->9000/tcp

Viewing Portainer Dashboard

Portainer UI showing a Single Node Swarm Mode Cluster

In a future post, I am going to showcase how I leveraged the buildx CLI plugin and an AWS EC2 A1 instance to build an in-house project called “Pico” for deep learning using Apache Kafka, IoT & Amazon Rekognition Service. Stay tuned!

How I built ARM based Docker Images for Raspberry Pi using buildx CLI Plugin on Docker Desktop?

Estimated Reading Time: 11 minutes


Two weeks back at DockerCon 2019 San Francisco, Docker & Arm demonstrated the integration of Arm capabilities into Docker Desktop Community for the first time. Docker & Arm also unveiled a go-to-market strategy to accelerate Cloud, Edge & IoT development. The two companies plan to streamline app development tools for cloud, edge, and Internet of Things environments built on the Arm platform. The tools include AWS EC2 A1 instances based on AWS’ Graviton processors (which feature 64-bit Arm Neoverse cores). Docker, in collaboration with Arm, will make new Docker-based solutions available to the Arm ecosystem as an extension of Arm’s server-tailored Neoverse platform, which they say will let developers more easily leverage containers, both remote and on-premises. That is going to be pretty cool.

This integration is available today to the approximately 2 million developers using Docker Desktop Community Edition. As part of the Docker Captain’s programme, we were lucky to get early access to this build during the Docker Captain Summit, which took place on the first day of DockerCon 2019.

Introducing buildx

Under Docker 19.03.0 Beta 3, there is a new experimental CLI plugin called “buildx”. It extends the docker build command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build, with many new features like creating scoped builder instances and building against multiple nodes concurrently. As per a discussion with Docker staff, the “x” in buildx might get dropped in the near future, and the features and flags too are subject to change until the stable release is announced.

Buildx always builds using the BuildKit engine and does not require the DOCKER_BUILDKIT=1 environment variable to start builds. The buildx build command supports the features available in docker build, including the new features in Docker 19.03 such as output configuration, inline build caching and specifying a target platform. In addition, buildx supports new features not yet available in regular docker build, like building manifest lists, distributed caching, and exporting build results to OCI image tarballs.
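
For instance, here is a sketch of the OCI tarball export (the destination filename is arbitrary):

docker buildx build --platform linux/arm64 --output type=oci,dest=myimage.tar .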

How does a Builder Instance work?

Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.

New instances can be created with the docker buildx create command. This will create a new builder instance with a single node based on your current configuration. To use a remote node, you can specify the DOCKER_HOST or a remote context name while creating the new builder. After creating a new instance, you can manage its lifecycle with the inspect, stop and rm commands and list all available builders with ls. After creating a new builder you can also append new nodes to it.

To switch between different builders, use docker buildx use <name>. After running this command, the build commands will automatically use this builder.

Docker 19.03 also features a new docker context command that can be used for giving names for remote Docker API endpoints. Buildx integrates with docker context so that all of your contexts automatically get a default builder instance. While creating a new builder instance or when adding a node to it you can also set the context name as the target.
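
As a sketch of that workflow (the host address and names here are placeholders), you could pair a named context for a remote Docker host reachable over SSH with its own builder:

docker context create remote-arm --docker host=ssh://ubuntu@<remote-host>
docker buildx create --name armbuilder remote-arm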

Enough theory! Do you really want to see it in action? In this blog post, I will showcase how I built Arm-based Docker images for my tiny Raspberry Pi cluster using the docker buildx utility running on my Docker Desktop for Mac.

Installing Docker Desktop for Mac 2.0.4.1

Open up https://www.docker.com/products/docker-desktop for early access to Docker Desktop. Do switch to the Edge release in case you’ve installed the stable version of Docker Desktop.

As of today, Docker Desktop 2.0.4.1 Edge Community Edition comes with Engine 19.03.0 Beta 3, Kubernetes v1.14.1 and Compose 1.24.0.

Verifying the docker buildx CLI

[Captains-Bay]🚩 >  docker buildx --help

Usage:	docker buildx COMMAND

Build with BuildKit

Management Commands:
  imagetools  Commands to work on images in registry

Commands:
  bake        Build from a file
  build       Start a build
  create      Create a new builder instance
  inspect     Inspect current builder instance
  ls          List builder instances
  rm          Remove a builder instance
  stop        Stop builder instance
  use         Set the current builder instance
  version     Show buildx version information 

Run 'docker buildx COMMAND --help' for more information on a command.

Listing all builder instances and the nodes for each instance

[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                  
  default default         running linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6

We are currently using the default builder, which is basically the old builder.

Creating a New Builder Instance

The `docker buildx create` command makes a new builder instance pointing to a docker context or endpoint, where context is the name of a context from docker context ls and endpoint is the address of a docker socket (e.g. a DOCKER_HOST value).

By default, the current docker configuration is used for determining the context/endpoint value.

Builder instances are isolated environments where builds can be invoked. All docker contexts also get the default builder instance.

Let’s create a new builder, which gives us access to some new multi-arch features.

[Captains-Bay]🚩 >  docker buildx create --help

Usage:	docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]

Create a new builder instance

Options:
      --append                 Append a node to builder instead of changing it
      --driver string          Driver to use (eg. docker-container)
      --leave                  Remove a node from builder instead of changing it
      --name string            Builder instance name
      --node string            Create/modify node with given name
      --platform stringArray   Fixed platforms for current node
      --use                    Set the current builder instance

Creating a new builder called “testbuilder”

[Captains-Bay]🚩 >  docker buildx create --name testbuilder
testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder    docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default *      docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Switching to “testbuilder” builder instance

[Captains-Bay]🚩 >  docker buildx use testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder *  docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default        docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Here I created a new builder instance with the name testbuilder, switched to it, and inspected it. Note that --bootstrap isn’t needed; it just starts the build container immediately. Next we will test the workflow, making sure we can build, push, and run multi-arch images.

What is --bootstrap all about?

The `docker buildx inspect --bootstrap` command ensures that the builder is running before inspecting it. If the driver is docker-container, then --bootstrap starts the BuildKit container and waits until it is operational. Bootstrapping is done automatically during builds, so it is not strictly necessary. The same BuildKit container is used during the lifetime of the associated builder node (as displayed in buildx ls).

[Captains-Bay]🚩 >  docker buildx inspect --bootstrap
[+] Building 22.4s (1/1) FINISHED                                                                                        
 => [internal] booting buildkit                                                                                    22.4s
 => => pulling image moby/buildkit:master                                                                          21.5s
 => => creating container buildx_buildkit_testbuilder0                                                              0.9s
Name:   testbuilder
Driver: docker-container

Nodes:
Name:      testbuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 >  

Authenticating with Docker Hub

[Captains-Bay]🚩 >  docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password: 
Login Succeeded

Cloning the Repository

[Captains-Bay]🚩 >  git clone https://github.com/collabnix/docker-cctv-raspbian
Cloning into 'docker-cctv-raspbian'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 20), reused 32 (delta 14), pack-reused 0
Unpacking objects: 100% (47/47), done.

Peeping into Dockerfile

[Captains-Bay]🚩 >  cat Dockerfile 
FROM resin/rpi-raspbian:latest

RUN apt update && apt upgrade && apt install motion
RUN mkdir /mnt/motion && chown motion /mnt/motion
COPY motion.conf /etc/motion/motion.conf

VOLUME /mnt/motion
EXPOSE 8081
ENTRYPOINT ["motion"]
[Captains-Bay]🚩 >

Building ARM-based Docker Image

[Captains-Bay]🚩 >  docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t ajeetraina/docker-cctv-raspbian --push .
[+] Building 254.2s (12/21)                                                                                              
[+] Building 2174.9s (22/22) FINISHED                                                                                    
 => [internal] load build definition from Dockerfile                                                                0.1s
 => => transferring dockerfile: 268B                                                                                0.0s
 => [internal] load .dockerignore                                                                                   0.1s
 => => transferring context: 2B                                                                                     0.0s
 => [linux/arm/v7 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                   8.8s
 => [linux/arm64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a99  0.1s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [internal] load build context                                                                                   0.1s
 => => transferring context: 27.72kB                                                                                0.0s
 => [linux/arm64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  17.3s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => => sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5f88d055b 2.81kB / 2.81kB                      0.0s
 => => sha256:67a3f53689ffd00b59f9a7b7f8af0da5b36c9e11e12ba43b35969586e0121303 1.56kB / 1.56kB                      2.3s
 => => sha256:ef16d706debb7a56d89bf14958aca2da76bd3d0a42a6762c37302c9527a47d10 305B / 305B                          3.5s
 => => sha256:2734b459704280a5238405384487095392e5414d973554b27545cec0e984a0f2 256B / 256B                          3.1s
 => => sha256:9d5b46a00008070b009b685af5846c225ad170f38768416b63917ea7ac94062d 7.75kB / 7.75kB                      4.4s
 => => sha256:5fb2b6f7daac67a8465e8201c76828550b811619be473ab594972da35b4b7ee7 354B / 354B                          4.4s
 => => sha256:a04aef7b1e2fc4905fea2a685c63bc1eeb94c86149fd1286459206c22794e610 177B / 177B                          4.4s
 => => sha256:3eae71a21b9db8bd54d6a189b7587a10f52d6ffee3d868705a89974fe574c268 234B / 234B                          2.3s
 => => sha256:28f1ee4d4d5aa8bb96b3ba6a5e2177b2b58b595edaf601b9aae69fd02f78a6c6 7.48kB / 7.48kB                      0.0s
 => => sha256:6bddb275e70b0083d76083d01be2c3da11f67f526a123adcc980c5a3260d46e8 51.54MB / 51.54MB                   13.4s
 => => sha256:873755612f304f066db70c4015fdeadc9a81c0e6e25fb1aa833aeba80a7aeffc 229B / 229B                          3.9s
 => => sha256:78ba3f0466312c019467b178339a89120f2dce298d7c3d5e6840b3566749f5c0 250B / 250B                          3.2s
 => => sha256:b98db37cf25231afe68852e2cb21fb8aa465bb7a32ecc9afc6dec100ec0ba9b0 367B / 367B                          3.5s
 => => sha256:6b6c68e7ac8567569cee8da92431637e561e7aef5addb70373401d0887447a00 363B / 363B                          4.4s
 => => unpacking docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15  3.8s
 => [linux/arm/v7 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  0.0s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [linux/amd64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 2/4] RUN cat /.resin/deprecation-warning                                                          0.5s
 => [linux/arm64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 3/4] RUN apt update && apt upgrade && apt install motion                                        481.2s
 => [linux/amd64 3/4] RUN apt update && apt upgrade && apt install motion                                        1431.5s
 => [linux/arm64 3/4] RUN apt update && apt upgrade && apt install motion                                        1665.5s
 => [linux/arm/v7 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                            0.6s
 => [linux/arm/v7 5/4] COPY motion.conf /etc/motion/motion.conf                                                     0.1s
 => [linux/amd64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/amd64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => [linux/arm64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/arm64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => exporting to image                                                                                            481.7s
 => => exporting layers                                                                                            19.6s
 => => exporting manifest sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702                   0.0s
 => => exporting config sha256:ebb6e951b8f206d258e8552313633c35f4fe4c82fe7a7fcc51475022ae089c2d                     0.0s
 => => exporting manifest sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632                   0.0s
 => => exporting config sha256:a83b02bf9cbcb408110c4b773f4e5edde04f35851e2a42d4ff1d947c132bed6d                     0.0s
 => => exporting manifest sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2                   0.0s
 => => exporting config sha256:86c2e637fe39036d51779c3bb5f800d3e8da14122907c5fd03bdace47b03bb38                     0.0s
 => => exporting manifest list sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54              0.0s
 => => pushing layers                                                                                             455.4s
 => => pushing manifest for docker.io/ajeetraina/docker-cctv-raspbian:latest                                        6.4s

Awesome, it worked! The --platform flag enabled buildx to generate Linux images for Intel 64-bit, Arm 32-bit, and Arm 64-bit architectures. The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub. Cool, isn’t it?

What is this ImageTools all about?

Imagetools contains commands for working with manifest lists in the registry. These commands are useful for inspecting multi-platform build results. Its create subcommand creates a new manifest list based on source manifests. The source manifests can be manifest lists or single-platform distribution manifests and must already exist in the registry where the new manifest is created. If only one source is specified, create performs a carbon copy.
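
For example, to make a carbon copy of the manifest list we just pushed under a new tag (the :stable tag here is hypothetical):

[Captains-Bay]🚩 >  docker buildx imagetools create -t ajeetraina/docker-cctv-raspbian:stable ajeetraina/docker-cctv-raspbian:latest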

Let’s use imagetools to inspect what we did.

[Captains-Bay]🚩 >  docker buildx imagetools inspect docker.io/ajeetraina/docker-cctv-raspbian:latest
Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
           
Manifests: 
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v7
[Captains-Bay]🚩 >

The image is now available on Docker Hub with the tag ajeetraina/docker-cctv-raspbian:latest. You can run a container from that image on Intel laptops, Amazon EC2 A1 instances, Raspberry Pis, and more. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 A1 instances run 64-bit Arm.
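
You can cross-check the manifest list from any machine with the same mquery image we used earlier:

docker run --rm mplatform/mquery ajeetraina/docker-cctv-raspbian:latest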

Verifying this Image on Raspberry Pi Node

root@node2:/home/pi# cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
root@node2:/home/pi#

Verify the Docker version

root@node2:/home/pi# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:57:21 2018
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:57 2018
  OS/Arch:          linux/arm
  Experimental:     false

Pulling the Docker Image

root@node2:/home/pi# docker pull ajeetraina/docker-cctv-raspbian:latest
latest: Pulling from ajeetraina/docker-cctv-raspbian
6bddb275e70b: Already exists
ef16d706debb: Already exists
2734b4597042: Already exists
873755612f30: Already exists
67a3f53689ff: Already exists
5fb2b6f7daac: Already exists
a04aef7b1e2f: Already exists
9d5b46a00008: Already exists
b98db37cf252: Already exists
78ba3f046631: Already exists
6b6c68e7ac85: Already exists
3eae71a21b9d: Already exists
db08b1848bc3: Pull complete
9c84b2387994: Pull complete
3579f37869a5: Pull complete
Digest: sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
Status: Downloaded newer image for ajeetraina/docker-cctv-raspbian:latest
root@node2:/home/pi#


Inspecting the Docker Image

 docker image inspect ajeetraina/docker-cctv-raspbian:latest | grep arm
                "QEMU_CPU=arm1176",
        "Architecture": "arm",

Running the Docker Container

 docker run -dit -p 8000:8000 ajeetraina/docker-cctv-raspbian:latest
c43f25cd06672883478908e71ad6f044766270fcbf413e69ad63c8020610816f
root@node2:/home/pi# docker ps
CONTAINER ID        IMAGE                                    COMMAND             CREATED             STATUS              PORTS                              NAMES
c43f25cd0667        ajeetraina/docker-cctv-raspbian:latest   "motion"            7 seconds ago       Up 2 seconds        0.0.0.0:8000->8000/tcp, 8081/tcp   zen_brown

Verifying the Logs

root@node2:/home/pi# docker logs -f c43
[0] [NTC] [ALL] conf_load: Processing thread 0 - config file /etc/motion/motion.conf
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[0] [NTC] [ALL] motion_startup: Logging to syslog
[0] [NTC] [ALL] motion_startup: Using log type (ALL) log level (NTC)
[0] [NTC] [ENC] ffmpeg_init: ffmpeg LIBAVCODEC_BUILD 3670016 LIBAVFORMAT_BUILD 3670272
[0] [NTC] [ALL] main: Thread 1 is from /etc/motion/motion.conf
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video0 input -1
[0] [NTC] [ALL] main: Stream port 8081
[0] [NTC] [ALL] main: Waiting for threads to finish, pid: 1
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[1] [NTC] [VID] vid_v4lx_start: Using videodevice /dev/video0 and input -1
[1] [ALR] [VID] vid_v4lx_start: Failed to open video device /dev/video0:
[1] [WRN] [ALL] motion_init: Could not fetch initial image from camera Motion continues using width and height from config file(s)
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [STR] http_bindsock: motion-stream testing : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [STR] http_bindsock: motion-stream Bound : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [ALL] motion_init: Started motion-stream server in port 8081 auth Disabled
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 3 items
[0] [NTC] [STR] httpd_run: motion-httpd testing : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd Bound : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd/3.2.12+git20140228 running, accepting connections
[0] [NTC] [STR] httpd_run: motion-httpd: waiting for data on 127.0.0.1 port TCP 8080

Hence, it is now easy to build Arm-based Docker images on your Docker Desktop and then share them on Docker Hub, so that anyone can pull the image on their Raspberry Pi and run it flawlessly.


New Docker CLI API Support for NVIDIA GPUs under Docker Engine 19.03.0 Pre-Release

Estimated Reading Time: 7 minutes

Let’s talk about Docker in a GPU-Accelerated Data Center…

Docker is the leading container platform, providing both hardware and software encapsulation by allowing multiple containers to run on the same system at the same time, each with its own set of resources (CPU, memory, etc.) and its own dedicated set of dependencies (library versions, environment variables, etc.). Docker can now be used to containerize GPU-accelerated applications. In case you’re new to GPU-accelerated computing, it is basically the use of graphics processing units to accelerate high-performance computing workloads and applications. This means you can easily containerize and isolate an accelerated application without any modifications and deploy it on any supported GPU-enabled infrastructure.

Yes, you heard it right. Docker now natively supports NVIDIA GPUs within containers. This is possible with the Docker 19.03.0 Beta 3 release, the latest pre-release, which is available for download here. With this release, Docker can now flawlessly be used to containerize GPU-accelerated applications.

Let’s go back to 2017…

Two years back, I wrote a blog post titled “Running NVIDIA Docker in a GPU Accelerated Data Center”. nvidia-docker is an open source project hosted on GitHub that provides driver-agnostic CUDA images and a docker command-line wrapper that mounts the user-mode components of the driver and the GPUs (character devices) into the container at launch. With this enablement, the NVIDIA Docker plugin allowed deployment of GPU-accelerated applications across any Linux GPU server with NVIDIA Docker support. In the same blog post, I showcased how to get started with nvidia-docker to interact with an NVIDIA GPU system, and then looked at a few interesting applications that can be built for a GPU-accelerated data center.

With the recent 19.03.0 beta release, you no longer need to spend time downloading the nvidia-docker plugin or rely on the nvidia wrapper to launch GPU containers. You can now use the --gpus option with the docker run CLI to allow containers to use GPU devices seamlessly.
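
A sketch of what that looks like (using NVIDIA’s stock CUDA base image):

$ docker run --rm --gpus all nvidia/cuda:9.0-base nvidia-smi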

In this blog post, I will showcase how to get started with this new CLI API for NVIDIA GPUs.

Prerequisites:

  • Ubuntu 18.04 instance running on Google Cloud Platform
  • Verify that the NVIDIA card is detected:
$ lspci -vv | grep -i nvidia
00:04.0 3D controller: NVIDIA Corporation GP100GL [Tesla P100 PCIe 16GB] (rev a1)
        Subsystem: NVIDIA Corporation GP100GL [Tesla P100 PCIe 16GB]
        Kernel modules: nvidiafb

Installing NVIDIA drivers first

$ sudo apt-get install ubuntu-drivers-common \
	&& sudo ubuntu-drivers autoinstall
- Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.18.0-1009-gcp/updates/dkms/

nvidia-uvm.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.18.0-1009-gcp/updates/dkms/

depmod...

DKMS: install completed.
Setting up xserver-xorg-video-nvidia-390 (390.116-0ubuntu0.18.10.1) ...
Processing triggers for libc-bin (2.28-0ubuntu1) ...
Processing triggers for systemd (239-7ubuntu10.13) ...
Setting up nvidia-driver-390 (390.116-0ubuntu0.18.10.1) ...
Setting up adwaita-icon-theme (3.30.0-0ubuntu1) ...
update-alternatives: using /usr/share/icons/Adwaita/cursor.theme to provide /usr/share/icons/default/index.theme (x-cursor-theme) in auto mode
Setting up humanity-icon-theme (0.6.15) ...
Setting up libgtk-3-0:amd64 (3.24.4-0ubuntu1.1) ...
Setting up libgtk-3-bin (3.24.4-0ubuntu1.1) ...
Setting up policykit-1-gnome (0.105-6ubuntu2) ...
Setting up screen-resolution-extra (0.17.3build1) ...
Setting up ubuntu-mono (16.10+18.10.20181005-0ubuntu1) ...
Setting up nvidia-settings (390.77-0ubuntu1) ...
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.38.0+dfsg-6) ...
Processing triggers for initramfs-tools (0.131ubuntu15.1) ...
update-initramfs: Generating /boot/initrd.img-4.18.0-1009-gcp
cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries 
    nor crypto modules. If that's on purpose, you may want to uninstall the 
    'cryptsetup-initramfs' package in order to disable the cryptsetup initramfs 
    integration and avoid this warning.
Processing triggers for libc-bin (2.28-0ubuntu1) ...
Processing triggers for dbus (1.12.10-1ubuntu2) ...

Go ahead and reboot the system

$ reboot

Follow the steps below once the Ubuntu instance comes back up.

Installing NVIDIA Container Runtime

Create a file named nvidia-container-runtime-script.sh with the following content and save it:

$ cat nvidia-container-runtime-script.sh
 
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update

Execute the script

sh nvidia-container-runtime-script.sh

OK
deb https://nvidia.github.io/libnvidia-container/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/$(ARCH) /
Hit:1 http://archive.canonical.com/ubuntu bionic InRelease
Get:2 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  InRelease [1139 B]                
Get:3 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  InRelease [1136 B]           
Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease                                       
Get:5 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  Packages [4076 B]                 
Get:6 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  Packages [3084 B]            
Hit:7 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Hit:8 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:9 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic-backports InRelease
Fetched 9435 B in 1s (17.8 kB/s)                   
Reading package lists... Done
$ apt-get install nvidia-container-runtime
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  grub-pc-bin libnuma1
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
Get:1 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  libnvidia-container1 1.0.2-1 [59.1 kB]
Get:2 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  libnvidia-container-tools 1.0.2-1 [15.4 kB]
Get:3 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  nvidia-container-runtime-hook 1.4.0-1 [575 kB]

...
Unpacking nvidia-container-runtime (2.0.0+docker18.09.6-3) ...
Setting up libnvidia-container1:amd64 (1.0.2-1) ...
Setting up libnvidia-container-tools (1.0.2-1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Setting up nvidia-container-runtime-hook (1.4.0-1) ...
Setting up nvidia-container-runtime (2.0.0+docker18.09.6-3) ...
which nvidia-container-runtime-hook
/usr/bin/nvidia-container-runtime-hook

Installing Docker 19.03 Beta 3 Test Build

curl -fsSL https://test.docker.com -o test-docker.sh 

Execute the script

sh test-docker.sh

Verifying Docker Installation

$ docker version
Client:
 Version:           19.03.0-beta3
 API version:       1.40
 Go version:        go1.12.4
 Git commit:        c55e026
 Built:             Thu Apr 25 02:58:59 2019
 OS/Arch:           linux/amd64
 Experimental:      false
Server:
 Engine:
  Version:          19.03.0-beta3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.4
  Git commit:       c55e026
  Built:            Thu Apr 25 02:57:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Verifying the --gpus option under docker run

$ docker run --help | grep -i gpus
      --gpus gpu-request               GPU devices to add to the container ('all' to pass all GPUs)
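
The flag accepts more than just all: you can also pass a device count, or specific devices plus capabilities. A couple of variants (sketches):

$ docker run -it --rm --gpus 1 ubuntu nvidia-smi
$ docker run -it --rm --gpus 'all,capabilities=utility' ubuntu nvidia-smi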

Running an Ubuntu container which leverages GPUs

 $ docker run -it --rm --gpus all ubuntu nvidia-smi
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
f476d66f5408: Pull complete 
8882c27f669e: Pull complete 
d9af21273955: Pull complete 
f5029279ec12: Pull complete 
Digest: sha256:d26d529daa4d8567167181d9d569f2a85da3c5ecaf539cace2c6223355d69981
Status: Downloaded newer image for ubuntu:latest
Tue May  7 15:52:15 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   39C    P0    22W /  75W |      0MiB /  7611MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
:~$ 

Troubleshooting:

Did you encounter the below error message:

$ docker run -it --rm --gpus all debian
docker: Error response from daemon: linux runtime spec devices: could not select device driver "" with capabilities: [[gpu]].

The above error means that NVIDIA could not properly register with Docker. What it actually means is that the drivers are not properly installed on the host. It could also mean that the NVIDIA container tools were installed without restarting the Docker daemon; in that case, you need to restart the Docker daemon.

I suggest you go back and verify that nvidia-container-runtime is installed, or restart the Docker daemon.

Listing out GPU devices

$ docker run -it --rm --gpus all ubuntu nvidia-smi -L
GPU 0: Tesla P4 (UUID: GPU-fa974b1d-3c17-ed92-28d0-805c6d089601)
$ docker run -it --rm --gpus all ubuntu nvidia-smi --query-gpu=index,name,uuid,serial --format=csv
index, name, uuid, serial
0, Tesla P4, GPU-fa974b1d-3c17-ed92-28d0-805c6d089601, 0325017070224

A Quick Look at NVIDIA Deep Learning..

The NVIDIA Deep Learning GPU Training System, a.k.a. DIGITS, is a web app for training deep learning models. It puts the power of deep learning into the hands of engineers and data scientists. It can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. The currently supported frameworks are Caffe, Torch, and TensorFlow.

DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment. DIGITS is completely interactive so that data scientists can focus on designing and training networks rather than programming and debugging.

To test-drive DIGITS, you can get it up and running in a single Docker container:

$ docker run -itd --gpus all -p 5000:5000 nvidia/digits

You can open up a web browser and verify that it’s running at the address below:

w3m http://<dockerhostip>:5000

Verifying again with nvidia-smi

$ docker run -it --rm --gpus all ubuntu nvidia-smi
Tue May  7 16:27:37 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   51C    P0    24W /  75W |    129MiB /  7611MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

If you want to see it in action, here’s a quick video

Can I use Docker Compose with NVIDIA GPUs?

Not yet. I raised a feature request under this link a few minutes back.

Special Thanks to Tibor Vass, Docker Staff Engineer for reviewing this blog.