How to create a Local Private Docker Registry on Play with Docker in 5 Minutes?

Estimated Reading Time: 4 minutes

DockerHub is a service provided by Docker for finding and sharing container images with your team. It is the world’s largest repository of container images, with an array of content sources including container community developers, open source projects and independent software vendors (ISVs) building and distributing their code in containers. Users get access to free public repositories for storing and sharing images, or can choose a subscription plan for private repositories.

One can easily pull a Docker image from DockerHub and run the application in their own environment flawlessly. But what if you want to set up a private registry? That is still possible to accomplish. In this blog post, I will demonstrate how to build a private registry on Play with Docker in just 5 minutes.

Caution – Please note that the Play with Docker platform is meant for demo and training purposes only. Instances are wiped after the 4-hour session limit, so make sure to save your data before it gets wiped.

Tested Infrastructure

Platform            Number of Instances    Reading Time
Play with Docker    1                      5 min

Pre-requisite

  • Create an account with DockerHub
  • Open the PWD platform in your browser
  • Click on Add New Instance on the left side of the screen to bring up an Alpine OS instance on the right side

Create a directory to permanently store images.

$ mkdir -p /opt/registry/data

Authenticate with DockerHub

$ docker login

Start the registry container.

$ docker run -d \
  -p 5000:5000 \
  --name registry \
  -v /opt/registry/data:/var/lib/registry \
  --restart always \
  registry:2
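
To quickly verify that the registry is answering, you can query its catalog endpoint (the standard Docker Registry HTTP API v2; install curl with apk add curl if it is not already present on the instance). Before anything is pushed, the repository list should be empty:

$ curl http://localhost:5000/v2/_catalog
{"repositories":[]}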

Display running containers.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
3a056bf96c6d        registry:2          "/entrypoint.sh /etc…"   About an hour ago   Up About an hour    0.0.0.0:5000->5000/tcp   registry

Pull Debian Stretch image from official repository.

$ docker pull debian:stretch

stretch: Pulling from library/debian
723254a2c089: Pull complete
Digest: sha256:0a5fcee6f52d5170f557ee2447d7a10a5bdcf715dd7f0250be0b678c556a501b
Status: Downloaded newer image for debian:stretch

Tag the local Debian Stretch image with an additional tag that points to the local registry address.

$ docker tag debian:stretch localhost:5000/debian:stretch

Push image to the local repository.

[node1] (local) root@192.168.0.23 ~
$ docker push localhost:5000/debian:stretch
The push refers to repository [localhost:5000/debian]
90d1009ce6fe: Pushed
stretch: digest: sha256:38236c068c393272ad02db100e09cac36a5465149e2924a035ee60d6c60c38fe size: 529

Remove local images.

[node1] (local) root@192.168.0.23 ~
$ docker image remove debian:stretch
Untagged: debian:stretch
Untagged: debian@sha256:df6ebd5e9c87d0d7381360209f3a05c62981b5c2a3ec94228da4082ba07c4f05
[node1] (local) root@192.168.0.23 ~
$ docker image remove localhost:5000/debian:stretch
Untagged: localhost:5000/debian:stretch
Untagged: localhost:5000/debian@sha256:38236c068c393272ad02db100e09cac36a5465149e2924a035ee60d6c60c38fe
Deleted: sha256:4879790bd60d439cfe39c063660eef7af525d5f6f1cbb701a14c7cfc11cbfcf7

Pull Debian Stretch image from local repository.

[node1] (local) root@192.168.0.23 ~
$ docker pull localhost:5000/debian:stretch
stretch: Pulling from debian
54f7e8ac135a: Pull complete
Digest: sha256:38236c068c393272ad02db100e09cac36a5465149e2924a035ee60d6c60c38fe
Status: Downloaded newer image for localhost:5000/debian:stretch

List stored images.

[node1] (local) root@192.168.0.23 ~
$ docker image ls
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
localhost:5000/debian   stretch             4879790bd60d        12 days ago         101MB
registry                2                   2e2f252f3c88        2 months ago        33.3MB
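
You can also confirm that the image really lives in the registry, and not just in the local image cache, by querying the registry API directly. With only the Debian image pushed, the output should look something like this:

$ curl http://localhost:5000/v2/_catalog
{"repositories":["debian"]}
$ curl http://localhost:5000/v2/debian/tags/list
{"name":"debian","tags":["stretch"]}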

Shared Local Registry

Create a directory to permanently store images.

$ mkdir -p /srv/registry/data

Create a directory to permanently store certificates and authentication data.

$ mkdir -p /srv/registry/security

Store the domain and intermediate certificates in the /srv/registry/security/registry.crt file and the private key in the /srv/registry/security/registry.key file. Use a valid certificate and do not waste time with a self-signed one. This step is required in order to use basic authentication.
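
As an illustration, assuming you already obtained a certificate for your registry domain (for example via Let's Encrypt), copying the files into place could look like this (the source paths are hypothetical):

$ cp /path/to/fullchain.pem /srv/registry/security/registry.crt
$ cp /path/to/privkey.pem /srv/registry/security/registry.key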

Install apache2-utils to use htpasswd utility.

[node1] (local) root@192.168.0.23 ~
$ apk add apache2-utils
OK: 302 MiB in 110 packages

Create an initial username and password. The only supported password format is bcrypt.

$ : | sudo tee /srv/registry/security/htpasswd
[node1] (local) root@192.168.0.23 ~
$ echo "password" | sudo htpasswd -iB /srv/registry/security/htpasswd username
Adding password for user username

[node1] (local) root@192.168.0.23 ~
$ cat /srv/registry/security/htpasswd
username:$2y$05$q9R5FSNYpAppB4Vw/AGWb.RqMCGE8DmZ4q5HZC/1wC87oTWyvB9vy

Stop and remove all old containers.

$ docker stop $(docker ps -a -q)
3a056bf96c6d
$ docker rm -f $(docker ps -a -q) 
3a056bf96c6d 

Start the registry container.

[node1] (local) root@192.168.0.23 ~
$ docker run -d \
  -p 443:5000 \
  --name registry \
  -v /srv/registry/data:/var/lib/registry \
  -v /srv/registry/security:/etc/security \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/etc/security/registry.crt \
  -e REGISTRY_HTTP_TLS_KEY=/etc/security/registry.key \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/etc/security/htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  --restart always \
  registry:2
e7755af8cbd70ea84ab77237a87cb97fd1abb18c7726fbc116c40f081d3b7098

Display running containers.

[node1] (local) root@192.168.0.23 ~
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS        PORTS               NAMES
e7755af8cbd7        registry:2          "/entrypoint.sh /etc…"   About a minute ago   Restarting (1) 22 seconds ago                       registry
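
Notice the Restarting status above: on Play with Docker this usually means the registry cannot read the certificate or htpasswd files it was pointed at. A quick way to find the exact cause (a generic troubleshooting step, not part of the original session) is to read the container logs:

$ docker logs registry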

Pull Debian Stretch image from official repository.

$ docker pull debian:stretch

stretch: Pulling from library/debian
723254a2c089: Pull complete
Digest: sha256:0a5fcee6f52d5170f557ee2447d7a10a5bdcf715dd7f0250be0b678c556a501b
Status: Downloaded newer image for debian:stretch

Tag local Debian Stretch image with an additional tag – local repository address.

$ docker tag debian:stretch registry.collabnix.com/debian:stretch

This time you need to provide login credentials to use the local registry.

$ docker push registry.collabnix.com/debian:stretch

e27a10675c56: Preparing
no basic auth credentials
$ docker pull registry.collabnix.com/debian:stretch

Error response from daemon: Get https://registry.collabnix.com/v2/debian/manifests/stretch: no basic auth credentials

Log in to the local registry.

$ docker login --username username registry.collabnix.com
Password: ********

Login Succeeded

Push image to the local repository.

$ docker push registry.collabnix.com/debian:stretch
The push refers to repository [registry.collabnix.com/debian]
e27a10675c56: Pushed
stretch: digest: sha256:02741df16aee1b81c4aaff4c48d75cc2c308bade918b22679df570c170feef7c size: 529
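
Once logged in, you can also browse the registry over HTTPS with basic authentication; for example (domain and credentials are the same illustrative ones used above):

$ curl -u username:password https://registry.collabnix.com/v2/_catalog
{"repositories":["debian"]}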

Remove local images.

$ docker image remove debian:stretch

Untagged: debian:stretch
Untagged: debian@sha256:0a5fcee6f52d5170f557ee2447d7a10a5bdcf715dd7f0250be0b678c556a501b
$ docker image remove registry.collabnix.com/debian:stretch

Untagged: registry.collabnix.com/debian:stretch
Untagged: registry.sl.collabnix.com/debian@sha256:02741df16aee1b81c4aaff4c48d75cc2c308bade918b22679df570c170feef7c
Deleted: sha256:da653cee0545dfbe3c1864ab3ce782805603356a9cc712acc7b3100d9932fa5e
Deleted: sha256:e27a10675c5656bafb7bfa9e4631e871499af0a5ddfda3cebc0ac401dfe19382

Pull Debian Stretch image from local repository.

$ docker pull registry.collabnix.com/debian:stretch

stretch: Pulling from debian
723254a2c089: Pull complete
Digest: sha256:02741df16aee1b81c4aaff4c48d75cc2c308bade918b22679df570c170feef7c
Status: Downloaded newer image for registry.collabnix.com/debian:stretch

List stored images.

$ docker image ls

REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
registry                        2                   d1fd7d86a825        4 weeks ago         33.3MB
registry.collabnix.com/debian   stretch             da653cee0545        2 months ago        100MB
hello-world                     latest              f2a91732366c        2 months ago

Running Docker Containers on EC2 A1 Instances powered by Arm-Based AWS Graviton Processors

Estimated Reading Time: 7 minutes

Two weeks back, I wrote a blog post on how developers can now build ARM containers on Docker Desktop using the docker buildx CLI plugin. Usually developers are restricted to building Arm-based applications directly on top of an Arm-based system. Using this plugin, developers can build their applications for the Arm platform right on their x86 laptop and then deploy them to the cloud flawlessly, without any cross-compilation pain.

Wait…Did you say “ARM containers on Cloud?”

Yes, you heard it right. It is possible to deploy Arm containers in the cloud, thanks to the new Amazon EC2 A1 instances powered by custom AWS Graviton processors based on the Arm architecture, which bring Arm to the public cloud as a first-class citizen. Docker developers can now build ARM containers on the AWS Cloud platform.

A Brief Look at AWS Graviton Processors

Amazon announced the availability of EC2 instances on its Arm-based servers during AWS re:Invent (December 2018). AWS Graviton processors are a new line of processors custom designed by AWS for building platform solutions for cloud applications running at scale. The Graviton-based instances are known as EC2 A1. These instances are targeted at scale-out workloads and applications such as container-based microservices, web sites, and scripting-language-based applications (e.g., Ruby, Python, etc.).

EC2 A1 instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, Networking, and AMIs. Amazon Linux 2, Red Hat Enterprise Linux (RHEL), Ubuntu and ECS-optimized AMIs are available today for A1 instances. Built around Arm cores and making extensive use of custom-built silicon, the A1 instances are optimized for performance and cost.

In this blog post, I will showcase how to deploy containers on an AWS EC2 A1 instance using Docker Machine running on Docker Desktop for Windows.

Pre-requisites:

  • Click on “Register for Public Beta”. This will open up various options to test drive Docker products
  • Don’t forget to select the “Docker Desktop CE with Multi-Arch images (Arm Enabled) – Edge Release Amazon Cloud Credits available for limited time” option.
  • Enter your details to proceed.
  • You will see an option to sign up for credits for Amazon EC2 A1 instances via https://www.surveymonkey.com/r/DockerCon19AWS.
  • Click on Sign Up

Creating AWS Account

  • Go to aws.amazon.com and create Free Tier Account
  • By now, you should have received an email from Amazon with $50 of free credits.
  • Open up https://aws.amazon.com/amazoncredits and add the Promo Code

Creating AWS A1 Instance

We will use Docker Desktop for Windows which comes installed with Docker Machine to bring up ARM instances quickly.

Go to My Security Credentials under your account and click “Access Keys” to display your Access Key IDs.


Run the below commands to set the environment variables for ACCESS_KEY_ID as well as SECRET_ACCESS_KEY.

PS C:\Users\Ajeet_Raina> set ACCESS_KEY_ID=XXX
PS C:\Users\Ajeet_Raina> set SECRET_ACCESS_KEY=XX
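
Note that the docker-machine command below references these values as ${ACCESS_KEY_ID} and ${SECRET_ACCESS_KEY}, which is PowerShell variable syntax. If the set command above does not take effect in your shell, defining them as PowerShell variables is a safe alternative (the key values shown are placeholders):

PS C:\Users\Ajeet_Raina> $ACCESS_KEY_ID="XXX"
PS C:\Users\Ajeet_Raina> $SECRET_ACCESS_KEY="XX"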

Running Docker Machine to bring up our first Docker Node on AWS A1 ARM instance

Docker Desktop for Windows comes with Docker Machine by default and there is NO need to install it separately.

PS C:\Users\Ajeet_Raina> docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-0db180c518750ee4f  --amazonec2-instance-type=a1.medium arm-node1

By now, you should be able to see arm-node1 up and running on your AWS environment.


Listing out the ARM Nodes

PS C:\Users\Ajeet_Raina> docker-machine ls
NAME        ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER     ERRORS
arm-node1   -        amazonec2   Running   tcp://34.218.208.175:2376           v18.09.6
PS C:\Users\Ajeet_Raina>
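
If you prefer to drive the remote Engine from your laptop instead of ssh-ing into it, docker-machine can emit the environment variables that point your local Docker CLI at the new node (standard docker-machine usage; adjust for your shell):

PS C:\Users\Ajeet_Raina> docker-machine env arm-node1
PS C:\Users\Ajeet_Raina> docker-machine env arm-node1 | Invoke-Expression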

Logging into the First Node

You can use docker-machine ssh to log in to the AWS EC2 A1 instance directly.

PS C:\Users\Ajeet_Raina> docker-machine ssh arm-node1
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1028-aws aarch64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu May 16 04:35:16 UTC 2019

  System load:  0.06              Processes:              116
  Usage of /:   9.1% of 15.34GB   Users logged in:        0
  Memory usage: 10%               IP address for ens5:    172.31.60.52
  Swap usage:   0%                IP address for docker0: 172.17.0.1


  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

178 packages can be updated.
86 updates are security updates.




This node comes with Docker 18.09.6 installed.

ubuntu@arm-node1:~$ sudo docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:40:48 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 02:00:10 2019
  OS/Arch:          linux/arm64
  Experimental:     false
ubuntu@arm-node1:~$




Checking the Node IP

PS C:\Users\Ajeet_Raina> docker-machine ip arm-node1
34.218.208.175

Running ARM-based Portainer v1.20.2 Container

Before we run Portainer, we need to ensure that port 9000 is open and accessible.

Click on Actions > Inbound Rules and add port 9000 for Portainer. Allowing “All TCP” from 0-65535 is just for testing purposes and is not recommended for a production environment.


ubuntu@ip-172-31-62-91:~$ sudo docker run --rm mplatform/mquery portainer/portainer
Unable to find image 'mplatform/mquery:latest' locally
latest: Pulling from mplatform/mquery
db6020507de3: Pull complete
713cdc222639: Pull complete
Digest: sha256:e15189e3d6fbcee8a6ad2ef04c1ec80420ab0fdcf0d70408c0e914af80dfb107
Status: Downloaded newer image for mplatform/mquery:latest
Image: portainer/portainer
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm
   - linux/arm64
   - linux/ppc64le
   - windows/amd64:10.0.14393.2551
   - windows/amd64:10.0.16299.967
   - windows/amd64:10.0.17134.590
   - windows/amd64:10.0.17763.253
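
The manifest list above confirms that portainer/portainer ships a linux/arm64 image, so it runs natively on the A1 instance. If you just want to try Portainer standalone first (outside the Swarm stack deployed later), a minimal run could look like this:

ubuntu@arm-node1:~$ sudo docker volume create portainer_data
ubuntu@arm-node1:~$ sudo docker run -d -p 9000:9000 --name portainer --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer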

Initialising Docker Swarm Mode on Arm-based EC2 A1 Instances

Follow the below steps to set up a 2-node Docker Swarm Mode cluster on the AWS platform using Docker Machine.

PS C:\Users\Ajeet_Raina> docker-machine create --driver amazonec2 --amazonec2-access-key=${ACCESS_KEY_ID} --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-0db180c518750ee4f --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 7946/udp --amazonec2-open-port 4789/udp --amazonec2-open-port 8080 --amazonec2-open-port 443 --amazonec2-open-port 80 --amazonec2-subnet-id=subnet-827651c9 --amazonec2-instance-type=a1.medium arm-swarm-node2
Running pre-create checks...
Creating machine...
(arm-swarm-node2) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

You can open all ports on AWS using the below command:

PS C:\Users\Ajeet_Raina> aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol -1 --cidr 0.0.0.0/0

Initialising Docker Swarm Manager


PS C:\Users\Ajeet_Raina> docker-machine ssh arm-swarm-node1 sudo docker swarm init
Swarm initialized: current node (oqk875mcldbn28ce2rip31fg5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6bw0zfd7vjpXX17usjhccjlg3rs 172.31.50.5:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.


Adding Worker Node

PS C:\Users\Ajeet_Raina> docker-machine ssh arm-swarm-node2 sudo docker swarm join --token SWMTKN-1-6bw0zfXXXhccjlg3rs 172.31.50.5:2377
This node joined a swarm as a worker.

Verifying 2-Node Swarm Cluster

ubuntu@arm-swarm-node1:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
oqk875mcldbn28ce2rip31fg5 *   arm-swarm-node1     Ready               Active              Leader              18.09.6
f3rwuj6f6mghte3630car83ia     arm-swarm-node2     Ready               Active                                  18.09.6
ubuntu@arm-swarm-node1:~$

Building Up Portainer Application Stack
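
The portainer-agent-stack.yml file used below is the stack file Portainer publishes for Swarm deployments; if it is not already on the manager node, you can fetch it first (URL as documented by Portainer at the time of writing; verify it for your version):

ubuntu@ip-172-31-62-91:~$ curl -L https://downloads.portainer.io/portainer-agent-stack.yml -o portainer-agent-stack.yml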

ubuntu@ip-172-31-62-91:~$ sudo docker stack deploy --compose-file=portainer-agent-stack.yml portainer
Creating network portainer_agent_network
Creating service portainer_portainer
Creating service portainer_agent
ubuntu@ip-172-31-62-91:~$

Listing out Portainer Stack

ubuntu@arm-node1:~$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
portainer           2                   Swarm
ubuntu@arm-node1:~$ sudo docker service ls
ID                  NAME                  MODE                REPLICAS            IMAGE                        PORTS
k5651aoxgqhk        portainer_agent       global              1/1                 portainer/agent:latest       
yoembxxj25k8        portainer_portainer   replicated          1/1                 portainer/portainer:latest   *:9000->9000/tcp

Viewing Portainer Dashboard

Portainer UI showing a Single Node Swarm Mode Cluster
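
Portainer publishes port 9000 through the Swarm routing mesh, so the dashboard is reachable on any node's public IP. Look the IP up with docker-machine and then open http://<node-ip>:9000 in your browser:

PS C:\Users\Ajeet_Raina> docker-machine ip arm-swarm-node1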

In a future post, I am going to showcase how I leveraged the buildx CLI plugin and an AWS EC2 A1 instance to build an in-house project called “Pico” for Deep Learning using Apache Kafka, IoT and Amazon Rekognition Service. Stay tuned!

How I built ARM-based Docker Images for Raspberry Pi using buildx CLI Plugin on Docker Desktop?

Estimated Reading Time: 11 minutes


Two weeks back at DockerCon 2019 San Francisco, Docker and Arm demonstrated the integration of Arm capabilities into Docker Desktop Community for the first time. Docker and Arm unveiled a go-to-market strategy to accelerate Cloud, Edge and IoT development. The two companies plan to streamline app development tools for cloud, edge, and internet of things environments built on the Arm platform. The tools include AWS EC2 A1 instances based on AWS’ Graviton processors (which feature 64-bit Arm Neoverse cores). Docker, in collaboration with Arm, will make new Docker-based solutions available to the Arm ecosystem as an extension of Arm’s server-tailored Neoverse platform, which they say will let developers more easily leverage containers, both remote and on-premises, which is going to be pretty cool.

This integration is available today to the approximately 2 million developers using Docker Desktop Community Edition. As part of the Docker Captains programme, we were lucky to get early access to this build during the Docker Captain Summit, which took place on the first day of DockerCon 2019.

Introducing buildx

Under Docker 19.03.0 Beta 3, there is a new experimental CLI plugin called “buildx”. It is a pretty new Docker CLI plugin that extends the docker build command with full support of the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build, with many new features like creating scoped builder instances and building against multiple nodes concurrently. As per the discussion with Docker staff, the “x” in buildx might get dropped in the near future, and features and flags are subject to change until the stable release is announced.

Buildx always builds using the BuildKit engine and does not require the DOCKER_BUILDKIT=1 environment variable for starting builds. The buildx build command supports the features available for docker build, including the new features in Docker 19.03 such as outputs configuration, inline build caching and specifying the target platform. In addition, buildx supports new features not yet available for regular docker build, like building manifest lists, distributed caching, and exporting build results to OCI image tarballs.
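
Since buildx ships as an experimental CLI plugin in this release, the docker buildx command may not show up until experimental CLI features are enabled. If that is the case on your machine, exporting the following variable (or setting "experimental": "enabled" in ~/.docker/config.json) turns it on:

[Captains-Bay]🚩 >  export DOCKER_CLI_EXPERIMENTAL=enabled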

How does a builder instance work?

Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.

New instances can be created with the docker buildx create command. This will create a new builder instance with a single node based on your current configuration. To use a remote node you can specify the DOCKER_HOST or remote context name while creating the new builder. After creating a new instance you can manage its lifecycle with the inspect, stop and rm commands and list all available builders with ls. After creating a new builder you can also append new nodes to it.

To switch between different builders use docker buildx use <name>. After running this command the build commands would automatically keep using this builder.

Docker 19.03 also features a new docker context command that can be used for giving names to remote Docker API endpoints. Buildx integrates with docker context so that all of your contexts automatically get a default builder instance. While creating a new builder instance, or when adding a node to it, you can also set the context name as the target.
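
For reference, you can list the contexts (and hence the endpoints buildx can target) with docker context ls; on a fresh Docker Desktop install the output looks roughly like this (columns trimmed for brevity):

[Captains-Bay]🚩 >  docker context ls
NAME        DESCRIPTION                               DOCKER ENDPOINT               ORCHESTRATOR
default *   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   swarm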

Enough theory! Do you really want to see it in action? In this blog post, I will showcase how I built ARM-based Docker images for my tiny Raspberry Pi cluster using the docker buildx utility running on my Docker Desktop for Mac.

Installing Docker Desktop for Mac 2.0.4.1

Open up https://www.docker.com/products/docker-desktop for early access to Docker Desktop. Do switch to the Edge release in case you have installed the stable version of Docker Desktop.

As of today, Docker Desktop 2.0.4.1 Edge Community Edition comes with Engine 19.03.0 Beta 3, Kubernetes v1.14.1 and Compose 1.24.0.

Verifying the docker buildx CLI

[Captains-Bay]🚩 >  docker buildx --help

Usage:	docker buildx COMMAND

Build with BuildKit

Management Commands:
  imagetools  Commands to work on images in registry

Commands:
  bake        Build from a file
  build       Start a build
  create      Create a new builder instance
  inspect     Inspect current builder instance
  ls          List builder instances
  rm          Remove a builder instance
  stop        Stop builder instance
  use         Set the current builder instance
  version     Show buildx version information 

Run 'docker buildx COMMAND --help' for more information on a command.

Listing all builder instances and the nodes for each instance

[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                  
  default default         running linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6

We are currently using the default builder, which is basically the old builder.

Creating a New Builder Instance

The `docker buildx create` command makes a new builder instance pointing to a docker context or endpoint, where context is the name of a context from docker context ls and endpoint is the address of a docker socket (e.g. a DOCKER_HOST value).

By default, the current docker configuration is used for determining the context/endpoint value.

Builder instances are isolated environments where builds can be invoked. All docker contexts also get the default builder instance.

Let’s create a new builder, which gives us access to some new multi-arch features.

[Captains-Bay]🚩 >  docker buildx create --help

Usage:	docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]

Create a new builder instance

Options:
      --append                 Append a node to builder instead of changing it
      --driver string          Driver to use (eg. docker-container)
      --leave                  Remove a node from builder instead of changing it
      --name string            Builder instance name
      --node string            Create/modify node with given name
      --platform stringArray   Fixed platforms for current node
      --use                    Set the current builder instance

Creating a new builder called “testbuilder”

[Captains-Bay]🚩 >  docker buildx create --name testbuilder
testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder    docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default *      docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Switching to “testbuilder” builder instance

[Captains-Bay]🚩 >  docker buildx use testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder *  docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default        docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Here I created a new builder instance with the name testbuilder, switched to it, and inspected it. Note that --bootstrap isn’t needed; it just starts the build container immediately. Next we will test the workflow, making sure we can build, push, and run multi-arch images.

What is --bootstrap all about?

The `docker buildx inspect --bootstrap` command ensures that the builder is running before inspecting it. If the driver is docker-container, then --bootstrap starts the buildkit container and waits until it is operational. Bootstrapping is done automatically during build, so running it manually is not strictly necessary. The same BuildKit container is used during the lifetime of the associated builder node (as displayed in buildx ls).

[Captains-Bay]🚩 >  docker buildx inspect --bootstrap
[+] Building 22.4s (1/1) FINISHED                                                                                        
 => [internal] booting buildkit                                                                                    22.4s
 => => pulling image moby/buildkit:master                                                                          21.5s
 => => creating container buildx_buildkit_testbuilder0                                                              0.9s
Name:   testbuilder
Driver: docker-container

Nodes:
Name:      testbuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 >  

Authenticating with Dockerhub

[Captains-Bay]🚩 >  docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password: 
Login Succeeded

Cloning the Repository

[Captains-Bay]🚩 >  git clone https://github.com/collabnix/docker-cctv-raspbian
Cloning into 'docker-cctv-raspbian'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 20), reused 32 (delta 14), pack-reused 0
Unpacking objects: 100% (47/47), done.

Peeping into Dockerfile

[Captains-Bay]🚩 >  cat Dockerfile 
FROM resin/rpi-raspbian:latest

RUN apt update && apt upgrade && apt install motion
RUN mkdir /mnt/motion && chown motion /mnt/motion
COPY motion.conf /etc/motion/motion.conf

VOLUME /mnt/motion
EXPOSE 8081
ENTRYPOINT ["motion"]
[Captains-Bay]🚩 >

Building ARM-based Docker Image

[Captains-Bay]🚩 >  docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t ajeetraina/docker-cctv-raspbian --push .
[+] Building 254.2s (12/21)                                                                                              
[+] Building 2174.9s (22/22) FINISHED                                                                                    
 => [internal] load build definition from Dockerfile                                                                0.1s
 => => transferring dockerfile: 268B                                                                                0.0s
 => [internal] load .dockerignore                                                                                   0.1s
 => => transferring context: 2B                                                                                     0.0s
 => [linux/arm/v7 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                   8.8s
 => [linux/arm64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a99  0.1s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [internal] load build context                                                                                   0.1s
 => => transferring context: 27.72kB                                                                                0.0s
 => [linux/arm64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  17.3s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => => sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5f88d055b 2.81kB / 2.81kB                      0.0s
 => => sha256:67a3f53689ffd00b59f9a7b7f8af0da5b36c9e11e12ba43b35969586e0121303 1.56kB / 1.56kB                      2.3s
 => => sha256:ef16d706debb7a56d89bf14958aca2da76bd3d0a42a6762c37302c9527a47d10 305B / 305B                          3.5s
 => => sha256:2734b459704280a5238405384487095392e5414d973554b27545cec0e984a0f2 256B / 256B                          3.1s
 => => sha256:9d5b46a00008070b009b685af5846c225ad170f38768416b63917ea7ac94062d 7.75kB / 7.75kB                      4.4s
 => => sha256:5fb2b6f7daac67a8465e8201c76828550b811619be473ab594972da35b4b7ee7 354B / 354B                          4.4s
 => => sha256:a04aef7b1e2fc4905fea2a685c63bc1eeb94c86149fd1286459206c22794e610 177B / 177B                          4.4s
 => => sha256:3eae71a21b9db8bd54d6a189b7587a10f52d6ffee3d868705a89974fe574c268 234B / 234B                          2.3s
 => => sha256:28f1ee4d4d5aa8bb96b3ba6a5e2177b2b58b595edaf601b9aae69fd02f78a6c6 7.48kB / 7.48kB                      0.0s
 => => sha256:6bddb275e70b0083d76083d01be2c3da11f67f526a123adcc980c5a3260d46e8 51.54MB / 51.54MB                   13.4s
 => => sha256:873755612f304f066db70c4015fdeadc9a81c0e6e25fb1aa833aeba80a7aeffc 229B / 229B                          3.9s
 => => sha256:78ba3f0466312c019467b178339a89120f2dce298d7c3d5e6840b3566749f5c0 250B / 250B                          3.2s
 => => sha256:b98db37cf25231afe68852e2cb21fb8aa465bb7a32ecc9afc6dec100ec0ba9b0 367B / 367B                          3.5s
 => => sha256:6b6c68e7ac8567569cee8da92431637e561e7aef5addb70373401d0887447a00 363B / 363B                          4.4s
 => => unpacking docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15  3.8s
 => [linux/arm/v7 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  0.0s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [linux/amd64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 2/4] RUN cat /.resin/deprecation-warning                                                          0.5s
 => [linux/arm64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 3/4] RUN apt update && apt upgrade && apt install motion                                        481.2s
 => [linux/amd64 3/4] RUN apt update && apt upgrade && apt install motion                                        1431.5s
 => [linux/arm64 3/4] RUN apt update && apt upgrade && apt install motion                                        1665.5s
 => [linux/arm/v7 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                            0.6s
 => [linux/arm/v7 5/4] COPY motion.conf /etc/motion/motion.conf                                                     0.1s
 => [linux/amd64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/amd64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => [linux/arm64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/arm64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => exporting to image                                                                                            481.7s
 => => exporting layers                                                                                            19.6s
 => => exporting manifest sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702                   0.0s
 => => exporting config sha256:ebb6e951b8f206d258e8552313633c35f4fe4c82fe7a7fcc51475022ae089c2d                     0.0s
 => => exporting manifest sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632                   0.0s
 => => exporting config sha256:a83b02bf9cbcb408110c4b773f4e5edde04f35851e2a42d4ff1d947c132bed6d                     0.0s
 => => exporting manifest sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2                   0.0s
 => => exporting config sha256:86c2e637fe39036d51779c3bb5f800d3e8da14122907c5fd03bdace47b03bb38                     0.0s
 => => exporting manifest list sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54              0.0s
 => => pushing layers                                                                                             455.4s
 => => pushing manifest for docker.io/ajeetraina/docker-cctv-raspbian:latest                                        6.4s

Awesome. It worked! The --platform flag enabled buildx to generate Linux images for Intel 64-bit, Arm 32-bit, and Arm 64-bit architectures. The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub. Cool, isn’t it?

What is this ImageTools all about?

Imagetools contains commands for working with manifest lists in the registry. These commands are useful for inspecting multi-platform build results. The create subcommand builds a new manifest list based on source manifests. The source manifests can be manifest lists or single-platform distribution manifests and must already exist in the registry where the new manifest is created. If only one source is specified, create performs a carbon copy.

Let’s use imagetools to inspect what we did.

[Captains-Bay]🚩 >  docker buildx imagetools inspect docker.io/ajeetraina/docker-cctv-raspbian:latest
Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
           
Manifests: 
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v7
[Captains-Bay]🚩 >

The image is now available on Docker Hub with the tag ajeetraina/docker-cctv-raspbian:latest. You can run a container from that image on Intel laptops, Amazon EC2 A1 instances, Raspberry Pis, and more. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 A1 instances run 64-bit Arm.
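
If you want to double-check the manifest list from any Docker host, the same mquery image used earlier works here too (a hypothetical invocation; the platform list it prints should mirror the imagetools output above):

[Captains-Bay]🚩 >  docker run --rm mplatform/mquery ajeetraina/docker-cctv-raspbian:latest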

Verifying this Image on Raspberry Pi Node

root@node2:/home/pi# cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
root@node2:/home/pi#

Verify the Docker version

root@node2:/home/pi# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:57:21 2018
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:57 2018
  OS/Arch:          linux/arm
  Experimental:     false

Running the Docker Image

root@node2:/home/pi# docker pull ajeetraina/docker-cctv-raspbian:latest
latest: Pulling from ajeetraina/docker-cctv-raspbian
6bddb275e70b: Already exists
ef16d706debb: Already exists
2734b4597042: Already exists
873755612f30: Already exists
67a3f53689ff: Already exists
5fb2b6f7daac: Already exists
a04aef7b1e2f: Already exists
9d5b46a00008: Already exists
b98db37cf252: Already exists
78ba3f046631: Already exists
6b6c68e7ac85: Already exists
3eae71a21b9d: Already exists
db08b1848bc3: Pull complete
9c84b2387994: Pull complete
3579f37869a5: Pull complete
Digest: sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
Status: Downloaded newer image for ajeetraina/docker-cctv-raspbian:latest
root@node2:/home/pi#


Inspecting the Docker Image

 docker image inspect ajeetraina/docker-cctv-raspbian:latest | grep arm
                "QEMU_CPU=arm1176",
        "Architecture": "arm",

Running the Docker Container

 docker run -dit -p 8000:8000 ajeetraina/docker-cctv-raspbian:latest
c43f25cd06672883478908e71ad6f044766270fcbf413e69ad63c8020610816f
root@node2:/home/pi# docker ps
CONTAINER ID        IMAGE                                    COMMAND             CREATED             STATUS              PORTS                              NAMES
c43f25cd0667        ajeetraina/docker-cctv-raspbian:latest   "motion"            7 seconds ago       Up 2 seconds        0.0.0.0:8000->8000/tcp, 8081/tcp   zen_brown

Verifying the Logs

root@node2:/home/pi# docker logs -f c43
[0] [NTC] [ALL] conf_load: Processing thread 0 - config file /etc/motion/motion.conf
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[0] [NTC] [ALL] motion_startup: Logging to syslog
[0] [NTC] [ALL] motion_startup: Using log type (ALL) log level (NTC)
[0] [NTC] [ENC] ffmpeg_init: ffmpeg LIBAVCODEC_BUILD 3670016 LIBAVFORMAT_BUILD 3670272
[0] [NTC] [ALL] main: Thread 1 is from /etc/motion/motion.conf
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video0 input -1
[0] [NTC] [ALL] main: Stream port 8081
[0] [NTC] [ALL] main: Waiting for threads to finish, pid: 1
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[1] [NTC] [VID] vid_v4lx_start: Using videodevice /dev/video0 and input -1
[1] [ALR] [VID] vid_v4lx_start: Failed to open video device /dev/video0:
[1] [WRN] [ALL] motion_init: Could not fetch initial image from camera Motion continues using width and height from config file(s)
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [STR] http_bindsock: motion-stream testing : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [STR] http_bindsock: motion-stream Bound : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [ALL] motion_init: Started motion-stream server in port 8081 auth Disabled
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 3 items
[0] [NTC] [STR] httpd_run: motion-httpd testing : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd Bound : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd/3.2.12+git20140228 running, accepting connections
[0] [NTC] [STR] httpd_run: motion-httpd: waiting for data on 127.0.0.1 port TCP 8080

Hence, it is now easy to build Arm-based Docker images on your Docker Desktop and then share them on DockerHub, so that anyone can pull the image on their Raspberry Pi and run it flawlessly.
