How to Deploy Apache Kafka on AWS Platform using Docker Swarm Mode?

Estimated Reading Time: 7 minutes

I am thrilled and excited to start a new open source project called “Pico”. Pico is a beta project targeted at object detection and analytics using Apache Kafka, Docker, Raspberry Pi & the AWS Rekognition Service. The whole idea of the Pico project is to simplify the object detection and analytics process using a handful of Docker containers. A cluster of Raspberry Pi nodes installed at various locations is coupled with camera modules and motion-detection sensors. Docker containers running on these Raspberry Pis turn the nodes into CCTV cameras. The captured video streams are published to Apache Kafka, and thanks to Kafka's replication factor the real-time data can be consumed from any of the five containers. A consumer container running on these nodes then forwards the data to AWS Rekognition, which analyses the real-time video and searches for on-screen objects against a collection of known objects.

In my past blog, I demonstrated how to convert a Raspberry Pi into a CCTV camera using a Docker container. The next target was to understand what Apache Kafka is and how to implement it on AWS Cloud so that the real-time data can be sent to the AWS Rekognition Service. I spent a considerable amount of time understanding the basics of Apache Kafka before jumping directly into Docker Compose to containerize the various services that fall under this software stack.

Apache Kafka is an open-source stream-processing software platform developed by LinkedIn and donated to the Apache Software Foundation. It is written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.

Apache Kafka is a distributed, partitioned, and replicated publish-subscribe messaging system that is used to send high volumes of data, in the form of messages, from one point to another. It replicates these messages across a cluster of servers in order to prevent data loss and allows both online and offline message consumption. This replication makes Kafka fault-tolerant in the presence of machine failures while still supporting low-latency message delivery. In a broader sense, Kafka is considered a unified platform which guarantees zero data loss and handles real-time data feeds.
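
To make this publish-subscribe flow concrete, the snippet below shows Kafka's bundled console producer and consumer exchanging messages through a broker (a minimal sketch; the broker address localhost:9092 and the topic name test are assumptions for illustration):

# Publish a couple of messages to the 'test' topic
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> hello pico
> frame-0001 captured

# Read the same messages back from the beginning of the topic
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
hello pico
frame-0001 captured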

Architecture of Apache Kafka

The overall architecture of Kafka is shown below.

It is composed of three server machines which together act as a cluster computing platform. In a typical Kafka cluster, each server is configured to behave as a single broker that handles the persistence and replication of message data. In other words, a typical Kafka cluster contains more than one broker. Essentially, the broker is the key component of a Kafka cluster: it is responsible for maintaining published data. Each broker instance can easily handle thousands of reads and writes per topic, thanks to its stateless behavior.



At a basic level, a Kafka broker uses topics to handle message data. A topic is first created and then divided into multiple partitions in order to balance load. The above diagram illustrates a topic divided into three partitions. Each partition has multiple offsets at which messages are stored. As an example, suppose the topic has a replication factor of 3; Kafka will then create three identical replicas of each partition of the topic and distribute them across the cluster. To balance load and maintain data replication, each broker stores one or more partition replicas. If there are N brokers and N partitions, each broker will store one partition.
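
As a concrete illustration, this is how such a topic could be created with the stock Kafka CLI (a hedged sketch; the topic name demo-topic and the localhost Zookeeper address are assumptions, and kafka-topics.sh ships with the Kafka distribution):

# Create a topic with 3 partitions, each replicated to 3 brokers
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic demo-topic

# Show which broker leads and which brokers replicate each partition
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic demo-topic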

What’s the role of Zookeeper?

Kafka uses Zookeeper to maintain cluster state. Zookeeper is a synchronization and coordination service for managing Kafka brokers, and its main functionality is to perform leader election across multiple broker instances. For each partition, one broker acts as the leader and the other brokers act as followers. The leader handles all reads and writes for its partition, while the followers simply replicate the leader's data. If the leader fails, one of the followers is automatically elected as the new leader.
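
You can peek at the state Zookeeper maintains using the zookeeper-shell utility bundled with Kafka (a sketch, assuming Zookeeper is listening on localhost:2181):

# List the IDs of the live brokers registered in the cluster
$ bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids

# Show which broker currently holds the controller (leader-election) role
$ bin/zookeeper-shell.sh localhost:2181 get /controller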

Benefits of Apache Kafka

Kafka has the following benefits.

  • Durability: Kafka allows messages to persist on the disk in order to prevent data loss. It uses a distributed commit log for replicating messages across the cluster, thus making it a durable system.
  • Scalability: It can easily be expanded without any downtime. Since a single Kafka cluster can act as the central backbone for a large organization, it can be elastically spread across multiple clusters.
  • Reliability: It is reliable over time, as it is a distributed, replicated, and fault-tolerant messaging system.
  • Efficiency: Kafka publishes and subscribes to messages efficiently, delivering high system throughput. It can store terabytes of messages without any performance impact.

In this blog post, I will showcase how to implement Apache Kafka on a 2-node Docker Swarm cluster running on AWS via Docker Desktop.

Pre-requisites:

  • Docker Desktop for Mac
  • AWS Free Tier Account
  • AWS CLI installed

Adding Your Credentials:

[Captains-Bay]🚩 >  cat ~/.aws/credentials
[default]
aws_access_key_id = XXXA 
aws_secret_access_key = XX

Verifying AWS Version

[Captains-Bay]🚩 >  aws --version
aws-cli/1.11.107 Python/2.7.10 Darwin/17.7.0 botocore/1.5.70

Setting up Environment Variables

[Captains-Bay]🚩 >  export VPC=vpc-ae59f0d6
[Captains-Bay]🚩 >  export REGION=us-west-2
[Captains-Bay]🚩 >  export SUBNET=subnet-827651c9
[Captains-Bay]🚩 >  export ZONE=a

Building up the First Node using Docker Machine

[Captains-Bay]🚩 >  docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-78a22900 --amazonec2-instance-type=t2.micro kafka-swarm-node1
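
The docker-machine ls output below shows two machines; the second node can be brought up with the same command, changing only the machine name (shown here as a sketch, with all other flags kept identical):

[Captains-Bay]🚩 >  docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-78a22900 --amazonec2-instance-type=t2.micro kafka-swarm-node2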

Listing out the Nodes

[Captains-Bay]🚩 >  docker-machine ls
NAME                ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER     ERRORS
kafka-swarm-node1   -        amazonec2   Running   tcp://35.161.106.158:2376           v18.09.6   
kafka-swarm-node2   -        amazonec2   Running   tcp://54.201.99.75:2376             v18.09.6 

Initializing Docker Swarm Manager Node

ubuntu@kafka-swarm-node1:~$ sudo docker swarm init --advertise-addr 172.31.53.71 --listen-addr 172.31.53.71:2377
Swarm initialized: current node (yui9wqfu7b12hwt4ig4ribpyq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-xxxxxmr075to2v3k-decb975h5g5da7xxxx 172.31.53.71:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Adding Worker Node

ubuntu@kafka-swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-2xjkynhin0n2zl7xxxk-decb975h5g5daxxxxxxxxn 172.31.53.71:2377
This node joined a swarm as a worker.

Verifying 2-Node Docker Swarm Mode Cluster

ubuntu@kafka-swarm-node1:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
yui9wqfu7b12hwt4ig4ribpyq *   kafka-swarm-node1   Ready               Active              Leader              18.09.6
vb235xtkejim1hjdnji5luuxh     kafka-swarm-node2   Ready               Active                                  18.09.6

Installing Docker Compose

curl -L https://github.com/docker/compose/releases/download/1.25.0-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   617    0   617    0     0   2212      0 --:--:-- --:--:-- --:--:--  2211
100 15.5M  100 15.5M    0     0  8693k      0  0:00:01  0:00:01 --:--:-- 20.1M
root@kafka-swarm-node1:/home/ubuntu/dockerlabs/solution/kafka-swarm# chmod +x /usr/local/bin/docker-compose

ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker-compose version
docker-compose version 1.25.0-rc1, build 8552e8e2
docker-py version: 4.0.1
CPython version: 3.7.3
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018

Building up Kafka Application

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/solution/kafka-swarm
ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker stack deploy -c docker-compose.yml mykafka
Creating network mykafka_default
Creating service mykafka_zkui
Creating service mykafka_broker
Creating service mykafka_manager
Creating service mykafka_producer
Creating service mykafka_zookeeper
ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$

Verifying Apache Kafka Stack

ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
mykafka             5                   Swarm

Verifying Apache Kafka Services

ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                                                     PORTS
t04p6i8zky4z        mykafka_broker      replicated          0/3                 qnib/plain-kafka:2018-04-25_1.1.0                                                         *:9092->9092/tcp
r5f0x9clnwix        mykafka_manager     replicated          1/1                 qnib/plain-kafka-manager:2018-04-25                                                       *:9000->9000/tcp
jzwvrt4df66b        mykafka_producer    replicated          3/3                 qnib/golang-kafka-producer:2018-05-01.12                                                  
09lkbevsktt9        mykafka_zkui        replicated          1/1                 qnib/plain-zkui@sha256:30c4aa1236ee90e4274a9059a5fa87de2ee778d9bfa3cb48c4c9aafe7cfa1a13   *:9090->9090/tcp
b1hqfk1vc4lu        mykafka_zookeeper   replicated          1/1                 qnib/plain-zookeeper:2018-04-25                                                           *:2181->2181/tcp
ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ 

In my next blog post, I will talk about the Docker Compose file which automates the overall Pico project, right from the video streams captured by Raspberry Pi to object detection and analytics via the AWS Rekognition Service. Stay tuned!

How to create a Local Private Docker Registry on Play with Docker in 5 Minutes?

Estimated Reading Time: 4 minutes

DockerHub is a service provided by Docker for finding and sharing container images with your team. It is the world’s largest repository of container images, with an array of content sources including container community developers, open source projects and independent software vendors (ISVs) building and distributing their code in containers. Users get access to free public repositories for storing and sharing images, or can choose a subscription plan for private repos.

One can easily pull Docker images from DockerHub and run applications in their environment flawlessly. But in case you want to set up a private registry, that is entirely possible too. In this blog post, I will demonstrate how to build a private registry on Play with Docker in just 5 minutes.

Caution – Please note that the Play with Docker platform is just for demo or training purposes. Instances are wiped after the 4-hour session limit, so save your data before it disappears.

Tested Infrastructure

Platform           Number of Instances   Reading Time
Play with Docker   1                     5 min

Pre-requisite

  • Create an account with DockerHub
  • Open PWD Platform on your browser
  • Click on Add New Instance on the left side of the screen to bring up Alpine OS instance on the right side

Create a directory to permanently store images.

$ mkdir -p /opt/registry/data

Authenticate with DockerHub

$ docker login

Start the registry container.

$ docker run -d \
  -p 5000:5000 \
  --name registry \
  -v /opt/registry/data:/var/lib/registry \
  --restart always \
  registry:2

Display running containers.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
3a056bf96c6d        registry:2          "/entrypoint.sh /etc…"   About an hour ago   Up About an hour    0.0.0.0:5000->5000/tcp   registry

Pull Debian Stretch image from official repository.

$ docker pull debian:stretch

stretch: Pulling from library/debian
723254a2c089: Pull complete
Digest: sha256:0a5fcee6f52d5170f557ee2447d7a10a5bdcf715dd7f0250be0b678c556a501b
Status: Downloaded newer image for debian:stretch

Tag local Debian Stretch image with an additional tag – local repository address.

$ docker tag debian:stretch localhost:5000/debian:stretch

Push image to the local repository.

[node1] (local) root@192.168.0.23 ~
$ docker push localhost:5000/debian:stretch
The push refers to repository [localhost:5000/debian]
90d1009ce6fe: Pushed
stretch: digest: sha256:38236c068c393272ad02db100e09cac36a5465149e2924a035ee60d6c60c38fe size: 529

Remove local images.

[node1] (local) root@192.168.0.23 ~
$ docker image remove debian:stretch
Untagged: debian:stretch
Untagged: debian@sha256:df6ebd5e9c87d0d7381360209f3a05c62981b5c2a3ec94228da4082ba07c4f05
[node1] (local) root@192.168.0.23 ~
$ docker image remove localhost:5000/debian:stretch
Untagged: localhost:5000/debian:stretch
Untagged: localhost:5000/debian@sha256:38236c068c393272ad02db100e09cac36a5465149e2924a035ee60d6c60c38fe
Deleted: sha256:4879790bd60d439cfe39c063660eef7af525d5f6f1cbb701a14c7cfc11cbfcf7

Pull Debian Stretch image from local repository.

[node1] (local) root@192.168.0.23 ~
$ docker pull localhost:5000/debian:stretch
stretch: Pulling from debian
54f7e8ac135a: Pull complete
Digest: sha256:38236c068c393272ad02db100e09cac36a5465149e2924a035ee60d6c60c38fe
Status: Downloaded newer image for localhost:5000/debian:stretch

List stored images.

[node1] (local) root@192.168.0.23 ~
$ docker image ls
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
localhost:5000/debian   stretch             4879790bd60d        12 days ago         101MB
registry                2                   2e2f252f3c88        2 months ago        33.3MB

Shared Local Registry

Create a directory to permanently store images.

$ mkdir -p /srv/registry/data

Create a directory to permanently store certificates and authentication data.

$ mkdir -p /srv/registry/security

Store the domain and intermediate certificates in the /srv/registry/security/registry.crt file and the private key in the /srv/registry/security/registry.key file. Use a valid certificate and do not waste time with a self-signed one. This step is required to use basic authentication.
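
If you do not have a certificate at hand, one way to obtain a valid one is Let's Encrypt via certbot (a hedged sketch; it assumes certbot is installed, port 80 is reachable from the internet, and registry.collabnix.com is your domain):

# Obtain a certificate using certbot's standalone web server
$ certbot certonly --standalone -d registry.collabnix.com

# Copy the certificate chain and private key where the registry expects them
$ cp /etc/letsencrypt/live/registry.collabnix.com/fullchain.pem /srv/registry/security/registry.crt
$ cp /etc/letsencrypt/live/registry.collabnix.com/privkey.pem /srv/registry/security/registry.key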

Install apache2-utils to use htpasswd utility.

[node1] (local) root@192.168.0.23 ~
$ apk add apache2-utils
OK: 302 MiB in 110 packages

Create initial username and password. The only supported password format is bcrypt.

$ : | sudo tee /srv/registry/security/htpasswd
[node1] (local) root@192.168.0.23 ~
$ echo "password" | sudo htpasswd -iB /srv/registry/security/htpasswd username
Adding password for user username

[node1] (local) root@192.168.0.23 ~
$ cat /srv/registry/security/htpasswd
username:$2y$05$q9R5FSNYpAppB4Vw/AGWb.RqMCGE8DmZ4q5HZC/1wC87oTWyvB9vy

Stop and Remove all old containers

$ docker stop $(docker ps -a -q)
3a056bf96c6d
$ docker rm -f $(docker ps -a -q) 
3a056bf96c6d 

Start the registry container.

[node1] (local) root@192.168.0.23 ~
$ docker run -d   -p 443:5000   --name registry   -v /srv/registry/data:/var/lib/registry   -v /srv/registry/security:/etc/security   -e REGISTRY_HTTP_TLS_CERTIFICATE=/etc/security/registry.crt   -e REGISTRY_HTTP_TLS_KEY=/etc/security/registry.key   -e REGISTRY_AUTH=htpasswd   -e REGISTRY_AUTH_HTPASSWD_PATH=/etc/security/htpasswd   -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm"   --restart always   registry:2
e7755af8cbd70ea84ab77237a87cb97fd1abb18c7726fbc116c40f081d3b7098

Display running containers.

[node1] (local) root@192.168.0.23 ~
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS        PORTS               NAMES
e7755af8cbd7        registry:2          "/entrypoint.sh /etc…"   About a minute ago   Restarting (1) 22 seconds ago                       registry

Pull Debian Stretch image from official repository.

$ docker pull debian:stretch

stretch: Pulling from library/debian
723254a2c089: Pull complete
Digest: sha256:0a5fcee6f52d5170f557ee2447d7a10a5bdcf715dd7f0250be0b678c556a501b
Status: Downloaded newer image for debian:stretch

Tag local Debian Stretch image with an additional tag – local repository address.

$ docker tag debian:stretch registry.collabnix.com/debian:stretch

This time you need to provide login credentials to use the local repository; pushing or pulling without logging in fails:

$ docker push registry.collabnix.com/debian:stretch

e27a10675c56: Preparing
no basic auth credentials
$ docker pull registry.collabnix.com/debian:stretch

Error response from daemon: Get https://registry.collabnix.com/v2/debian/manifests/stretch: no basic auth credentials

Log in to the local registry.

$ docker login --username username registry.collabnix.com
Password: ********

Login Succeeded

Push image to the local repository.

$ docker push registry.collabnix.com/debian:stretch
The push refers to repository [registry.collabnix.com/debian]
e27a10675c56: Pushed
stretch: digest: sha256:02741df16aee1b81c4aaff4c48d75cc2c308bade918b22679df570c170feef7c size: 529

Remove local images.

$ docker image remove debian:stretch

Untagged: debian:stretch
Untagged: debian@sha256:0a5fcee6f52d5170f557ee2447d7a10a5bdcf715dd7f0250be0b678c556a501b
$ docker image remove registry.collabnix.com/debian:stretch

Untagged: registry.collabnix.com/debian:stretch
Untagged: registry.sl.collabnix.com/debian@sha256:02741df16aee1b81c4aaff4c48d75cc2c308bade918b22679df570c170feef7c
Deleted: sha256:da653cee0545dfbe3c1864ab3ce782805603356a9cc712acc7b3100d9932fa5e
Deleted: sha256:e27a10675c5656bafb7bfa9e4631e871499af0a5ddfda3cebc0ac401dfe19382

Pull Debian Stretch image from local repository.

$ docker pull registry.collabnix.com/debian:stretch

stretch: Pulling from debian
723254a2c089: Pull complete
Digest: sha256:02741df16aee1b81c4aaff4c48d75cc2c308bade918b22679df570c170feef7c
Status: Downloaded newer image for registry.collabnix.com/debian:stretch

List stored images.

$ docker image ls

REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
registry                             2                   d1fd7d86a825        4 weeks ago         33.3MB
registry.collabnix.com/debian   stretch             da653cee0545        2 months ago        100MB
hello-world                          latest              f2a91732366c        2 months ago     

Running Docker Containers on EC2 A1 Instances powered by Arm-Based AWS Graviton Processors

Estimated Reading Time: 7 minutes

Two weeks back, I wrote a blog post on how developers can now build ARM containers on Docker Desktop using the docker buildx CLI plugin. Usually developers are restricted to building Arm-based applications directly on top of an Arm-based system. Using this plugin, developers can build their application for the Arm platform right on their (x86) laptop and then deploy it onto the Cloud flawlessly, without any cross-compilation pain.

Wait…Did you say “ARM containers on Cloud?”

Yes, you heard it right. It is possible to deploy Arm containers on the Cloud, thanks to the new Amazon EC2 A1 instances powered by custom AWS Graviton processors based on the Arm architecture, which bring Arm to the public cloud as a first-class citizen. Docker developers can now build ARM containers on the AWS Cloud Platform.

A Brief about AWS Graviton Processors

Amazon announced the availability of EC2 instances on its Arm-based servers during AWS re:Invent (December 2018). AWS Graviton processors are a new line of processors custom designed by AWS, targeted at building platform solutions for cloud applications running at scale. The Graviton-based instances are known as EC2 A1. These instances are targeted at scale-out workloads and applications such as container-based microservices, web sites, and scripting-language-based applications (e.g., Ruby, Python, etc.).

EC2 A1 instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, networking, and AMIs. Amazon Linux 2, Red Hat Enterprise Linux (RHEL), Ubuntu and ECS-optimized AMIs are available today for A1 instances. Built around Arm cores and making extensive use of custom-built silicon, the A1 instances are optimized for performance and cost.

In this blog post, I will showcase how to deploy containers on an AWS EC2 A1 instance using Docker Machine running on Docker Desktop for Windows.

Pre-requisites:

  • Click on “Register for Public Beta”. This will open up various options to test drive Docker products
  • Don’t forget to select the “Docker Desktop CE with Multi-Arch images (Arm Enabled) – Edge Release – Amazon Cloud Credits available for limited time” option.
  • Enter your details and submit the form.
  • You will see an option to sign up for credits for Amazon EC2 A1 instances via https://www.surveymonkey.com/r/DockerCon19AWS.
  • Click on Sign Up

Creating AWS Account

  • Go to aws.amazon.com and create a Free Tier account
  • By now, you must have received an email from Amazon with Free Credits of $50
  • Open https://aws.amazon.com/amazoncredits and add the Promo Code

Creating AWS A1 Instance

We will use Docker Desktop for Windows, which comes with Docker Machine installed, to bring up ARM instances quickly.

Go to My Security Credentials under your account and click “Access Keys” to display your Access Key IDs.


Run the commands below to set the environment variables ACCESS_KEY_ID and SECRET_ACCESS_KEY.

PS C:\Users\Ajeet_Raina> set ACCESS_KEY_ID=XXX
PS C:\Users\Ajeet_Raina> set SECRET_ACCESS_KEY=XX

Running Docker Machine to bring up our first Docker Node on AWS A1 ARM instance

Docker Desktop for Windows comes with Docker Machine by default and there is NO need to install it separately.

PS C:\Users\Ajeet_Raina> docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-0db180c518750ee4f  --amazonec2-instance-type=a1.medium arm-node1

By now, you should be able to see arm-node1 up and running on your AWS environment.


Listing out the ARM Nodes

PS C:\Users\Ajeet_Raina> docker-machine ls
NAME        ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER     ERRORS
arm-node1   -        amazonec2   Running   tcp://34.218.208.175:2376           v18.09.6
PS C:\Users\Ajeet_Raina>

Login into the first Node

You can use docker-machine ssh to login into the AWS EC2 A1 instance directly.

PS C:\Users\Ajeet_Raina> docker-machine ssh arm-node1
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-1028-aws aarch64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu May 16 04:35:16 UTC 2019

  System load:  0.06              Processes:              116
  Usage of /:   9.1% of 15.34GB   Users logged in:        0
  Memory usage: 10%               IP address for ens5:    172.31.60.52
  Swap usage:   0%                IP address for docker0: 172.17.0.1


  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

178 packages can be updated.
86 updates are security updates.




This node comes with Docker 18.09.6 installed.

ubuntu@arm-node1:~$ sudo docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:40:48 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 02:00:10 2019
  OS/Arch:          linux/arm64
  Experimental:     false
ubuntu@arm-node1:~$




Checking the Node IP

PS C:\Users\Ajeet_Raina> docker-machine ip arm-node1
34.218.208.175

Running ARM-based Portainer v1.20.2 Container

Before we run Portainer, we need to ensure that port 9000 is open and accessible.

Click on Actions > Inbound Rules and add port 9000 for Portainer. Allowing “All TCP” from 0-65535 is for testing purposes only and is not recommended for a production environment.
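
If you prefer the CLI over the console, the same inbound rule can be added with the AWS CLI (a sketch; it assumes your instances use the default docker-machine security group):

PS C:\Users\Ajeet_Raina> aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol tcp --port 9000 --cidr 0.0.0.0/0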


ubuntu@ip-172-31-62-91:~$ sudo docker run --rm mplatform/mquery portainer/portainer
Unable to find image 'mplatform/mquery:latest' locally
latest: Pulling from mplatform/mquery
db6020507de3: Pull complete
713cdc222639: Pull complete
Digest: sha256:e15189e3d6fbcee8a6ad2ef04c1ec80420ab0fdcf0d70408c0e914af80dfb107
Status: Downloaded newer image for mplatform/mquery:latest
Image: portainer/portainer
 * Manifest List: Yes
 * Supported platforms:
   - linux/amd64
   - linux/arm
   - linux/arm64
   - linux/ppc64le
   - windows/amd64:10.0.14393.2551
   - windows/amd64:10.0.16299.967
   - windows/amd64:10.0.17134.590
   - windows/amd64:10.0.17763.253

Initialising Docker Swarm Mode on Arm-based EC2 A1 instances

Follow the steps below to set up a 2-node Docker Swarm Mode cluster on the AWS Platform using Docker Machine.

PS C:\Users\Ajeet_Raina> docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-0db180c518750ee4f --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 7946/udp --amazonec2-open-port 4789/udp --amazonec2-open-port 8080 --amazonec2-open-port 443 --amazonec2-open-port 80 --amazonec2-subnet-id=subnet-827651c9 --amazonec2-instance-type=a1.medium arm-swarm-node2
Running pre-create checks...
Creating machine...
(arm-swarm-node2) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

You can open all ports on AWS using the below command:

PS C:\Users\Ajeet_Raina> aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol -1 --cidr 0.0.0.0/0

Initialising Docker Swarm Manager


PS C:\Users\Ajeet_Raina> docker-machine ssh arm-swarm-node1 sudo docker swarm init
Swarm initialized: current node (oqk875mcldbn28ce2rip31fg5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-6bw0zfd7vjpXX17usjhccjlg3rs 172.31.50.5:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.


Adding Worker Node

PS C:\Users\Ajeet_Raina> docker-machine ssh arm-swarm-node2 sudo docker swarm join --token SWMTKN-1-6bw0zfXXXhccjlg3rs 172.31.50.5:2377
This node joined a swarm as a worker.

Verifying 2-Node Swarm Cluster

ubuntu@arm-swarm-node1:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
oqk875mcldbn28ce2rip31fg5 *   arm-swarm-node1     Ready               Active              Leader              18.09.6
f3rwuj6f6mghte3630car83ia     arm-swarm-node2     Ready               Active                                  18.09.6
ubuntu@arm-swarm-node1:~$

Building Up Portainer Application Stack
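
The deployment below uses portainer-agent-stack.yml, the Swarm stack file published by Portainer. If it is not already on the node, you can fetch it first (a sketch; this is the download URL Portainer documented for the 1.x release and may have changed since):

ubuntu@ip-172-31-62-91:~$ curl -L https://downloads.portainer.io/portainer-agent-stack.yml -o portainer-agent-stack.yml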

ubuntu@ip-172-31-62-91:~$ sudo docker stack deploy --compose-file=portainer-agent-stack.yml portainer
Creating network portainer_agent_network
Creating service portainer_portainer
Creating service portainer_agent
ubuntu@ip-172-31-62-91:~$

Listing out Portainer Stack

ubuntu@arm-node1:~$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
portainer           2                   Swarm
ubuntu@arm-node1:~$ sudo docker service ls
ID                  NAME                  MODE                REPLICAS            IMAGE                        PORTS
k5651aoxgqhk        portainer_agent       global              1/1                 portainer/agent:latest       
yoembxxj25k8        portainer_portainer   replicated          1/1                 portainer/portainer:latest   *:9000->9000/tcp

Viewing Portainer Dashboard

Portainer UI showing a Single Node Swarm Mode Cluster

In a future post, I am going to showcase how I leveraged the buildx CLI plugin & AWS EC2 A1 instances to build the in-house project called “Pico” for deep learning using Apache Kafka, IoT & the Amazon Rekognition Service. Stay tuned!

How I built ARM based Docker Images for Raspberry Pi using buildx CLI Plugin on Docker Desktop?

Estimated Reading Time: 11 minutes


Two weeks back at DockerCon 2019 San Francisco, Docker & Arm demonstrated the integration of Arm capabilities into Docker Desktop Community for the first time. Docker & Arm unveiled a go-to-market strategy to accelerate Cloud, Edge & IoT development. The two companies plan to streamline app development tools for cloud, edge, and Internet of Things environments built on the Arm platform. The tools include AWS EC2 A1 instances based on AWS' Graviton processors (which feature 64-bit Arm Neoverse cores). Docker, in collaboration with Arm, will make new Docker-based solutions available to the Arm ecosystem as an extension of Arm's server-tailored Neoverse platform, which they say will let developers more easily leverage containers, both remote and on-premises, which is going to be pretty cool.

This integration is today available to the approximately 2 million developers using Docker Desktop Community Edition. As part of the Docker Captains programme, we were lucky to get early access to this build during the Docker Captain Summit, which took place on the first day of DockerCon 2019.

Introducing buildx

Under Docker 19.03.0 Beta 3, there is a new experimental CLI plugin called “buildx”. It is a pretty new Docker CLI plugin that extends the docker build command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build with many new features, like creating scoped builder instances and building against multiple nodes concurrently. As per discussion with Docker staff, the “x” in buildx might get dropped in the near future, and features and flags are subject to change until the stable release is announced.

Buildx always builds using the BuildKit engine and does not require the DOCKER_BUILDKIT=1 environment variable to start builds. The buildx build command supports the features available for docker build, including the new features in Docker 19.03 such as outputs configuration, inline build caching and specifying a target platform. In addition, buildx supports new features not yet available for regular docker build, like building manifest lists, distributed caching, and exporting build results to OCI image tarballs.

How does a Builder Instance work?

Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.

New instances can be created with the docker buildx create command. This will create a new builder instance with a single node based on your current configuration. To use a remote node you can specify the DOCKER_HOST or remote context name while creating the new builder. After creating a new instance you can manage its lifecycle with the inspect, stop and rm commands and list all available builders with ls. After creating a new builder you can also append new nodes to it.

To switch between different builders use docker buildx use <name>. After running this command the build commands would automatically keep using this builder.

Docker 19.03 also features a new docker context command that can be used for giving names to remote Docker API endpoints. Buildx integrates with docker context so that all of your contexts automatically get a default builder instance. While creating a new builder instance or when adding a node to it you can also set the context name as the target.
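
As an illustration, here is how a remote endpoint could be named with docker context and then used as a buildx builder (a hedged sketch; the context name my-remote and the SSH host are hypothetical):

[Captains-Bay]🚩 >  docker context create my-remote --docker "host=ssh://ubuntu@remote-host"
[Captains-Bay]🚩 >  docker buildx create --name remotebuilder --use my-remote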

Enough theory!!! Do you really want to see it in action? In this blog post, I will showcase how I built ARM-based Docker images for my tiny Raspberry Pi cluster using the `docker buildx` utility running on my Docker Desktop for Mac.

Installing Docker Desktop for Mac 2.0.4.1

Open up https://www.docker.com/products/docker-desktop for early access to Docker Desktop. Do switch to the Edge release in case you’ve installed the stable version of Docker Desktop.

As of today, Docker Desktop 2.0.4.1 Edge Community Edition comes with Engine 19.03.0 Beta 3, Kubernetes v1.14.1 and Compose 1.24.0.

Verifying the docker buildx CLI

[Captains-Bay]🚩 >  docker buildx --help

Usage:	docker buildx COMMAND

Build with BuildKit

Management Commands:
  imagetools  Commands to work on images in registry

Commands:
  bake        Build from a file
  build       Start a build
  create      Create a new builder instance
  inspect     Inspect current builder instance
  ls          List builder instances
  rm          Remove a builder instance
  stop        Stop builder instance
  use         Set the current builder instance
  version     Show buildx version information 

Run 'docker buildx COMMAND --help' for more information on a command.

Listing all builder instances and the nodes for each instance

[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                  
  default default         running linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6

We are currently using the default builder, which is basically the old builder.

Creating a New Builder Instance

The `docker buildx create` command makes a new builder instance pointing to a docker context or endpoint, where context is the name of a context from `docker context ls` and endpoint is the address of a docker socket (e.g. the DOCKER_HOST value).

By default, the current docker configuration is used for determining the context/endpoint value.

Builder instances are isolated environments where builds can be invoked. All docker contexts also get the default builder instance.

Let’s create a new builder, which gives us access to some new multi-arch features.

[Captains-Bay]🚩 >  docker buildx create --help

Usage:	docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]

Create a new builder instance

Options:
      --append                 Append a node to builder instead of changing it
      --driver string          Driver to use (eg. docker-container)
      --leave                  Remove a node from builder instead of changing it
      --name string            Builder instance name
      --node string            Create/modify node with given name
      --platform stringArray   Fixed platforms for current node
      --use                    Set the current builder instance

Creating a new builder called “testbuilder”

[Captains-Bay]🚩 >  docker buildx create --name testbuilder
testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder    docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default *      docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Switching to “testbuilder” builder instance

[Captains-Bay]🚩 >  docker buildx use testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder *  docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default        docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Here I created a new builder instance with the name testbuilder, switched to it, and inspected it. Note that --bootstrap isn’t needed; it just starts the build container immediately. Next we will test the workflow, making sure we can build, push, and run multi-arch images.

What is --bootstrap all about?

The `docker buildx inspect --bootstrap` command ensures that the builder is running before inspecting it. If the driver is docker-container, then --bootstrap starts the BuildKit container and waits until it is operational. Bootstrapping is done automatically during build, so it is not strictly necessary. The same BuildKit container is used during the lifetime of the associated builder node (as displayed in buildx ls).

[Captains-Bay]🚩 >  docker buildx inspect --bootstrap
[+] Building 22.4s (1/1) FINISHED                                                                                        
 => [internal] booting buildkit                                                                                    22.4s
 => => pulling image moby/buildkit:master                                                                          21.5s
 => => creating container buildx_buildkit_testbuilder0                                                              0.9s
Name:   testbuilder
Driver: docker-container

Nodes:
Name:      testbuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 >  

Authenticating with Dockerhub

[Captains-Bay]🚩 >  docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password: 
Login Succeeded

Cloning the Repository

[Captains-Bay]🚩 >  git clone https://github.com/collabnix/docker-cctv-raspbian
Cloning into 'docker-cctv-raspbian'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 20), reused 32 (delta 14), pack-reused 0
Unpacking objects: 100% (47/47), done.

Peeping into Dockerfile

[Captains-Bay]🚩 >  cat Dockerfile 
FROM resin/rpi-raspbian:latest

RUN apt update && apt upgrade && apt install motion
RUN mkdir /mnt/motion && chown motion /mnt/motion
COPY motion.conf /etc/motion/motion.conf

VOLUME /mnt/motion
EXPOSE 8081
ENTRYPOINT ["motion"]
[Captains-Bay]🚩 >

Building ARM-based Docker Image

[Captains-Bay]🚩 >  docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t ajeetraina/docker-cctv-raspbian --push .
[+] Building 254.2s (12/21)                                                                                              
[+] Building 2174.9s (22/22) FINISHED                                                                                    
 => [internal] load build definition from Dockerfile                                                                0.1s
 => => transferring dockerfile: 268B                                                                                0.0s
 => [internal] load .dockerignore                                                                                   0.1s
 => => transferring context: 2B                                                                                     0.0s
 => [linux/arm/v7 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                   8.8s
 => [linux/arm64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a99  0.1s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [internal] load build context                                                                                   0.1s
 => => transferring context: 27.72kB                                                                                0.0s
 => [linux/arm64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  17.3s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => => sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5f88d055b 2.81kB / 2.81kB                      0.0s
 => => sha256:67a3f53689ffd00b59f9a7b7f8af0da5b36c9e11e12ba43b35969586e0121303 1.56kB / 1.56kB                      2.3s
 => => sha256:ef16d706debb7a56d89bf14958aca2da76bd3d0a42a6762c37302c9527a47d10 305B / 305B                          3.5s
 => => sha256:2734b459704280a5238405384487095392e5414d973554b27545cec0e984a0f2 256B / 256B                          3.1s
 => => sha256:9d5b46a00008070b009b685af5846c225ad170f38768416b63917ea7ac94062d 7.75kB / 7.75kB                      4.4s
 => => sha256:5fb2b6f7daac67a8465e8201c76828550b811619be473ab594972da35b4b7ee7 354B / 354B                          4.4s
 => => sha256:a04aef7b1e2fc4905fea2a685c63bc1eeb94c86149fd1286459206c22794e610 177B / 177B                          4.4s
 => => sha256:3eae71a21b9db8bd54d6a189b7587a10f52d6ffee3d868705a89974fe574c268 234B / 234B                          2.3s
 => => sha256:28f1ee4d4d5aa8bb96b3ba6a5e2177b2b58b595edaf601b9aae69fd02f78a6c6 7.48kB / 7.48kB                      0.0s
 => => sha256:6bddb275e70b0083d76083d01be2c3da11f67f526a123adcc980c5a3260d46e8 51.54MB / 51.54MB                   13.4s
 => => sha256:873755612f304f066db70c4015fdeadc9a81c0e6e25fb1aa833aeba80a7aeffc 229B / 229B                          3.9s
 => => sha256:78ba3f0466312c019467b178339a89120f2dce298d7c3d5e6840b3566749f5c0 250B / 250B                          3.2s
 => => sha256:b98db37cf25231afe68852e2cb21fb8aa465bb7a32ecc9afc6dec100ec0ba9b0 367B / 367B                          3.5s
 => => sha256:6b6c68e7ac8567569cee8da92431637e561e7aef5addb70373401d0887447a00 363B / 363B                          4.4s
 => => unpacking docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15  3.8s
 => [linux/arm/v7 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  0.0s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [linux/amd64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 2/4] RUN cat /.resin/deprecation-warning                                                          0.5s
 => [linux/arm64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 3/4] RUN apt update && apt upgrade && apt install motion                                        481.2s
 => [linux/amd64 3/4] RUN apt update && apt upgrade && apt install motion                                        1431.5s
 => [linux/arm64 3/4] RUN apt update && apt upgrade && apt install motion                                        1665.5s
 => [linux/arm/v7 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                            0.6s
 => [linux/arm/v7 5/4] COPY motion.conf /etc/motion/motion.conf                                                     0.1s
 => [linux/amd64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/amd64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => [linux/arm64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/arm64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => exporting to image                                                                                            481.7s
 => => exporting layers                                                                                            19.6s
 => => exporting manifest sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702                   0.0s
 => => exporting config sha256:ebb6e951b8f206d258e8552313633c35f4fe4c82fe7a7fcc51475022ae089c2d                     0.0s
 => => exporting manifest sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632                   0.0s
 => => exporting config sha256:a83b02bf9cbcb408110c4b773f4e5edde04f35851e2a42d4ff1d947c132bed6d                     0.0s
 => => exporting manifest sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2                   0.0s
 => => exporting config sha256:86c2e637fe39036d51779c3bb5f800d3e8da14122907c5fd03bdace47b03bb38                     0.0s
 => => exporting manifest list sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54              0.0s
 => => pushing layers                                                                                             455.4s
 => => pushing manifest for docker.io/ajeetraina/docker-cctv-raspbian:latest                                        6.4s

Awesome. It worked! The --platform flag enabled buildx to generate Linux images for Intel 64-bit, Arm 32-bit, and Arm 64-bit architectures. The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub. Cool, isn’t it?

What is this ImageTools all about?

Imagetools contains commands for working with manifest lists in the registry. These commands are useful for inspecting multi-platform build results. Its create sub-command builds a new manifest list based on source manifests. The source manifests can be manifest lists or single-platform distribution manifests, and must already exist in the registry where the new manifest is created. If only one source is specified, create performs a carbon copy.
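
For instance, a multi-arch tag could be assembled from two already-pushed single-platform images like this (a hedged sketch; the repository name and digest placeholders are illustrative, not taken from this build):

[Captains-Bay]🚩 >  docker buildx imagetools create -t ajeetraina/demo:multi ajeetraina/demo@sha256:<amd64-digest> ajeetraina/demo@sha256:<arm64-digest>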

Let’s use imagetools to inspect what we did.

[Captains-Bay]🚩 >  docker buildx imagetools inspect docker.io/ajeetraina/docker-cctv-raspbian:latest
Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
           
Manifests: 
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v7
[Captains-Bay]🚩 >

The image is now available on Docker Hub with the tag ajeetraina/docker-cctv-raspbian:latest. You can run a container from that image on Intel laptops, Amazon EC2 A1 instances, Raspberry Pis, and more. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 A1 instances run 64-bit Arm.

Verifying this Image on Raspberry Pi Node

root@node2:/home/pi# cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
root@node2:/home/pi#

Verify the Docker version

root@node2:/home/pi# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:57:21 2018
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:57 2018
  OS/Arch:          linux/arm
  Experimental:     false

Running the Docker Image

root@node2:/home/pi# docker pull ajeetraina/docker-cctv-raspbian:latest
latest: Pulling from ajeetraina/docker-cctv-raspbian
6bddb275e70b: Already exists
ef16d706debb: Already exists
2734b4597042: Already exists
873755612f30: Already exists
67a3f53689ff: Already exists
5fb2b6f7daac: Already exists
a04aef7b1e2f: Already exists
9d5b46a00008: Already exists
b98db37cf252: Already exists
78ba3f046631: Already exists
6b6c68e7ac85: Already exists
3eae71a21b9d: Already exists
db08b1848bc3: Pull complete
9c84b2387994: Pull complete
3579f37869a5: Pull complete
Digest: sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
Status: Downloaded newer image for ajeetraina/docker-cctv-raspbian:latest
root@node2:/home/pi#


Inspecting the Docker Image

 docker image inspect ajeetraina/docker-cctv-raspbian:latest | grep arm
                "QEMU_CPU=arm1176",
        "Architecture": "arm",

Running the Docker Container

 docker run -dit -p 8000:8000 ajeetraina/docker-cctv-raspbian:latest
c43f25cd06672883478908e71ad6f044766270fcbf413e69ad63c8020610816f
root@node2:/home/pi# docker ps
CONTAINER ID        IMAGE                                    COMMAND             CREATED             STATUS              PORTS                              NAMES
c43f25cd0667        ajeetraina/docker-cctv-raspbian:latest   "motion"            7 seconds ago       Up 2 seconds        0.0.0.0:8000->8000/tcp, 8081/tcp   zen_brown

Verifying the Logs

root@node2:/home/pi# docker logs -f c43
[0] [NTC] [ALL] conf_load: Processing thread 0 - config file /etc/motion/motion.conf
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[0] [NTC] [ALL] motion_startup: Logging to syslog
[0] [NTC] [ALL] motion_startup: Using log type (ALL) log level (NTC)
[0] [NTC] [ENC] ffmpeg_init: ffmpeg LIBAVCODEC_BUILD 3670016 LIBAVFORMAT_BUILD 3670272
[0] [NTC] [ALL] main: Thread 1 is from /etc/motion/motion.conf
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video0 input -1
[0] [NTC] [ALL] main: Stream port 8081
[0] [NTC] [ALL] main: Waiting for threads to finish, pid: 1
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[1] [NTC] [VID] vid_v4lx_start: Using videodevice /dev/video0 and input -1
[1] [ALR] [VID] vid_v4lx_start: Failed to open video device /dev/video0:
[1] [WRN] [ALL] motion_init: Could not fetch initial image from camera Motion continues using width and height from config file(s)
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [STR] http_bindsock: motion-stream testing : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [STR] http_bindsock: motion-stream Bound : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [ALL] motion_init: Started motion-stream server in port 8081 auth Disabled
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 3 items
[0] [NTC] [STR] httpd_run: motion-httpd testing : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd Bound : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd/3.2.12+git20140228 running, accepting connections
[0] [NTC] [STR] httpd_run: motion-httpd: waiting for data on 127.0.0.1 port TCP 8080

Hence, it is now so easy to build ARM-based Docker images on your Docker Desktop and share them on DockerHub, so that anyone can pull the image on their Raspberry Pi and run it flawlessly.

New Docker CLI API Support for NVIDIA GPUs under Docker Engine 19.03.0 Pre-Release

Estimated Reading Time: 7 minutes

Let’s talk about Docker in a GPU-Accelerated Data Center…

Docker is the leading container platform which provides both hardware and software encapsulation by allowing multiple containers to run on the same system at the same time, each with their own set of resources (CPU, memory, etc.) and their own dedicated set of dependencies (library versions, environment variables, etc.). Docker can now be used to containerize GPU-accelerated applications. In case you're new to GPU-accelerated computing, it is basically the use of a graphics processing unit (GPU) to accelerate high performance computing workloads and applications. This means you can easily containerize and isolate an accelerated application without any modifications and deploy it on any supported GPU-enabled infrastructure.

Yes, you heard it right. Today Docker natively supports NVIDIA GPUs within containers. This is possible with the Docker 19.03.0 Beta 3 release, the latest pre-release, which is available for download here. With this release, Docker can now flawlessly be used to containerize GPU-accelerated applications.

Let’s go back to 2017…

Two years back, I wrote a blog post titled “Running NVIDIA Docker in a GPU Accelerated Data center”. nvidia-docker is an open source project hosted on GitHub that provides driver-agnostic CUDA images and a docker command-line wrapper that mounts the user-mode components of the driver and the GPUs (character devices) into the container at launch. With this enablement, the NVIDIA Docker plugin enabled deployment of GPU-accelerated applications across any Linux GPU server with NVIDIA Docker support. In the same blog post, I showcased how to get started with nvidia-docker to interact with an NVIDIA GPU system and then looked at a few interesting applications that can be built for a GPU-accelerated data center.

With the recent 19.03.0 beta release, you no longer need to spend time downloading the nvidia-docker plugin or rely on the nvidia-docker wrapper to launch GPU containers. You can now use the --gpus option with the docker run CLI to allow containers to use GPU devices seamlessly.
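
To give a flavour of the new flag, the commands below show its typical usage (a sketch based on the 19.03 CLI; the nvidia/cuda image tag is an assumption):

# Expose all available GPUs to the container and run nvidia-smi
$ docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

# Expose two GPUs
$ docker run --gpus 2 nvidia/cuda:9.0-base nvidia-smi

# Expose specific GPUs by device index
$ docker run --gpus '"device=1,2"' nvidia/cuda:9.0-base nvidia-smi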

In this blog post, I will showcase how to get started with this new CLI API for NVIDIA GPUs.

Prerequisite:

  • Ubuntu 18.04 instance running on Google Cloud Platform
  • Verify that NVIDIA card is detected
$ lspci -vv | grep -i nvidia
00:04.0 3D controller: NVIDIA Corporation GP100GL [Tesla P100 PCIe 16GB] (rev a1)
        Subsystem: NVIDIA Corporation GP100GL [Tesla P100 PCIe 16GB]
        Kernel modules: nvidiafb

Installing NVIDIA drivers first

$ apt-get install ubuntu-drivers-common \
	&& sudo ubuntu-drivers autoinstall
- Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.18.0-1009-gcp/updates/dkms/

nvidia-uvm.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.18.0-1009-gcp/updates/dkms/

depmod...

DKMS: install completed.
Setting up xserver-xorg-video-nvidia-390 (390.116-0ubuntu0.18.10.1) ...
Processing triggers for libc-bin (2.28-0ubuntu1) ...
Processing triggers for systemd (239-7ubuntu10.13) ...
Setting up nvidia-driver-390 (390.116-0ubuntu0.18.10.1) ...
Setting up adwaita-icon-theme (3.30.0-0ubuntu1) ...
update-alternatives: using /usr/share/icons/Adwaita/cursor.theme to provide /usr/share/icons/default/index.theme (x-cursor-theme) in auto mode
Setting up humanity-icon-theme (0.6.15) ...
Setting up libgtk-3-0:amd64 (3.24.4-0ubuntu1.1) ...
Setting up libgtk-3-bin (3.24.4-0ubuntu1.1) ...
Setting up policykit-1-gnome (0.105-6ubuntu2) ...
Setting up screen-resolution-extra (0.17.3build1) ...
Setting up ubuntu-mono (16.10+18.10.20181005-0ubuntu1) ...
Setting up nvidia-settings (390.77-0ubuntu1) ...
Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.38.0+dfsg-6) ...
Processing triggers for initramfs-tools (0.131ubuntu15.1) ...
update-initramfs: Generating /boot/initrd.img-4.18.0-1009-gcp
cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries 
    nor crypto modules. If that's on purpose, you may want to uninstall the 
    'cryptsetup-initramfs' package in order to disable the cryptsetup initramfs 
    integration and avoid this warning.
Processing triggers for libc-bin (2.28-0ubuntu1) ...
Processing triggers for dbus (1.12.10-1ubuntu2) ...

Go ahead and reboot the system

$ reboot

Follow the below steps once the Ubuntu instance comes back.

Installing NVIDIA Container Runtime

Create a file named nvidia-container-runtime-script.sh and save it

$ cat nvidia-container-runtime-script.sh
 
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update

Execute the script

sh nvidia-container-runtime-script.sh

OK
deb https://nvidia.github.io/libnvidia-container/ubuntu18.04/$(ARCH) /
deb https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/$(ARCH) /
Hit:1 http://archive.canonical.com/ubuntu bionic InRelease
Get:2 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  InRelease [1139 B]                
Get:3 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  InRelease [1136 B]           
Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease                                       
Get:5 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  Packages [4076 B]                 
Get:6 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  Packages [3084 B]            
Hit:7 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic InRelease
Hit:8 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:9 http://us-east4-c.gce.clouds.archive.ubuntu.com/ubuntu bionic-backports InRelease
Fetched 9435 B in 1s (17.8 kB/s)                   
Reading package lists... Done
$ apt-get install nvidia-container-runtime
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages were automatically installed and are no longer required:
  grub-pc-bin libnuma1
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
Get:1 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  libnvidia-container1 1.0.2-1 [59.1 kB]
Get:2 https://nvidia.github.io/libnvidia-container/ubuntu18.04/amd64  libnvidia-container-tools 1.0.2-1 [15.4 kB]
Get:3 https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/amd64  nvidia-container-runtime-hook 1.4.0-1 [575 kB]

...
Unpacking nvidia-container-runtime (2.0.0+docker18.09.6-3) ...
Setting up libnvidia-container1:amd64 (1.0.2-1) ...
Setting up libnvidia-container-tools (1.0.2-1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Setting up nvidia-container-runtime-hook (1.4.0-1) ...
Setting up nvidia-container-runtime (2.0.0+docker18.09.6-3) ...
which nvidia-container-runtime-hook
/usr/bin/nvidia-container-runtime-hook

Installing Docker 19.03 Beta 3 Test Build

curl -fsSL https://test.docker.com -o test-docker.sh 

Execute the script

sh test-docker.sh

Verifying Docker Installation

$ docker version
Client:
 Version:           19.03.0-beta3
 API version:       1.40
 Go version:        go1.12.4
 Git commit:        c55e026
 Built:             Thu Apr 25 02:58:59 2019
 OS/Arch:           linux/amd64
 Experimental:      false
Server:
 Engine:
  Version:          19.03.0-beta3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.4
  Git commit:       c55e026
  Built:            Thu Apr 25 02:57:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Verifying the --gpus option under docker run

$ docker run --help | grep -i gpus
      --gpus gpu-request               GPU devices to add to the container ('all' to pass all GPUs)
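
The --gpus flag accepts more than just all. Below is a quick sketch of the common forms; the device indices are illustrative and depend on your machine:

$ docker run --gpus all ubuntu nvidia-smi              # expose every GPU
$ docker run --gpus 2 ubuntu nvidia-smi                # request any 2 GPUs
$ docker run --gpus '"device=0,1"' ubuntu nvidia-smi   # pin specific GPUs by index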

Running an Ubuntu container which leverages GPUs

 $ docker run -it --rm --gpus all ubuntu nvidia-smi
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
f476d66f5408: Pull complete 
8882c27f669e: Pull complete 
d9af21273955: Pull complete 
f5029279ec12: Pull complete 
Digest: sha256:d26d529daa4d8567167181d9d569f2a85da3c5ecaf539cace2c6223355d69981
Status: Downloaded newer image for ubuntu:latest
Tue May  7 15:52:15 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   39C    P0    22W /  75W |      0MiB /  7611MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
:~$ 

Troubleshooting:

Did you encounter the below error message?

$ docker run -it --rm --gpus all debian
docker: Error response from daemon: linux runtime spec devices: could not select device driver "" with capabilities: [[gpu]].

The above error means that NVIDIA could not properly register with Docker. What it actually means is that the drivers are not properly installed on the host. It could also mean that the NVIDIA container tools were installed without restarting the Docker daemon: you need to restart the Docker daemon.

I suggest you go back and verify whether nvidia-container-runtime is installed, or restart the Docker daemon.
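
A minimal recovery sketch, assuming a systemd-based host:

# Restart the daemon so the NVIDIA runtime hook gets picked up
$ sudo systemctl restart docker

# Verify that the hook is available on the PATH
$ which nvidia-container-runtime-hook
/usr/bin/nvidia-container-runtime-hook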

Listing out GPU devices

$ docker run -it --rm --gpus all ubuntu nvidia-smi -L
GPU 0: Tesla P4 (UUID: GPU-fa974b1d-3c17-ed92-28d0-805c6d089601)
$ docker run -it --rm --gpus all ubuntu nvidia-smi --query-gpu=index,name,uuid,serial --format=csv
index, name, uuid, serial
0, Tesla P4, GPU-fa974b1d-3c17-ed92-28d0-805c6d089601, 0325017070224

A Quick Look at NVIDIA Deep Learning..

The NVIDIA Deep Learning GPU Training System, a.k.a DIGITS, is a webapp for training deep learning models. It puts the power of deep learning into the hands of engineers & data scientists. It can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. The currently supported frameworks are: Caffe, Torch, and Tensorflow.

DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment. DIGITS is completely interactive so that data scientists can focus on designing and training networks rather than programming and debugging.

To test-drive DIGITS, you can get it up and running in a single Docker container:

$ docker run -itd --gpus all -p 5000:5000 nvidia/digits

You can open up a web browser and verify that it is running at the below address:

w3m http://<dockerhostip>:5000
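
If w3m is not installed, a plain curl check works as well (a hedged alternative; it assumes the DIGITS container is up and port 5000 is reachable from the same host):

$ curl -I http://localhost:5000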

Verifying again with nvidia-smi

$ docker run -it --rm --gpus all ubuntu nvidia-smi
Tue May  7 16:27:37 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.116                Driver Version: 390.116                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   51C    P0    24W /  75W |    129MiB /  7611MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

If you want to see it in action, here’s a quick video

Can I use Docker compose for NVIDIA GPU?

Not yet. I have raised a feature request under this link a few minutes back.

Special Thanks to Tibor Vass, Docker Staff Engineer for reviewing this blog.

Sysctl Support for Docker Swarm Cluster for the first time in Docker 19.03.0 Pre-Release

Estimated Reading Time: 7 minutes

Docker CE 19.03.0 Beta 1 went public two weeks back. It was the first release to arrive with sysctl support for Docker Swarm Mode. This is definitely great news for popular communities like Elastic Stack, Redis etc. as they rely on tuning kernel parameters to get rid of memory exceptions. For example, Elasticsearch uses a mmapfs directory by default to store its indices. The default operating system limit on mmap counts is likely to be too low, which may result in out of memory exceptions, hence one needs to increase the limit using the sysctl tool. Great to see that Docker Inc. acknowledged the fact that kernel tuning is sometimes required and provides explicit support under the Docker 19.03.0 Pre-Release. Great job!

Wait..Do I really need sysctl?

Say you have deployed your application on Docker Swarm. It’s pretty simple and it’s working great. Your application is growing day by day and now you just need to scale it. How are you going to do it? The simple answer is: docker service scale app=<number of tasks>. Surely, it is possible today, but your containers can quickly hit kernel limits. One of the most popular kernel parameters is net.core.somaxconn. This parameter represents the maximum number of connections that can be queued for acceptance. The default value on Linux is 128, which is rather low.

The Linux kernel is flexible, and you can even modify the way it works on the fly by dynamically changing some of its parameters, thanks to the sysctl command. The sysctl program allows you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users. Sysctl basically provides an interface that allows you to examine and change several hundred kernel parameters in Linux or BSD. Changes take effect immediately, and there’s even a way to make them persist after a reboot. By using sysctl judiciously, you can optimize your box without having to recompile the kernel and get the results immediately.
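
Here is a quick sketch of sysctl in action on the host, taking net.core.somaxconn as the example:

# Read the current value
$ sysctl net.core.somaxconn
net.core.somaxconn = 128

# Change it for the running kernel (lost on reboot)
$ sudo sysctl -w net.core.somaxconn=1024

# Persist it across reboots
$ echo 'net.core.somaxconn = 1024' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p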

Please note that not all sysctls are namespaced as of the Docker 19.03.0 CE Pre-Release. Docker does not support changing sysctls inside of a container that also modify the host system.

Docker does support setting namespaced kernel parameters at runtime & runc honors this. Have a look:

$ docker run --runtime=runc --sysctl net.ipv4.ip_forward=1 -it alpine sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
bdf0201b3a05: Pull complete 
Digest: sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913
Status: Downloaded newer image for alpine:latest
/ # sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
/ # 

It is important to note that sysctl support is not new to Docker. Support for sysctl in Docker Compose started with compose file format v2.1.

For example, to set Kernel parameters in the container, you can use either an array or a dictionary.

sysctls:
  net.core.somaxconn: 1024
  net.ipv4.tcp_syncookies: 0

sysctls:
  - net.core.somaxconn=1024
  - net.ipv4.tcp_syncookies=0

Under this blog post, I will showcase how to use sysctl under 2-Node Docker Swarm Cluster. Let us get started –

Installing a Node with Docker 19.03.0 Beta 1 Test Build on Ubuntu 18.10

Method:I

Downloading the static binary archive. Go to https://download.docker.com/linux/static/stable/ (or change stable to nightly or test), choose your hardware platform, and download the .tgz file relating to the version of Docker CE you want to install.

Captain'sBay==>wget https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
--2019-04-10 09:20:01--  https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
Resolving download.docker.com (download.docker.com)... 54.230.75.15, 54.230.75.117, 54.230.75.202, ...
Connecting to download.docker.com (download.docker.com)|54.230.75.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62701372 (60M) [application/x-tar]
Saving to: ‘docker-19.03.0-beta1.tgz’
docker-19.03.0-beta1.tgz  100%[=====================================>]  59.80M  10.7MB/s    in 7.1s    
2019-04-10 09:20:09 (8.38 MB/s) - ‘docker-19.03.0-beta1.tgz’ saved [62701372/62701372]

Extract the archive

You can use the tar utility. The dockerd and docker binaries are extracted.

Captain'sBay==>tar xzvf docker-19.03.0-beta1.tgz 
docker/
docker/ctr
docker/containerd-shim
docker/dockerd
docker/docker-proxy
docker/runc
docker/containerd
docker/docker-init
docker/docker
Captain'sBay==>

Move the binaries to executable path

Move the binaries to a directory on your executable path, such as /usr/bin/ or /usr/local/bin/. If you skip this step, you must provide the path to the executable when you invoke docker or dockerd commands.

Captain'sBay==>sudo cp -rf docker/* /usr/local/bin/

Start the Docker daemon:

$ sudo dockerd &
Client: Docker Engine - Community
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.12.1
 Git commit:        62240a9
 Built:             Thu Apr  4 19:15:07 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.1
  Git commit:       62240a9
  Built:            Thu Apr  4 19:22:34 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Captain'sBay==>

Testing with hello-world

Captain'sBay==>sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
INFO[2019-04-10T09:26:23.338596029Z] shim containerd-shim started                  address="/containerd-shim/moby/5b23a7045ca683d888c9d1026451af743b7bf4005c6b8dd92b9e95e125e68134/shim.sock" debug=false pid=2953
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/
For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Verifying Docker version

root@DebianBuster:~# docker version
Client:
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.11.5
 Git commit:        62240a9
 Built:             Thu Apr  4 19:18:53 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.11.5
  Git commit:       62240a9
  Built:            Thu Apr  4 19:17:35 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
root@DebianBuster:~#

Creating a 2-Node Docker Swarm Mode Cluster

swarm-node-1:~$ sudo docker swarm init --advertise-addr 10.140.0.6 --listen-addr 10.140.0.6:2377
Swarm initialized: current node (c78wm1g99q1a1g2sxiuawqyps) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Run the below command on worker node:

swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
This node joined a swarm as a worker.

Listing the Swarm Mode Cluster

$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
rf3xns913p4tlprmu98z2o8hi     swarm-node2         Ready               Active                                  19.03.0-beta1
isbcijzlrft3ahpbzhgipwr9a *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

Running Multi-service Docker Compose for Redis

Redis is an open source, in-memory data structure store, used as a database, cache and message broker. Redis Commander is an application that allows users to explore a Redis instance through a browser. Let us look at the Docker compose file for Redis and Redis Commander shown below:

version: '3'
services:
  redis:
    hostname: redis
    image: redis

  redis-commander:
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
    - REDIS_HOSTS=local:redis:6379
    ports:
    - "8081:8081"

Ensure that Docker Compose is installed on your system using the below commands:

curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Run the below command to bring up Redis application running on Docker Swarm Mode:

sudo docker stack deploy -c docker-compose.yml myapp

$ sudo docker stack deploy -c docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating service myapp_redis-commander
Creating service myapp_redis

Verifying if the services are up and running:

~$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
myapp               2                   Swarm

~$ sudo docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                    PORTS
ucakpqi7ozg1        myapp_redis             replicated          1/1                 redis:latest
fxor8v90a4m0        myapp_redis-commander   replicated          0/1                 rediscommander/redis-commander:latest   *:8081->8081/tcp

Checking the service logs:


$ docker service logs -f myapp3_redis
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:C 17 Apr 2019 06:26:08.006 # Warning: no config file specified, using the default config. In order to specify a configfile use redis-server /path/to/redis.conf
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:M 17 Apr 2019 06:26:08.009 * Running mode=standalone, port=6379.
myapp3_redis.1.7jpnbigi8kek@manager1    | 1:M 17 Apr 2019 06:26:08.009 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

As you can see above, there is a warning that /proc/sys/net/core/somaxconn is set to the lower value of 128.

Building Docker Compose File using Sysctl parameter

Let us try to build a new Docker compose file with sysctl parameter specified:

Copy the below content and save it as a docker-compose.yml file.

version: '3'
services:
  redis:
    hostname: redis
    image: redis
    sysctls:
      net.core.somaxconn: 1024
  redis-commander:
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
    - REDIS_HOSTS=local:redis:6379
    ports:
    - "8081:8081"

Running Your Redis application

$ sudo docker stack deploy -c docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating service myapp_redis
Creating service myapp_redis-commander

$ sudo docker service ls
ID                  NAME                    MODE                REPLICAS            IMAGE                                    PORTS
2oxhaychob7s        myapp_redis             replicated          1/1                 redis:latest
pjdwti7hkg1q        myapp_redis-commander   replicated          1/1                 rediscommander/redis-commander:latest   *:80->8081/tcp

Verifying the Redis service logs

$ sudo docker service logs -f myapp_redis
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:C 17 Apr 2019 06:59:44.510 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:C 17 Apr 2019 06:59:44.510 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
myapp_redis.1.mp57syo3okka@swarm-node-1    | 1:M 17 Apr 2019 06:59:44.511 * Running mode=standalone, port=6379.

You can see that the warning around /proc/sys/net/core/somaxconn is no longer displayed, which shows that the sysctls parameter has worked.
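
If you want to double-check beyond the logs, you can read the value from inside the running Redis container (a hedged check; substitute your own container ID):

$ sudo docker ps --filter name=myapp_redis -q        # note the container ID
$ sudo docker exec <container-id> cat /proc/sys/net/core/somaxconn
1024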

In my next blog post, I will talk about rootless Docker and how to get it tested. Stay tuned!

Docker 19.03.0 Pre-Release: Fast Context Switching, Rootless Docker, Sysctl support for Swarm Services

Estimated Reading Time: 9 minutes

Last week Docker Community Edition 19.03.0 Beta 1 was announced and the release notes went public here. Under this release, there were numerous exciting features introduced for the first time. Some of the notable features include fast context switching, rootless Docker, sysctl support for Swarm services, and device support for Microsoft Windows.

Not only this, there were numerous enhancements around Docker Swarm, the Docker Engine API, networking, the Docker client, security & BuildKit. Below is the list of features with direct links to GitHub.

Let’s talk about Context Switching..

A context is essentially the configuration that you use to access a particular cluster. Say, for example, in my particular case, I have 4 different clusters – a mix of Swarm and Kubernetes running locally and remotely. Assume that I have a default cluster running on my desktop machine, a 2-node Swarm cluster running on Google Cloud Platform, a 5-node cluster running on the Play with Docker playground and a single-node Kubernetes cluster running on Minikube, all of which I need to access pretty regularly. Using the docker context CLI I can easily switch from one cluster (which could be my development cluster) to the test or production cluster in seconds.

Under this blog post, I will focus on the fast context switching feature which was introduced for the first time. Let’s get started:

Tested Infrastructure:

  • A Node with Docker 19.03.0 Beta 1 installed on Ubuntu 18.10
  • A 2-Node Docker Swarm Cluster (swarm-node1 and swarm-node2) set up on Ubuntu 18.10
  • A 5-Node Swarm Cluster running on the Play with Docker Platform
  • A 3-Node Kubernetes Cluster created using GKE

Installing a Node with Docker 19.03.0 Beta 1 Test Build on Ubuntu 18.10

Method:I

Downloading the static binary archive. Go to https://download.docker.com/linux/static/stable/ (or change stable to nightly or test), choose your hardware platform, and download the .tgz file relating to the version of Docker CE you want to install.

Captain'sBay==>wget https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
--2019-04-10 09:20:01--  https://download.docker.com/linux/static/test/x86_64/docker-19.03.0-beta1.tgz
Resolving download.docker.com (download.docker.com)... 54.230.75.15, 54.230.75.117, 54.230.75.202, ...
Connecting to download.docker.com (download.docker.com)|54.230.75.15|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62701372 (60M) [application/x-tar]
Saving to: ‘docker-19.03.0-beta1.tgz’
docker-19.03.0-beta1.tgz  100%[=====================================>]  59.80M  10.7MB/s    in 7.1s    
2019-04-10 09:20:09 (8.38 MB/s) - ‘docker-19.03.0-beta1.tgz’ saved [62701372/62701372]

Extract the archive

You can use the tar utility. The dockerd and docker binaries are extracted.

Captain'sBay==>tar xzvf docker-19.03.0-beta1.tgz 
docker/
docker/ctr
docker/containerd-shim
docker/dockerd
docker/docker-proxy
docker/runc
docker/containerd
docker/docker-init
docker/docker
Captain'sBay==>

Move the binaries to executable path

Move the binaries to a directory on your executable path, such as /usr/bin/ or /usr/local/bin/. If you skip this step, you must provide the path to the executable when you invoke docker or dockerd commands.

Captain'sBay==>sudo cp -rf docker/* /usr/local/bin/

Start the Docker daemon:

$ sudo dockerd &
Client: Docker Engine - Community
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.12.1
 Git commit:        62240a9
 Built:             Thu Apr  4 19:15:07 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.1
  Git commit:       62240a9
  Built:            Thu Apr  4 19:22:34 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Captain'sBay==>

Testing with hello-world

Captain'sBay==>sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
INFO[2019-04-10T09:26:23.338596029Z] shim containerd-shim started                  address="/containerd-shim/moby/5b23a7045ca683d888c9d1026451af743b7bf4005c6b8dd92b9e95e125e68134/shim.sock" debug=false pid=2953
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/
For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Method:II

If you are short on time and want a one-liner command to handle this, check this out –


root@DebianBuster:~# curl https://get.docker.com | CHANNEL=test sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13063  100 13063    0     0   2305      0  0:00:05  0:00:05 --:--:--  2971
# Executing docker install script, commit: 2f4ae48
+ sh -c apt-get update -qq >/dev/null
+ sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sh -c curl -fsSL "https://download.docker.com/linux/debian/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sh -c echo "deb [arch=amd64] https://download.docker.com/linux/debian buster test" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ [ -n  ]
+ sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
+ sh -c docker version
Client:
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.11.5
 Git commit:        62240a9
 Built:             Thu Apr  4 19:18:53 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.11.5
  Git commit:       62240a9
  Built:            Thu Apr  4 19:17:35 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
root@DebianBuster:~#

Verifying Docker version

root@DebianBuster:~# docker version
Client:
 Version:           19.03.0-beta1
 API version:       1.40
 Go version:        go1.11.5
 Git commit:        62240a9
 Built:             Thu Apr  4 19:18:53 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.11.5
  Git commit:       62240a9
  Built:            Thu Apr  4 19:17:35 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
root@DebianBuster:~#

Verifying docker context CLI

$ sudo docker context --help
Usage:  docker context COMMAND
Manage contexts
Commands:
  create      Create a context
  export      Export a context to a tar or kubeconfig file
  import      Import a context from a tar file
  inspect     Display detailed information on one or more contexts
  ls          List contexts
  rm          Remove one or more contexts
  update      Update a context
  use         Set the current docker context
Run 'docker context COMMAND --help' for more information on a command.

Creating a 2 Node Swarm Cluster

Install Docker 19.03.0 Beta 1 on both the nodes (using the same method discussed above). You can use a GCP Free Tier account to create the 2-Node Swarm Cluster.

Configuring remote access with systemd unit file

Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor.

Add or modify the following lines, substituting your own values.

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://10.140.0.6:2375

Save the file.

Reload the systemctl configuration

$ sudo systemctl daemon-reload

Restart Docker:

$ sudo systemctl restart docker.service

Repeat these steps on the other nodes which you are planning to include in the Swarm Mode cluster.
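
Before initializing the Swarm, you can sanity-check that the daemon is reachable over TCP (a hedged check; it assumes curl is available and 10.140.0.6 is your node’s address):

$ curl http://10.140.0.6:2375/version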

swarm-node-1:~$ sudo docker swarm init --advertise-addr 10.140.0.6 --listen-addr 10.140.0.6:2377
Swarm initialized: current node (c78wm1g99q1a1g2sxiuawqyps) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Run the below command on worker node:

swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-1bc88158q1v4b4gdof8k0u532bxzdvrgxfztwgj2r443337mja-cmhuu258lu0327e32l0g4pl47 10.140.0.6:2377
This node joined a swarm as a worker.

Listing the Swarm Mode Cluster

root@swarm-node-1:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
0v5r9xmpbxzqpy72u41ihfck0     swarm-node2         Ready               Active                                  19.03.0-beta1
xwmay5i48xxbzlp7is7a3uord *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

Switching the Context

Listing the Context

node-1:~$ sudo docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm

Adding the new Context

docker context create --docker host=tcp://10.140.0.6:2375 swarm-context1

Using the new context for Swarm

docker context use swarm-context1
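
You can also inspect a context to confirm which endpoint it points to. A quick sketch using the context created above:

$ sudo docker context inspect swarm-context1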

Listing the Swarm Mode Cluster

$ sudo docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default             Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
swarm-context1 *                                              tcp://10.140.0.6:2375

$ sudo docker context ls --format '{{json .}}' | jq .
{
  "Current": true,
  "Description": "Current DOCKER_HOST based configuration",
  "DockerEndpoint": "unix:///var/run/docker.sock",
  "KubernetesEndpoint": "",
  "Name": "default",
  "StackOrchestrator": "swarm"
}
{
  "Current": false,
  "Description": "",
  "DockerEndpoint": "tcp://10.140.0.6:2375",
  "KubernetesEndpoint": "",
  "Name": "swarm-context1",
  "StackOrchestrator": ""
}
$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
xwmay5i48xxbzlp7is7a3uord *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1
$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
0v5r9xmpbxzqpy72u41ihfck0     swarm-node2         Ready               Active                                  19.03.0-beta1
xwmay5i48xxbzlp7is7a3uord *   swarm-node-1        Ready               Active              Leader              19.03.0-beta1

Context Switching to remotely running Play with Docker(PWD) Platform

This is one of the most exciting parts of this blog. I simply love the PWD platform as I find it a perfect playground for test driving a Docker Swarm cluster. Just a click and you get a 5-Node Docker Swarm Cluster in just 5 seconds.

Just click on the “3 Managers and 2 Workers” template and you get a 5-Node Docker Swarm cluster for FREE.

Let us try to access this PWD cluster using docker context CLI.

Say, by default we have just 1 context for local Docker Host.

[:)Captain'sBay=>sudo docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                 ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://127.0.0.1:16443 (default)   swarm
swarm-context1                                                tcp://10.140.0.6:2375                                             

Let us go ahead and add the PWD context. As shown below, you need to pick up the FQDN of the PWD instance, but remember to replace the “@” with a “.” (dot); for example, ip172-18-0-5-biosq9o6chi000as1470@direct.labs.play-with-docker.com becomes ip172-18-0-5-biosq9o6chi000as1470.direct.labs.play-with-docker.com.

[:)Captain'sBay=>sudo docker context create --docker host=tcp://ip172-18-0-5-biosq9o6chi000as1470.direct.labs.play-with-docker.com:2375 pwd-cluster1
pwd-cluster1
Successfully created context "pwd-cluster1"

This creates a context by name “pwd-cluster1”. You can verify it by listing out the current contexts available.

[:)Captain'sBay=>sudo docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT                                                                  KUBERNETES ENDPOINT                 ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                                                      https://127.0.0.1:16443 (default)   swarm
pwd-cluster1                                                  tcp://ip172-18-0-5-biosq9o6chi000as1470.direct.labs.play-with-docker.com:2375
swarm-context1                                                tcp://10.140.0.6:2375
[:)Captain'sBay=>

Let us switch to pwd-cluster1 by simply using the docker context use CLI.

[:)Captain'sBay=>sudo docker context use pwd-cluster1
pwd-cluster1
Current context is now "pwd-cluster1"

Listing out the nodes and verifying that the context points to the right PWD cluster.

[:)Captain'sBay=>sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
wnrz5fks5drzs9agkyl8z3ffi *   manager1            Ready               Active              Leader              18.09.4
dcweon0icoolfs3kirj0p3qgg     manager2            Ready               Active              Reachable           18.09.4
f78bkvfbzot2jkr2n6cen7240     manager3            Ready               Active              Reachable           18.09.4
xla6nb5ql5i6pkjruyxpc1hzk     worker1             Ready               Active                                  18.09.4
45nk1t94ympplgaasiryunwvk     worker2             Ready               Active                                  18.09.4
[:)Captain'sBay=>

In my next blog post, I will showcase how to switch to Kubernetes cluster running on Minikube and GKE using compose on Kubernetes.

If you are a beginner and looking out to build your career in Docker | Kubernetes | Cloud, do visit DockerLabs. Join our community by clicking here. Thank You.

Meet K3s – A Lightweight Kubernetes Distribution for Raspberry Pi Cluster

Estimated Reading Time: 9 minutes

To implement a microservice architecture and a multi-cloud strategy, Kubernetes today has become a key enabling technology. The bedrock of Kubernetes remains the orchestration and management of Linux containers, to create a powerful distributed system for deploying applications across a hybrid cloud environment. That said, Kubernetes has become the de-facto standard container orchestration framework for cloud-native deployments. Development teams have turned to Kubernetes to support their migration to new microservices architectures and a DevOps culture for continuous integration and continuous deployment.

Why Docker & Kubernetes on IoT devices?

Today many organizations are going through a digital transformation process. Digital transformation is the integration of digital technology into almost all areas of a business, fundamentally changing how you operate and deliver value to customers. It’s basically a cultural change. The common goal for all these organizations is to change how they connect with their customers, suppliers and partners. These organizations are taking advantage of innovations offered by technologies such as IoT platforms, big data analytics, or machine learning to modernize their enterprise IT and OT systems. They realize that the complexity of development and deployment of new digital products requires new development processes. Consequently, they turn to agile development and infrastructure tools such as Kubernetes.

At the same time, there has been a major increase in the demand for Kubernetes outside the datacenter. Kubernetes is pushing out of the data center into stores and factories. DevOps teams find Kubernetes quite interesting as it provides predictable operations and a cloud-like provisioning experience on just about any infrastructure.

Docker containers & Kubernetes are an excellent choice for deploying complex software to the Edge. The reasons are listed below:

  • Containers are awesome
  • Consistent across a wide variety of Infrastructure
  • Capable of standalone or clustered operations
  • Easy to upgrade and/or replace containers
  • Support for different infrastructure configs (storage, CPU etc.)
  • Strong Ecosystem(Monitoring, logging, CI, management etc.)

Introducing K3s – A Stripped Down version of Kubernetes

K3s is a brand new distribution of Kubernetes that is designed for teams that need to deploy applications quickly and reliably to resource-constrained environments. K3s is a Certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

K3s is a lightweight certified K8s distribution built for production operations. It is just a 40MB binary with 512MB memory consumption. It is based on a single process with an integrated K8s master, kubelet and containerd. It includes SQLite in addition to etcd. It is simultaneously released for x86_64, ARM64 and ARMv7. It is an open source project, not yet a Rancher product. It is wrapped in a simple package that reduces the dependencies and steps needed to run a production Kubernetes cluster. Packaged as a single binary, k3s makes installation and upgrade as simple as copying a file. TLS certificates are automatically generated to ensure that all communication is secure by default.

k3s bundles the Kubernetes components (kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy) into combined processes that are presented as a simple server and agent model. Running k3s server will start the Kubernetes server and automatically register the local host as an agent. This will create a one node Kubernetes cluster. To add more nodes to the cluster just run k3s agent --server ${URL} --token ${TOKEN} on another host and it will join the cluster. It’s really that simple to set up a Kubernetes cluster with k3s.
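
Pulling those commands together, a minimal two-node bootstrap looks like this (a sketch based on the commands above; substitute your own server IP and token):

# On the server node
$ sudo k3s server &

# Grab the join token generated by the server
$ sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional node
$ sudo k3s agent --server https://<server-ip>:6443 --token <token>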

Minimum System Requirements

  • Linux 3.10+
  • 512 MB of RAM per server
  • 75 MB of RAM per node
  • 200 MB of disk space
  • x86_64, ARMv7, ARM64

Under this blog post, I will showcase how to get started with K3s on a 2-Node Raspberry Pi cluster.

Prerequisite:

Hardware:

  1. Raspberry Pi 3 (you can order it from Amazon for 2590 INR in case you are in India)
  2. Micro-SD card reader (I got it from here)
  3. Any Windows/Linux/MacOS laptop
  4. HDMI cable (I used the HDMI cable of my plasma TV)
  5. Internet Connectivity (WiFi/Broadband/Tethering using Mobile) – to download the Docker 18.09.0 package
  6. Keyboard & mouse connected to the Pi’s USB ports

Software:

  1. SD-Formatter – to format microSD card (in case of Windows Laptop)
  2. Win32DiskImager (in case you have Windows OS running on your laptop) – to burn the Raspbian image directly onto the microSD card (no need to extract the XZ using any tool). You can use the Etcher tool if you are using a MacBook.

Steps to Flash Raspbian OS on Pi Boxes:

  1. Format the microSD card using SD Formatter.

2. Download Raspbian OS from here and use Win32 Disk Imager (in case you are on Windows OS running on your laptop) to burn it onto the microSD card.

3. Insert the microSD card into your Pi box. Now connect the HDMI cable from the Pi’s HDMI slot to your TV or display unit, and connect a mobile charger (recommended 5.1V@1.5A) for power.

4. Let the Raspbian OS boot up on your Pi box. It takes hardly 2 minutes for the OS to come up.

5. Configure WiFi via the GUI. All you need is to input the right password for your WiFi.

6. The default username is “pi” and the password is “raspberry”. You can use these credentials to log into the Pi system.

7. You can use the “FindPI” Android application to search for the IP address if you don’t want to hunt for a keyboard or mouse to find it.

Enable SSH to perform remote login

To log in from your laptop, you need the SSH service running on the Pi. You can verify the IP address via the ifconfig command.
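
If SSH is not yet enabled on Raspbian, a quick way to turn it on (a hedged sketch; it assumes a systemd-based Raspbian image):

$ sudo systemctl enable ssh
$ sudo systemctl start ssh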

[Captains-Bay]🚩 >  ssh pi@192.168.1.5
pi@192.168.1.5's password:
Linux raspberrypi 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Feb 26 12:30:00 2019 from 192.168.1.4
pi@raspberrypi:~ $ sudo su
root@raspberrypi:/home/pi# cd

Verifying Raspbian OS Version

root@raspberrypi:~# cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
root@raspberrypi:~#

Enable container features in Kernel

Edit /boot/cmdline.txt on both the Raspberry Pi nodes and add the following to the end of the line:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
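
If you prefer to do this from the shell, here is a minimal sketch assuming the stock single-line cmdline.txt (run it on each Pi):

$ sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt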

Reboot the devices.

Installing K3s

root@raspberrypi:~# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.2.0 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.2.0/sha256sum-arm.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.2.0/k3s-armhf
^Croot@raspberrypi:~# wget https://github.com/rancher/k3s/releases/download/v0.2.0/k3s-armhf && \
>   chmod +x k3s-armhf && \
>   sudo mv k3s-armhf /usr/local/bin/k3s
--2019-03-28 22:47:22--  https://github.com/rancher/k3s/releases/download/v0.2.0/k3s-armhf
Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/135516270/4010d900-41db-11e9-9992-cc2248364eac?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190328%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190328T171725Z&X-Amz-Expires=300&X-Amz-Signature=75c5a361f0219d443dfa0754250c852257f1b8512e54094da0bcc6fbb92327cc&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dk3s-armhf&response-content-type=application%2Foctet-stream [following]
--2019-03-28 22:47:25--  https://github-production-release-asset-2e65be.s3.amazonaws.com/135516270/4010d900-41db-11e9-9992-cc2248364eac?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190328%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190328T171725Z&X-Amz-Expires=300&X-Amz-Signature=75c5a361f0219d443dfa0754250c852257f1b8512e54094da0bcc6fbb92327cc&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dk3s-armhf&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.0.56
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.0.56|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34684224 (33M) [application/octet-stream]
Saving to: ‘k3s-armhf’

k3s-armhf             100%[========================>]  33.08M  93.1KB/s    in 8m 1s

2019-03-28 22:55:28 (70.4 KB/s) - ‘k3s-armhf’ saved [34684224/34684224]

Bootstrapping Your K3s Server

root@raspberrypi:~# sudo k3s server
INFO[2019-03-29T10:52:06.995054811+05:30] Starting k3s v0.2.0 (2771ae1)
INFO[2019-03-29T10:52:07.082595332+05:30] Running kube-apiserver --watch-cache=false --cert-dir /var/lib/rancher/k3s/server/tls/temporary-certs --allow-privileged=true --authorization-mode Node,RBAC --service-account-signing-key-file /var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range 10.43.0.0/16 --advertise-port 6445 --advertise-address 127.0.0.1 --insecure-port 0 --secure-port 6444 --bind-address 127.0.0.1 --tls-cert-file /var/lib/rancher/k3s/server/tls/localhost.crt --tls-private-key-file /var/lib/rancher/k3s/server/tls/localhost.key --service-account-key-file /var/lib/rancher/k3s/server/tls/service.key --service-account-issuer k3s --api-audiences unknown --basic-auth-file /var/lib/rancher/k3s/server/cred/passwd --kubelet-client-certificate /var/lib/rancher/k3s/server/tls/token-node.crt --kubelet-client-key /var/lib/rancher/k3s/server/tls/token-node.key
INFO[2019-03-29T10:52:08.094785384+05:30] Running kube-scheduler --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --port 10251 --address 127.0.0.1 --secure-port 0 --leader-elect=false
INFO[2019-03-29T10:52:08.105366477+05:30] Running kube-controller-manager --kubeconfig /var/lib/rancher/k3s/server/cred/kubeconfig-system.yaml --service-account-private-key-file /var/lib/rancher/k3s/server/tls/service.key --allocate-node-cidrs --cluster-cidr 10.42.0.0/16 --root-ca-file /var/lib/rancher/k3s/server/tls/token-ca.crt --port 10252 --address 127.0.0.1 --secure-port 0 --leader-elect=false
Flag --address has been deprecated, see --bind-address instead.
INFO[2019-03-29T10:52:10.410557414+05:30] Listening on :6443
INFO[2019-03-29T10:52:10.519075956+05:30] Node token is available at /var/lib/rancher/k3s/server/node-token
INFO[2019-03-29T10:52:10.519226216+05:30] To join node to cluster: k3s agent -s https://192.168.43.134:6443 -t ${NODE_TOKEN}
INFO[2019-03-29T10:52:10.543022102+05:30] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml
INFO[2019-03-29T10:52:10.548766216+05:30] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml

I encountered the below error message while running K3s server for the first time.


INFO[2019-04-04T15:52:44.710450122+05:30] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused"
containerd: exit status 1

You can fix it by editing /etc/hosts and adding the right entry for your Pi boxes:

127.0.0.1       raspberrypi-node3

By now, you should be able to get k3s nodes listed.

root@raspberrypi:~# sudo k3s kubectl get node -o wide
NAME          STATUS   ROLES    AGE     VERSION         INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
raspberrypi   Ready    <none>   2m13s   v1.13.4-k3s.1   192.168.43.134   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.98-v7+      containerd://1.2.4+unknown

Listing K3s Nodes

root@raspberrypi:~# k3s kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
raspberrypi   Ready    <none>   2m26s   v1.13.4-k3s.1

Listing K3s Pods

root@raspberrypi:~# k3s kubectl get po
No resources found.
root@raspberrypi:~# k3s kubectl get po,svc,deploy
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   11h
root@raspberrypi:~#

containerd and Docker

k3s by default uses containerd. If you want to use it with Docker, all you need to do is run the agent with the --docker flag:

 k3s agent -s ${SERVER_URL} -t ${NODE_TOKEN} --docker &

Running Nginx Pods

To launch pods using the container image nginx and expose HTTP on port 80, execute:

root@raspberrypi:~# k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created

Listing the Nginx Pods

You can now see that the pod is running:

root@raspberrypi:~# k3s kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-84b8d48d44-ggpcp   1/1     Running   0          119s
mynginx-84b8d48d44-hkdg8   1/1     Running   0          119s
mynginx-84b8d48d44-n4r6q   1/1     Running   0          119s

Exposing the Deployment

Create a Service object that exposes the deployment:


root@raspberrypi:~# k3s kubectl expose deployment mynginx --port 80
service/mynginx exposed

Verifying the endpoints controller for Pods

The below command verifies whether the endpoints controller has found the correct Pods for your Service:

root@raspberrypi:~# k3s kubectl get endpoints mynginx
NAME      ENDPOINTS                                   AGE
mynginx   10.42.0.10:80,10.42.0.11:80,10.42.0.12:80   17s

Testing if Nginx application is up & running:


root@raspberrypi:~# curl 10.42.0.10:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Adding a new Node to K3s Cluster


As mentioned earlier, to add more nodes to the cluster just run k3s agent --server ${URL} --token ${TOKEN} on another host and it will join the cluster.

To test drive K3s on a multi-node cluster, first you will need to copy the token, which is stored at the below location:

root@raspberrypi:~# cat /var/lib/rancher/k3s/server/node-token
K108b8e370b380bea959e8017abea3e540d1113f55df2c3f303ae771dc73fc67aa3::node:42e3dfc68ee27cf7cbdae5e4c8ac91b2
root@raspberrypi:~#

Create a variable NODETOKEN holding the token and then pass it directly to the k3s agent command as shown below:

root@pi-node1:~# NODETOKEN=K108b8e370b380bea959e8017abea3e540d1113f55df2c3f303ae771dc73fc67aa3::node:42e3dfc68ee27cf7cbdae5e4c8ac91b2
root@pi-node1:~# k3s agent --server https://192.168.1.5:6443 --token ${NODETOKEN}
INFO[2019-04-04T23:09:16.804457435+05:30] Starting k3s agent v0.3.0 (9a1a1ec)
INFO[2019-04-04T23:09:19.563259194+05:30] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-04-04T23:09:19.563629400+05:30] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
INFO[2019-04-04T23:09:19.613809334+05:30] Connecting to wss://192.168.1.5:6443/v1-k3s/connect
INFO[2019-04-04T23:09:19.614108395+05:30] Connecting to proxy                           url="wss://192.168.1.5:6443/v1-k3s/connect"
FATA[2019-04-04T23:09:19.907450499+05:30] Failed to start tls listener: listen tcp 127.0.0.1:6445: bind: address already in use
root@pi-node1:~# pkill -9 k3s
root@pi-node1:~# k3s agent --server https://192.168.1.5:6443 --token ${NODETOKEN}
INFO[2019-04-04T23:09:45.843235117+05:30] Starting k3s agent v0.3.0 (9a1a1ec)
INFO[2019-04-04T23:09:48.272160155+05:30] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[2019-04-04T23:09:48.272542392+05:30] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
/run/k3s/containerd/containerd.sock: connect: connection refused"
INFO[2019-04-04T23:09:49.321863688+05:30] Waiting for containerd startup: rpc error: code = Unknown desc = server is not initialized yet
INFO[2019-04-04T23:09:50.347628159+05:30] Connecting to wss://192.168.1.5:6443/v1-k3s/connect

Listing the k3s Nodes

root@raspberrypi:~# k3s kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
pi-node1   Ready    <none>   118s   v1.13.5-k3s.1
pi-node2   Ready    <none>   108m   v1.13.5-k3s.1

Setting up Nginx

As shown earlier, we will go ahead and test the Nginx application on top of the K3s cluster nodes:

root@raspberrypi:~# k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created

Exposing the Deployment

root@raspberrypi:~# k3s kubectl expose deployment mynginx --port 80
service/mynginx exposed

Test driving Kubernetes Dashboard

root@node1:/home/pi# k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
root@node1:/home/pi#

We need kubectl proxy to make the Dashboard accessible in a web browser. This command creates a proxy server, or application-level gateway, between localhost and the Kubernetes API server. It can also serve static content over a specified HTTP path. All incoming data enters through one port and is forwarded to the remote Kubernetes API server port, except for requests matching the static content path.

root@node1:/home/pi# k3s kubectl proxy
Starting to serve on 127.0.0.1:8001

By now, you should be able to access the Dashboard via port 8001 in a browser on your Raspberry Pi.
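
Note that the proxy listens only on localhost, and the Dashboard requires a login token. Below is a minimal sketch, assuming the manifest above deployed the Dashboard into the kube-system namespace and using a hypothetical dashboard-admin service account (binding it to cluster-admin is fine for a lab, far too permissive for production):

# With `k3s kubectl proxy` running, the Dashboard should be reachable at:
#   http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
# Create a service account, grant it cluster-admin, and fetch its login token:
k3s kubectl -n kube-system create serviceaccount dashboard-admin
k3s kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
k3s kubectl -n kube-system describe secret $(k3s kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')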

Cleaning up

kubectl delete --all pods
pod "mynginx-84b8d48d44-9ghrl" deleted
pod "mynginx-84b8d48d44-bczsv" deleted
pod "mynginx-84b8d48d44-qqk9p" deleted

Hope you find this blog useful. In my future blog post, I will be talking about K3s Internals in detail.

Docker Birthday #6: “Show-And-Tell” Event in Bangalore

Estimated Reading Time: 8 minutes

Docker’s Birthday Celebration is not just about cakes, food, and partying. It’s actually a global tradition that is near and dear to our hearts because it gives each one of us an opportunity to express our gratitude to our huge community of contributors. The goal of this global celebration is to welcome every single Docker community user who is keen to understand and adopt this technology, and to influence others to grow this amazing community.

This year, celebrations took place during March 18-31, 2019 across 75 user group events worldwide. Interestingly, Docker Inc. came up with a really good theme this time: “Show-And-Tell: How do you Docker?”. Docker user groups all over the world hosted local birthday show-and-tell celebrations. Each speaker got 15-20 minutes of stage time to present how they’ve been using Docker. Every speaker who presented their work received a Docker Birthday #6 T-shirt and the opportunity to submit their show-and-tell to present at DockerCon. We celebrated Docker’s 6th birthday in Bangalore at the DellEMC office, Mahadevapura, on 30th March. Over 100 attendees participated in this Meetup, out of which 70% were beginners.

An Early Preparation..

Planning for the Docker Birthday #6 celebration started during the first week of February. First of all, I created the Docker Bangalore Meetup event page to keep the community aware of the upcoming “Show-And-Tell” event.

Soon after posting this event, my Google Form got flooded with project stories. I received 30+ entries in the first 2 weeks, which was just amazing. Out of 60 projects overall, I found 10 really powerful and hence started working with the individuals to come up with a better way to present them on stage. In parallel, I placed an early order for a birthday banner, birthday stickers, and T-shirts (men’s & women’s) for both speakers and the audience. Thanks to Docker, Inc. for shipping them to Bangalore on time.

Let’s talk about the 7 Cool Projects…

After 3-4 weeks of continuous interaction, I finalized the list of 7 projects which looked really promising. In case you missed this event, here is a short brief on each of these projects –

Project #1: Box-Exec

Akshit Grover, a student of the ACM VIT Student Chapter, was one of the young engineers who came up with a cool project idea, titled “Box-Exec”.

Box-Exec is an npm package to compile/run code (C, C++, Python, etc.) in a virtualized environment; here the virtualized environment is a Docker container. The package is built to ease the task of running code against test cases, as done by websites used to practice algorithmic coding. It supports automatic CPU time-sharing configuration between containers and load balancing between multiple containers allocated for the same language.

The project is hosted under Github Repo: https://github.com/akshitgrover/box-exec

Project #2: Flipper

Shivam Yaduka, a 3rd-year student of the ACM VIT Student Chapter, was the next speaker on stage, talking about his super cool project titled “Flipper – Build, Ship & Navigate”.

Flipper is a Docker playground which allows users to run Docker-in-Docker-in-Docker commands – all in a web browser in a matter of seconds. It gives the experience of having a free RHEL virtual machine in the browser, where you can build and run Docker containers.

Highlights:

  • Flipper uses Python as a backend scripting language.
  • Flipper uses Python CGI to interact, through a web server, with a client running a web browser.
  • It uses HTML/CSS on the front end.
  • It can be hosted flawlessly on your laptop.

I got a chance to interact with Shivam during my Docker session at the VIT campus, where we discussed this idea. I liked it and suggested he talk about it at the upcoming birthday Meetup. In his own words…

“…. I use Docker to provide Cloud Virtualization services like Software-As-A-Service(SaaS) and Container-As-A-Service(CaaS). The Hybrid Cloud is made and deployed on my own workstation, No commercial platforms have been used. In SaaS, whenever a user launches a particular software, in the background a docker shell is created with the particular software installed in it and the user gets to access the service. In CaaS, if a user wants a quick Linux terminal, all he need is to click on the start button after he enters his username and a random number. A docker shell is created in the background and is displayed on the browser itself. So the user can quickly do his tasks. I have used Ansible to execute docker commands but a major portion of SaaS and CaaS is done on python36 and python-cgi to integrate front-end and back-end…”

His project is hosted on GitHub and you can access it via
https://github.com/yshivam/Flipper

Project #3: GigaHex

Shad Amez from Expedia was next to present his amazingly cool project, aptly titled “GigaHex – Sandbox for Big Data Developers”.

He started his talk with the challenges of existing sandbox tools like VirtualBox and of bloated Docker images for Big Data applications. He claimed a smaller footprint, lower CPU & system overhead, and automation with his promising “GigaHex” platform.

He plans to launch this cool project in Q2 this year. You can expect much more at https://launcher.gigahex.com/

Project #4: Z10N : Device simulation at scale using Docker Swarm

Hemanth Gaikwad, Validation Architect at DellEMC, headed to the stage to talk about his active project titled “Z10N: Device simulation at scale using Docker Swarm”.


Hemanth opened his presentation by talking about the existing challenges in delivering products to customers on time and with high quality. He stated that the major challenge is to develop and test as thoroughly and efficiently as we can, given time and resource constraints. Essentially, a company needs improved quality and a reduced software lifecycle time to survive in the competitive software landscape and reap the benefits of being early to market with high-quality software features. Hardware availability tends to be scarce, which results in design, development, and testing getting pushed to the right. Products are developed and tested in non-scaled environments for just a few finite states, again impacting quality. Reduced quality in turn further increases costs and effort.

An in-house tool called “Z10N” (pronounced zee-on) can help create a real-world lab environment with thousands of hardware devices at a fraction of the cost of the physical devices. Z10N helps the organization to:

  • Cut through the massive costs/efforts
  • Reduce vendor/hardware dependency
  • Enable rapid prototyping
  • Drastically simplify design & validation of complex sensor states/error conditions
  • Seamlessly design & develop products and execute automation & non-functional tests at will, without worrying about hardware availability

He also showed how you could simulate/emulate a hardware device and create thousands of clones of it in just a few minutes, with a 99% reduction in expenditure.

He claimed that Z10N is already helping make better, faster products and that, with its capabilities, it’s surely getting you the “Power to do more”.

Project #5: JAAS – Distributed Workload Testing using Containers

Next, Vishnu Murty, Senior Principal Engineer from DellEMC delivered a talk around “JAAS” – DellEMC in-house project for distributed workload testing using Docker containers.

Vishnu opened his talk with the challenges of existing load testing tools for various workloads like FTP, web, database, mail, etc. He stated that load testing tools available in the market come with their own challenges, such as cost, learning curve, and workload support. To cope with these challenges, he started looking for a possible solution, and hence JAAS (JMeter As A Service) was born. JAAS uses containers and open source tools to deliver server validation efforts.

Tech Stack behind JAAS:

  • Containers and Docker Swarm: For auto-deploying JMeter apps, we use Docker containers. We use Docker Swarm services to create virtual JMeter users for generating the load.
  • JMeter: The performance/load testing framework from Apache, widely accepted as a performance/load testing tool for multiple applications.
  • Python: Responsible for communicating across all the individual components (Docker Swarm and the ELK Stack) using REST APIs.
  • ELK Stack: All logs, Beats data, and JMeter results are stored in Elasticsearch and visualized in Kibana.

His talk has been selected for ContainerDays, happening June 24-26 in Hamburg (https://www.containerdays.io/). Don’t miss his talk if you get a chance to attend this conference.

Project #6: Comparative Study of Hadoop over VMs Vs Docker containers

Shivankit Bagla was our next young speaker, who talked about his recent International Journal of Applied Engineering Research paper (https://www.ripublication.com/ijaer18/ijaerv13n6_166.pdf); his talk was titled “Comparative Study of Hadoop over VMs Vs Docker Containers”.

He talked about his project, a comparative study of the performance of a Hadoop cluster in a containerized environment versus virtual machines. He demonstrated how running a Hadoop cluster in a Docker environment actually increases the performance of the cluster and decreases the time taken by the Hadoop system to perform certain actions.

Project #7: Turn Your Raspberry Pi into CCTV Surveillance Camera using Docker Containers

I presented my recent experiment with Docker containers on a Raspberry Pi cluster, showcasing how to turn a Raspberry Pi into a CCTV camera using a single Docker container. I couldn’t demonstrate much due to time constraints, but I suggest you check out my detailed blog post about it via
http://collabnix.com/turn-your-raspberry-pi-into-low-cost-cctv-surveillance-camerawith-night-vision-in-5-minutes-using-docker/

Special thanks to the individuals below who provided support to make this event successful –

  • Shad Amez, Expedia
  • Akshit Grover, VIT
  • Shivam Yaduka, VIT
  • Shavitha Pareek, DellEMC
  • Vishnu Murty, DellEMC
  • Hemanth Gaikwad, DellEMC
  • Vineeth Abhraham, DellEMC
  • Shivankit Bagla, Acceletrade

Below are the topics, along with slide links, for all the project ideas listed above:

  • GigaHex – Sandbox Environment for Big Data Application by Shad Amez, Expedia – Slides
  • Box-Exec by Akshit Grover, VIT – Slides
  • Device Simulation at Scale using Docker Swarm by Hemanth Gaikwad, DellEMC
  • Distributed Workload Testing Using Docker Containers by Vishnu Murty, DellEMC
  • Flipper – Tiny Cloud on Browser using Docker by Shivam Yaduka, VIT – Slides
  • Comparative Study of Hadoop over Virtual Machines Vs Docker Containers by Shivankit Bagla – Slides

A First Look at Docker Desktop Enterprise

Estimated Reading Time: 11 minutes

If you are looking for a desktop software solution for creating & delivering production-ready containerized applications in a simplified & secure way, Docker Desktop Enterprise is the right tool for you.

At last DockerCon, Docker announced the release of Docker Desktop Enterprise, a new commercial desktop offering from Docker, Inc. It is the only enterprise-ready desktop platform that enables IT organizations to automate the delivery of legacy and modern applications using an agile operating model with integrated security. With work performed locally, developers can leverage a rapid feedback loop before pushing code or Docker images to shared servers or continuous integration infrastructure.

Imagine you are a developer and your organization has a production-ready environment running Docker Enterprise 2.1. To ensure that you don’t use any APIs or incompatible features that will break when you push an application to the production environment, you would like to be certain that your working environment exactly matches what’s running in the Docker Enterprise production systems. With Docker Desktop Enterprise you can easily bridge that kind of gap. It is essentially a cohesive extension of the Docker Enterprise container platform that runs right on developers’ systems. Developers code and test locally using the same tools they use today, and Docker Desktop Enterprise helps them quickly iterate and then produce a containerized service that is ready for their production Docker Enterprise clusters.

The Enterprise-Ready Solution for Dev & Ops

Docker Desktop Enterprise is a perfect devbed for enterprise developers. It allows developers to select from a variety of their favourite frameworks, languages, and IDEs. Because of those options, it can also help organizations target every platform. Your organization can provide application templates that include production-approved application configurations, and developers can take those templates and quickly modify and replicate them right from their desktop.

With Docker Desktop Enterprise, IT organizations can ensure developers are working with the same version of Docker Desktop Enterprise and can easily distribute Docker Desktop Enterprise to large teams using a number of third-party endpoint management applications. With the Docker Desktop Enterprise graphical user interface (GUI), developers are no longer required to work with lower-level Docker commands and can auto-generate Docker artifacts.

A Flawless Integration with 3rd-Party Developer Tools

Docker Desktop Enterprise is designed to integrate with existing development environments (IDEs) such as Visual Studio and IntelliJ. And with support for defined application templates, Docker Desktop Enterprise allows organizations to specify the look and feel of their applications.

Exclusive features of Docker Desktop Enterprise 

Let us talk about the various features of Docker Desktop Enterprise 2.0.0.0, discussed below:

  • Version selection: Configurable version packs ensure the local instance of Docker Desktop Enterprise is a precise copy of the production environment where applications are deployed, and developers can switch between versions of Docker and Kubernetes with a single click.
    • Docker and Kubernetes versions match UCP cluster versions.
    • Administrator command line tool simplifies version pack installation.
  • Application Designer: Application Designer provides a library of application and service templates to help Docker developers quickly create new Docker applications. Application templates allow you to choose a technology stack and focus on business logic and code, and require only minimal Docker syntax knowledge.
    • Template support includes .NET, Spring, and more.
  • Device management:
    • The Docker Desktop Enterprise installer is available as standard MSI (Win) and PKG (Mac) downloads, which allows administrators to script an installation across many developer workstations (see the sketch after this list).
  • Administrative control:
    • IT organizations can specify and lock configuration parameters for creation of a standardized development environment, including disabling drive sharing and limiting version pack installations. Developers can then run commands using the command line without worrying about configuration settings.
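
Since the installers are standard MSI and PKG packages, a scripted rollout can lean on each platform’s native tooling. Below is a minimal sketch, with hypothetical file names:

# Windows – silent, unattended MSI install:
msiexec /i "DockerDesktopEnterprise.msi" /quiet /norestart

# macOS – install the PKG onto the system volume:
sudo installer -pkg DockerDesktopEnterprise.pkg -target /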

In this blog post, we will look at two of the promising features of Docker Desktop Enterprise 2.0.0.0:

  • Application Designer &
  • Version packs

Installing Docker Desktop Enterprise

Docker Desktop Enterprise is available for both Microsoft Windows and macOS. One can download it via the links below:

The above installer includes:

  • Docker Engine,
  • Docker CLI client, and
  • Docker Compose.

Please note that you will have to clean up Docker Desktop Community Edition before you install the Enterprise edition. Also, the Enterprise version requires a separate license key, which you need to buy from Docker, Inc.

To install Docker Desktop Enterprise, double-click the .msi or .pkg file and initiate the Setup wizard:



Click “Next” to proceed further and accept the End-User license agreement as shown below:

Click “Next” to proceed with the installation.

Once installed, you will see the Docker Desktop icon on the Windows desktop, as shown below:

License file

As stated earlier, to use Docker Desktop Enterprise, you must purchase Docker Desktop Enterprise license file from Docker, Inc.

The license file must be installed at the following location: C:\Users\Docker\AppData\Roaming\Docker\docker_subscription.lic

If the license file is missing, you will be asked to provide it when you try to run Docker Desktop Enterprise. Once the license file is supplied, Docker Desktop Enterprise should come up flawlessly.
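
A quick way to confirm the license file is in place is a PowerShell one-liner ($env:APPDATA resolves to C:\Users\<username>\AppData\Roaming for the logged-in user):

# Returns True if the Docker Desktop Enterprise license file exists:
PS C:\> Test-Path "$env:APPDATA\Docker\docker_subscription.lic"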

What’s New in Docker Desktop UI?

Docker Desktop Enterprise provides additional features compared to the Community edition. Right-click the whale icon in the taskbar and select “About Docker Desktop” to bring up the window below.

Open up PowerShell to verify that Docker is up and running and check its version:
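
For example (a quick check; output varies with the installed release):

# Prints both the Docker client and the server (Engine) versions:
PS C:\> docker version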

Click on the “Settings” option to see the various sections: shared drives, advanced settings, network, proxies, Docker daemon, and Kubernetes.

One of the new features introduced with Docker Desktop Enterprise is the ability to start Docker Desktop automatically whenever you log in; enable it by selecting “Start Desktop when you login” under the General tab. From the same tab, you can also have it automatically check for updates.

Docker Desktop Enterprise gives you the flexibility to pre-select resource limits for the Docker Engine, as shown below. Based on your system configuration and the type of application you plan to host, you can increase or decrease these limits.


Docker Desktop Enterprise includes a standalone Kubernetes server that runs on your Windows laptop, so that you can test deploying your Docker workloads on Kubernetes.

kubectl is a command-line interface for running commands against Kubernetes clusters. It comes with Docker Desktop by default, and one can verify it by running the commands below:
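
These assume kubectl is on your PATH; exact output will vary with your cluster:

# Show client and server versions, then confirm the local node responds:
PS C:\> kubectl version
PS C:\> kubectl get nodes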

Running Your First Web Application

Let us try running a custom-built web application using a command like the one below:
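
As a minimal stand-in (hypothetical; it uses the stock nginx image rather than the author’s custom image):

# Run a web server in the background and publish it on port 80:
PS C:\> docker run -d -p 80:80 --name webapp nginx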

Open up the browser to verify that the web page is up and running, as shown below:

Application Designer

As described in the feature list above, Application Designer provides a library of application and service templates that help Docker developers quickly create new Docker applications with only minimal knowledge of Docker syntax.

Building a Linux-based Application using Application Designer

In this section, I will show you how to get started with the Application Designer feature, introduced for the first time in this release.

Right-click the whale icon in the taskbar and choose “Design New Application”. This opens the window below:

Let us first try one of the preconfigured applications by clicking on “Choose a template”.

Let us test drive a Linux-based application. Click on the “Linux” option and proceed further. This opens up a variety of ready-made templates, as shown below:

A Spring application is also included as part of Docker Desktop Enterprise; it is basically a sample Java application with the Spring framework and a Postgres database, as shown below:

Let us go ahead and try out the sample Python/Flask application with an Nginx proxy and a MySQL database. Select the desired application template and choose your Python version and an accessible port. You can also select your preferred MySQL version and Nginx proxy settings. For this example, I chose Python version 3.6, MySQL 5.7, and the Nginx proxy exposed on port 80.

Click on “Continue” to build up this application stack.

Click on “Assemble” and this should build up your application stack.

Done. Click on “Run Application” to bring up your web application stack.

Once you click on “Run Application”, you can see the output right there on the screen as shown below:

As shown above, one can open up the code repository in Visual Studio Code or Windows Explorer. You also get options to start, stop, and restart your application stack.

To verify its functionality, let us try to open up the web application as shown below:

Cool, isn’t it?

Building a Windows-based Application using Application Designer

In this section, we will see how to build a Windows-based application using the same Application Designer tool.

Before you proceed, we need to choose “Switch to Windows containers” as shown below, to allow Windows-based containers to run on our desktop.




Again, right-click the whale icon in the taskbar and choose “Design New Application”. This opens the window below:

Click on “Choose a template” and select Windows this time as shown below:

Once you click on Windows, it will open up a sample ASP.NET & MS-SQL application.

Once clicked, it will show the frontend and backend, with an option to set the desired port for your application.

I will go ahead and choose port 82 for this example. Click on “Continue” and supply your desired application name. I named it “mywinapp”, as shown below:

Click on “Assemble” to build up your application stack.

Click on “Start” to run your application stack.

While the application stack is coming up, you can open up Visual Studio to view files like the Docker Compose file and Dockerfile, as shown below:

One can view logs to see what’s going on in the backend. Under Application Designer, select the “Debug” option and open “View Logs” to see the real-time logs.

By now, you should be able to access your application via web browser.

Version Packs

Docker Desktop Enterprise 2.0.0 is bundled with the default version pack Enterprise 2.1, which includes Docker Engine 18.09 and Kubernetes 1.11.5. You can download it via this link.

If you want to use a different version of Docker Engine and Kubernetes for development work, install version pack Enterprise 2.0, which you can download via this link.

Version packs are installed manually or, for administrators, by using the command line tool. Once installed, version packs can be selected for use in the Docker Desktop Enterprise menu.

Installing Version Pack

When you install Docker Desktop Enterprise, the tool is installed under C:\Program Files\Docker\Desktop. Version packs can be installed by double-clicking a .ddvp file; ensure that Docker Desktop is stopped before installing a version pack. The easiest way to add a version pack, however, is through the CLI, by running the command below:

Open up Windows PowerShell via “Run as Administrator” and run the command below:

dockerdesktop-admin.exe -InstallVersionPack='C:\Program Files\Docker\Docker\enterprise-2.0.ddvp'

Uninstalling Version Pack

Uninstalling a version pack is a matter of a single-line command, as shown below:

dockerdesktop-admin.exe -UninstallVersionPack <VersionPack>

In my next blog post, I will show you how to leverage the Application Designer tool to build a custom application.
