Top 5 Cool Projects around Docker, Raspberry Pi & Blinkt! ~ Monitoring Docker Swarm using LEDs – Part I

Estimated Reading Time: 5 minutes

Two weeks back, I travelled to Jaipur, 1000+ miles from Bangalore, to deliver a Docker session. I was invited as a guest speaker for the “IIEC Connect” event conducted by LinuxWorld Inc. at the GD Badaya Auditorium, Jaipur, which accommodated around 500-600 engineering students.

It was an amazing experience, with dozens of questions at the end of the session. The session lasted 3 hours, and I was amazed when 90% of the hands went up as I asked, “How many of you know about Docker?”. I had compiled 120+ slides for this session but skipped straight to the advanced material to keep the audience engaged. I talked about how industry is using Docker, with some real-world in-house projects like Pico, OpenUSM and Docker in the data centre.

Towards the end of the session, I showcased an interesting demonstration around monitoring a Docker Swarm cluster using the Pimoroni Blinkt! LEDs. It was a great opportunity to excite the students by showcasing such a cool project around Docker containers running on Raspberry Pi. In this blog post, I will explain how to achieve it in detail.

Pre-requisite:


Items | Link | Cost
Raspberry Pi 3 Model B | Buy | 2849 INR
Raspberry Pi 3 Model B 4-layer Dog Bone Stack Clear Case Box Enclosure | Buy | 4772 INR
Pimoroni Blinkt! | Buy | 749 INR
Raspberry Pi 3 Heat Sink Set | Buy | 159 INR

  • A Raspberry Pi node cluster stack
  • Blinkt!

Blinkt! is a strip of eight super-bright RGB LED indicators that is ideal for adding visual notifications to your Raspberry Pi. Inspired by OpenFaaS founder Alex Ellis’ work with his Raspberry Pi Zero Docker cluster, Pimoroni developed these boards for him to use as status indicators. Blinkt! offers eight APA102 pixels in the smallest (and cheapest) form factor to plug straight onto your Raspberry Pi.

Features

  • Eight APA102 RGB LEDs
  • Individually controllable pixels
  • Sits directly on top of your Pi in a tiny footprint
  • Fits inside most Pi cases
  • Doesn’t interfere with PWM audio
  • Blinkt! pinout
  • Compatible with Raspberry Pi 3B+, 3, 2, B+, A+, Zero, and Zero W
  • Python library (see the install sketch after this list)
  • Comes fully assembled
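
Blinkt! is driven from Python. A quick way to get the library onto Raspbian is Pimoroni's one-line installer shown below (a sketch; you may want to download and inspect the script before piping it to a shell):

# Pimoroni's installer sets up the blinkt Python library and its dependencies
curl https://get.pimoroni.com/blinkt | bash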

Installing Docker on All 3 Nodes – 1 Manager and 2 Worker Nodes

Follow this link to install Docker 18.09 on all the nodes of the Raspberry Pi cluster.
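
Alternatively, Docker's convenience script is the usual route on Raspbian (a sketch, not the only option; review the script before piping it to a shell):

curl -sSL https://get.docker.com | sh
sudo usermod -aG docker pi     # optional: let the 'pi' user run docker without sudo

Once installed, verify the engine version on each node: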

root@raspberrypi:/home/pi# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:57:21 2018
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:57 2018
  OS/Arch:          linux/arm
  Experimental:     false
root@raspberrypi:/home/pi# 

Setting up Swarm Manager Node

root@raspberrypi:/home/pi# docker swarm init --advertise-addr 192.168.43.134 --listen-addr 192.168.43.134:2377
Swarm initialized: current node (j7i394an31gsevxt3fndzvum5) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1zbsutds2u5gk5qwx0qbf95uccogrjx1ukszxxxxx-bcptng4inxxxldvvx17tn2l 192.168.43.134:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

root@raspberrypi:/home/pi# 
pi@raspberrypi:~ $ sudo docker swarm join --token SWMTKN-1-1zbsutds2u5gk5qwx0qbf95uccogrjx1ukszysmxxxbcptng4invy1abldvvx17tn2l 192.168.43.134:2377
This node joined a swarm as a worker.
pi@raspberrypi:~ $ 

Listing the Nodes

root@raspberrypi:/home/pi# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ijnqkk7vybzts7ohgt63fteoo     raspberrypi         Ready               Active                                  18.09.0
j7i394an31gsevxt3fndzvum5 *   raspberrypi         Ready               Active              Leader              18.09.0
let43cp6uoankngeg5lmd91mn     raspberrypi         Ready               Active                                  18.09.0
root@raspberrypi:/home/pi# 

Running Monitor Service

Special credit to Docker Captain Stefan Scherer and his work building these Docker images.

root@raspberrypi:/home/pi# docker service create --name monitor --mode global --restart-condition any --mount type=bind,src=/sys,dst=/sys --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock stefanscherer/monitor:1.1.0
kvgvohexsc2e8yapol0ulwq5q
overall progress: 3 out of 3 tasks 
ijnqkk7vybzt: running   [==================================================>] 
let43cp6uoan: running   [==================================================>] 
j7i394an31gs: running   [==================================================>] 
verify: Service converged 
root@raspberrypi:/home/pi# 
root@raspberrypi:/home/pi# docker service create --name whoami stefanscherer/whoami:1.1.0
jd5e5hlswu8ruxgfhgbwtww84
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 

Scaling the Service to 3

root@raspberrypi:/home/pi# docker service scale whoami=3
whoami scaled to 3
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 

Scaling the Service to 16

root@raspberrypi:/home/pi# docker service scale whoami=16
whoami scaled to 16
overall progress: 16 out of 16 tasks 
1/16: running   [==================================================>] 
2/16: running   [==================================================>] 
3/16: running   [==================================================>] 
4/16: running   [==================================================>] 
5/16: running   [==================================================>] 
6/16: running   [==================================================>] 
7/16: running   [==================================================>] 
8/16: running   [==================================================>] 
9/16: running   [==================================================>] 
10/16: running   [==================================================>] 
11/16: running   [==================================================>] 
12/16: running   [==================================================>] 
13/16: running   [==================================================>] 
14/16: running   [==================================================>] 
15/16: running   [==================================================>] 
16/16: running   [==================================================>] 
verify: Service converged 

Scaling the Service to 32

root@raspberrypi:/home/pi# docker service scale whoami=32
whoami scaled to 32
overall progress: 32 out of 32 tasks 
verify: Service converged 

Scaling the Service back to 4

root@raspberrypi:/home/pi# docker service scale whoami=4
whoami scaled to 4
overall progress: 4 out of 4 tasks 
1/4: running   [==================================================>] 
2/4: running   [==================================================>] 
3/4: running   [==================================================>] 
4/4: running   [==================================================>] 
verify: Service converged 




Listing out the Services

Note that the monitor service below reports 2/2 replicas: by this point one of the worker nodes had gone down, as the node listing that follows confirms.

root@raspberrypi:/home/pi# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                         PORTS
h7ap83sidbw8        monitor             global              2/2                 stefanscherer/monitor:1.1.0   
root@raspberrypi:/home/pi# 

Listing the Nodes

root@raspberrypi:/home/pi# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ijnqkk7vybzts7ohgt63fteoo     raspberrypi         Ready               Active                                  18.09.0
j7i394an31gsevxt3fndzvum5 *   raspberrypi         Ready               Active              Leader              18.09.0
let43cp6uoankngeg5lmd91mn     raspberrypi         Down                Active                                  18.09.0
root@raspberrypi:/home/pi# 

Rolling Updates

root@raspberrypi:/home/pi# docker service update --image stefanscherer/whoami:1.2.0 \
>   --update-parallelism 4  --update-delay 2s whoami
whoami
overall progress: 2 out of 4 tasks 
1/4: preparing [=================================>                 ] 
2/4: running   [==================================================>] 
3/4: preparing [=================================>                 ] 
4/4: running   [==================================================>] 
root@raspberrypi:/home/pi# 

Overall, it was an amazing experience to showcase the power of the Raspberry Pi for monitoring a Docker Swarm cluster using Pimoroni Blinkt! LEDs. In future posts, I will bring more interesting use cases around Docker and Raspberry Pi.


Docker Enterprise 3.0: Now with New Built-in Docker cluster CLI Plugin

Estimated Reading Time: 8 minutes

At last DockerCon, dozens of new Docker CLI plugins were introduced. All of these CLI plugins will be available in the upcoming Docker Enterprise 3.0 GA release this year. The Docker Enterprise 3.0 public beta was made available soon after the DockerCon event, during the 2nd week of May 2019. This public beta consists of Docker Desktop Enterprise 2.0.0.4-ent, Universal Control Plane 3.2, Docker Trusted Registry 2.7, and Docker Engine Enterprise 19.03.0. As with previous deployments, all Docker Enterprise components except Docker Engine are deployed as containers. Please note that only a limited subset of operating systems has been tested for the current beta release: RHEL 7.6, Ubuntu 16.04 and 18.04, and Windows Server 2019.

What is DCI all about?

One of the primary focuses of this public beta is expanding choice. Docker Certified Infrastructure (DCI) is Docker’s prescriptive approach to deploying Docker Enterprise Edition on a range of infrastructures. DCI is designed to automate and reliably deliver a secure, enterprise-ready container platform, integrated with your existing management and infrastructure tools.

Is DCI targeted only for Enterprise customers?

The short answer is “Yes”. DCI is installed with Docker Engine – Enterprise and Docker Desktop Enterprise by default. DCI provides a declarative way to build and manage Docker clusters. It is implemented as a Docker CLI plugin that exposes a `docker cluster` top-level command and lets you define a cluster in a YAML file.

How does it work?

At a high level, you define a cluster in a YAML file and instantiate it with `docker cluster create`. The DCI back end then performs the hard work of building the cluster.
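
In practice, the day-to-day loop looks something like the sketch below (the cluster name is a placeholder; the subcommands come from the docker cluster plugin):

docker cluster create -f cluster.yml     # provision the cluster defined in cluster.yml
docker cluster ls                        # list clusters managed by the plugin
docker cluster inspect <cluster-name>    # dump the cluster definition and state
docker cluster rm <cluster-name>         # tear the cluster down when done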

What Platform does it support?

DCI currently supports building and managing clusters on AWS during the public beta, with support for Azure and VMware vSphere coming by General Availability.

In my last blog, I talked about “What’s New in Docker Desktop Enterprise 3.0”, which introduced a new way to build, share and run multi-service apps on any infrastructure with Docker Applications. In this blog post, I will showcase how to get started with the `docker cluster` CLI plugin.

Pre-requisite:

  • AWS CLI installed

[Captains-Bay]🚩 >  aws --version
aws-cli/1.11.107 Python/2.7.10 Darwin/17.7.0 botocore/1.5.70

  • AWS Access Keys

If you already have a `~/.aws/credentials` file, you can skip this step. Otherwise, use the `aws configure` command to specify your AWS credentials.
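
For example (all values below are placeholders):

[Captains-Bay]🚩 >  aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: us-east-1
Default output format [None]: json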

You will require a Docker ID with access to a Docker UCP subscription either:

  • Docker Enterprise 3.0 Beta License for Docker Enterprise 3.0 Beta
  • An active Docker Enterprise license (paid or trial) to install generally available Docker Enterprise version

Also, you will need an AWS account with security credentials that carry the following IAM policies (if you manage IAM from the CLI, see the sketch after this list):

  • AmazonEC2FullAccess
  • AmazonElasticFileSystemFullAccess
  • AmazonRoute53DomainsFullAccess
  • AmazonS3FullAccess
  • IAMFullAccess (for creating instance profiles with roles and policies)
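
Attaching these managed policies to a user from the CLI could look like the sketch below (the IAM user name is a placeholder):

# attach each AWS-managed policy to the IAM user that will run docker cluster
for p in AmazonEC2FullAccess AmazonElasticFileSystemFullAccess AmazonRoute53DomainsFullAccess AmazonS3FullAccess IAMFullAccess; do
  aws iam attach-user-policy --user-name <your-iam-user> --policy-arn arn:aws:iam::aws:policy/$p
done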

  • Under the Docker Beta registration page, sign in with your Docker ID.
  • Once you complete your registration, you will see the links for Docker Desktop Enterprise for Mac and Windows. Download your preferred software based on your desktop OS.

Installing Docker Desktop Enterprise


You can also directly download Docker Desktop Enterprise for Mac using the link below:

https://download.docker.com/mac/enterprise/Docker.pkg

To install, double-click the .pkg file. For Mac administrators, the following command line supports fine-tuning and mass installation, after which Docker Desktop Enterprise can be run from the Applications folder on each individual machine:

sudo installer -pkg Docker.pkg -target /

The license file must then either be installed in the following location:

~/Library/Group Containers/group.com.docker/docker_subscription.lic


Or it can be provided in the UI when starting the application for the first time (/Applications/Docker.app).
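
For example, copying a downloaded license file into place from a terminal (the download path is illustrative):

cp ~/Downloads/docker_subscription.lic "$HOME/Library/Group Containers/group.com.docker/docker_subscription.lic"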

Click on the whale icon that appears at the top right of the screen to verify that Docker Desktop Enterprise comes up well.

[Captains-Bay]🚩 >  docker version
Client: Docker Engine - Enterprise
 Version:           19.03.0-beta4
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        d9934ea
 Built:             Tue May 14 06:40:00 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Enterprise
 Engine:
  Version:          19.03.0-beta4
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       d9934ea
  Built:            Tue May 14 06:46:25 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          v1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Login to Docker Hub

Log in to Docker Hub with a Docker ID that has access to a Docker EE/UCP repository.

[Captains-Bay]🚩 >  docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password: 
Login Succeeded

Testing the built-in docker cluster CLI Plugin

[Captains-Bay]🚩 >  docker cluster version
Version:  v0.3.0
Commit:   dc3d07a
Build:    Plugin

Cluster Declaration

It’s time to declare our cluster. We’ll use the following YAML file to deploy a new cluster to AWS. By default, `docker cluster create` looks for a cluster.yml file in the current working directory; alternatively, you can give the file any name you choose. Let’s create a cluster.yml file with the contents of the simple cluster definition below, which will install Docker Enterprise 3.0 beta on 1 manager and 1 DTR node.

variable:
  region: us-east-1
  subscription_url: https://storebits.docker.com/ee/m/sub-zxxxx/  ## Don't forget to add / at the end as shown 
  ucp_password:
    type: "prompt"

provider:
  aws:
    region: ${region}

cluster:
  engine:
    url: ${subscription_url}
    version: "ee-test-19.03"
  ucp:
    version: "docker/ucp:3.2.0-beta4"
    username: "admin"
    password: ${ucp_password}
  dtr:
    version: "docker/dtr:2.7.0-beta4"
 
resource:
  aws_instance:
    managers:
      quantity: 1
    registry:
      quantity: 1

Let us go through each of the sections one by one. The YAML has four top-level resources:

  • variable
  • provider
  • cluster
  • resource

The `variable` section declares variables that will be used in the cluster declaration. The ucp_password variable uses type “prompt” to indicate that `docker cluster` will request a value at cluster creation time.

The `provider` section declares that this cluster will be deployed in AWS, and references the region parameter.

The `cluster` section defines the Docker Engine and UCP versions to deploy. It also specifies the UCP admin credentials to apply to the cluster.

The `resource` section requests a single AWS instance to be configured as a UCP manager.

Spinning up Docker Enterprise 3.0 on AWS Platform

[Captains-Bay]🚩 >  docker cluster create -f cluster.yml --log-level debug
Please provide a value for ucp_password
DEBU[0009] Image Ref: sha256:ea8a7a832f839d48f478e37602cb7f67207be6f612c3a00aeafa42ca9f155214 
DEBU[0009] Generating public/private rsa key pair.      
DEBU[0010] Your identification has been saved in /data/keys/ssh/id_rsa. 
DEBU[0010] Your public key has been saved in /data/keys/ssh/id_rsa.pub. 
DEBU[0010] The key fingerprint is:                      
DEBU[0010] SHA256:CnQ4M5/f+2AOXj+azUVReBXXXXX cluster@a1f8091cbb6a 
DEBU[0010] The key's randomart image is:                
DEBU[0010] +---[RSA 2048]----+                          
DEBU[0010] |       ..      +o|                          
DEBU[0010] |     .  ..    o.+|                          
DEBU[0010] |    * .o.     .o.|                          
DEBU[0010] |   . *+.oo    o. |                          
DEBU[0010] |    ..o=S    ... |                          
DEBU[0010] |     .oo o  o.   |                          
DEBU[0010] |      ..+.Oo  .  |                          
DEBU[0010] |      o+.E.B..   |                          
DEBU[0010] |     o+oo =o=.   |                          
DEBU[0010] +----[SHA256]-----+                          
DEBU[0010] Planning cluster on aws       

Sit back and relax! This is going to take a couple of minutes to bring up your Docker Enterprise 3.0 cluster.

Troubleshooting Tips:

In case you encounter an issue around being unable to pull dockereng/cluster:v0.3.0, there is a quick workaround. The reason is that dockereng/cluster:v0.3.0 is a private Docker image and fails to be pulled from Docker Hub. You can follow the steps below:

[Captains-Bay]🚩 >  docker pull docker/cluster:v0.3.0
v0.3.0: Pulling from docker/cluster
bdf0201b3a05: Pull complete 
227965e0be77: Pull complete 
656c27da0276: Downloading  10.18MB/98.87MB
6bc49ae6e7fa: Download complete 
ddbd7883b3bf: Download complete 
90dd03face76: Download complete 
cb5cae322035: Download complete 
c0c9485136e8: Download complete 
a5ab55def61b: Download complete 
ddbd7b624dc0: Download complete

Now you need to tag it as dockereng/cluster:v0.3.0 so that the CLI plugin finds the image locally and picks it up for building the cluster.

docker tag docker/cluster:v0.3.0 dockereng/cluster:v0.3.0

Please note that this issue has been fixed in cluster CLI version 0.3.3. By now, you should be able to see the UCP login window when accessing it over the browser.


Once you upload the license, you should be able to access the Docker Enterprise 3.0 UI as shown below:

Inspecting the cluster

You can use `docker cluster ls` to list the clusters. You can also inspect a cluster using the command below:

[Captains-Bay]🚩 >  docker cluster inspect fervent_taussig
name: fervent_taussig
shortid: 67fb8cb05043
variable:
  region: us-east-1
  subscription_url: https://storebits.docker.com/ee/m/sub-a3dd83ed-d9db-440f-a175-e11347fb1037/
  ucp_password: Oracle9ias
provider:
  aws:
    region: us-east-1
    tags:
      pet: "true"
      project: CSG-DCI
    version: ~> 1.0
cluster:
  dtr:
    version: docker/dtr:2.7.0-beta4
  engine:
    storage_volume: /dev/xvdb
    url: https://storebits.docker.com/ee/m/sub-a3dd83ed-d9db-440f-a175-e11347fb1037/
    version: ee-test-19.03
  registry:
    url: https://index.docker.io/v1/
    username: ajeetraina
  ucp:
    username: admin
    version: docker/ucp:3.2.0-beta4
resource:
  aws_instance:
    managers:
      _running:
        managers_ids:
        - i-088036137bdf5564a
        managers_ips:
        - 35.170.33.58
      instance_type: t2.xlarge
      os: Ubuntu 16.04
      quantity: 1
      role: manager
    registry:
      _running:
        registry_ids:
        - i-016770ea989a55a0a
        registry_ips:
        - 18.208.208.51
      instance_type: t2.xlarge
      os: Ubuntu 16.04
      quantity: 1
      role: dtr

Using context switching to switch from Docker Desktop to the remote AWS cluster

[Captains-Bay]🚩 >  docker context ls
NAME                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                ORCHESTRATOR
default *           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://localhost:6443 (default)   swarm
fervent_taussig     fervent_taussig                           tcp://35.170.33.58:443                                           
[Captains-Bay]🚩 >  docker context use fervent_taussig
fervent_taussig
Current context is now "fervent_taussig"
[Captains-Bay]🚩 >  docker node ls
ID                            HOSTNAME                       STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
fbe5k12hyoit5qcdtatamz907 *   ip-172-31-9-19.ec2.internal    Ready               Active              Leader              19.03.0-beta4
hs8jz9vnuwqjjjukzh9s2rejc     ip-172-31-10-73.ec2.internal   Ready               Active                                  19.03.0-beta4
[Captains-Bay]🚩 >

As you can see, it shows the UCP cluster nodes running on the remote AWS cloud platform.

Open up the browser and you should be able to access the Docker Enterprise 3.2.0-beta4 release.

In my next blog post, I will talk about the docker registry as well as the docker gmsa CLI plugins. Stay tuned!




How to Deploy Apache Kafka on AWS Platform using Docker Swarm Mode?

Estimated Reading Time: 7 minutes

I am thrilled and excited to start a new open source project called “Pico”. Pico is a beta project targeted at object detection and analytics using Apache Kafka, Docker, Raspberry Pi & the AWS Rekognition service. The whole idea of the Pico project is to simplify object detection and analytics using a few Docker containers. A cluster of Raspberry Pi nodes installed at various location points is coupled with camera modules and sensors with motion detection activated on them. Docker containers running on these Raspberry Pis turn the nodes into CCTV cameras. The captured video streams are processed by Apache Kafka, and because of Kafka's replication factor the real-time data can be consumed inside containers running on any of the five nodes. AWS Rekognition then analyses the real-time video data and searches objects on screen against a collection of objects.

In a past blog, I already demonstrated how to convert a Raspberry Pi into a CCTV camera using a Docker container. The next target was to understand what Apache Kafka is and how to implement it on AWS Cloud so that real-time data can be sent to the AWS Rekognition service. I spent a considerable amount of time understanding the basics of Apache Kafka before jumping directly into Docker Compose to containerize the various services that fall under this software stack.

Apache Kafka is an open-source stream-processing software platform developed by LinkedIn and donated to the Apache Software Foundation. It is written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.

Apache Kafka is a distributed, partitioned, and replicated publish-subscribe messaging system that is used to send high volumes of data, in the form of messages, from one point to another. It replicates these messages across a cluster of servers in order to prevent data loss and allows both online and offline message consumption. This shows the fault-tolerant behaviour of Kafka in the presence of machine failures while still supporting low-latency message delivery. In a broader sense, Kafka is considered a unified platform which guarantees zero data loss and handles real-time data feeds.
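
To make the publish-subscribe model concrete, here is a minimal produce/consume round trip using Kafka's stock console tools (a sketch that assumes a broker on localhost:9092 and an existing topic named test):

# publish one message to the "test" topic
echo "hello from pico" | kafka-console-producer.sh --broker-list localhost:9092 --topic test

# read the topic back from the beginning
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning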

Architecture of Apache Kafka

The overall architecture of Kafka is shown below.

It is composed of three server machines which together act as a cluster computing platform. In a typical Kafka cluster, each server is configured to behave as a single broker that provides persistence and replication of message data. In other words, a typical Kafka cluster contains more than one broker. Essentially, the broker is the key component of a Kafka cluster, responsible for maintaining published data. Each broker instance can easily handle thousands of reads and writes per topic, as brokers have stateless behavior.



At a basic level, a Kafka broker uses topics to handle message data. A topic is first created and then divided into multiple partitions in order to balance load. The diagram above illustrates the basic concept of a topic divided into three partitions. Each partition has multiple offsets in which messages are stored. As an example, suppose a topic has a replication factor of '3'; Kafka will then create three identical replicas of each of the topic's partitions and distribute them across the cluster. In order to balance load and maintain data replication, each broker stores one or more partition replicas. If there are N brokers and N partitions, then each broker will store one partition.
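
As a concrete sketch, creating such a topic with Kafka 1.1-era tooling (which still addresses ZooKeeper directly; the topic name is illustrative):

# create a topic with 3 partitions and replication factor 3, then verify the layout
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic pico-frames
kafka-topics.sh --describe --zookeeper localhost:2181 --topic pico-frames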

What’s the role of Zookeeper?

Kafka uses ZooKeeper to maintain cluster state. ZooKeeper is a synchronization and coordination service for managing Kafka brokers, and its main functionality is to perform leader election across multiple broker instances. Under ZooKeeper, one server acts as the leader and the other two servers act as followers. The leader node handles all reads and writes per partition, while follower nodes simply follow the instructions given by the leader. If the leader fails, a follower node is automatically appointed as the new leader.

Benefits of Apache Kafka

Kafka has the following benefits.

  • Durability: Kafka allows messages to persist on disk in order to prevent data loss. It uses a distributed commit log for replicating messages across the cluster, making it a durable system.
  • Scalability: It can easily be expanded without any downtime. Since a single Kafka cluster can act as the central backbone for a large organization, we can elastically spread it to multiple clusters.
  • Reliability: It is reliable over time, as it is a distributed, replicated, and fault-tolerant messaging system.
  • Efficiency: Kafka publishes and subscribes messages efficiently, which yields high system throughput. It can store terabytes of messages without any performance impact.

In this blog post, I will showcase how to implement Apache Kafka on a 2-node Docker Swarm cluster running on AWS, provisioned from Docker Desktop for Mac using Docker Machine.

Pre-requisites:

  • Docker Desktop for Mac
  • AWS Account ( You will require t2.medium instances for this)
  • AWS CLI installed

Adding Your Credentials:

[Captains-Bay]🚩 >  cat ~/.aws/credentials
[default]
aws_access_key_id = XXXA 
aws_secret_access_key = XX

Verifying AWS Version

[Captains-Bay]🚩 >  aws --version
aws-cli/1.11.107 Python/2.7.10 Darwin/17.7.0 botocore/1.5.70

Setting up Environmental Variable

[Captains-Bay]🚩 >  export VPC=vpc-ae59f0d6
[Captains-Bay]🚩 >  export SUBNET=subnet-827651c9
[Captains-Bay]🚩 >  export ZONE=a
[Captains-Bay]🚩 >  export REGION=us-west-2

Building up the First Node using Docker Machine

[Captains-Bay]🚩 >  docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-78a22900 --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 7946/udp --amazonec2-open-port 4789/udp --amazonec2-open-port 8080 --amazonec2-open-port 443 --amazonec2-open-port 80 --amazonec2-subnet-id=subnet-72dbdb1a --amazonec2-instance-type=t2.micro kafka-swarm-node1
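
The second node is created the same way; only the machine name changes:

[Captains-Bay]🚩 >  docker-machine create  --driver amazonec2  --amazonec2-access-key=${ACCESS_KEY_ID}  --amazonec2-secret-key=${SECRET_ACCESS_KEY} --amazonec2-region=us-west-2 --amazonec2-vpc-id=vpc-ae59f0d6 --amazonec2-ami=ami-78a22900 --amazonec2-open-port 2377 --amazonec2-open-port 7946 --amazonec2-open-port 4789 --amazonec2-open-port 7946/udp --amazonec2-open-port 4789/udp --amazonec2-open-port 8080 --amazonec2-open-port 443 --amazonec2-open-port 80 --amazonec2-subnet-id=subnet-72dbdb1a --amazonec2-instance-type=t2.micro kafka-swarm-node2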

Listing out the Nodes

[Captains-Bay]🚩 >  docker-machine ls
NAME                ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER     ERRORS
kafka-swarm-node1   -        amazonec2   Running   tcp://35.161.106.158:2376           v18.09.6   
kafka-swarm-node2   -        amazonec2   Running   tcp://54.201.99.75:2376             v18.09.6 

Initializing the Docker Swarm Manager Node

ubuntu@kafka-swarm-node1:~$ sudo docker swarm init --advertise-addr 172.31.53.71 --listen-addr 172.31.53.71:2377
Swarm initialized: current node (yui9wqfu7b12hwt4ig4ribpyq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-xxxxxmr075to2v3k-decb975h5g5da7xxxx 172.31.53.71:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Adding Worker Node

ubuntu@kafka-swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-2xjkynhin0n2zl7xxxk-decb975h5g5daxxxxxxxxn 172.31.53.71:2377
This node joined a swarm as a worker.

Verifying 2-Node Docker Swarm Mode Cluster

ubuntu@kafka-swarm-node1:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
yui9wqfu7b12hwt4ig4ribpyq *   kafka-swarm-node1   Ready               Active              Leader              18.09.6
vb235xtkejim1hjdnji5luuxh     kafka-swarm-node2   Ready               Active                                  18.09.6

Installing Docker Compose

curl -L https://github.com/docker/compose/releases/download/1.25.0-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   617    0   617    0     0   2212      0 --:--:-- --:--:-- --:--:--  2211
100 15.5M  100 15.5M    0     0  8693k      0  0:00:01  0:00:01 --:--:-- 20.1M
root@kafka-swarm-node1:/home/ubuntu/dockerlabs/solution/kafka-swarm# chmod +x /usr/local/bin/docker-compose

ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker-compose version
docker-compose version 1.25.0-rc1, build 8552e8e2
docker-py version: 4.0.1
CPython version: 3.7.3
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018

Building up the Kafka Application

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/solution/kafka-swarm
ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker stack deploy -c docker-compose.yml mykafka
Creating network mykafka_default
Creating service mykafka_zkui
Creating service mykafka_broker
Creating service mykafka_manager
Creating service mykafka_producer
Creating service mykafka_zookeeper
ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$

Verifying Apache Kafka Stack

ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker stack ls
NAME                SERVICES            ORCHESTRATOR
mykafka             5                   Swarm

Verifying Apache Kafka Services

ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                                                     PORTS
t04p6i8zky4z        mykafka_broker      replicated          0/3                 qnib/plain-kafka:2018-04-25_1.1.0                                                         *:9092->9092/tcp
r5f0x9clnwix        mykafka_manager     replicated          1/1                 qnib/plain-kafka-manager:2018-04-25                                                       *:9000->9000/tcp
jzwvrt4df66b        mykafka_producer    replicated          3/3                 qnib/golang-kafka-producer:2018-05-01.12                                                  
09lkbevsktt9        mykafka_zkui        replicated          1/1                 qnib/plain-zkui@sha256:30c4aa1236ee90e4274a9059a5fa87de2ee778d9bfa3cb48c4c9aafe7cfa1a13   *:9090->9090/tcp
b1hqfk1vc4lu        mykafka_zookeeper   replicated          1/1                 qnib/plain-zookeeper:2018-04-25                                                           *:2181->2181/tcp
ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ 
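
To confirm that messages are actually flowing, one quick check is to tail the producer service logs (the service name comes from the stack above):

ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker service logs -f mykafka_producer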

In my next blog post, I will talk about the Docker Compose file which automates the overall Pico project, right from the video stream captured from the Raspberry Pi to object detection and analytics via the AWS Rekognition service. Stay tuned!
