5 Minutes to RedisInsight running on Single Node Kubernetes Cluster on Docker Desktop for Mac

If you are looking for a tool that can inspect your Redis data, monitor health, and perform runtime server configuration through a browser-based management interface, then I would recommend RedisInsight. RedisInsight is built by Redis Labs, is offered free of cost, and helps you gain insight into real-time performance metrics, inspect slow commands, and manage Redis configuration directly through the interface.

RedisInsight offers the following features:

  • An easy-to-use, browser-based interface to search keys and view and edit data
  • The only GUI tool to support Redis Cluster
  • Support for SSL/TLS-based connections
  • Memory analysis

Redis is an in-memory database with optional on-disk persistence, so it represents a different trade-off: very high write and read speeds, with the limitation that data sets cannot be larger than memory. Hence, it becomes important to keep track of memory usage. One of the most exciting features of RedisInsight is memory analysis, which helps you analyze your Redis instance, reduce memory usage, and improve application performance. Analysis can be done in two ways. In online mode, RedisInsight downloads an RDB file from your connected Redis instance and analyzes it to create a temporary file with all the keys and metadata required for the analysis. If there is a master-replica setup, RedisInsight downloads the dump from the replica instead of the master in order to avoid affecting the performance of the master. In offline mode, RedisInsight instead analyzes RDB backup files that you point it to, without touching a live instance.

RedisInsight runs on bare metal, in a VM, on your desktop, and inside a container too. Running it inside a Docker container is a one-liner:

docker run -v redisinsight:/db -p 8001:8001 redislabs/redisinsight

In this blog post, I will showcase how to run RedisInsight on a single-node Kubernetes cluster running on Docker Desktop for Mac.

Pre-requisites

  • Docker Desktop for Mac
  • Enable Kubernetes

Install Redis

brew install redis

Running Redis-Server with auth


~ % redis-server --requirepass redis12#
1825:C 29 Mar 2020 00:52:56.943 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1825:C 29 Mar 2020 00:52:56.943 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=1825, just started
1825:C 29 Mar 2020 00:52:56.943 # Configuration loaded
1825:M 29 Mar 2020 00:52:56.945 * Increased maximum number of open files to 10032 (it was originally set to 2560).
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 5.0.7 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1825
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

1825:M 29 Mar 2020 00:52:56.946 # Server initialized
1825:M 29 Mar 2020 00:52:56.947 * DB loaded from disk: 0.000 seconds
1825:M 29 Mar 2020 00:52:56.947 * Ready to accept connections
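
With the server up, you can quickly verify that authentication is working with redis-cli, using the same password we passed via --requirepass; the command below should return PONG:

redis-cli -a redis12# ping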

Push Hash keys into Redis using Python

If you’re a Python developer, here’s a bonus. Use the code snippet below to get your dataset into the Redis database. I was able to push over 1 million records into the Redis database in just 43 seconds. Interesting, isn’t it?

Copy the below content into a file called push-keys-auth.py

import redis
# Create connection object
r = redis.StrictRedis(host='localhost', port=6379, db=0, password='redis12#')
# set a value for the foo object
r.set('foo', 'bar')
# retrieve and print the value for the foo object
print(r.get('foo'))

Running Python Script

python push-keys-auth.py
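
The snippet above only inserts a single key. If you want to push hash keys in bulk, as in the million-record run mentioned earlier, a redis-py pipeline keeps network round trips to a minimum. Below is a minimal sketch; the key names and fields are made up for illustration:

import redis

# Reuse the same authenticated connection settings as above
r = redis.StrictRedis(host='localhost', port=6379, db=0, password='redis12#')

pipe = r.pipeline()
for i in range(1000000):
    # One hash key per record, e.g. user:42 -> {name, visits}
    pipe.hset('user:%d' % i, 'name', 'user-%d' % i)
    pipe.hset('user:%d' % i, 'visits', i)
    if i % 10000 == 0:
        pipe.execute()  # flush every 10,000 records
pipe.execute()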

Verifying Single Node Kubernetes Cluster

kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
docker-desktop   Ready    master   15h   v1.15.5

Running RedisInsight inside Kubernetes Pods

Copy the YAML below into a file called redisinsight.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redisinsight #deployment name
  labels:
    app: redisinsight #deployment label
spec:
  replicas: 1 #a single replica pod
  selector:
    matchLabels:
      app: redisinsight #which pods the deployment manages, as defined by the pod template
  template: #pod template
    metadata:
      labels:
        app: redisinsight #label for pod/s
    spec:
      containers:
      - name:  redisinsight #Container name (DNS_LABEL, unique)
        image: redislabs/redisinsight #repo/image
        imagePullPolicy: Always #Always pull image
        volumeMounts:
        - name: db #Pod volumes to mount into the container's filesystem. Cannot be updated.
          mountPath: /db
        ports:
        - containerPort: 8001 #exposed container port and protocol
          protocol: TCP
      volumes:
      - name: db
        emptyDir: {} # node-ephemeral volume https://kubernetes.io/docs/concepts/storage/volumes/#emptydir

Apply the YAML file and verify that the pod and deployment are running:

ajeetraina@Ajeet-Rainas-Macbook-Pro ~ % kubectl apply -f redisinsight.yaml
ajeetraina@Ajeet-Rainas-Macbook-Pro ~ % kubectl get po,svc,deploy
NAME                                READY   STATUS    RESTARTS   AGE
pod/redisinsight-6b6b9d69c5-sfztk   1/1     Running   0          4m24s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   15h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/redisinsight   1/1     1            1           4m24s
ajeetraina@Ajeet-Rainas-Macbook-Pro ~ % 

Port Forwarding

Once the deployment has been successfully applied and the rollout is complete, you can access RedisInsight. This can be accomplished by exposing the deployment as a Kubernetes Service or by using port forwarding, as in the example below:

kubectl port-forward deployment/redisinsight 8001

Open your browser and point to http://localhost:8001



As you can see above, RedisInsight displays the key we inserted using Python at the beginning of this blog.
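
If you would rather not keep a port-forward session running, you can instead expose the deployment as a Kubernetes Service. Below is a minimal NodePort sketch; the service name and nodePort value are arbitrary choices, not part of the deployment above:

apiVersion: v1
kind: Service
metadata:
  name: redisinsight-service # hypothetical name
spec:
  type: NodePort
  selector:
    app: redisinsight # matches the pod label from the deployment
  ports:
  - port: 8001
    targetPort: 8001
    nodePort: 30001 # any free port in the NodePort range (30000-32767)

With that applied, RedisInsight would be reachable at http://localhost:30001 on Docker Desktop.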

In my next blog post, I will talk about Redis Operator for Kubernetes. Stay tuned !

How I Built ARM-Based Docker Images for Raspberry Pi Using the buildx CLI Plugin on Docker Desktop


Two weeks back at DockerCon 2019 in San Francisco, Docker and Arm demonstrated the integration of Arm capabilities into Docker Desktop Community for the first time. Docker and Arm unveiled a go-to-market strategy to accelerate cloud, edge, and IoT development. The two companies plan to streamline app development tools for cloud, edge, and Internet of Things environments built on the Arm platform. The tools include AWS EC2 A1 instances based on AWS' Graviton processors (which feature 64-bit Arm Neoverse cores). Docker, in collaboration with Arm, will make new Docker-based solutions available to the Arm ecosystem as an extension of Arm's server-tailored Neoverse platform, which they say will let developers more easily leverage containers, both remote and on-premises, which is going to be pretty cool.

This integration is available today to the approximately 2 million developers using Docker Desktop Community Edition. As part of the Docker Captains programme, we were lucky to get early access to this build during the Docker Captain Summit, which took place on the first day of DockerCon 2019.

Introducing buildx

Docker 19.03.0 Beta 3 ships with a new experimental CLI plugin called "buildx". It extends the docker build command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build, with many new features like creating scoped builder instances and building against multiple nodes concurrently. As per discussions with Docker staff, the "x" in buildx might be dropped in the near future, and features and flags are subject to change until the stable release is announced.

Buildx always builds using the BuildKit engine and does not require the DOCKER_BUILDKIT=1 environment variable to start builds. The buildx build command supports the features available for docker build, including the new features in Docker 19.03 such as output configuration, inline build caching, and specifying a target platform. In addition, buildx supports features not yet available for regular docker build, like building manifest lists, distributed caching, and exporting build results to OCI image tarballs.
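
For example, exporting the build result as an OCI image tarball instead of loading it into the local image store looks roughly like this (the destination file name is arbitrary):

docker buildx build --output type=oci,dest=./image.tar .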

How do builder instances work?

Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.

New instances can be created with the docker buildx create command. This creates a new builder instance with a single node based on your current configuration. To use a remote node, you can specify the DOCKER_HOST or a remote context name while creating the new builder. After creating a new instance, you can manage its lifecycle with the inspect, stop, and rm commands and list all available builders with ls. After creating a new builder you can also append new nodes to it.

To switch between different builders, use docker buildx use <name>. After running this command, the build commands will automatically use this builder.

Docker 19.03 also features a new docker context command that can be used to give names to remote Docker API endpoints. Buildx integrates with docker context, so all of your contexts automatically get a default builder instance. When creating a new builder instance, or when adding a node to it, you can also set the context name as the target.
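
To sketch how these pieces fit together, here is how you might point a builder at a remote machine over SSH (the context name and host below are made up for illustration):

docker context create picluster --docker host=ssh://pi@node1.local
docker buildx create picluster --name remotebuilder
docker buildx use remotebuilder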

Enough theory! Do you really want to see it in action? In this blog post, I will showcase how I built ARM-based Docker images for my tiny Raspberry Pi cluster using the docker buildx utility running on Docker Desktop for Mac.

Installing Docker Desktop for Mac 2.0.4.1

Open https://www.docker.com/products/docker-desktop to get early access to Docker Desktop. Switch to the Edge release in case you have the stable version of Docker Desktop installed.

As of today, Docker Desktop 2.0.4.1 Edge Community Edition comes with Engine 19.03.0 Beta 3, Kubernetes v1.14.1, and Compose 1.24.0.

Verifying the docker buildx CLI

[Captains-Bay]🚩 >  docker buildx --help

Usage:	docker buildx COMMAND

Build with BuildKit

Management Commands:
  imagetools  Commands to work on images in registry

Commands:
  bake        Build from a file
  build       Start a build
  create      Create a new builder instance
  inspect     Inspect current builder instance
  ls          List builder instances
  rm          Remove a builder instance
  stop        Stop builder instance
  use         Set the current builder instance
  version     Show buildx version information 

Run 'docker buildx COMMAND --help' for more information on a command.

Listing all builder instances and the nodes for each instance

[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                  
  default default         running linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6

We are currently using the default builder, which is basically the old builder.

Creating a New Builder Instance

The docker buildx create command creates a new builder instance pointing to a Docker context or endpoint, where context is the name of a context from docker context ls and endpoint is the address of a Docker socket (e.g. a DOCKER_HOST value).

By default, the current docker configuration is used for determining the context/endpoint value.

Builder instances are isolated environments where builds can be invoked. All docker contexts also get the default builder instance.

Let’s create a new builder, which gives us access to some new multi-arch features.

[Captains-Bay]🚩 >  docker buildx create --help

Usage:	docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]

Create a new builder instance

Options:
      --append                 Append a node to builder instead of changing it
      --driver string          Driver to use (eg. docker-container)
      --leave                  Remove a node from builder instead of changing it
      --name string            Builder instance name
      --node string            Create/modify node with given name
      --platform stringArray   Fixed platforms for current node
      --use                    Set the current builder instance

Creating a new builder called “testbuilder”

[Captains-Bay]🚩 >  docker buildx create --name testbuilder
testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder    docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default *      docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Switching to “testbuilder” builder instance

[Captains-Bay]🚩 >  docker buildx use testbuilder
[Captains-Bay]🚩 >  docker buildx ls
NAME/NODE      DRIVER/ENDPOINT             STATUS   PLATFORMS
testbuilder *  docker-container                     
  testbuilder0 unix:///var/run/docker.sock inactive 
default        docker                               
  default      default                     running  linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 > 

Here I created a new builder instance with the name testbuilder, switched to it, and inspected it. Note that --bootstrap isn't needed; it just starts the build container immediately. Next we will test the workflow, making sure we can build, push, and run multi-arch images.

What is --bootstrap all about?

The docker buildx inspect --bootstrap command ensures that the builder is running before inspecting it. If the driver is docker-container, then --bootstrap starts the BuildKit container and waits until it is operational. Bootstrapping is done automatically during builds, so it is not strictly necessary. The same BuildKit container is used during the lifetime of the associated builder node (as displayed in buildx ls).

[Captains-Bay]🚩 >  docker buildx inspect --bootstrap
[+] Building 22.4s (1/1) FINISHED                                                                                        
 => [internal] booting buildkit                                                                                    22.4s
 => => pulling image moby/buildkit:master                                                                          21.5s
 => => creating container buildx_buildkit_testbuilder0                                                              0.9s
Name:   testbuilder
Driver: docker-container

Nodes:
Name:      testbuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
[Captains-Bay]🚩 >  

Authenticating with Dockerhub

[Captains-Bay]🚩 >  docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password: 
Login Succeeded

Cloning the Repository

[Captains-Bay]🚩 >  git clone https://github.com/collabnix/docker-cctv-raspbian
Cloning into 'docker-cctv-raspbian'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 20), reused 32 (delta 14), pack-reused 0
Unpacking objects: 100% (47/47), done.

Peeping into Dockerfile

[Captains-Bay]🚩 >  cat Dockerfile 
FROM resin/rpi-raspbian:latest

RUN apt update && apt upgrade && apt install motion
RUN mkdir /mnt/motion && chown motion /mnt/motion
COPY motion.conf /etc/motion/motion.conf

VOLUME /mnt/motion
EXPOSE 8081
ENTRYPOINT ["motion"]
[Captains-Bay]🚩 >

Building ARM-based Docker Image

[Captains-Bay]🚩 >  docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t ajeetraina/docker-cctv-raspbian --push .
[+] Building 254.2s (12/21)                                                                                              
[+] Building 2174.9s (22/22) FINISHED                                                                                    
 => [internal] load build definition from Dockerfile                                                                0.1s
 => => transferring dockerfile: 268B                                                                                0.0s
 => [internal] load .dockerignore                                                                                   0.1s
 => => transferring context: 2B                                                                                     0.0s
 => [linux/arm/v7 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                   8.8s
 => [linux/arm64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 internal] load metadata for docker.io/resin/rpi-raspbian:latest                                    8.8s
 => [linux/amd64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a99  0.1s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [internal] load build context                                                                                   0.1s
 => => transferring context: 27.72kB                                                                                0.0s
 => [linux/arm64 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  17.3s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => => sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5f88d055b 2.81kB / 2.81kB                      0.0s
 => => sha256:67a3f53689ffd00b59f9a7b7f8af0da5b36c9e11e12ba43b35969586e0121303 1.56kB / 1.56kB                      2.3s
 => => sha256:ef16d706debb7a56d89bf14958aca2da76bd3d0a42a6762c37302c9527a47d10 305B / 305B                          3.5s
 => => sha256:2734b459704280a5238405384487095392e5414d973554b27545cec0e984a0f2 256B / 256B                          3.1s
 => => sha256:9d5b46a00008070b009b685af5846c225ad170f38768416b63917ea7ac94062d 7.75kB / 7.75kB                      4.4s
 => => sha256:5fb2b6f7daac67a8465e8201c76828550b811619be473ab594972da35b4b7ee7 354B / 354B                          4.4s
 => => sha256:a04aef7b1e2fc4905fea2a685c63bc1eeb94c86149fd1286459206c22794e610 177B / 177B                          4.4s
 => => sha256:3eae71a21b9db8bd54d6a189b7587a10f52d6ffee3d868705a89974fe574c268 234B / 234B                          2.3s
 => => sha256:28f1ee4d4d5aa8bb96b3ba6a5e2177b2b58b595edaf601b9aae69fd02f78a6c6 7.48kB / 7.48kB                      0.0s
 => => sha256:6bddb275e70b0083d76083d01be2c3da11f67f526a123adcc980c5a3260d46e8 51.54MB / 51.54MB                   13.4s
 => => sha256:873755612f304f066db70c4015fdeadc9a81c0e6e25fb1aa833aeba80a7aeffc 229B / 229B                          3.9s
 => => sha256:78ba3f0466312c019467b178339a89120f2dce298d7c3d5e6840b3566749f5c0 250B / 250B                          3.2s
 => => sha256:b98db37cf25231afe68852e2cb21fb8aa465bb7a32ecc9afc6dec100ec0ba9b0 367B / 367B                          3.5s
 => => sha256:6b6c68e7ac8567569cee8da92431637e561e7aef5addb70373401d0887447a00 363B / 363B                          4.4s
 => => unpacking docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15  3.8s
 => [linux/arm/v7 1/4] FROM docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a9  0.0s
 => => resolve docker.io/resin/rpi-raspbian:latest@sha256:18be79e066099cce5c676f7733a6ccb4733ed5962a996424ba8a15d5  0.0s
 => [linux/amd64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 2/4] RUN cat /.resin/deprecation-warning                                                          0.5s
 => [linux/arm64 2/4] RUN cat /.resin/deprecation-warning                                                           0.6s
 => [linux/arm/v7 3/4] RUN apt update && apt upgrade && apt install motion                                        481.2s
 => [linux/amd64 3/4] RUN apt update && apt upgrade && apt install motion                                        1431.5s
 => [linux/arm64 3/4] RUN apt update && apt upgrade && apt install motion                                        1665.5s
 => [linux/arm/v7 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                            0.6s
 => [linux/arm/v7 5/4] COPY motion.conf /etc/motion/motion.conf                                                     0.1s
 => [linux/amd64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/amd64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => [linux/arm64 4/4] RUN mkdir /mnt/motion && chown motion /mnt/motion                                             0.6s
 => [linux/arm64 5/4] COPY motion.conf /etc/motion/motion.conf                                                      0.1s
 => exporting to image                                                                                            481.7s
 => => exporting layers                                                                                            19.6s
 => => exporting manifest sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702                   0.0s
 => => exporting config sha256:ebb6e951b8f206d258e8552313633c35f4fe4c82fe7a7fcc51475022ae089c2d                     0.0s
 => => exporting manifest sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632                   0.0s
 => => exporting config sha256:a83b02bf9cbcb408110c4b773f4e5edde04f35851e2a42d4ff1d947c132bed6d                     0.0s
 => => exporting manifest sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2                   0.0s
 => => exporting config sha256:86c2e637fe39036d51779c3bb5f800d3e8da14122907c5fd03bdace47b03bb38                     0.0s
 => => exporting manifest list sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54              0.0s
 => => pushing layers                                                                                             455.4s
 => => pushing manifest for docker.io/ajeetraina/docker-cctv-raspbian:latest                                        6.4s

Awesome, it worked! The --platform flag enabled buildx to generate Linux images for Intel 64-bit, Arm 32-bit, and Arm 64-bit architectures. The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub. Cool, isn't it?

What is this ImageTools all about?

Imagetools contains commands for working with manifest lists in the registry. These commands are useful for inspecting multi-platform build results. Its create subcommand builds a new manifest list based on source manifests. The source manifests can be manifest lists or single-platform distribution manifests, and they must already exist in the registry where the new manifest list is created. If only one source is specified, create performs a carbon copy.
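
For instance, stitching two single-platform tags into one multi-arch tag might look like this (the tags here are hypothetical):

docker buildx imagetools create -t ajeetraina/demo:latest ajeetraina/demo:amd64 ajeetraina/demo:arm64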

Let’s use imagetools to inspect what we did.

[Captains-Bay]🚩 >  docker buildx imagetools inspect docker.io/ajeetraina/docker-cctv-raspbian:latest
Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
           
Manifests: 
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:55ddd5c67557190344efea7327a5b2f8de0bdc8ba184f856b1086baac6bed702
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:7b6919de7edd7d1be695877f827e7ee6d302d3acfd3f69ed73bf2ffaa4a80632
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64
             
  Name:      docker.io/ajeetraina/docker-cctv-raspbian:latest@sha256:d379c0f79b6a72a770124bb2ee94d91d9afef031c81ac20ea7a4c51f4f13ddf2
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm/v7
[Captains-Bay]🚩 >

The image is now available on Docker Hub with the tag ajeetraina/docker-cctv-raspbian:latest. You can run a container from that image on Intel laptops, Amazon EC2 A1 instances, Raspberry Pis, and more. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 A1 instances run the 64-bit Arm version.

Verifying this Image on Raspberry Pi Node

root@node2:/home/pi# cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
root@node2:/home/pi#

Verify the Docker version

root@node2:/home/pi# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:57:21 2018
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:17:57 2018
  OS/Arch:          linux/arm
  Experimental:     false

Running the Docker Image

root@node2:/home/pi# docker pull ajeetraina/docker-cctv-raspbian:latest
latest: Pulling from ajeetraina/docker-cctv-raspbian
6bddb275e70b: Already exists
ef16d706debb: Already exists
2734b4597042: Already exists
873755612f30: Already exists
67a3f53689ff: Already exists
5fb2b6f7daac: Already exists
a04aef7b1e2f: Already exists
9d5b46a00008: Already exists
b98db37cf252: Already exists
78ba3f046631: Already exists
6b6c68e7ac85: Already exists
3eae71a21b9d: Already exists
db08b1848bc3: Pull complete
9c84b2387994: Pull complete
3579f37869a5: Pull complete
Digest: sha256:daec2787002024c07addf56a8099a866d7f1cd85ed8c33818beb64a5a208cd54
Status: Downloaded newer image for ajeetraina/docker-cctv-raspbian:latest
root@node2:/home/pi#


Inspecting the Docker Image

 docker image inspect ajeetraina/docker-cctv-raspbian:latest | grep arm
                "QEMU_CPU=arm1176",
        "Architecture": "arm",

Running the Docker Container

 docker run -dit -p 8000:8000 ajeetraina/docker-cctv-raspbian:latest
c43f25cd06672883478908e71ad6f044766270fcbf413e69ad63c8020610816f
root@node2:/home/pi# docker ps
CONTAINER ID        IMAGE                                    COMMAND             CREATED             STATUS              PORTS                              NAMES
c43f25cd0667        ajeetraina/docker-cctv-raspbian:latest   "motion"            7 seconds ago       Up 2 seconds        0.0.0.0:8000->8000/tcp, 8081/tcp   zen_brown

Verifying the Logs

root@node2:/home/pi# docker logs -f c43
[0] [NTC] [ALL] conf_load: Processing thread 0 - config file /etc/motion/motion.conf
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[0] [NTC] [ALL] motion_startup: Logging to syslog
[0] [NTC] [ALL] motion_startup: Using log type (ALL) log level (NTC)
[0] [NTC] [ENC] ffmpeg_init: ffmpeg LIBAVCODEC_BUILD 3670016 LIBAVFORMAT_BUILD 3670272
[0] [NTC] [ALL] main: Thread 1 is from /etc/motion/motion.conf
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video0 input -1
[0] [NTC] [ALL] main: Stream port 8081
[0] [NTC] [ALL] main: Waiting for threads to finish, pid: 1
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[1] [NTC] [VID] vid_v4lx_start: Using videodevice /dev/video0 and input -1
[1] [ALR] [VID] vid_v4lx_start: Failed to open video device /dev/video0:
[1] [WRN] [ALL] motion_init: Could not fetch initial image from camera Motion continues using width and height from config file(s)
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [STR] http_bindsock: motion-stream testing : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [STR] http_bindsock: motion-stream Bound : IPV4 addr: 0.0.0.0 port: 8081
[1] [NTC] [ALL] motion_init: Started motion-stream server in port 8081 auth Disabled
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 3 items
[0] [NTC] [STR] httpd_run: motion-httpd testing : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd Bound : IPV4 addr: 127.0.0.1 port: 8080
[0] [NTC] [STR] httpd_run: motion-httpd/3.2.12+git20140228 running, accepting connections
[0] [NTC] [STR] httpd_run: motion-httpd: waiting for data on 127.0.0.1 port TCP 8080

Hence, it is now easy to build ARM-based Docker images on Docker Desktop and share them on Docker Hub, so that anyone can pull the image on their Raspberry Pi and run it flawlessly.


Kubernetes Application Deployment Made Easy using Helm on Docker for Mac 18.05.0 CE

 

Docker for Mac 18.05.0 CE went GA last month. With this release, you can now select your orchestrator directly from the UI in the "Kubernetes" pane, which allows docker stack commands to deploy to Swarm clusters even if Kubernetes is enabled in Docker for Mac. This is the first time this feature has been introduced in any Desktop edition. To try it out, ensure that you are using the Edge release of Docker for Mac 18.05.0 CE. Once you update Docker for Mac, you can find this new feature by opening the Preferences pane and selecting Kubernetes, as shown below:

 

 

 

Whenever you select your choice of orchestrator, it updates the ~/.docker/config.json file in the backend, as shown below:
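
From memory, the relevant entry looks something like this (treat the exact key name as an assumption):

{
  "stackOrchestrator": "kubernetes"
}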

 

Docker for Mac is used every day by hundreds of thousands of developers to build, test, and debug containerized apps in their local and dev environments. Developers building docker-compose and Swarm-based apps, as well as apps destined for deployment on Kubernetes, now get a simple-to-use development system that takes optimal advantage of their laptop or workstation. All container tasks (build, run, and push) run on the same Docker instance with a shared set of images, volumes, and containers. With the current release it is also much simpler to install, so you can have Docker containers running on your Mac in just a few minutes.

Check out the curated list of blogs around Docker for Mac:

 

Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes

Docker for Mac provides the docker stack command to deploy your application to both Swarm and Kubernetes. This is very useful for Docker Swarm users, as they can use the same Swarm CLI to bring up workloads on Kubernetes. But here is an extra bonus: Docker for Mac now works flawlessly with the Helm package manager.

Why Yet another Package Manager?

Let's accept the fact that Kubernetes can become very complex, with all the objects you need to handle (such as ConfigMaps, Services, Pods, and PersistentVolumes) in addition to the number of releases you need to manage. These can all be managed with Kubernetes Helm, which offers a simple way to package everything into one simple application and advertise what you can configure.

In case you are completely new to it: Helm is an open source project that enables developers to create packages of containerized apps to make installation much simpler. Helm is the package manager for Kubernetes, and it's the best way to find, share, and deploy software to k8s. The project was initially created by Deis and has since been donated to the Cloud Native Computing Foundation (CNCF).

Users can install Helm with one click or configure it to suit their organization's needs. For example, if you want to package and release version 1.0 of an application, making only certain parts configurable, this can be done with Helm. Then, with version 2.0, additional parts can be made configurable.
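
Those configurable parts are exposed as chart values, which you can override at install time. A quick sketch (the release name is made up, and the value names are illustrative and chart-specific):

helm install stable/wordpress --name custom-wp --set wordpressUsername=admin --set wordpressBlogName="My Blog"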

Up until now, it was a sub-project of Kubernetes, the popular container orchestration tool, but as of today it is a stand-alone project.

 

Helm is built on three important concepts:

  • Charts – a bundle of information necessary to create an instance of a Kubernetes application
  • Config – contains configuration information that can be merged into a packaged chart to create a releasable object
  • Release – a running instance of a chart, combined with a specific config

Architecture of Helm:

Architecturally it’s built on two major components:

 

Helm Client, a command line tool with the following responsibilities:

  • Interacting with the Tiller server
  • Sending charts to be installed
  • Upgrading or uninstalling existing releases
  • Managing repositories

Tiller Server, an in-cluster server with the following responsibilities:

  • Interacting with the Helm client
  • Interfacing with the Kubernetes API server
  • Combining a chart and configuration to build a release
  • Installing charts and tracking the release
  • Upgrading and uninstalling charts

Both the Helm client and Tiller are written in Go and use gRPC to interact with each other. Tiller (the server part, running inside Kubernetes) provides a gRPC server to connect with the client and uses the Kubernetes client library to communicate with the API server. It does not require its own database, as the information is stored within Kubernetes as ConfigMaps.
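
You can see this for yourself after installing a release: Tiller keeps its release records as ConfigMaps in its own namespace. Assuming the default kube-system namespace and Tiller's usual OWNER label, something like this should list them:

kubectl get configmaps --namespace kube-system -l "OWNER=TILLER"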

 

Installing Helm

Pre-requisites:

  • Docker for Mac 18.05.0 CE – Edge Release
  • Enable Kubernetes under Preference Pane UI

To install Helm, you just need a one-liner on macOS:

[Captains-Bay]? >  brew install kubernetes-helm
[Captains-Bay]? >  helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

	$ helm init

This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.

Common actions from this point include:

- helm search:    search for charts
- helm fetch:     download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment:
  $HELM_HOME          set an alternative location for Helm files. By default, these are stored in ~/.helm
  $HELM_HOST          set an alternative Tiller host. The format is host:port
  $HELM_NO_PLUGINS    disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
  $TILLER_NAMESPACE   set an alternative Tiller namespace (default "kube-system")
  $KUBECONFIG         set an alternative Kubernetes configuration file (default "~/.kube/config")

Usage:
  helm [command]

Available Commands:
  completion  Generate autocompletions script for the specified shell (bash or zsh)
  create      create a new chart with the given name
  delete      given a release name, delete the release from Kubernetes
  dependency  manage a chart's dependencies
  fetch       download a chart from a repository and (optionally) unpack it in local directory
  get         download a named release
  history     fetch release history
  home        displays the location of HELM_HOME
  init        initialize Helm on both client and server
  inspect     inspect a chart
  install     install a chart archive
  lint        examines a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      add, list, or remove Helm plugins
  repo        add, list, remove, update, and index chart repositories
  reset       uninstalls Tiller from a cluster
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  serve       start a local http web server
  status      displays the status of the named release
  template    locally render templates
  test        test a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client/server version information

Flags:
      --debug                           enable verbose output
  -h, --help                            help for helm
      --home string                     location of your Helm config. Overrides $HELM_HOME (default "/Users/ajeetraina/.helm")
      --host string                     address of Tiller. Overrides $HELM_HOST
      --kube-context string             name of the kubeconfig context to use
      --tiller-connection-timeout int   the duration (in seconds) Helm will wait to establish a connection to tiller (default 300)
      --tiller-namespace string         namespace of Tiller (default "kube-system")

Use "helm [command] --help" for more information about a command.

Verify the Helm version. If the server and client versions don't match, you need to upgrade Tiller to deploy applications seamlessly (as shown below):

[Captains-Bay]? >  helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
[Captains-Bay]? >  helm init --upgrade
$HELM_HOME has been configured at /Users/ajeetraina/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
[Captains-Bay]? >  helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Installing WordPress Application using Helm

Say you want to install a WordPress application using Helm. First you need to update the repository, and then you can search for the application using the helm search command as shown below:

[Captains-Bay]? > helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

[Captains-Bay]? >  helm search wordpress
NAME            	CHART VERSION	APP VERSION	DESCRIPTION
stable/wordpress	1.0.2        	4.9.4      	Web publishing platform for building blogs and ...

Just a one-liner and your WordPress application is up and running:

[Captains-Bay]? >  helm install stable/wordpress --name mywp
NAME:   mywp
LAST DEPLOYED: Sat Jun  2 07:19:25 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mywp-mariadb    1        1        1           0          0s
mywp-wordpress  1        1        1           0          0s

==> v1/Pod(related)
NAME                             READY  STATUS             RESTARTS  AGE
mywp-mariadb-b689ddf74-mlprh     0/1    Init:0/1           0         0s
mywp-wordpress-774555bd4b-hcdc2  0/1    ContainerCreating  0         0s

==> v1/Secret
NAME            TYPE    DATA  AGE
mywp-mariadb    Opaque  2     1s
mywp-wordpress  Opaque  2     1s

==> v1/ConfigMap
NAME                DATA  AGE
mywp-mariadb        1     1s
mywp-mariadb-tests  1     1s

==> v1/PersistentVolumeClaim
NAME            STATUS   VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mywp-mariadb    Bound    pvc-2e1f1122-6607-11e8-8d79-025000000001  8Gi       RWO           hostpath      1s
mywp-wordpress  Pending  hostpath                                  1s

==> v1/Service
NAME            TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
mywp-mariadb    ClusterIP     10.98.65.236    <none>       3306/TCP                    0s
mywp-wordpress  LoadBalancer  10.109.204.199  localhost    80:31016/TCP,443:30638/TCP  0s


NOTES:
1. Get the WordPress URL:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace default -w mywp-wordpress'

  export SERVICE_IP=$(kubectl get svc --namespace default mywp-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP/admin

2. Login with the following credentials to see your blog

  echo Username: user
  echo Password: $(kubectl get secret --namespace default mywp-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

[Captains-Bay]? >

Checking the Status of WordPress Application

[Captains-Bay]? >  kubectl get svc --namespace default -w mywp-wordpress
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
mywp-wordpress   LoadBalancer   10.109.204.199   localhost     80:31016/TCP,443:30638/TCP   47s

That's it. Just browse to WordPress using your localhost IP:

 

Cleaning up WordPress

[Captains-Bay] > helm delete mywp
release "mywp" deleted

Installing Prometheus Stack using Helm

[Captains-Bay]? >  helm install stable/prometheus
NAME:   hasty-ladybug
LAST DEPLOYED: Sun Jun  3 09:00:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                                   STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
hasty-ladybug-prometheus-alertmanager  Bound   pvc-7732a80b-66de-11e8-b7d4-025000000001  2Gi       RWO           hostpath      2s
hasty-ladybug-prometheus-server        Bound   pvc-773541f3-66de-11e8-b7d4-025000000001  8Gi       RWO           hostpath      2s

==> v1/ServiceAccount
NAME                                         SECRETS  AGE
hasty-ladybug-prometheus-alertmanager        1        2s
hasty-ladybug-prometheus-kube-state-metrics  1        2s
hasty-ladybug-prometheus-node-exporter       1        2s
hasty-ladybug-prometheus-pushgateway         1        2s
hasty-ladybug-prometheus-server              1        2s

==> v1beta1/ClusterRoleBinding
NAME                                         AGE
hasty-ladybug-prometheus-kube-state-metrics  2s
hasty-ladybug-prometheus-server              2s

==> v1beta1/DaemonSet
NAME                                    DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
hasty-ladybug-prometheus-node-exporter  1        1        0      1           0          <none>         2s

==> v1/Pod(related)
NAME                                                          READY  STATUS             RESTARTS  AGE
hasty-ladybug-prometheus-node-exporter-9ggqj                  0/1    ContainerCreating  0         2s
hasty-ladybug-prometheus-alertmanager-5c67b8b874-4xxtj        0/2    ContainerCreating  0         2s
hasty-ladybug-prometheus-kube-state-metrics-5cbcd4d86c-788p4  0/1    ContainerCreating  0         2s
hasty-ladybug-prometheus-pushgateway-c45b7fd6f-2wwzm          0/1    Pending            0         2s
hasty-ladybug-prometheus-server-799d6c7c75-jps8k              0/2    Init:0/1           0         2s

==> v1/ConfigMap
NAME                                   DATA  AGE
hasty-ladybug-prometheus-alertmanager  1     2s
hasty-ladybug-prometheus-server        3     2s

==> v1beta1/ClusterRole
NAME                                         AGE
hasty-ladybug-prometheus-kube-state-metrics  2s
hasty-ladybug-prometheus-server              2s

==> v1/Service
NAME                                         TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
hasty-ladybug-prometheus-alertmanager        ClusterIP  10.96.193.91   <none>       80/TCP    2s
hasty-ladybug-prometheus-kube-state-metrics  ClusterIP  None           <none>       80/TCP    2s
hasty-ladybug-prometheus-node-exporter       ClusterIP  None           <none>       9100/TCP  2s
hasty-ladybug-prometheus-pushgateway         ClusterIP  10.97.92.108   <none>       9091/TCP  2s
hasty-ladybug-prometheus-server              ClusterIP  10.96.118.138  <none>       80/TCP    2s

==> v1beta1/Deployment
NAME                                         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
hasty-ladybug-prometheus-alertmanager        1        1        1           0          2s
hasty-ladybug-prometheus-kube-state-metrics  1        1        1           0          2s
hasty-ladybug-prometheus-pushgateway         1        1        1           0          2s
hasty-ladybug-prometheus-server              1        1        1           0          2s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[Captains-Bay]? >  helm ls
NAME         	REVISION	UPDATED                 	STATUS  	CHART           	NAMESPACE
hasty-ladybug	1       	Sun Jun  3 09:00:30 2018	DEPLOYED	prometheus-6.7.0	default
mywp         	1       	Sat Jun  2 07:19:25 2018	DEPLOYED	wordpress-1.0.2 	default

Accessing Prometheus UI under Web Browser

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090

Now you can access Prometheus:

open http://localhost:9090

 

Cleaning Up

[Captains-Bay]? >  helm delete hasty-ladybug
release "hasty-ladybug" deleted

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter at @ajeetsraina.

If you are looking for contribution or discussion, join me on the Docker Community Slack channel.