How I built Elastic Stack for Docker Swarm using Docker Application Packages (docker-app)

Estimated Reading Time: 7 minutes

 

Let’s begin with the problem statement!

Docker Hub is a cloud-based registry service which allows you to link to code repositories, build your images, test them, and store manually pushed images so you can deploy them to your hosts. It provides a centralized resource for container image discovery, distribution and change management, as well as workflow automation throughout the development pipeline. We share Docker images all the time, but let’s agree that we don’t have a good way of sharing the multi-service applications that use them.

Let us take the example of the Elastic Stack. Built on an open source foundation, the Elastic Stack lets you reliably and securely take data from any source, in any format, and search, analyze, and visualize it in real time with the help of Elasticsearch, Logstash, Kibana and multiple other tools and techniques. To build these tools in the form of containers, one needs to start by building a Docker image for each of them; the recommended way is to construct a Dockerfile for each tool. Docker Compose, in turn, uses these images to build the required services. Whenever the docker stack deploy CLI is used to deploy the application stack, these Docker images are pulled from Docker Hub the first time and then picked up locally once downloaded to your system. But what if you could upload your whole application stack to Docker Hub? Yes, it’s possible today, and docker-app is the tool which makes Compose-based applications shareable on Docker Hub and DTR.

Docker-app v0.5.0 is now available!

Docker Application Package v0.5.0 is the latest offering from Docker, Inc. You can download it from this link. The binaries are available for the Linux, Windows and macOS platforms. If you are looking for the source code, this is the direct link.

   

The docker-app v0.5.0 comes with notable features and improvements which are listed below:

  • The improved docker-app inspect command now shows a summary of services, networks, volumes and secrets.

  • The docker-app push CLI now works on Windows and bypasses the local docker daemon by talking directly to the registry.
  • The docker-app save and docker-app ls commands have been deprecated.
  • All commands now accept an application package as a URL.
  • The docker-app push command now accepts a custom repository name.
  • The docker-app completion command can generate zsh completion in addition to bash.

In my last blog post, I talked about docker-app for the first time and showcased its usage soon after I returned from DockerCon. In this post, I will show how I built the Elastic Stack using docker-app on a 5-node Docker Swarm cluster.

Prerequisite:

  • Open up https://play-with-docker.com and click on the icon next to Instances to choose the template with 3 Managers & 2 Worker Nodes

Deploy 5 Node Docker Swarm Cluster

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
iy9mbeduxd4mmjxoikbn5ulds *   manager1            Ready               Active              Reachable           18.03.1-ce
mx916kgqg6gfgqdr2gn1eksxy     manager2            Ready               Active              Leader              18.03.1-ce
xaeq943o84g9spy6mebj64tw3     manager3            Ready               Active              Reachable           18.03.1-ce
8umdv6m82nrpevuris1e45wnq     worker1             Ready               Active                                  18.03.1-ce
o3yobqgg7wjvjw2ec5ythszgw     worker2             Ready               Active                                  18.03.1-ce

Cloning the Repository

$ git clone https://github.com/ajeetraina/app
Cloning into 'app'...
remote: Enumerating objects: 134, done.
remote: Counting objects: 100% (134/134), done.
remote: Compressing objects: 100% (134/134), done.
remote: Total 14511 (delta 95), reused 0 (delta 0), pack-reused 14377
Receiving objects: 100% (14511/14511), 17.37 MiB | 13.35 MiB/s, done.
Resolving deltas: 100% (5391/5391), done.

Install Docker-app

$ cd app/examples/elk/
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ ls
README.md          devel              elk.dockerapp      install-dockerapp  prod
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ chmod +x install-dockerapp
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ sh install-dockerapp
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.131.187:443)
docker-app-linux.tar 100% |*************************************************************|  8895k  0:00:00 ETA
[manager1] (local) root@192.168.0.30 ~/app/examples/elk

Verifying Docker-app Version

$ docker-app version
Version:      v0.4.0
Git commit:   525d93bc
Built:        Tue Aug 21 13:02:46 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

I assume you already have a Docker Compose file for the ELK stack application. If not, you can download a sample file from this link. Place this YAML file under the same directory (app/examples/elk/). Now, with docker-app installed, let’s create an Application Package based on this Compose file:

$ docker-app init elk

Once you run the above command, it creates a new directory elk.dockerapp/ that contains three different YAML files:

docker-compose.yml  elk.dockerapp
[manager1] (local) root@192.168.0.30 ~/myelk
$ tree elk.dockerapp/
elk.dockerapp/
├── docker-compose.yml
├── metadata.yml
└── settings.yml

0 directories, 3 files

Edit each of these files so that they look similar to the ones placed under this link.
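
For reference, a settings.yml along the following lines would line up with the defaults reported by docker-app inspect later in this post (the logstash image and replica keys are my assumption, by analogy with the other two services):

elasticsearch:
  deploy:
    mode: replicated
    replicas: 2
  image:
    name: elasticsearch:5
kibana:
  deploy:
    mode: replicated
    replicas: 2
  image:
    name: kibana:latest
  port: 5601
logstash:
  deploy:
    mode: replicated
    replicas: 2
  image:
    name: logstash:alpine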

Rendering Docker Compose file

$ docker-app render elk
version: "3.4"
services:
  elasticsearch:
    command:
    - elasticsearch
    - -Enetwork.host=0.0.0.0
    - -Ediscovery.zen.ping.unicast.hosts=elasticsearch
    deploy:
      mode: replicated
      replicas: 2
    environment:
      ES_JAVA_OPTS: -Xms2g -Xmx2g
    image: elasticsearch:5
    networks:
      elk: null
    volumes:
    - type: volume
      target: /usr/share/elasticsearch/data
  kibana:
    deploy:
      mode: replicated
      replicas: 2
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    healthcheck:
      test:
      - CMD-SHELL
      - wget -qO- http://localhost:5601 > /dev/null
      interval: 30s
      retries: 3
    image: kibana:latest
    networks:
      elk: null
    ports:
    - mode: ingress
      target: 5601
      published: 5601
      protocol: tcp
  logstash:
    command:
    - sh
    - -c
    - logstash -e 'input { syslog  { type => syslog port => 10514   } gelf { } } output
      { stdout { codec => rubydebug } elasticsearch { hosts => [ "elasticsearch" ]
      } }'
    deploy:
      mode: replicated
      replicas: 2
    hostname: logstash
    image: logstash:alpine
    networks:
      elk: null
    ports:
    - mode: ingress
      target: 10514
      published: 10514
      protocol: tcp
    - mode: ingress
      target: 10514
      published: 10514
      protocol: udp
    - mode: ingress
      target: 12201
      published: 12201
      protocol: udp
networks:
  elk: {}

Setting the kernel parameter for ELK stack

sysctl -w vm.max_map_count=262144
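
Note that sysctl -w only applies until the next reboot. To make the setting persistent (the same approach is used later in this guide), append it to /etc/sysctl.conf:

echo 'vm.max_map_count=262144' >> /etc/sysctl.conf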

Deploying the Application Stack


[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app deploy elk --settings-files elk.dockerapp/settings.yml
Creating network elk_elk
Creating service elk_kibana
Creating service elk_logstash
Creating service elk_elasticsearch
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$

Inspecting ELK Stack

[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app inspect elk
myelk 0.1.0
Maintained by: Ajeet_Raina <ajeetraina@gmail.com>

ELK using Dockerapp

Setting                       Default
-------                       -------
elasticsearch.deploy.mode     replicated
elasticsearch.deploy.replicas 2
elasticsearch.image.name      elasticsearch:5
kibana.deploy.mode            replicated
kibana.deploy.replicas        2
kibana.image.name             kibana:latest
kibana.port                   5601
logstash.deploy.mode          replicated

Verifying Stack services are up & running

[manager1] (local) root@192.168.0.30 ~/app/examples/elk/docker101/play-with-docker/visualizer
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
uk2whax6f3jq        elk_elasticsearch   replicated          2/2                 elasticsearch:5
nm4p3yswvh5y        elk_kibana          replicated          2/2                 kibana:latest       *:5601->5601/tcp
g5ubng6rhcyp        elk_logstash        replicated          2/2                 logstash:alpine     *:10514->10514/tcp, *:10514->10514/udp, *:12201->12201/udp
[manager1] (local) root@192.168.0.30 ~/app/examples/elk


Logging in to Docker Hub

[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to
 https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded

Pushing the App Package to Docker Hub


[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$ docker-app push --namespace ajeetraina --tag 1.0.2
The push refers to repository [docker.io/ajeetraina/elk.dockerapp]
15e73d68a400: Pushed
1.0.2: digest: sha256:c5a8e3b7e2c7a5566a3e4247f8171516033e7e9791dfdb6ebe622d3830884d9b size: 524
[manager1] (local) root@192.168.0.30 ~/app/examples/elk
$

Important Note: If you are using docker-app v0.5.0, you might face an issue pulling the image from Docker Hub, as it reports an unsupported OS error message. Here’s a link to this open issue.

Testing the Application Package

Open up a new PWD window. Install docker-app as shown above and try to run the below command:

docker-app deploy ajeetraina/elk.dockerapp:1.0.2

This should bring up your complete Elastic Stack Platform.
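
To confirm that everything came up cleanly, the usual Swarm checks apply:

docker stack ls
docker service ls

The elasticsearch, logstash and kibana services should report their full replica counts, just like in the docker service ls output shown earlier.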

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter @ajeetsraina.

If you want to keep track of the latest Docker-related information, follow me at https://www.linkedin.com/in/ajeetsraina/.

A First Look at Docker Application Package “docker-app”

Estimated Reading Time: 10 minutes

 

Did you know? There are more than 300,000 Docker Compose files on GitHub.

Docker Compose is a tool for defining and running multi-container Docker applications. It is an amazing developer tool for creating the development environment for your application stack. It allows you to define each component of your application following a clear and simple syntax in YAML files. It works in all environments: production, staging, development, testing, as well as CI workflows. Though Compose files make it easy to describe a set of related services, a couple of problems have emerged in the past. One of the major concerns has been deploying the application to multiple environments with small configuration differences.

Consider a scenario where you have separate development, test, and production environments for your web application. In the development environment, your team might be spending time building up the web application (say, WordPress), developing WP plugins and templates, debugging issues, etc. When you are in development you’ll probably want to check your code changes in real time. The usual way to do this is mounting a volume with your source code into the container that has the runtime of your application. But for production this works differently. Before you host your web application in the production environment, you might want to turn off debug mode and host it under the right port so as to test your application’s usability and accessibility. In production you have a cluster with multiple nodes, and in most cases a volume is local to the node where your container (or service) is running, so you cannot mount the source code without complex stuff that involves code synchronization, signals, etc. In a nutshell, this might require multiple Docker Compose files for each environment, and as your number of service applications increases, it becomes more cumbersome to manage those pieces of Compose files. Hence, we need a tool which can ease the way Compose files are shared across different environments seamlessly.

To solve this problem, Docker, Inc recently announced a new tool called “docker-app” (Application Packages) which makes “Compose files more reusable and shareable”. This tool not only makes Compose files shareable but provides a simplified approach to share multi-service applications (not just Docker images) directly on Docker Hub.

 

 

In this blog post, I will showcase how the docker-app tool makes it easier to use Docker Compose for sharing and collaboration, and how to push applications directly to Docker Hub. Let us get started –

Prerequisite:

  • Open up https://play-with-docker.com and click on the icon next to Instances to choose the template with 3 Managers & 2 Worker Nodes

Deploy 5 Node Swarm Mode Cluster

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
juld0kwbajyn11gx3bon9bsct *   manager1            Ready               Active              Leader              18.03.1-ce
uu675q2209xotom4vys0el5jw     manager2            Ready               Active              Reachable           18.03.1-ce
05jewa2brfkvgzklpvlze01rr     manager3            Ready               Active              Reachable           18.03.1-ce
n3frm1rv4gn93his3511llm6r     worker1             Ready               Active                                  18.03.1-ce
50vsx5nvwx5rbkxob2ua1c6dr     worker2             Ready               Active                                  18.03.1-ce

Cloning the Repository

$ git clone https://github.com/ajeetraina/app
Cloning into 'app'...
remote: Counting objects: 14147, done.
remote: Total 14147 (delta 0), reused 0 (delta 0), pack-reused 14147
Receiving objects: 100% (14147/14147), 17.32 MiB | 18.43 MiB/s, done.
Resolving deltas: 100% (5152/5152), done.

Installing docker-app

wget https://github.com/docker/app/releases/download/v0.3.0/docker-app-linux.tar.gz
tar xf docker-app-linux.tar.gz
cp docker-app-linux /usr/local/bin/docker-app

OR

$ ./install.sh
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.227.152:443)
docker-app-linux.tar 100% |**************************************************************|  8780k  0:00:00 ETA
[manager1] (local) root@192.168.0.13 ~/app
$ 

Verify docker-app version

$ docker-app version
Version:      v0.3.0
Git commit:   fba6a09
Built:        Fri Jun 29 13:09:30 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

The docker-app tool comes with various options as shown below:

$ docker-app
Build and deploy Docker applications.

Usage:
  docker-app [command]

Available Commands:
  deploy      Deploy or update an application
  helm        Generate a Helm chart
  help        Help about any command
  init        Start building a Docker application
  inspect     Shows metadata and settings for a given application
  ls          List applications.
  merge       Merge the application as a single file multi-document YAML
  push        Push the application to a registry
  render      Render the Compose file for the application
  save        Save the application as an image to the docker daemon(in preparation for push)
  split       Split a single-file application into multiple files
  version     Print version information

Flags:
      --debug   Enable debug mode
  -h, --help    help for docker-app

Use "docker-app [command] --help" for more information about a command.
[manager1] (local) root@192.168.0.48 ~/app

WordPress Application under dev & Prod environment

If you browse to app/examples/wordpress directory under GitHub Repo, you will see that there is a folder called wordpress.dockerapp that contains three YAML documents:

  • the metadata
  • the Compose file
  • the settings for your application

Okay, fine! But how were those files created?

The docker-app tool comes with an “init” option which initializes an application with the above 3 YAML files, and the directory structure can be initialized with the below command:

docker-app init --single-file wordpress

I have already created a directory structure for my environment, and you can find a few examples under this directory.

Listing the WordPress Application package related files/directories

$ ls
README.md            install-wp           with-secrets.yml
devel                prod                 wordpress.dockerapp

As you see above, I have created a folder for each of my environments – devel and prod. Under these directories, I have created prod-settings.yml and dev-settings.yml respectively. You can view their content via this link.

WordPress Application Package for Dev Environment

I can pass the -f <YAML> parameter to the docker-app tool to render the respective environment settings seamlessly, as shown below:

$ docker-app render wordpress -f devel/dev-settings.yml
version: "3.6"
services:
  mysql:
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: dnsrr
    environment:
      MYSQL_DATABASE: wordpressdata
      MYSQL_PASSWORD: wordpress
      MYSQL_ROOT_PASSWORD: wordpress101
      MYSQL_USER: wordpress
    image: mysql:5.6
    networks:
      overlay: null
    volumes:
    - type: volume
      source: db_data
      target: /var/lib/mysql
  wordpress:
    depends_on:
    - mysql
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: vip
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_NAME: wordpressdata
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DEBUG: "true"
    image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 8082
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

WordPress Application Package for Prod

In the prod environment, I have the following content under prod/prod-settings.yml:

debug: false
wordpress:
  port: 80

For the production environment, it is obvious that I want my application to be exposed on the standard port 80. Post rendering, you should be able to see port 80 exposed, as shown below in the snippet:

 image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 80
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

 

Inspect the WordPress App

$ docker-app inspect wordpress
wordpress 1.0.0
Maintained by: ajeetraina <ajeetraina@gmail.com>

Welcome to Collabnix

Setting                       Default
-------                       -------
debug                         true
mysql.database                wordpressdata
mysql.image.version           5.6
mysql.rootpass                wordpress101
mysql.scale.endpoint_mode     dnsrr
mysql.scale.mode              replicated
mysql.scale.replicas          1
mysql.user.name               wordpress
mysql.user.password           wordpress
volumes.db_data.name          db_data
wordpress.port                8081
wordpress.scale.endpoint_mode vip
wordpress.scale.mode          replicated
wordpress.scale.replicas      1
[manager1] (local) root@192.168.0.13 ~/app/examples/wordpress
$

Deploying the WordPress App

$ docker-app deploy wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress

Switching to Dev Environment

If I want to switch back to the dev environment, all I need is to pass the dev-specific YAML file using the -f parameter, as shown below:

$docker-app deploy wordpress -f devel/dev-settings.yml


Switching to Prod Environment

$docker-app deploy wordpress -f prod/prod-settings.yml


[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f devel/dev-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f prod/prod-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Pushing Application Package to Dockerhub

So I have my application ready to be pushed to Docker Hub. Yes, you heard it right: I said Application Package, NOT Docker image.

Let me first authenticate myself before I push it to the Docker Hub registry:

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to
 https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Saving this Application Package as Docker Image

The docker-app CLI is feature-rich and allows you to save the entire application as a Docker image. Let’s try it out –

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app save wordpress
Saved application as image: wordpress.dockerapp:1.0.0
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the images

$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
wordpress.dockerapp   1.0.0               c1ec4d18c16c        47 seconds ago      1.62kB
mysql                 5.6                 97fdbdd65c6a        3 days ago          256MB
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the services

$ docker stack services wordpress
ID                  NAME                  MODE                REPLICAS            IMAGE               PORTS
l95b4s6xi7q5        wordpress_wordpress   replicated          1/1                 wordpress:latest    *:80->80/tcp
lhr4h2uaer86        wordpress_mysql       replicated          1/1                 mysql:5.6
[manager1] (local) root@192.168.0.48 ~/docker101/play-with-docker/visualizer

Using docker-app ls command to list out the application packages

The ‘ls’ command was recently introduced under v0.3.0. Let us try it out –

$ docker-app ls
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
wordpress.dockerapp   1.0.1               299fb78857cb        About a minute ago   1.62kB
wordpress.dockerapp   1.0.0               c1ec4d18c16c        16 minutes ago       1.62kB

Pushing it to Docker Hub

$ docker-app push --namespace ajeetraina --tag 1.0.1
The push refers to repository [docker.io/ajeetraina/wordpress.dockerapp]
51cfe2cfc2a8: Pushed
1.0.1: digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9 size: 524

Say you built a WordPress application package and pushed it to Docker Hub. Now one of your colleagues wants to pull it onto his development system and deploy it in his environment.

Pulling it from Dockerhub

$ docker pull ajeetraina/wordpress.dockerapp:1.0.1
1.0.1: Pulling from ajeetraina/wordpress.dockerapp
a59931d48895: Pull complete
Digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9
Status: Downloaded newer image for ajeetraina/wordpress.dockerapp:1.0.1
[manager3] (local) root@192.168.0.24 ~/app
$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        8 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$

Deploying the Application

$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        9 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$ docker-app deploy ajeetraina/wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress
[manager3] (local) root@192.168.0.24 ~/app
$

Using docker-app merge option

The Docker team introduced the docker-app merge option under the new 0.3.0 release.

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app merge -o mywordpress
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ ls
README.md            install-wp           prod                 wordpress.dockerapp
devel                mywordpress          with-secrets.yml
$ cat mywordpress
version: 1.0.1
name: wordpress
description: "Welcome to Collabnix"
maintainers:
  - name: ajeetraina
    email: ajeetraina@gmail.com
targets:
  swarm: true
  kubernetes: true

---
version: "3.6"

services:

  mysql:
    image: mysql:${mysql.image.version}
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.rootpass}
      MYSQL_DATABASE: ${mysql.database}
      MYSQL_USER: ${mysql.user.name}
      MYSQL_PASSWORD: ${mysql.user.password}
    volumes:
       - source: db_data
         target: /var/lib/mysql
         type: volume
    networks:
       - overlay
    deploy:
      mode: ${mysql.scale.mode}
      replicas: ${mysql.scale.replicas}
      endpoint_mode: ${mysql.scale.endpoint_mode}

  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_USER: ${mysql.user.name}
      WORDPRESS_DB_PASSWORD: ${mysql.user.password}
      WORDPRESS_DB_NAME: ${mysql.database}
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DEBUG: ${debug}
    ports:
      - "${wordpress.port}:80"
    networks:
      - overlay
    deploy:
      mode: ${wordpress.scale.mode}
      replicas: ${wordpress.scale.replicas}
      endpoint_mode: ${wordpress.scale.endpoint_mode}
    depends_on:
      - mysql

volumes:
  db_data:
     name: ${volumes.db_data.name}

networks:
  overlay:

---
debug: true
mysql:
  image:
    version: 5.6
  rootpass: wordpress101
  database: wordpressdata
  user:
    name: wordpress
    password: wordpress
  scale:
    endpoint_mode: dnsrr
    mode: replicated
    replicas: 1
wordpress:
  scale:
    mode: replicated
    replicas: 1
    endpoint_mode: vip
  port: 8081
volumes:
  db_data:
    name: db_data

docker-app comes with a few other helpful commands as well, in particular the ability to create Helm Charts from your Docker Applications. This can be useful if you’re adopting Kubernetes and standardising on Helm to manage the lifecycle of your application components, but want to maintain the simplicity of Compose when writing your applications. This also makes it easy to run the same applications locally just using Docker, if you don’t want to be running a full Kubernetes cluster.
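
As a quick sketch (using the wordpress application package from this post), generating a chart should be a one-liner:

$ docker-app helm wordpress

This renders the Compose-based application package as a Helm chart on disk, which you can then manage with the standard helm tooling.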

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter @ajeetsraina.

If you want to keep track of the latest Docker-related information, follow me at https://www.linkedin.com/in/ajeetsraina/.

Test Drive Elastic stack on PWD platform running Docker 17.06 CE Swarm Mode in 5 minutes

Estimated Reading Time: 6 minutes

Let’s talk about Dockerized Elastic Stack…

Elastic Stack is an open source solution that reliably and securely takes data from any source, in any format, and lets you search, analyze, and visualize it in real time. It is a collection of open source products – Elasticsearch, Logstash, Kibana and the recently added fourth product, Beats. The Elastic Stack can be deployed on premises or made available as Software as a Service.

Brief about Elastic Stack Components:

Elasticsearch:

Elasticsearch is a RESTful, distributed, highly scalable, JSON-based search and analytics engine built on top of Apache Lucene and released under Apache license. It is Java-based and designed for horizontal scalability, maximum reliability, and easy management. It is basically an open-source full-text search and analytics engine. It allows you to store, search, and analyze big volumes of data quickly and in near real time. It is generally used as the underlying engine/technology that powers applications that have complex search features and requirements.

~ Source: https://www.elastic.co

Logstash:

Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash” (Elasticsearch). Logstash is a dynamic data collection pipeline with an extensible plugin ecosystem and strong Elasticsearch synergy. The product was originally optimized for log data but has expanded its scope to take data from all sources. Data is often scattered or siloed across many systems in many formats. Logstash supports a variety of inputs that pull in events from a multitude of common sources, all at the same time. Easily ingest from your logs, metrics, web applications, data stores, and various AWS services, all in a continuous, streaming fashion. As data travels from source to store, Logstash filters parse each event, identify named fields to build structure, and transform them to converge on a common format for easier, accelerated analysis and business value.

Logstash dynamically transforms and prepares your data regardless of format or complexity:

  • Derive structure from unstructured data with grok
  • Decipher geo coordinates from IP addresses
  • Anonymize PII data, exclude sensitive fields completely
  • Ease overall processing independent of the data source, format, or schema.

Logstash has a pluggable framework featuring over 200 plugins. Mix, match, and orchestrate different inputs, filters, and outputs to work in pipeline harmony.

Kibana:

Lastly, Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. It gives you the freedom to select the way you give shape to your data, and you don’t always have to know what you’re looking for: with its interactive visualizations, start with one question and see where it leads you. Kibana’s developer tools offer powerful ways to help developers interact with the Elastic Stack. With Console, you can bypass using curl from the terminal and tinker with your Elasticsearch data directly. The Search Profiler lets you easily see where time is spent during search requests. And authoring complex grok patterns in your Logstash configuration becomes a breeze with the Grok Debugger.

In the next 5 minutes, we are going to test drive the ELK stack on the PWD playground.

Let’s get started –

Open up https://play-with-docker.com

 

Click on the icon next to Instances to open up the ready-made templates for Docker Swarm Mode:

 

Choose the first template (as highlighted in the above figure) to select 3 managers and 2 workers. It will bring up a Docker 17.06 Swarm Mode cluster in just 10 seconds.

Run the below command to list the cluster nodes:

docker node ls

Run the necessary commands on the node which will run elasticsearch:

sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf

Clone the GitHub repository:

git clone https://github.com/ajeetraina/docker101
cd docker101/play-with-docker/visualizer

Run the below command to bring up the visualizer tool:

docker-compose up -d

Soon you will notice port 8080 displayed on the top of the page, which, when clicked, will open up the visualizer tool.

It’s time to clone the ELK stack repository and execute the below commands to bring up the ELK stack across the Docker 17.06 Swarm Mode cluster:

git clone https://github.com/ajeetraina/swarm-elk
cd swarm-elk
docker stack deploy -c docker-compose.yml myself

 

[Credits to Andrew Hromis for building this docker-compose file. I leveraged his project repository to bring up the ELK stack in the first try]

You will soon see the below list of containers appearing on the nodes:

Run the below command to see the list of services running across the cluster:

docker service ls

Click on port 5601 displayed on the top of the PWD page:

Please note: Kibana needs data in Elasticsearch to work with. The .kibana index holds Kibana-related data, and if that is the only index you have, there is no data available that Kibana can visualise. Before you can use Kibana you will therefore need to index some data into Elasticsearch. This can be done, e.g., using Logstash or directly through the REST interface using curl.

Soon you will see the below Kibana page:

 

Enabling High Availability for Elastic Stack through scaling

Let us scale out the number of replicas for elasticsearch:
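
Since the stack was deployed under the name myself, scaling boils down to a one-liner; the service name below is my assumption, derived from the usual stack-name prefix convention:

docker service scale myself_elasticsearch=3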

Pushing data into Logstash:

Example #1:

Let us push NGINX web server logs into logstash and see if Kibana is able to detect them:

docker run -d --name nginx-with-syslog --log-driver=syslog --log-opt syslog-address=udp://10.0.173.7:12201 -p 80:80 nginx:alpine

Now if you open up the Kibana UI, you should be able to see logs being displayed for Nginx:

Example #2:

We can also push logs to logstash using the below command:

docker run --rm -it --log-driver=gelf --log-opt gelf-address=udp://10.0.173.7:12201 alpine ping 8.8.8.8

Open up Kibana and now you will see the below GREEN status:

Did you find this blog helpful? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking for contribution/discussion, join me on the Docker Community Slack channel.

 

Docker 17.06 Swarm Mode: Now with built-in MacVLAN & Node-Local Networks support

Estimated Reading Time: 5 minutes

Docker 17.06.0-ce-RC5 was announced 5 days back and is available for testing. It brings numerous new features & enablements under this upcoming release. A few of my favourites include support for secrets on Windows, allowing a secret location to be specified within the container, the --format option added to the docker system df command, support for placement preference in docker stack deploy, monitored resource type metadata for the GCP logging driver, and build & engine info Prometheus metrics, to list a few. But one of the notable and most awaited features is support for swarm-mode services with node-local networks such as macvlan, ipvlan, bridge and host.

Under the upcoming 17.06 release, Docker provides support for local scope networks in Swarm. This includes any local scope network driver. Some examples of these are bridge, host, and macvlan, though any local scope network driver, built-in or plug-in, will work with Swarm. Previously only swarm scope networks like overlay were supported. This is great news for all Docker networking enthusiasts.

A Brief Intro to MacVLAN:


In case you’re new to it, the MACVLAN driver provides direct access between containers and the physical network. It also allows containers to receive routable IP addresses that are on the subnet of the physical network.

MACVLAN offers a number of unique features and capabilities. It has positive performance implications by virtue of having a very simple and lightweight architecture. Its use cases include very low latency applications and networking designs that require containers to be on the same subnet as, and use IPs from, the external host network. The macvlan driver uses the concept of a parent interface. This interface can be a physical interface such as eth0, a sub-interface for 802.1q VLAN tagging like eth0.10 (.10 representing VLAN 10), or even a bonded host adaptor which bundles two Ethernet interfaces into a single logical interface.

To test-drive MACVLAN under Swarm Mode, I will leverage the existing 3-node Swarm Mode cluster on my VMware ESXi system. I have tested it on bare metal systems and VirtualBox, and it works equally well.

[Updated: 9/27/2017 – I have added docker-stack.yml at the end of this guide to show you how to build services out of docker-compose.yml file. DO NOT FORGET TO CHECK IT OUT]

Installing Docker 17.06 on all the Nodes:

curl -fsSL https://test.docker.com > install-docker.sh
sh install-docker.sh

 

Verifying the latest Docker version:


 

Setting up 2 Node Swarm Mode Cluster:

 

 

Attention VirtualBox users: In case you are using VirtualBox, the MACVLAN driver requires the network and interfaces to be in promiscuous mode.

A local network config is created on each host. The config holds host-specific information, such as the subnet allocated for this host’s containers. --ip-range is used to specify a pool of IP addresses that is a subset of IPs from the subnet. This is one method of IPAM to guarantee unique IP allocations.

Manager:

manager1==>sudo docker network create --config-only --subnet 100.98.26.0/24 -o parent=ens160.60 --ip-range 100.98.26.100/24 collabnet

 

Worker-1:

worker1==>sudo docker network create --config-only --subnet 100.98.26.0/24 -o parent=ens160.60 --ip-range 100.98.26.100/24 collabnet

 

 

Instantiating the macvlan network globally

Manager:

manager1==>sudo docker network create -d macvlan --scope swarm --config-from collabnet swarm-macvlan

 

Deploying a service to the swarm-macvlan network:

Let us go ahead and deploy the WordPress application. We will be creating 2 services – wordpressapp and wordpressdb1 – and attaching them to the “swarm-macvlan” network as shown below:

Creating Backend Service:

docker service create --replicas 1 --name wordpressdb1 --network swarm-macvlan --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql

Let us verify that the MACVLAN network scope holds this container:

 

Creating Frontend Service

Next, it’s time to create the wordpress application, i.e. wordpressapp:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network swarm-macvlan --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Verify if both the services are up and running:

 

Verifying if all the containers on the master node pick up the desired IP addresses from the subnet:

 

Docker Compose File showcasing MacVLAN Configuration
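
The actual docker-stack.yml lives in the repository; a minimal sketch of what it could look like is below, inferred from the stack output further down (service names db and wordpress under the stack myapp2, both attached to the pre-created swarm-macvlan network; environment values mirror the docker service create commands used earlier):

version: "3"

services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: collab123
      MYSQL_DATABASE: wordpress
    networks:
      - swarm-macvlan

  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: collab123
    ports:
      - "80:80"
    networks:
      - swarm-macvlan

networks:
  swarm-macvlan:
    # created beforehand with the commands below
    external: true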

Ensure that you run the below commands to set up the MACVLAN configuration for your services before you execute the docker stack deploy CLI:

root@ubuntu-1610:~# docker network create --config-only --subnet 100.98.26.0/24 --gateway 100.98.26.1 -o parent=ens160.60 --ip-range 100.98.26.120/24 collabnet
da2912d762cbf5f5ea412e6e4d69352a3285f720e23740529af9e533c7168729
 
root@ubuntu-1610:~# docker network create -d macvlan --scope swarm --config-from collabnet swarm-macvlan
jp76lts6hbbheqlbbhggumujd

 

Verify that the network inspect output shows the correct container information:

root@ubuntu-1610:~/docker101/play-with-docker/wordpress/example1# docker network inspect swarm-macvlan
[
    {
        "Name": "swarm-macvlan",
        "Id": "jp76lts6hbbheqlbbhggumujd",
        "Created": "2017-09-27T02:12:00.827562388-04:00",
        "Scope": "swarm",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "100.98.26.0/24",
                    "IPRange": "100.98.26.120/24",
                    "Gateway": "100.98.26.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": "collabnet"
        },
        "ConfigOnly": false,
        "Containers": {
            "3c3f1ec48225ef18e8879f3ebea37c2d0c1b139df131b87adf05dc4d0f4d8e3f": {
                "Name": "myapp2_wordpress.1.nd2m62alxmpo2lyn079x0w9yv",
                "EndpointID": "a15e96456870590588b3a2764da02b7f69a4e63c061dda2798abb7edfc5e5060",
                "MacAddress": "02:42:64:62:1a:02",
                "IPv4Address": "100.98.26.2/24",
                "IPv6Address": ""
            },
            "d47d9ebc94b1aa378e73bb58b32707643eb7f1fff836ab0d290c8b4f024cee73": {
                "Name": "myapp2_db.1.cxz3y1cg1m6urdwo1ixc4zin7",
                "EndpointID": "201163c233fe385aa9bd8b84c9d6a263b18e42893176271c585df4772b0a2f8b",
                "MacAddress": "02:42:64:62:1a:03",
                "IPv4Address": "100.98.26.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "parent": "ens160"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "ubuntu-1610-1633ea48e392",
                "IP": "100.98.26.60"
            }
        ]
    }
]

Docker Stack Deploy CLI:

docker stack deploy -c docker-stack.yml myapp2
Ignoring unsupported options: restart
Creating service myapp2_db
Creating service myapp2_wordpress

Verifying if the services are up and running:

root@ubuntu-1610:~/# docker stack ls
NAME                SERVICES
myapp2              2
root@ubuntu-1610:~/# docker stack ps myapp2
ID                  NAME                 IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
nd2m62alxmpo        myapp2_wordpress.1   wordpress:latest    ubuntu-1610         Running             Running 15 minutes ago
cxz3y1cg1m6u        myapp2_db.1          mysql:5.7           ubuntu-1610         Running             Running 15 minutes ago

Looking for a Docker Compose file for a single node?

 

Cool! I am going to leverage this for my Apache JMeter setup so that I can push loads from different IPs using Docker containers.

Did you find this blog helpful? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking for contribution/discussion, join me on the Docker Community Slack channel.

Know more about what’s coming under the Docker 17.06 CE release by clicking on this link.

Topology Aware Scheduling under Docker v17.05.0 Swarm Mode Cluster

Estimated Reading Time: 6 minutes

 

The Docker 17.05.0 final release went public exactly 2 weeks back. This community release was the first release as part of the new Moby project. With this release, numerous interesting features like Multi-Stage Build support in the builder, using build-time ARG in FROM, and DEB packaging for Ubuntu 17.04 (Zesty Zapus) have been included. With this latest release, the Docker team brought numerous new features and improvements in terms of Swarm Mode – for example, synchronous service commands, automatic service rollback on failure, improvements to the raft transport package, service logs formatting, etc.


 

Placement Preference under Swarm Mode:

One of the prominent new features introduced under 17.05.0-CE Swarm Mode is placement preference. The placement preference feature allows you to divide tasks evenly over different categories of nodes. It allows you to balance tasks between multiple datacenters or availability zones. One can use a placement preference to spread out tasks to multiple datacenters and make the service more resilient in the face of a localized outage. You can use additional placement preferences to further divide tasks over groups of nodes. In this blog, we will set up a 5-node Swarm Mode cluster on the play-with-docker platform and see how to balance tasks over multiple racks within each datacenter. (Note – this is not a real-time scenario but is based on the assumption that the nodes are placed in 3 different racks.)

Assumption: There are 3 datacenter racks holding the respective nodes as shown:

{Rack-1=> Node1, Node2 and Node3},

{Rack-2=> Node4}  &

{Rack-3=> Node5}

 

Creating Swarm Master Node:

Open up Docker Playground to build up Swarm Cluster.

docker swarm init --advertise-addr 10.0.116.3


 

Adding Worker Nodes to Swarm Cluster

docker swarm join --token <token-id> 10.0.116.3:2377
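
In case you don't have the token handy, it can be regenerated on any manager node at any time:

docker swarm join-token worker

This prints the complete docker swarm join command for worker nodes.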


Create 3 more instances and add those nodes as worker nodes. This should build up a 5-node Swarm Mode cluster.


 

Setting up Visualizer Tool 

To showcase this demo, I will leverage the popular Visualizer tool.

 

git clone https://github.com/ajeetraina/docker101
cd docker101/play-with-docker/visualizer

 

All you need is to execute the docker-compose command to bring up the visualizer container:

 

docker-compose up -d

 


 

Click on port “8080”, which gets displayed on the top centre of the page, and it should display a fancy visualizer depicting the Swarm Mode cluster nodes.


Creating an Overlay Network:

 $docker network create -d overlay collabnet


 

Let us first try to create a service with no placement preference and no node labels.

Setting up WordPress DB service:

docker service create --replicas 10 --name wordpressdb1 --network collabnet --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest


When you run the above command, the swarm will spread the containers evenly node-by-node. Hence, you will see 2 containers per node as shown below:


Setting up WordPress Web Application:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest


Visualizer:


As per the visualizer, you might end up with an uneven distribution of services. For example, Rack-1, holding node1, node2 and node3, looks to have an almost equal distribution of services, while Rack-2, which holds node4, lacks the WordPress frontend application.

Here comes placement preference to the rescue…

Under the latest release, the Docker team has introduced a new feature called “placement preference scheduling”. Let us spend some time understanding what it actually means. You can set up the service to divide tasks evenly over different categories of nodes. One example of where this can be useful is to balance tasks over a set of datacenters or availability zones.

This uses --placement-pref with a spread strategy (currently the only supported strategy) to spread tasks evenly over the values of the datacenter node label. In this example, we assume that every node has a datacenter node label attached to it. If there are three different values of this label among nodes in the swarm, one third of the tasks will be placed on the nodes associated with each value. This is true even if there are more nodes with one value than another. For example, consider the following set of nodes:

  • Three nodes with node.labels.datacenter=india
  • One node with node.labels.datacenter=uk
  • One node with node.labels.datacenter=us

Considering the last example: since we are spreading over the values of the datacenter label and the service has 5 replicas, at least 1 replica should be available in each datacenter. There are three nodes associated with the value “india”, so the replicas reserved for this value are spread across them. There is 1 node with the value “uk”, and hence that node receives all replicas reserved for this value. Finally, “us” has a single node that will again get at least 1 replica reserved for it.

To understand more clearly, let us assign node labels to Rack nodes as shown below:

Rack-1 (node1, node2, node3):

docker node update --label-add datacenter=india node1
docker node update --label-add datacenter=india node2
docker node update --label-add datacenter=india node3

Rack-2 (node4):

docker node update --label-add datacenter=uk node4

Rack-3 (node5):

docker node update --label-add datacenter=us node5
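
To double-check that the labels landed where you expect, docker node inspect supports the same Go-template filtering discussed elsewhere on this blog:

docker node inspect --format '{{ .Spec.Labels }}' node1

For node1, this should print map[datacenter:india].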


 

Removing both the services:

docker service rm wordpressdb1 wordpressapp

Let us now pass placement preference parameter to the docker service command:

docker service create --replicas 10 --name wordpressdb1 --network collabnet --placement-pref "spread=node.labels.datacenter" --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest


 

Visualizer:


Rack-1 (node1+node2+node3) has 4 copies, Rack-2 (node4) has 3 copies and Rack-3 (node5) has 3 copies.

Let us run the WordPress application service likewise:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --placement-pref "spread=node.labels.datacenter" --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest


Visualizer: As shown below, we have used the placement preference feature to ensure that the service containers get distributed across the swarm cluster on all the racks.


 

As shown above, --placement-pref ensures that the task is spread evenly over the values of the datacenter node label. Currently only the spread strategy is supported. Both engine labels and node labels are supported by placement preferences.

Please note: If you want to try this feature with Docker Compose, you will need Compose v3.3, which is slated to arrive under the 17.06 release.

Did you find this blog helpful? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking for contribution/discussion, join me on the Docker Community Slack channel.

Know more about the latest Docker releases clicking on this link.

 

Docker Service Inspection Filtering & Template Engine under Swarm Mode

Estimated Reading Time: 4 minutes

The Go programming language has really helped in shaping Docker into powerful software and enabling fast development for distributed systems. It has been helping developers and operations teams to quickly construct programs and tools for cloud computing environments. Go offers built-in support for JSON (JavaScript Object Notation) encoding and decoding, including to and from built-in and custom data types.


 

In the last year, Docker Swarm Mode has matured enough to become production-ready. The orchestration platform is quite stable, with numerous features like logging, secrets management, security scanning, improvements to scheduling, networking, etc., making it simple to use and scale out with just a few one-liner commands. With the introduction of new APIs like swarm, node, volume plugins, services, etc., Swarm Mode brings dozens of features to control every aspect of the swarm cluster sitting on the master node. But when you start building services in the range of 100s and 1000s, distributed across another 100s and 1000s of nodes, there arises a need for a quick and handy way of filtering the output, especially when you are interested in capturing one specific piece of data out of the whole cluster-wide configuration. Here comes ‘a filtering flag’ to the rescue.


The filtering flag (-f or --filter) format is a key=value pair, which is actually a very powerful weapon for developers & system administrators. If you have ever played around with the Docker CLI, you must have used the docker inspect command to get metadata on a container/image. Using it with -f provides you more specific information, like IP address, network, etc. There are numerous guides on how to use filters with a standalone host holding the Docker images, but I found a lack of guides talking about Swarm Mode filters.

In this blog post, I have prepared a quick list of consolidated filtering commands and outputs in tabular format for a Swarm Mode cluster (shown below).

I have a 3-node Swarm Mode cluster running on one of my Google Cloud Engine instances. Let us first create a simple wordpress application as shown below:

 


 

We will create a simple wordpress service as shown:

master==>docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Below is the tabular list of commands and outputs covering the various filtering modes for the docker service inspect command:
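
As a taste of what the tables catalogue, here are a few representative one-liners; the Go-template paths follow the standard Swarm service spec, so treat them as a sketch rather than a verbatim copy of the tables:

# Image used by the service
docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' wordpressapp

# Replication factor
docker service inspect --format '{{ .Spec.Mode.Replicated.Replicas }}' wordpressapp

# Published ports (target/published/protocol)
docker service inspect --format '{{ .Endpoint.Ports }}' wordpressapp

# VIP vs DNSRR endpoint mode
docker service inspect --format '{{ .Spec.EndpointSpec.Mode }}' wordpressapp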

 

Inspecting the “wordpress” service:

[wpsm_comparison_table id="3" class=""]

To retrieve the network information for specific service:

[wpsm_comparison_table id="6" class=""]

To list out the port which wordpress is using for specific service:

[wpsm_comparison_table id="7" class=""]

To list out the protocol detail for specific service:

[wpsm_comparison_table id="8" class=""]

To list out the type of Published port:

[wpsm_comparison_table id="9" class=""]

To verify if its DNS RR or Virtual IP(VIP) based for specific service:

[wpsm_comparison_table id="10" class=""]

 

To list out the type of Published port for specific service:

[wpsm_comparison_table id="11" class=""]

To list out the replication factor for a specific service:

[wpsm_comparison_table id="12" class=""]

To list out the environmental variable passed for specific service:

[wpsm_comparison_table id="13" class=""]

To list out the image detail for specific service:

[wpsm_comparison_table id="14" class=""]

To list out the complete networking details for specific service:

[wpsm_comparison_table id="15" class=""]

To list out detailed consolidated metadata information for the specific service:

[wpsm_comparison_table id="4" class=""]
 
To list out further detailed information of the service :

[wpsm_comparison_table id="5" class=""]

Let us create another service called “wordpressdb1” which is actually a database service under Docker Swarm Mode:

$docker service create --replicas 1 --name wordpressdb1 --network collabnet --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest

Inspecting the service “wordpressdb1”:

[wpsm_comparison_table id="16" class=""]

Displaying the output in human-friendly way:

[wpsm_comparison_table id="17" class=""]

To list out the VIPs details for specific service:

[wpsm_comparison_table id="18" class=""]

Retrieving the last StatusUpdate details:

[wpsm_comparison_table id="19" class=""]

I have plans to add more examples over the course of time, based on experience and exploration. I hope you will find them handy.

 

 

Running Apache JMeter 3.1 Distributed Load Testing Tool using Docker Compose v3.1 on Swarm Mode Cluster

Estimated Reading Time: 6 minutes

Apache JMeter is a popular open source software used as a load testing tool for analyzing and measuring the performance of web applications or a multitude of services. It is a 100% pure Java application, designed to test performance both on static and dynamic resources and dynamic web applications. It simulates a heavy load on one or a multitude of servers, networks or objects to test their strength/capability or to analyze overall performance under different load types. The various applications/servers/protocols include HTTP, HTTPS (Java, NodeJS, PHP, ASP.NET), SOAP/REST web services, FTP, database via JDBC, LDAP, message-oriented middleware (MOM) via JMS, mail – SMTP(S), POP3(S) and IMAP(S), native commands or shell scripts, TCP, Java objects and many more.


JMeter is extremely easy to use and doesn’t take time to get familiar with. It allows concurrent and simultaneous sampling of different functions via separate thread groups. JMeter is highly extensible, which means you can write your own tests. JMeter also supports visualization plugins that allow you to extend your testing. JMeter supports many testing strategies such as load testing, distributed testing, and functional testing. JMeter can simulate multiple users with concurrent threads and create a heavy load against the web application under test.
 
 


 
 
Architecture of JMeter Distributed Load Testing:

JMeter Distributed Load Testing uses a client-server model. It is a kind of testing which uses multiple systems to perform stress testing. Distributed testing is applied for testing web sites and server applications when they are working with multiple clients simultaneously.

It includes:

1. Master – the system running the JMeter GUI, which controls the test.
2. Slave – the system running jmeter-server, which takes commands from the GUI and sends requests to the target system(s).
3. Target (System Under Test) – the web server which undergoes the stress test.

 

What Docker has to offer for Apache JMeter?

Good question! With every new installation of Apache JMeter, you need to download/install JMeter on every new node participating in distributed load testing. Installing JMeter requires dependencies to be installed first, like Java, default-jre-headless, iputils, etc. The complexity begins when you have a multitude of OS distributions running on your infrastructure. You can always use automation tools or scripts to enable this solution, but again there is an extra cost of maintenance and the skills required to troubleshoot with the logs when something goes wrong in the middle of the testing.

With Docker, it is just a matter of one Docker Compose file and a one-liner command to get the entire infrastructure ready. In this blog, I am going to show how a single Docker Compose v3.1 file can help you set up the entire JMeter distributed load testing tool – all working on a Docker 17.03 Swarm Mode cluster.

I will leverage a 4-node Docker 17.03 Swarm cluster running on my Google Cloud Engine platform to showcase this solution, as shown below:


Under this setup, I will be using the instance “master101” as the master/client node, while the rest of the worker nodes act as server/slave nodes. All of these instances are running Ubuntu 17.04 with Docker 17.03 installed. You are free to choose any recent Linux distribution which supports Docker executables.

First, let us ensure that you have the latest Docker 17.03 running on your machine:

$curl -sSL https://get.docker.com/ | sh


 

Ensure that the latest Docker Compose is installed, which supports the v3.0 file format:

 $curl -L https://github.com/docker/compose/releases/download/1.11.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
 $chmod +x /usr/local/bin/docker-compose


Docker Compose for Apache JMeter Distributed Load Testing

One can use the docker stack deploy command to quickly set up the JMeter distributed testing environment. This command requires a docker-compose.yml file which holds the declaration of the services (apache-jmeter-master and apache-jmeter-server respectively). Let us clone the repository as shown below to get started –

Run the below commands on the Swarm Master node –

$git clone https://github.com/ajeetraina/jmeter-docker/

$cd jmeter-docker

Under this directory, you will find docker-compose.yml file which is shown below:

[Screenshot: docker-compose.yml]

 

In the above docker-compose.yml, there are two service definitions – master and server. Through this Compose file, the master/client service definition is pinned to the master node, while the server service definition is pushed to all the slave nodes; the constraints section takes care of this placement. The Compose file also automatically creates an overlay network called jm-network across the cluster.
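
For reference, here is a minimal hand-written sketch of what such a Compose file could look like – the image name and placement details are illustrative; the authoritative file is the one in the cloned repository:

version: "3.1"

services:
  master:
    image: <jmeter-image>            # illustrative; built from the repository's Dockerfile
    tty: true
    networks:
      - jm-network
    deploy:
      placement:
        constraints: [node.role == manager]

  server:
    image: <jmeter-image>            # illustrative
    tty: true
    networks:
      - jm-network
    deploy:
      mode: global                   # one slave per node; replicas would work too
      placement:
        constraints: [node.role == worker]

networks:
  jm-network:
    driver: overlay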

Let us start the required services out of the docker-compose file as shown below:

$sudo docker stack deploy -c docker-compose.yml myjmeter

[Screenshot: docker stack deploy output]

Checking if the services are up and running through docker service ls command:

[Screenshot: docker service ls output]

Let us verify if constraints worked correctly or not as shown below:

[Screenshots: docker service ps output for the master and server services]

As shown above, the constraints worked well, placing the containers on the right nodes.

It’s time to enter one of the containers running on the master node as shown below:

[Screenshot: docker exec into the master container]

Using the docker exec command, one can enter the container and browse to the /jmeter/apache-jmeter-3.1/bin directory.

[Screenshot: /jmeter/apache-jmeter-3.1/bin directory listing]

Pushing the JMX file into the container

   $docker exec -i <container-running-on-master-node> sh -c 'cat > /jmeter/apache-jmeter-3.1/bin/jmeter-docker.jmx' < jmeter-docker.jmx

Starting the Load testing

  $ docker exec -it <container-on-master-node> bash
  root@daf39e596b93:/# cd /jmeter/apache-jmeter-3.1/bin
  $ ./jmeter -n -t jmeter-docker.jmx -R <list of containers running on slave nodes, separated by commas>

Planning to move from Apache JMeter running on VMs to containers?

This is a very important topic which I don’t want to miss, even though we are at the end of this blog post. You might be using virtual infrastructure for setting up the Apache JMeter distributed load tool. Generally, each VM is assigned a static IP, and that makes sense because your System Under Test will see load coming from distinct static IPs. The above docker-compose file uses an overlay network, and the IPs are in the 172.16.x.x range, assigned automatically and taken care of by Docker Swarm networking. In case you want a static IP assigned to each container automatically, you will need macvlan. Follow https://github.com/ajeetraina/jmeter-docker/blob/master/static-README.md to achieve this.
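
As a hedged illustration (the subnet, gateway and parent interface are environment-specific assumptions, and macvlan networks are local in scope, so this is done per node), a macvlan network is created like this:

# create a macvlan network bound to the host NIC (values are illustrative)
$ docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 jm-macvlan

# run a container with a fixed IP on that network
$ docker run -dit --net=jm-macvlan --ip=192.168.1.100 --name jm-slave1 <jmeter-image>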

In my next blog post, I will cover further interesting topics related to the JMeter distributed load testing tool. Drop me a message through a tweet (@ajeetsraina) if you face any issue with the above solution.

Loved reading it? Feel free to share it.


References:

https://github.com/ajeetraina/jmeter-docker

http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf

https://docs.docker.com/compose/compose-file/

 

Introducing new RexRay 0.8 with Docker 17.03 Managed Plugin System for Persistent Storage on Cloud Platforms

Estimated Reading Time: 6 minutes

The DellEMC RexRay 0.8 final release was announced last week. Graduated as a top-level project within the {code} community, the RexRay 0.8 release is considered one of its largest releases to date. The new release introduces support for a long list of new storage platforms like S3FS, EBS, EFS, GCEPD & ScaleIO, shown below:

[Image: RexRay 0.8 supported storage drivers]

Public cloud storage is one of the fastest-growing sectors in storage, with leaders like Amazon AWS, Google Cloud Storage and Microsoft Azure. With the release of RexRay 0.8, the {code} community took the right approach in targeting community-contributed drivers, starting with the Amazon EFS driver and then quickly adding additional community-contributed drivers like Digital Ocean, FittedCloud, Google Cloud Engine (GCEPD) & the Microsoft Azure Unmanaged Disk driver.

Introducing New Docker 17.03 Volume Plugin System

With the Docker 17.03 release, a new managed plugin system has been introduced. This is quite different from the old Docker plugin system: plugins are now distributed as Docker images and can be hosted on Docker Hub or on a private registry. A volume plugin enables Docker volumes to persist across multiple Docker hosts.

[Image: Docker managed plugin system]

In case you are very new to Docker plugins, they basically extend Docker’s functionality. A plugin is a process running on the same or a different host as the docker daemon, which registers itself by placing a file on the docker host in one of the plugin directories:

  • .sock files – UNIX domain sockets, placed under /run/docker/plugins
  • .spec files – text files containing a URL, such as unix:///other.sock or tcp://localhost:8080, placed under /etc/docker/plugins or /usr/lib/docker/plugins
  • .json files – text files containing a full JSON specification for the plugin, placed under /etc/docker/plugins

You can refer to this in case you want to develop your own Docker volume plugin.
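
For instance, a .spec file is just a one-line pointer to the plugin’s endpoint (the plugin name and port here are hypothetical):

# /etc/docker/plugins/myplugin.spec (hypothetical plugin)
tcp://localhost:8080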

Running RexRay inside Docker container

Yes, you read that right! With the introduction of the Docker 17.03 managed plugin system, you can now run RexRay inside a Docker container flawlessly. The RexRay volume plugin is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

You can list the available RexRay volume plugins for various storage platforms using docker search rexray as shown below:

[Screenshot: docker search rexray output]

Let us test-drive the RexRay volume plugin on a Swarm Mode cluster for the first time. I have a 4-node Swarm Mode cluster running on Google Cloud Platform as shown below:

[Screenshot: 4-node Swarm Mode cluster on Google Cloud Platform]

Verify that all the cluster nodes are running the latest 17.03.0-ce (Community Edition).

Installing the RexRay volume plugin is just a one-liner:

[Screenshot: docker plugin install output]
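
In other words, something along these lines (mirroring the flags used later with swarm-exec; GCEPD_TAG is one of the plugin's configurable settings):

$ docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray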

 

You can inspect the Rex-Ray Volume plugin using docker plugin inspect command:

[Screenshots: docker plugin inspect output]

It’s time to create a volume using the docker volume create utility:

$ sudo docker volume create --driver rexray/gcepd --name storage1 --opt=size=32

[Screenshot: docker volume create output]

You can verify that it is visible under the GCE console window:

[Screenshot: GCE console showing the new volume]

Let us try running a few applications which use the RexRay volume plugin, as shown:

{code}==> docker run -dit --name mydb -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --volume-driver=rexray/gcepd -v dbdata:/var/lib/mysql mysql:5.7

Verify that the MySQL service is up and running using the docker logs <container-id> command, as shown below:

[Screenshot: docker logs output showing MySQL ready for connections]

By now, we should be able to see the new volume called “dbdata” listed under the docker volume ls command:

[Screenshot: docker volume ls output showing dbdata]

Under the GCE console, it should get displayed too:

[Screenshot: GCE console showing the dbdata volume]

Using Rex-Ray Volume Plugin under Docker 17.03 Swarm Mode

This is the most interesting section of this blog post. The RexRay volume plugin worked great for us till now, especially on a single Docker host running a number of services. But what if I want RexRay volumes to persist across multiple Docker hosts (a Swarm Mode cluster)? Yes, there is one possible way to achieve this – using swarm-exec, which executes a docker command across the swarm cluster. Credits to Madhu Venugopal of the Docker team for assisting me in testing this tool.

[Image: swarm-exec overview]

Please remember that this is an UNOFFICIAL way of implementing a volume plugin across a swarm cluster. I found this tool really cool and hope that it gets integrated within the Docker official repository.

First, we need to clone this repository:

$git clone https://github.com/mavenugo/swarm-exec

Run the below command to push the plugin across the swarm cluster:

$cd swarm-exec

$ ./swarm-exec.sh docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray

[Screenshot: swarm-exec plugin install output]

First, let’s quickly verify the plugin on the master node as shown below:

[Screenshot: docker plugin ls on the master node]

While verifying it on the worker nodes:

[Screenshots: docker plugin ls on the worker nodes]

The docker volume inspect <volname> command should show that this particular volume uses the rexray volume driver, as shown below:

[Screenshot: docker volume inspect output]

Creating a MySQL service which uses a RexRay volume under the Swarm Mode cluster:

[Screenshot: docker service create output]
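
For reference, a hedged sketch of such a service definition (the service name, volume name and password are illustrative):

$ docker service create --replicas 1 --name collab-db \
    --mount type=volume,source=dbdata2,target=/var/lib/mysql,volume-driver=rexray/gcepd \
    -e MYSQL_ROOT_PASSWORD=wordpress \
    mysql:5.7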

Verifying that the service is up and running:

[Screenshot: docker service ps output]

To conclude, the new version of RexRay looks promising and brings support for various cloud storage platforms. It continues to be a leading open source storage orchestration engine, and with the inclusion of the Docker 17.03 managed plugin architecture, it will definitely reduce the pain of implementing a persistent storage solution.

Docker Compose v3.1 file format now supports Docker 1.13.1 Secret Management

Estimated Reading Time: 5 minutes

Docker Engine 1.13.1 went GA last week and introduced one of the most awaited features, called Secrets Management. With a mission to introduce a container-native solution that strengthens the Trusted Delivery component of container security, the new Secrets API is rightly integrated into the Docker 1.13.1 orchestration engine. The new secrets-management capabilities are also included in Docker Datacenter as part of the Docker 1.13.1 release.


What are secrets all about?

A secret is a blob of data, such as a password, SSH private keys, certificates, API keys, encryption keys, etc. In broader terms, it can be anything to which you want to tightly control access. The secrets-management capability is the latest security enhancement integrated into the Docker platform to ensure that applications are safer in a containerized environment. This is going to benefit financial-sector players who are looking for a hybrid cloud security strategy.

 

Why do we need Docker secrets?

There have been numerous concerns over environment variables, which are commonly used to pass configuration & settings to containers. Environment variables are easily leaked when debugging and end up exposed in many places, including child processes and debugging output.

Consider a Docker compose file for WordPress application:

wordpress:
  image: wordpressapp
  links:
    - mariadb:mysql
  environment:
    - WORDPRESS_DB_PASSWORD=<password>
  ports:
    - "80:80"
  volumes:
    - ./code:/code
    - ./html:/var/www/html

As shown above, environment variables are insecure in nature because they are accessible by any process in the container, preserved in intermediate layers of an image, easily accessible through docker inspect and, lastly, shared with any container linked to the container. To overcome this, one can use secrets to manage any sensitive data which a container needs at runtime, with no need to store it in the image. A given secret is only accessible to those services which have been granted explicit access to it, and only while those service tasks are running.

How does it actually work?

Docker secrets are currently supported for Swarm mode only, starting with Docker Engine 1.13.1. If you are using Docker 1.12.x, you will need to upgrade to the latest 1.13.x release to use this feature. To understand how secrets work under Docker Swarm mode, follow the process flow below:

[Diagram: Docker secrets workflow under Swarm Mode]
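
Condensed into commands, the flow looks like this – a minimal sketch; the secret and service names are illustrative, and the mysql image’s *_FILE environment convention is an assumption here:

$ echo "SuperSecretPassword" | docker secret create db_password -
$ docker service create --name db --secret db_password \
    -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_password mysql:5.7
$ docker exec <db-container-id> cat /run/secrets/db_password   # on the node running the task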

 

Docker Compose v3.1 File Format now supports Secrets

Docker Compose file format v3.1 is available and requires Docker Engine 1.13.0+. It introduces support for secrets for the first time, which means that you can now use secrets inside your docker-compose file.

[Table: Compose file format versions vs. Docker Engine versions]

Let us test-drive Compose v3.1 file format to see how secrets can be implemented using the newer docker stack deploy utility as shown below:

Ensure that you have the latest Docker 1.13.1 running on your Swarm Mode cluster:

[Screenshot: docker version output]

I will leverage a 4-node Swarm Mode cluster to test the Secrets API:

[Screenshot: docker node ls showing the 4-node cluster]

 

Let us first create the secrets using the docker secret create utility as shown:

$ date | md5sum | docker secret create collab_mysqlpasswd -
72flsq9lhuj8je20y7bzfxyld

$ date | md5sum | docker secret create collab_mysqlrootpasswd -
1g329zm35umunim61r8q49res

$ date | md5sum | docker secret create collab_wordpressdbpasswd -
tfxq1bm2cn54he03uzdaar91i

 

Listing the secrets using the below command:

[Screenshot: docker secret ls output]

Create a docker-compose.yml file with the below entry:

PLEASE NOTE: No Compose binaries are required to run the below command. All you require is Compose v3.1 file format for this to work.

[Screenshots: docker-compose.yml using secrets]

You can copy the whole code from here
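
For reference, here is a minimal hand-written sketch of the relevant v3.1 pieces – it wires the externally created secrets from above into a service; the *_FILE environment convention of the official mysql image is an assumption:

version: '3.1'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/collab_mysqlrootpasswd
      MYSQL_PASSWORD_FILE: /run/secrets/collab_mysqlpasswd
    secrets:
      - collab_mysqlrootpasswd
      - collab_mysqlpasswd
secrets:
  collab_mysqlrootpasswd:
    external: true
  collab_mysqlpasswd:
    external: true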

Let us now use docker stack deploy to build up the services containing secrets:

$ docker stack deploy --compose-file=./docker-compose.yml collab

Updating service collab_mysql (id: yn9fqojgmtmzukqnn3tfa6wlk)

Updating service collab_web (id: xw7kx49sqrkaqriikm5lsbqmj)

Verify the services are up and running:

[Screenshot: docker service ls output]

Let us verify that the secret got stored under every container:

root@master101:/collab# docker exec -it 35f cat /run/secrets/mysqlpasswd
050a58c339431a5c9a6a6b8a15bead91  -

As shown above, one can use docker exec to connect to the container and read the contents of the secret data file, which defaults to being readable by all and has the same name as the name of the secret.

Key Takeaways:

  • Docker secrets are only available to swarm services, not to standalone containers. To use this feature, consider adapting your container to run as a service with a scale of 1.
  • No Compose binaries are required to run docker stack deploy. All you require is Compose v3.1 file format for this to work.
  • Raft data is encrypted in Docker 1.13 and higher.
  • It is recommended to update all of your manager nodes to Docker 1.13 to prevent secrets from being written to plain-text Raft logs.

 

What’s New in Docker Engine 1.13 Swarm Mode?

Estimated Reading Time: 11 minutes

Docker Engine 1.13.0 Final Release has been officially announced. With over 1,050 commits, 1,025 file changes and 175 days since Engine 1.12, the Docker team has put major effort into extending the Swarm Mode functionality, along with bug fixes and improvements. Docker 1.13.0 brings dozens of new features; major highlights of this release include Centralized Logging, a new Docker management CLI, an impressive new Secrets API, deploying stacks directly from Docker Compose, Auto-Locking, Plugin Management and many more. With this release, Docker Inc. added support for building Docker DEBs for Ubuntu 16.04 Xenial & 16.10 Yakkety Yak on the PPC64LE & s390x platforms. There is now an RPM builder for VMware Photon OS and Fedora 25, and a DEB builder for Ubuntu 16.10 Yakkety Yak.

[Image: What's new in Docker 1.13]

Under this blog, let us explore a few of the important new features of Docker 1.13 Swarm Mode:

Upgrading Docker Engine from 1.12 to 1.13

curl -sSL https://get.docker.com/ | sh

[Screenshot: Docker Engine upgrade output]

Docker 1.13 includes New System Management CLIs (which we will discuss later under this blog):

[Screenshot: new Docker management commands]

Experimental & Stable Release ~ all in a single binary

Experimental features are now included in the standard Docker binaries as of version 1.13.0. This is a great improvement over the 1.12 release: under Docker 1.12, there were separate branches for the experimental & stable releases, and one had to pull them via the curl utility from the test.docker.com & get.docker.com repositories respectively. To enable experimental features, you need to start the Docker daemon with the --experimental flag. You can also enable the daemon flag via /etc/docker/daemon.json, e.g.:

{
    "experimental": true
}

Then make sure the experimental flag is enabled:

$ docker version -f {{.Server.Experimental}}

true


[Screenshot: daemon running with the experimental flag]
With experimental features enabled, one can see additional Docker management commands, as shown below:

[Screenshot: docker --help showing management commands]

Docker 1.13 provides the ability to set the DOCKER_HIDE_LEGACY_COMMANDS environment variable to show only the management commands in the help output. You can enable it using the below command:

DOCKER_HIDE_LEGACY_COMMANDS=true docker --help

[Screenshot: docker --help with legacy commands hidden]

New CLI for System Resource Management:

Docker 1.13 addresses one of the biggest issues faced by Docker users & the community – how to reclaim disk space used by Docker. Issues like <none>:<none> images, dangling images, and disks filling up even though containers consume little space are a few of the pain points which have been reported a number of times. Docker 1.13 introduces new system resource management commands to help users understand how much disk space Docker is using, and to help remove unused data.

Docker team has introduced a new system management CLI – docker system to help users to get information like disk usage, system-wide information and real time events from the server.

[Screenshot: docker system commands]

The below tables depict the various system-level commands & their usage:

[Screenshots: docker system df, docker system prune and docker image prune]
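
A few representative invocations:

$ docker system df      # summarize disk usage by images, containers and local volumes
$ docker system prune   # remove stopped containers, dangling images and unused networks
$ docker image prune    # remove dangling images only
$ docker volume prune   # remove unused local volumes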

A New Centralized Logging Under Swarm Mode

Under Docker 1.12, one feature which we really missed was “Centralized Logging”. There was no docker service logs, due to which Docker users had to depend upon 3rd-party tools like rsyslog or ELK-based solutions. With Docker 1.13, a new command, docker service logs, has been introduced. It is a powerful new experimental command that simplifies debugging services. Now there is NO NEED to track down hosts and containers powering a particular service and pull logs from those containers; the docker service logs utility pulls logs from all containers running a service and streams them to your console.
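
The basic invocation is a one-liner (the service name below is illustrative; the command was experimental in 1.13, so flags may vary):

$ docker service logs myapp_redis      # aggregate logs from every task of the service
$ docker service logs -f myapp_redis   # follow the stream as new log lines arrive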

Want to try out this feature? As shown in the screenshot, one can easily retrieve logs from scaled-out services from various worker and master nodes using this simple API:

[Screenshots: docker service logs output collected across the cluster]

Let us scale out the redis service to 5 replicas and check if the command collects logs from all of those worker nodes and pushes them to the master node:

[Screenshots: scaling the redis service and its aggregated logs]

Auto Locking Feature under Swarm Mode:

Docker 1.13 brings an interesting security feature called Auto-Locking for Swarm Mode. To understand this concept, let us go back to the raft consensus which forms the backbone of the Swarm Mode implementation. By default, all of the raft consensus data is stored encrypted at rest on all the managers. A key is generated and stored on disk, so that managers can restart without operator intervention. However, if a disk gets corrupted, or one accidentally backs up both the data and the key, all of the cluster data will get leaked (which, starting with 1.13, might include secrets).

Because of this, the newer release allows you to take ownership of this key by enabling autolock, which effectively means that the key used for encryption of your data never gets persisted to disk. Taking ownership of the key also means that a manager can no longer restart without human intervention, since providing the key is now the responsibility of the operator (or an external application). At any point in time you may rotate the key that is used to unlock the cluster, or give the responsibility of managing the key back to the managers, so that they can go back to restarting without external intervention.

You can change which mode the cluster is operating in using docker swarm update --autolock=true/false. You can inspect what the current key is, or rotate it, by using docker swarm unlock-key.

There are two ways to enable the Auto-Locking feature under 1.13 Swarm Mode. Either use:

docker swarm init --autolock

or

docker swarm update --autolock (to turn on manager locking).

When you run the above command, it prints out a key. On restart, docker swarm unlock is necessary to start the manager; it takes the key on stdin. You can retrieve the current key with docker swarm unlock-key, or rotate it with docker swarm unlock-key --rotate. The next question could be – what happens while the swarm is locked? The swarm components do not operate until you run docker swarm unlock. A minimal lock/unlock cycle is sketched below.
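
Here is that cycle end-to-end (the key is elided; exact output wording may differ slightly between releases):

$ docker swarm update --autolock=true
Swarm updated.
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-...

$ sudo systemctl restart docker        # the manager comes back locked
$ docker swarm unlock
Please enter unlock key:               # paste the SWMKEY from above
$ docker swarm unlock-key --rotate     # rotate the key at any time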

Let us look at how it actually works:

Step-1: Let us assume that we have a 6-node cluster in which Master1 is our Leader node.

[Screenshot: docker node ls showing the 6-node cluster]

Step-2: Let us update “Node1” to become the Leader node, as shown below:

[Screenshot: node promotion output]

Step-3: Run docker swarm update command to enable autolock. This command displays the key to unlock a swarm manager.

[Screenshot: docker swarm update --autolock output with the unlock key]

Step-4: One can provide the key to unlock the manager node.

[Screenshot: unlocking the manager node]

Step-5: Let us test if it really works by restarting the manager node.

[Screenshot: restarting the manager node]

Step-6: After the system comes back, it can join back as a manager only when you supply the unlock key.

[Screenshot: docker swarm unlock after restart]

Step-7: As shown below, Manager1 joins back as a manager node:

[Screenshot: docker node ls showing Manager1 back as a manager]

Deploy Stack directly from docker-compose

One of the compelling features under Docker 1.12 was the introduction of the “Distributed Application Bundle”, rightly called DAB. DAB helped developers build and test containerized apps and then wrap all the app artifacts into stable .dab files. Operations teams, in turn, could take those .dab files and deploy apps by creating stacks of services from them.

Under Docker 1.13, this two-stage process has been simplified and turned into one single command to build microservices under Swarm Mode. You don’t need to run docker-compose bundle to build a .dab file; instead, Docker 1.13 adds Compose-file support to the `docker stack deploy` command so that services can be deployed using a `docker-compose.yml` file directly.

Below picture depicts one single-liner command to achieve this:

[Image: docker stack deploy one-liner]

To test this out, let us write a docker-compose file as shown below:

  1. Create a directory called “collab” and place the below docker-compose.yml file under the same directory:

ajeetsraina81@ucp-1:~/collab$ cat docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - collabnet
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
    networks:
      - collabnet
volumes:
  db_data:
networks:
  collabnet:
    driver: overlay

[Please note that the above file might pick up stray spaces and blank lines if you copy-paste it directly; use http://lorry.io/ to validate it]

Before we deploy the stack, let us verify the existing networks:

ajeetsraina81@ucp-1:~/collab$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7f8a98db7b71        bridge              bridge              local
fb456b7434a0        docker_gwbridge     bridge              local
420d78209766        host                host                local
kpa5bmhehoe5        ingress             overlay             swarm
40b2c7537075        none                null                local

Building microservices using docker-compose is now a one-liner command:

ajeetraina@master001:~/collab$ sudo docker stack deploy --compose-file ./docker-compose.yml myapp
Ignoring unsupported options: restart
Creating network myapp_default
Creating network myapp_collabnet
Creating service myapp_wordpress
Creating service myapp_db

Done. Let us verify that this created the new networks:

ajeetraina@master001:~/collab$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7f8a98db7b71        bridge              bridge              local
fb456b7434a0        docker_gwbridge     bridge              local
420d78209766        host                host                local
kpa5bmhehoe5        ingress             overlay             swarm
zxsd88aanqa9        myapp_collabnet     overlay             swarm
wx0nbvatrvax        myapp_default       overlay             swarm
40b2c7537075        none                null                local

 

ajeetsraina81@ucp-1:~/collab$ sudo docker-compose config --services
db
wordpress

[Screenshot: docker-compose config --services output]

In a future blog, we will explore Secrets Management, Plugin Management and other bug fixes/improvements which come with the 1.13 release.