The 20-Minute Docker 1.12 Swarm Mode Demonstration on the Azure Platform

Estimated Reading Time: 7 minutes

2016 has been a great year for Docker Inc. With the announcement of the Docker 1.12 release at the last DockerCon, a new generation of Docker clustering and distributed systems was born. With the optional "Swarm Mode" feature integrated directly into the core Docker Engine, you get native management of a cluster of Docker Engines: orchestration, decentralized design, service and application deployment, scaling, rolling updates, desired-state reconciliation, multi-host networking, service discovery and a routing-mesh implementation, and all of these features work flawlessly. With the recent Engine 1.12.5 release, they have matured enough to be considered production ready.

In this blog post, I will spend another 20 minutes going quickly through a complete A-Z tutorial on Swarm Mode, covering important features like orchestration, scaling, routing mesh, overlay networking and rolling updates.

  • Preparing Your Azure Environment
  • Setting up Cluster Nodes
  • Setting up Master Node
  • Setting up Worker Nodes
  • Creating Your First Service
  • Inspecting the Service
  • Scaling service for the first time
  • Creating the Nginx Service
  • Verifying the Nginx Page
  • Stopping all the services in a single command
  • Building WordPress Application using CLI
  • Building WordPress Application using Docker-Compose
  • Demonstrating CloudYuga RSVP Application
  • Scaling the CloudYuga RSVP Application
  • Demonstrating Rolling Updates
  • Docker 1.12 Scheduling | Restricting Service to specific nodes

 

Preparing Your Azure Environment:

1. Login to Azure Portal.
2. Create a minimum of 5 swarm nodes (1 master & 4 worker nodes). [We can definitely automate this; see the sketch after this list.]
3. While creating the Virtual Machine, select "Docker on Ubuntu Server" (it ships with Docker 1.12.3 on Ubuntu 16.04)
4. Select Password rather than Public Key for quick access using PuTTY. [For demonstration purposes only.]
5. Select the same default Resource Group for all the nodes so that they can communicate with each other
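If you prefer to script step 2 instead of clicking through the portal, a rough sketch with the Azure CLI (az) looks like this. The resource group name, image alias and credentials are placeholders, and Docker would still need to be installed on each VM afterwards (the portal image above already ships with it):

# Create a resource group and 5 Ubuntu VMs for the Swarm cluster
az group create --name swarm-rg --location eastus
for node in master1 node1 node2 node3 node4; do
    az vm create \
      --resource-group swarm-rg \
      --name $node \
      --image UbuntuLTS \
      --admin-username ajeetraina \
      --admin-password '<a strong password>' \
      --authentication-type password
done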

 

Setting Up Cluster Nodes:

tryit1

 

Setting Up Master Node:

ajeetraina@Master1:~$ sudo docker swarm init --advertise-addr 10.1.0.4
Swarm initialized: current node (dj89eg83mymr0asfru2n9wpu7) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join \
--token SWMTKN-1-511d99ju7ae74xs0kxs9x4sco8t7awfoh99i0vwrhhwgjt11wi-d8y0tplji3z449ojrfgrrtgyc \
10.1.0.4:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

 

Setting up Worker Node:

Adding worker nodes is quite easy. Just run the above join command on each worker node, one by one, to connect it to the manager. This can also be automated (a rough sketch follows below).

Tip: In case you lose your current session on the Manager and want to know which token will allow you to connect to the cluster, run the below command on the manager node:

ajeetraina@Master1:~$ sudo docker swarm join-token worker
To add a worker to this swarm, run the following command:

docker swarm join \
--token SWMTKN-1-511d99ju7ae74xs0kxs9x4sco8t7awfoh99i0vwrhhwgjt11wi-d8y0tplji3z449ojrfgrrtgyc \
10.1.0.4:2377

Run the above join command on all the worker nodes one by one.
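To automate this join step, you can grab just the token with the -q flag and push the join command to each worker over SSH. A rough sketch, assuming the workers are reachable as Node1..Node4:

# Run from the manager: fetch the worker token and join each node remotely
TOKEN=$(sudo docker swarm join-token -q worker)
for node in Node1 Node2 Node3 Node4; do
    ssh $node "sudo docker swarm join --token $TOKEN 10.1.0.4:2377"
done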

 

Listing the Swarm Cluster:

ajeetraina@Master1:~$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
49fk9jibezh2yvtjuh5wlx3td    Node2     Ready   Active
aos67yarmj5cj8k5i4g9l3k6g    Node1     Ready   Active
dj89eg83mymr0asfru2n9wpu7 *  Master1   Ready   Active        Leader
euo8no80mr7ocu5uulk4a6fto    Node4     Ready   Active

 

Verifying whether a node is a worker node:

Run the below command on the worker node:

$sudo docker info

Swarm: active
NodeID: euo8no80mr7ocu5uulk4a6fto
Is Manager: false
Node Address: 10.1.0.7
Runtimes: runc

The “Is Manager: false” entry specifies that this node is a worker node.

Creating our First Service:

ajeetraina@Master1:~$ sudo docker service create alpine ping 8.8.8.8
2ncblsn85ft2sgeh5frsj0n8g
ajeetraina@Master1:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                  PORTS               NAMES
cc80ce569a94        alpine:latest       "ping 8.8.8.8"      2 seconds ago       Up Less than a second                       nauseous_stonebraker.1.6gcqr8d9brbf9lowgqpb4q6uo
ajeetraina@Master1:~$

 

Querying the service:

Syntax: sudo docker service ps <service-id>

Example:

ajeetraina@Master1:~$ sudo docker service ps 2ncb
ID                         NAME                    IMAGE   NODE     DESIRED STATE  CURRENT STATE           ERROR
6gcqr8d9brbf9lowgqpb4q6uo  nauseous_stonebraker.1  alpine  Master1  Running        Running 54 seconds ago

Alternative Method:

ajeetraina@Master1:~$ sudo docker service ps nauseous_stonebraker
ID                         NAME                    IMAGE   NODE     DESIRED STATE  CURRENT STATE               ERROR
6gcqr8d9brbf9lowgqpb4q6uo  nauseous_stonebraker.1  alpine  Master1  Running        Running about a minute ago
ajeetraina@Master1:~$

 

Inspecting the Service:

ajeetraina@Master1:~$ sudo docker service inspect 2ncb
[
    {
        "ID": "2ncblsn85ft2sgeh5frsj0n8g",
        "Version": {
            "Index": 27
        },
        "CreatedAt": "2016-11-16T10:59:10.462901856Z",
        "UpdatedAt": "2016-11-16T10:59:10.462901856Z",
        "Spec": {
            "Name": "nauseous_stonebraker",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "alpine",
                    "Args": [
                        "ping",
                        "8.8.8.8"
                    ]
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {}
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause"
            },
            "EndpointSpec": {
                "Mode": "vip"
            }
        },
        "Endpoint": {
            "Spec": {}
        },
        "UpdateStatus": {
            "StartedAt": "0001-01-01T00:00:00Z",
            "CompletedAt": "0001-01-01T00:00:00Z"
        }
    }
]
ajeetraina@Master1:~$

 

Scaling the Service:

ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  1/1       alpine  ping 8.8.8.8
ajeetraina@Master1:~$ sudo docker service scale nauseous_stonebraker=4
nauseous_stonebraker scaled to 4
ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  4/4       alpine  ping 8.8.8.8
ajeetraina@Master1:~$
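As an aside, the same result can be achieved with docker service update instead of docker service scale; a minimal equivalent:

# Equivalent to "docker service scale nauseous_stonebraker=4"
$ sudo docker service update --replicas 4 nauseous_stonebraker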

 

Creating a Nginx Service:

ajeetraina@Master1:~$ sudo docker service create --name web --publish 80:80 --replicas 4 nginx
9xm0tdkt83z395bqfhjgwge3t
ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  4/4       alpine  ping 8.8.8.8
9xm0tdkt83z3  web                   0/4       nginx
ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  4/4       alpine  ping 8.8.8.8
9xm0tdkt83z3  web                   4/4       nginx

 

Verifying the Nginx Web Page:

ajeetraina@Master1:~$ sudo curl http://localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
ajeetraina@Master1:~$

 

Stopping all the Swarm Mode services in a single shot

$sudo docker service rm $(docker service ls | awk '{print $1}')
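A slightly cleaner variant uses -q, which prints only the service IDs and skips the header row that awk would otherwise pass to docker service rm:

$ sudo docker service rm $(sudo docker service ls -q)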

Building WordPress Application using CLI

Create an overlay network:

$sudo docker network create --driver overlay collabnet

Run the backend(wordpressdb) service:

$sudo docker service create --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress --network collabnet --replicas 1 --name wordpressdb mysql:latest

Run the frontend(wordpressapp) service:

$sudo docker service create --env WORDPRESS_DB_HOST=wordpressdb --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Inspecting the Virtual IP Address:

$docker service inspect --format '{{.Endpoint.VirtualIPs}}' wordpressdb
$docker service inspect --format '{{.Endpoint.VirtualIPs}}' wordpressapp

Verifying that the WordPress application is working:

$curl http://localhost

Building WordPress Application using Docker-Compose:

Create a file called docker-compose.yml under some directory on your Linux system with the below entry:

version: '2'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: wordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_PASSWORD: wordpress
volumes:
    db_data:

Execute the below command to bring up the application:

$sudo docker-compose up -d

Verifying the running containers:

$sudo docker-compose ps

Listing the services defined in the Compose file:

$sudo docker-compose config --services

 

Testing CloudYuga RSVP Application:

[Credits: http://www.cloudyuga.guru]

$docker network create --driver overlay rsvpnet
$docker service create --name mongodb -e MONGODB_DATABASE=rsvpdata --network rsvpnet mongo:3.3
$docker service create --name rsvp -e MONGODB_HOST=mongodb --publish 5000 --network rsvpnet teamcloudyuga/rsvpapp
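Since --publish 5000 was given without a host port, Swarm picks a published port from its dynamic range (30000 in this walkthrough) and the routing mesh exposes it on every node. You can look it up like this:

# Show the target -> published port mapping for the rsvp service
$ docker service inspect --format '{{range .Endpoint.Ports}}{{.TargetPort}} -> {{.PublishedPort}}{{"\n"}}{{end}}' rsvp
5000 -> 30000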

Verifying that it opens up in the web browser:

$ curl http://localhost:30000
<!doctype html>
<html>
<title>RSVP App!</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/static/bootstrap.min.css">
<link rel="icon" href="https://raw.githubusercontent.com/cloudyuga/rsvpapp/master/static/cloudyuga.png" type="image/png" sizes="16x16">
<script type="text/javascript" src="/static/jquery.min.js"></script>
<script type="text/javascript" src="/static/bootstrap.min.js"></script>
<body>
<div class="jumbotron">
<div class="container">
<div align="center">
<h2><a href="">CloudYuga<img src="https://raw.githubusercontent.com/cloudyuga/rsvpapp/master/static/cloudyuga.png"/>Garage RSVP!</a></h2>
<h3><font color="maroon"> Serving from Host: 75658e4fd141 </font>
<font size="8" >

Delete the CloudYuga RSVP service

$ docker service rm rsvp

 

Create the CloudYuga RSVP service with custom names:

$ docker service create --name rsvp -e MONGODB_HOST=mongodb -e TEXT1="Docker Meetup" -e TEXT2="Bangalore" --publish 5000 --network rsvpnet teamcloudyuga/rsvpapp

 

Scale the CloudYuga RSVP service:

$ docker service scale rsvp=5

 

Demonstrating the rolling update

$docker service update --image teamcloudyuga/rsvpapp:v1 --update-delay 10s rsvp

Keep refreshing the RSVP frontend and watch for changes: the "Name" field should change to "Your Name".
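You can also follow the update from the CLI; the tasks are replaced one at a time, 10 seconds apart as requested by --update-delay:

# Watch old tasks shut down and new :v1 tasks come up
$ docker service ps rsvp

# Overall state of the rolling update (updating / completed)
$ docker service inspect --format '{{json .UpdateStatus}}' rsvp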

Demonstrating DAB and Docker Compose

$ mkdir cloudyuga
$ cd cloudyuga/
:~/cloudyuga$ docker-compose bundle -o cloudyuga.dab
WARNING: Unsupported top level key 'networks' - ignoring
Wrote bundle to cloudyuga.dab
:~/cloudyuga$ vi docker-compose.yml
:~/cloudyuga$ ls
cloudyuga.dab  docker-compose.yml
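For completeness, the docker-compose.yml that produces a bundle like the one below would look roughly along these lines (a sketch reconstructed from the DAB contents; the top-level networks key is exactly what triggers the "Unsupported top level key" warning, since bundles do not carry network definitions):

version: '2'

services:
  mongodb:
    image: mongo:3.3
    environment:
      - MONGODB_DATABASE=rsvpdata
    networks:
      - rsvpnet

  web:
    image: teamcloudyuga/rsvpapp
    environment:
      - MONGODB_HOST=mongodb
    ports:
      - "5000"
    networks:
      - rsvpnet

networks:
  rsvpnet:
    driver: overlay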

cat cloudyuga.dab
{
    "Services": {
        "mongodb": {
            "Env": [
                "MONGODB_DATABASE=rsvpdata"
            ],
            "Image": "mongo@sha256:08a90c3d7c40aca81f234f0b2aaeed0254054b1c6705087b10da1c1901d07b5d",
            "Networks": [
                "rsvpnet"
            ],
            "Ports": [
                {
                    "Port": 27017,
                    "Protocol": "tcp"
                }
            ]
        },
        "web": {
            "Env": [
                "MONGODB_HOST=mongodb"
            ],
            "Image": "teamcloudyuga/rsvpapp@sha256:df59278f544affcf12cb1798d59bd42a185a220ccc9040c323ceb7f48d030a75",
            "Networks": [
                "rsvpnet"
            ],
            "Ports": [
                {
                    "Port": 5000,
                    "Protocol": "tcp"
                }
            ]
        }
    },
    "Version": "0.1"
}

Scaling the Services:

$docker service ls
ID            NAME               REPLICAS  IMAGE                                                                                          COMMAND
66kcl850fkkh  cloudyuga_web      1/1       teamcloudyuga/rsvpapp@sha256:df59278f544affcf12cb1798d59bd42a185a220ccc9040c323ceb7f48d030a75
aesw4vcj1s11  cloudyuga_mongodb  1/1       mongo@sha256:08a90c3d7c40aca81f234f0b2aaeed0254054b1c6705087b10da1c1901d07b5d
aztab8c3r22c  rsvp               5/5       teamcloudyuga/rsvpapp:v1
f4olzfoomu76  mongodb            1/1       mongo:3.3

$docker service scale rsvp=5

Restricting service to node-1

$sudo docker node update --label-add type=ubuntu node-1

master==>sudo docker service create --name mycloud --replicas 5 --network collabnet --constraint 'node.labels.type == ubuntu' dockercloud/hello-world
0elchvwja6y0k01mbft832fp6
master==>sudo docker service ls
ID            NAME               REPLICAS  IMAGE                                                                                          COMMAND
0elchvwja6y0  mycloud            0/5       dockercloud/hello-world
66kcl850fkkh  cloudyuga_web      3/3       teamcloudyuga/rsvpapp@sha256:df59278f544affcf12cb1798d59bd42a185a220ccc9040c323ceb7f48d030a75
aesw4vcj1s11  cloudyuga_mongodb  1/1       mongo@sha256:08a90c3d7c40aca81f234f0b2aaeed0254054b1c6705087b10da1c1901d07b5d
aztab8c3r22c  rsvp               5/5       teamcloudyuga/rsvpapp:v1
f4olzfoomu76  mongodb            1/1       mongo:3.3

master==>sudo docker service ps mycloud
ID                         NAME       IMAGE                    NODE    DESIRED STATE  CURRENT STATE          ERROR
a5t3rkhsuegf6mab24keahg1y  mycloud.1  dockercloud/hello-world  node-1  Running        Running 5 seconds ago
54dfeuy2ohncan1sje2db9jty  mycloud.2  dockercloud/hello-world  node-1  Running        Running 3 seconds ago
072u1dxodv29j6tikck8pll91  mycloud.3  dockercloud/hello-world  node-1  Running        Running 4 seconds ago
enmv8xo3flzsra5numhiln7d3  mycloud.4  dockercloud/hello-world  node-1  Running        Running 4 seconds ago
14af770jbwipbgfb5pgwr08bo  mycloud.5  dockercloud/hello-world  node-1  Running        Running 4 seconds ago
master==>

 

Top 10 Cool New Features in Docker Datacenter for Dev & IT Operations Team

Estimated Reading Time: 6 minutes
Docker Datacenter (DDC) provides an integrated platform for developers and IT operations teams to collaborate securely on the application life cycle. It is an integrated, end-to-end platform for agile application development and management from the datacenter to the cloud. Basically, DDC brings container management and deployment services to enterprises with a production-ready Containers-as-a-Service (CaaS) platform supported by Docker and hosted locally behind an organization's firewall. The Docker native tools are integrated to create an on-premises CaaS platform, allowing organizations to save time and seamlessly take applications built in dev to production.
 

finale_1

 
In this blog, I will focus on the top 10 new cool features which come with DDC:
 

ddc_1

Let us get started with a fresh Docker Datacenter setup. I am going to leverage 6 node instances on Google Cloud Platform to share my experience with the DDC UI.

dc_01

Built-in Docker 1.12 Swarm Mode Capabilities:

Run the below command on the first node where we are going to install Docker CS Engine.

$ sudo curl -SLf https://packages.docker.com/1.12/install.sh | sh

dc_0

Next, it's time to install UCP:

$sudo docker run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp install \
--host-address 10.140.0.5 \
--interactive

dc_1

This brings up UCP UI as shown below. Kudos to Docker UCP Team for “a very lightweight UI” with the latest release.

dc_3

Docker Inc. provides you with a 30-day trial license once you register for Docker Datacenter. Upload the license accordingly.

ddc-6

Once you log in, you will see that a Swarm Mode cluster has already been initialized.

I was interested to see how easy it is to add nodes to the cluster. Click on Add Nodes > select nodes as either Manager or Worker based on your requirement. The Docker UCP team has done a great job in providing options like "--advertise-addr" to build up the cluster in a few seconds.

ddc-11ddc-12

ucp_111

It took just 5 minutes to bring up the 6-node cluster.

Please ensure that the following ports are not blocked by a firewall.

ddc-20ddc-21

ucp_112

 

HTTP Routing Mesh & Load Balancing

Let us try out another interesting new feature: Routing Mesh. It makes use of load-balancing concepts and provides a global published port for a given service. The routing mesh uses port-based service discovery and load balancing, so to reach any service from outside the cluster you need to expose a port and reach the service via that published port.

Docker 1.12 Engine creates an "ingress" overlay network to implement the routing mesh. The frontend web service and a sandbox on each node are part of the "ingress" network and take part in the routing mesh. All nodes become part of the "ingress" overlay network by default, using a sandbox network namespace created inside each node.

Let us try to setup a simple wordpress application and see how Routing Mesh works.

i. Create a network called "collabnet". The UCP team has done a great job in covering almost all the features that we normally use from the CLI.

ddc-14

As shown below, a network “collabnet” with the scope “swarm” gets created:

 

ddc-15

ii. Creating a WordPress application

Typically, to create a frontend service named "wordpressapp" we would run the command below. Passing the same parameters through the UCP UI is a matter of just a few seconds:

$sudo docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

ddc-16

ddc-17ddc-001

ddc-18

ddc-19ddc-21

ddc-22

ddc-23

ddc-24

You can easily browse to the master node and get the WordPress page working.

pci2

Let us enable Routing Mesh as shown below:

hrm_1

Once Routing Mesh is enabled, you can access the application from any node, even if that particular node is not running any container serving the WordPress application. Let us try accessing it from worker-5 as shown below:

pic1

Cool.. Routing Mesh just works flawlessly.

 

Integrating Notary Installation and HA as part of DTR:

Installing DTR is a matter of a single one-liner command, as shown below:

ddc-24

Setting up Centralized Logging through UCP

Under admin settings > Logs section, one can supply the rsyslog server details to push all the cluster logs to a centralized location.

log1

logs

TLS Mutual Authentication and Encryption:

UCP Team has done another great job in including TLS mutual authentication and encryption feature to secure communications between itself and all other nodes. There is also cert rotation, which is awesome especially from a compliance point of view. The TLS encryption also ensures that the nodes are connecting to the correct managers in the swarm.

Join tokens are secrets that allow a node to join the swarm. There are two different join tokens available, one for the worker role and one for the manager role. You usually pass the token using the --token flag when you run docker swarm join. Nodes use the join token only when they join the swarm. You can view or rotate the join tokens using docker swarm join-token. These features are now available right inside Docker Datacenter, as shown below:

new1
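For reference, the same rotation can be done from any manager node with the standard Docker 1.12 CLI; a minimal sketch:

# Print the current worker join token (and the full join command)
$ docker swarm join-token worker

# Invalidate the old worker token and generate a new one
$ docker swarm join-token --rotate worker

# -q prints just the token, handy for scripting
$ docker swarm join-token -q manager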

Raft Consensus, Orchestrator  & Dispatcher specific Changes:

One of the compelling features introduced in the latest Docker Datacenter is the capability to alter and control Raft consensus parameters and orchestration- and dispatcher-specific settings. These features were enabled in the recent 1.12.2 release and are exposed through the docker swarm update command, as shown below:

one1

--snapshot-interval is an important parameter for performance tuning, while --dispatcher-heartbeat controls the heartbeat period, which defaults to 5 seconds.
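As a quick CLI illustration (the values here are just examples, not recommendations), these knobs can be adjusted on any manager node:

# Raise the Raft snapshot interval and relax the dispatcher heartbeat
$ docker swarm update --snapshot-interval 20000 --dispatcher-heartbeat 10s

# Verify the new settings in the Swarm section of docker info
$ docker info | grep -A 20 "Swarm: active"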

In a future post, I am going to talk about DTR, Docker Compose v2-specific examples and cluster configuration under the latest Docker Datacenter.

Walkthrough: Building distributed Docker persistent storage platform for Microservices using DellEMC RexRay & ScaleIO

Estimated Reading Time: 9 minutes

Today, enterprise IT looks for a secure, scalable, out-of-the-box, elastic, portable and integrated solution platform which can span from a highly dense data center to the hybrid cloud. There is strong demand for data center architectures that can cope with the growth of applications, compute and storage requirements through higher-capacity infrastructure and a higher degree of utilization through virtualization. That's just one part of the story. Let's not disagree with the fact that even though we have complex architectural designs and protocols today, a simplified approach towards application life cycle management, enterprise-grade resiliency, provisioning tools, security control and management has become a subtle requirement.

Open source containerization is foreseen as one of the hot storage technology trends in 2017. Key areas like Docker persistent storage, data protection, storage consumption and portability of microservices are expected to grow in demand in the coming years. "Storage growth automatically aligned with application needs" is emerging as the next bridge to cross for enterprise IT. The integration of Docker, ScaleIO and RexRay is a perfect answer for enterprise IT and cloud vendors who have been wrestling with the immaturity of Docker persistent storage.

121_sc

Why is ScaleIO a perfect choice?

DellEMC ScaleIO is software that creates a server-based SAN from local storage to deliver performance and capacity on demand. In simple language, it turns your DAS into a server SAN. ScaleIO integrates storage and compute resources, scaling to thousands of servers (also called nodes). As an alternative to traditional SAN infrastructures, ScaleIO combines hard disk drives (HDD), solid state disks (SSD), and Peripheral Component Interconnect Express (PCIe) flash cards to create a virtual pool of block storage with varying performance tiers.

One attractive feature of ScaleIO is that it is hardware agnostic and supports either physical or virtual application servers. This gives you the flexibility of implementing it on bare metal, on VMs and in the cloud. ScaleIO brings various enterprise-ready features like elasticity (add, move or remove storage and compute resources "on the fly" with no downtime), no capacity planning required, massive I/O parallelism, no need for arrays, switching and HBAs, storage auto-rebalancing, a simple management UI and much more.

Elements of ScaleIO:

1. ScaleIO Meta Data Manager (MDM) – It configures and monitors the ScaleIO system.
2. ScaleIO Data Server (SDS) — It manages the capacity of a single server and acts as a back-end for data access. This component  is installed on all servers contributing storage devices to the ScaleIO system.
3. ScaleIO Data Client (SDC) — It is a lightweight device driver situated in each host whose applications or file system requires access to the ScaleIO virtual SAN block devices. The SDC exposes block devices representing the ScaleIO volumes that are currently mapped to that host.

Generally, the MDM can be configured as a three-node cluster (Master MDM, Slave MDM and Tie-Breaker MDM) or as a five-node cluster (Master MDM, 2 Slave MDMs and 2 Tie-Breaker MDMs) to provide greater resiliency.

In my previous blog post, I talked about how to achieve Docker persistent storage using NFS & RexRay on AWS EC2. In this blog, I am going to demonstrate how to build an enterprise-grade Docker persistent storage platform with ScaleIO & RexRay.

Infrastructure Setup:

I am going to leverage 3 CentOS 7.2 VMs already running in my VMware ESXi 6 environment. (Please note that I am NOT using a ScaleIO Virtual SAN setup; this is just a simple VM setup used for demonstration purposes.)

1010_sc

 

104_sc

 

Pre-requisite:

As a pre-requisite, I added a second (secondary) hard disk of size 100 GB to all the VMs. (Please note that ScaleIO expects at least 90 GB of raw disk on each node.) I ensured that all the nodes are reachable and on a common network.

120_sc

1_sc

Setting up ScaleIO Gateway Server on MDM1 using Docker:

The ScaleIO Gateway server can be installed on any of the listed servers or on a separate server. (Please DO NOT install the Gateway on a server on which Xcache will be installed.)

Let us install Docker 1.13 on all the 3 nodes using the below command:

$sudo curl -sSL https://test.docker.com/ | sh

You can verify the version using docker version command as shown below:

51_sc

Good news for Docker enthusiasts: the Docker team has enabled the long-awaited logging feature in the upcoming Docker 1.13.0 release. Under Docker 1.13.0-rc3, one can use docker service logs <service name> to view the logs for a particular service spanned across the cluster.

Service logging doesn't come enabled with the default Docker 1.13.0 installation; one has to set experimental: true to get it working. Let us enable the experimental feature so that we can leverage the latest features of Docker 1.13.

52_sc
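For reference, one common way to enable experimental mode on a systemd-based host is via /etc/docker/daemon.json; the snippet below is a minimal sketch (it overwrites any existing daemon.json, so merge keys if you already have one):

# Enable experimental features for the Docker daemon
$ echo '{ "experimental": true }' | sudo tee /etc/docker/daemon.json
$ sudo systemctl restart docker
$ sudo docker version | grep -i experimental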

Ensure that you see “experimental: true” under docker version as shown below:

53_sc

Next, let's install the ScaleIO Gateway server using a Docker container. The Gateway basically includes the Installation Manager (IM), which is used to deploy the system.

root@mdm1:~# docker run -d -p 80:80 \
-p 443:443 \
--restart=always \
-e MDM1_IP_ADDRESS=10.94.214.180 \
-e MDM2_IP_ADDRESS=10.94.214.181 \
-e TRUST_MDM_CRT=true \
cduchesne/scaleio-gateway:2.0.1
[Credit to Chris Duchesne for helping me out with this Docker image]

Verify that the gateway container gets started as shown below:

i_sc

Cool. It was so simple to set up, wasn't it?

Next, it’s time to open up the browser https://10.94.214.180 and you will see a fancy ScaleIO UI as shown below:

2_sc

Supply admin as username and Scaleio123 as a password to enter into the Gateway Server UI.

3_sc

Next, you will need to download the ScaleIO-related packages from this link. Click on Browse and upload all the packages. These packages will be pushed to all the nodes in the next window. (I am planning to automate this step using a Docker container in my next blog post.)
4_sc

6_sc

As we have a 3-node setup, let's go ahead and select the right option. You have the option of importing a .CSV file too.

9_sc

 

13_sc

14_sc

15_sc

16_sc

Ensure that the status shows “Completed” without any error message.

Please note: In case you are trying it on Cloud, you will need to ensure that public keys are in place.

Finally, you should see that the operation completes successfully:

11_sc

One can install the ScaleIO UI on a Windows machine to get a clear view of the overall volumes, as shown below:

22_sc

As shown above, there is a total of 297 GB capacity, which is the aggregate of all the secondary HDDs added to the VMs during the initial setup.

23_sc

One can use command-line tool to verify the capacity:

[root@localhost ~]# scli --query_all_volumes

Query-all-volumes returned 1 volumes

Protection Domain 2c732cf300000000 Name: default

Storage Pool 0dbf62e900000000 Name: default

Volume ID: bd2a1a1500000000 Name: dock_vol Size: 24.0 GB (24576 MB) Mapped to 3 SDC Thin-provisioned

 

#/opt/emc/scaleio/sdc/bin/drv_cfg --query_guid

447DB343-B774-4706-B581-B1DC7B44352D

 

Integrating DellEMC RexRay with ScaleIO:

If you are very new to RexRay, I suggest you read this. The ScaleIO driver registers a storage driver named scaleio with the libStorage driver manager and is used to connect to and manage ScaleIO storage. To integrate RexRay with the ScaleIO driver, follow the steps below:

First, install RexRay with just a one-liner command on the MDM2 system, as shown below:

90_sc

In order to make RexRay talk to the ScaleIO driver, edit /etc/rexray/config.yml to look as shown below:

31_sc

PLEASE NOTE: the System ID and System Name are important entries and must not be left out, or RexRay will not be able to talk to the ScaleIO driver.
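For reference, a minimal sketch of what /etc/rexray/config.yml can look like for the ScaleIO (libStorage) driver; the gateway address and credentials below are the ones used in this setup, while the system ID/name, protection domain and storage pool are placeholders you must replace with your own values:

libstorage:
  service: scaleio
scaleio:
  endpoint: https://10.94.214.180/api
  insecure: true
  userName: admin
  password: Scaleio123
  systemID: <your ScaleIO system ID>
  systemName: <your ScaleIO system name>
  protectionDomainName: default
  storagePoolName: default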

Now start the RexRay service:

sc-32

You can verify the environment variables for RexRay using the rexray env command as shown below:

41_s

I created a default volume through the ScaleIO UI. Let us verify that RexRay detects the volume successfully.

91_sc

Once RexRay is integrated with ScaleIO, one can directly use the docker volume command to create a volume instead of manually going to the ScaleIO UI and mapping it to a particular server. Here is how you can achieve it:

94_sc

As shown above, the docker volume inspect command shows that RexRay is using the ScaleIO service.
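A minimal sketch of that flow from the CLI (the size option is passed through to the RexRay driver and the value here is purely illustrative):

# Create a ScaleIO-backed volume through the RexRay volume driver
$ sudo docker volume create --driver rexray --name mongodb1 --opt size=16

# "Driver": "rexray" in the output confirms the integration
$ sudo docker volume inspect mongodb1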

Running a Docker container on a RexRay volume which uses the ScaleIO driver as a backend:

Let us set up a 2-node Swarm Mode cluster (MDM2 and TB) and see how microservices running inside a Docker container use a ScaleIO + RexRay enabled volume. I assume that you already have a 2-node Swarm Mode cluster built and running.

200_sc

As shown above, I have added labels to the nodes so that the containers or tasks are placed on MDM2 and TB accordingly.

Creating a network called “collabnet”:

100_sc

Creating Docker Volume using RexRay Driver:

112_sc

Listing out the volumes:

113_sc

We will be using “mongodb1” volume for our microservices.

201_sc

202_sc
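The screenshots above capture the service creation; a rough CLI equivalent is sketched below (the node label value and the MongoDB data path are assumptions for illustration):

# Pin the MongoDB task to a labelled node and mount the RexRay-backed volume
$ sudo docker service create \
    --name mongodb \
    --replicas 1 \
    --network collabnet \
    --constraint 'node.labels.type == mdm2' \
    --mount type=volume,source=mongodb1,target=/data/db,volume-driver=rexray \
    mongo:3.3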

Shown below is a glimpse of how the services look under Swarm Mode:

300_sc

The entries below show that the container is using the RexRay driver for persistent storage.

305_sc

In the next blog post, I will talk further about how one can containerize all the ScaleIO components and use tools like Puppet/Chef/Vagrant to bring up RexRay + ScaleIO + Docker 1.13 Swarm Mode, all integrated to provide persistent storage for a microservices architecture.
