Walkthrough: Building distributed Docker persistent storage platform for Microservices using DellEMC RexRay & ScaleIO

Today, Enterprise IT looks for a secure, scalable, out-of-the-box, elastic, portable and integrated platform that can span from the highly dense data center to the hybrid cloud. There is strong demand for data center architectures that can cope with the growth of applications, compute and storage requirements through higher-capacity infrastructure and through a higher degree of utilization via virtualization. That's just one part of the story. Let's not disagree with the fact that even though we have complex architectural designs and protocols today, the demand for a simplified approach to application life cycle management, enterprise-grade resiliency, provisioning tools, security control and management has been rising steadily.

Open source containerization is foreseen as one of the hot storage technology trends of 2017. Key areas like Docker persistent storage, data protection, storage consumption and portability of microservices are expected to grow on demand in the coming years. "Storage growth automatically aligned with application needs" is emerging as the next bridge to cross for enterprise IT. The integration of Docker, ScaleIO & RexRay is a perfect answer for enterprise IT and cloud vendors who have been wrestling with the immaturity of Docker persistent storage.


Why is ScaleIO a perfect choice?

DellEMC ScaleIO is software that creates a server-based SAN from local storage to deliver performance and capacity on demand. In simple terms, it turns your DAS into a server SAN. ScaleIO integrates storage and compute resources, scaling to thousands of servers (also called nodes). As an alternative to traditional SAN infrastructures, ScaleIO combines hard disk drives (HDD), solid state disks (SSD), and Peripheral Component Interconnect Express (PCIe) flash cards to create a virtual pool of block storage with varying performance tiers.

One attractive feature of ScaleIO is that it is hardware-agnostic and supports either physical or virtual application servers. This gives you the flexibility of implementing it on bare metal, on VMs and in the cloud. ScaleIO brings various enterprise-ready features: elasticity (add, move or remove storage and compute resources "on the fly" with no downtime), no capacity planning required, massive I/O parallelism, no need for arrays, switching or HBAs, automatic storage rebalancing, a simple management UI and much more.

Elements of ScaleIO:

1. ScaleIO Meta Data Manager (MDM) — configures and monitors the ScaleIO system.
2. ScaleIO Data Server (SDS) — manages the capacity of a single server and acts as a back-end for data access. This component is installed on all servers contributing storage devices to the ScaleIO system.
3. ScaleIO Data Client (SDC) — a lightweight device driver situated on each host whose applications or file system requires access to the ScaleIO virtual SAN block devices. The SDC exposes block devices representing the ScaleIO volumes that are currently mapped to that host.

Generally, the MDM is configured as a three-node cluster (Master MDM, Slave MDM and Tie-Breaker MDM) or as a five-node cluster (Master MDM, 2 Slave MDMs and 2 Tie-Breaker MDMs) to provide greater resiliency.

In my previous blog post, I talked about how to achieve Docker persistent storage using NFS & RexRay on AWS EC2. In this post, I am going to demonstrate how to build an enterprise-grade Docker persistent storage platform with ScaleIO & RexRay.

Infrastructure Setup:

I am going to leverage three CentOS 7.2 VMs already running in my VMware ESXi 6 environment. (Please note that I am NOT using a ScaleIO Virtual SAN setup; this is just a simple VM setup used for demonstration purposes.)

[Screenshots: the three CentOS 7.2 VMs in the VMware ESXi 6 inventory]

Pre-requisite:

As a pre-requisite, I added a second (secondary) hard disk of size 100GB to each of the VMs. (Please note that ScaleIO expects at least 90GB of raw disk on all the nodes.) I ensured that all the nodes are reachable and on a common network.

[Screenshots: the 100GB secondary disk attached to each VM]

Setting up ScaleIO Gateway Server on MDM1 using Docker:

The ScaleIO Gateway server can be installed on any of the listed servers or on a separate server. (Please DO NOT install the Gateway on a server on which Xcache will be installed.)

Let us install Docker 1.13 on all the 3 nodes using the below command:

$ sudo curl -sSL https://test.docker.com/ | sh

You can verify the version using the docker version command as shown below:

[Screenshot: docker version output]

Good news for Docker enthusiasts: the Docker team has enabled the long-awaited logging feature in the upcoming Docker 1.13.0 release. Under Docker 1.13.0-rc3, one can use docker service logs <service name> to view the logs for a particular service spanned across the cluster.

Logging doesn't come with the default Docker 1.13.0 installation; one has to enable experimental: true to get it working. Let us enable the experimental feature so that we can leverage the latest features of Docker 1.13.
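One common way to flip the flag on (a sketch, assuming a systemd-based host; create the file if it does not exist yet) is to set it in /etc/docker/daemon.json:

```json
{
  "experimental": true
}
```

After saving the file, restart the daemon (e.g. sudo systemctl restart docker on CentOS 7) for the setting to take effect.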

[Screenshot: enabling the experimental flag]

Ensure that you see "experimental: true" in the docker version output as shown below:

[Screenshot: docker version showing experimental: true]

Next, let's install the ScaleIO Gateway server as a Docker container. The Gateway includes the Installation Manager (IM), which is used to deploy the system.

root@mdm1:~# docker run -d -p 80:80 \
-p 443:443 \
--restart=always \
-e MDM1_IP_ADDRESS=10.94.214.180 \
-e MDM2_IP_ADDRESS=10.94.214.181 \
-e TRUST_MDM_CRT=true \
cduchesne/scaleio-gateway:2.0.1
[Credit to Chris Duchesne for helping me out with this Docker image]

Verify that the gateway container gets started as shown below:

[Screenshot: the running gateway container]

Cool. It was so simple to set up, wasn't it?

Next, it's time to open the browser at https://10.94.214.180, where you will see a fancy ScaleIO UI as shown below:

[Screenshot: ScaleIO Gateway login page]

Supply admin as the username and Scaleio123 as the password to enter the Gateway Server UI.

[Screenshot: the Gateway Server UI after login]

Next, you will need to download the ScaleIO packages from this link. Click on Browse and upload all the packages. These packages will be pushed to all the nodes in the next step. (I am planning to automate this step using a Docker container in my next blog post.)
[Screenshots: uploading the ScaleIO installation packages]

As we have a 3-node setup, let's go ahead and select the right option. You have the option of importing a .CSV file too.

[Screenshots: Installation Manager wizard steps: topology, credentials and installation progress]

Ensure that the status shows “Completed” without any error message.

Please note: in case you are trying this in the cloud, you will need to ensure that the public keys are in place.

Finally, you should see that the operation completes:

[Screenshot: installation marked as Completed]

One can install the ScaleIO UI on a Windows machine so as to have a clear view of the overall volumes, as shown below:

[Screenshot: ScaleIO UI dashboard]

As shown above, there is a total of 297GB of capacity, which is the aggregate of all the secondary HDDs added to the VMs during the initial setup.

[Screenshot: capacity view in the ScaleIO UI]

One can use the scli command-line tool to verify the capacity:

[root@localhost ~]# scli --query_all_volumes
Query-all-volumes returned 1 volumes
Protection Domain 2c732cf300000000 Name: default
Storage Pool 0dbf62e900000000 Name: default
Volume ID: bd2a1a1500000000 Name: dock_vol Size: 24.0 GB (24576 MB) Mapped to 3 SDC Thin-provisioned

# /opt/emc/scaleio/sdc/bin/drv_cfg --query_guid
447DB343-B774-4706-B581-B1DC7B44352D


Integrating DellEMC RexRay with ScaleIO:

If you are very new to RexRay, I suggest you read this. The ScaleIO driver registers a storage driver named scaleio with the libStorage driver manager and is used to connect to and manage ScaleIO storage. To integrate RexRay with the ScaleIO driver, follow the steps below:

First, install RexRay with just a one-line command on the MDM2 system as shown below:

[Screenshot: one-line RexRay installation]

In order to make RexRay talk to the ScaleIO driver, edit /etc/rexray/config.yml to look as shown below:

[Screenshot: /etc/rexray/config.yml contents]

PLEASE NOTE: the System ID and System Name are important entries and must not be missed; RexRay needs them to talk to the ScaleIO driver.
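For reference, a minimal /etc/rexray/config.yml for the ScaleIO driver looks roughly like this (a sketch based on the REX-Ray libStorage ScaleIO driver documentation; the endpoint, credentials, system ID and system name below are placeholders you must replace with your own values):

```yaml
libstorage:
  service: scaleio
scaleio:
  endpoint: https://10.94.214.180/api   # ScaleIO Gateway REST endpoint
  insecure: true                        # skip certificate verification (lab setup only)
  userName: admin
  password: Scaleio123
  systemID: "1234567890abcdef"          # placeholder - your ScaleIO system ID
  systemName: cluster1                  # placeholder - your ScaleIO system name
  protectionDomainName: default
  storagePoolName: default
```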

Now start the RexRay service:

[Screenshot: starting the rexray service]

You can verify the environment variables for RexRay using the rexray env command as shown below:

[Screenshot: rexray env output]

I created a default volume through the ScaleIO UI. Let us verify whether RexRay detects the volume successfully.

[Screenshot: RexRay volume listing showing the ScaleIO volume]

Once RexRay is integrated with ScaleIO, one can directly use the docker volume command to create a volume instead of manually going to the ScaleIO UI and mapping it to a particular server. Here is how you can achieve it:

[Screenshot: docker volume create and inspect output]

As shown above, the docker volume inspect command shows that RexRay is using ScaleIO service.
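The CLI flow above can be sketched as follows (the volume name and size are illustrative; ScaleIO allocates capacity in multiples of 8GB):

```shell
# Create a volume backed by ScaleIO through the RexRay volume driver
docker volume create --driver rexray --name demo_vol --opt size=16

# Confirm that the volume is served by the rexray driver
docker volume inspect demo_vol
```

The same volume also appears in the ScaleIO UI, mapped automatically to the host that requested it.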

Running a Docker container on a RexRay volume which uses the ScaleIO driver as a backend:

Let us set up a 2-node Swarm Mode cluster (MDM2 and TB) and see how microservices running inside Docker containers use a ScaleIO + RexRay enabled volume. I assume that you already have a 2-node Swarm Mode cluster built and running.

[Screenshot: docker node ls showing the labeled nodes]

As shown above, I have added labels to the nodes so that the containers (tasks) can be pinned to MDM2 and TB accordingly.
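Labeling is done with docker node update; a sketch (the node hostnames are assumptions mirroring my setup):

```shell
# Tag each Swarm node so services can be constrained to it
docker node update --label-add type=mdm2 mdm2
docker node update --label-add type=tb tb

# Verify the labels on a node
docker node inspect mdm2 --format '{{ .Spec.Labels }}'
```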

Creating a network called “collabnet”:

[Screenshot: creating the collabnet overlay network]

Creating Docker Volume using RexRay Driver:

[Screenshot: docker volume create with the rexray driver]

Listing out the volumes:

[Screenshot: docker volume ls output]

We will be using the "mongodb1" volume for our microservices.
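Putting it together, the network, the RexRay-backed volume and a sample MongoDB service can be created roughly like this (the service name, image tag, size and constraint label are assumptions mirroring the setup above):

```shell
# Overlay network for the microservices
docker network create -d overlay collabnet

# RexRay/ScaleIO-backed volume for the MongoDB data directory
docker volume create --driver rexray --name mongodb1 --opt size=16

# MongoDB service pinned to the MDM2 node, with /data/db on the
# ScaleIO-backed volume so the data survives container restarts
docker service create --name mongodb \
  --network collabnet \
  --constraint 'node.labels.type == mdm2' \
  --mount type=volume,source=mongodb1,target=/data/db,volume-driver=rexray \
  mongo:3.4
```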

[Screenshots: creating the services]

Shown below is a glimpse of how the services look under Swarm Mode:

[Screenshot: docker service ls output]

The entries below show that the container is using the RexRay driver for persistent storage.

[Screenshot: container inspect showing the rexray volume driver]

In the next blog post, I will talk further about how one can containerize all the ScaleIO components and use tools like Puppet, Chef or Vagrant to bring up RexRay + ScaleIO + Docker 1.13 Swarm Mode, all integrated to provide persistent storage for a microservice architecture.

