What’s New in Docker 17.03 Volume Plugin Architecture & Specification?

Docker, Inc. announced initial support for volume driver plugins under the Docker 1.8 release. Since then, there have been subtle changes to the volume plugin architecture. With the new Docker 17.03 volume plugin architecture, writing your own volume plugin is considerably simpler.


 

Legacy Docker Volume Plugin Specification (pre-1.12 releases)

Evolution

Before the 17.03 release, the Docker Volume Plugin specification covered little more than a standardized API interface. If the Docker daemon is running on a system and you write an extension, the only way the daemon talks to that extension is through this standardized API. As per the official Docker documentation, you need to implement the 9 endpoints shown below:

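The nine endpoints defined by the legacy (v1) volume plugin protocol are the activation handshake plus the VolumeDriver calls:

/Plugin.Activate
/VolumeDriver.Create
/VolumeDriver.Remove
/VolumeDriver.Mount
/VolumeDriver.Path
/VolumeDriver.Unmount
/VolumeDriver.Get
/VolumeDriver.List
/VolumeDriver.Capabilities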

 

New Docker Volume Plugin Specification (post-1.12 releases)

With Docker 17.03, the Volume Plugin specification has been revamped. The new specification extends the standardization and covers packaging a plugin as a Docker image. What this really means is that you can now convert your extension/plugin into a Docker image which you can publish on Docker Hub. Interesting, isn’t it? Put simply, it is now possible to publish a volume plugin as a Docker image which anyone can discover, install flawlessly onto their system, and easily configure and manage. The new Docker volume plugins enable Engine deployments to be integrated with external storage systems such as Amazon EBS, and enable data volumes to persist beyond the lifetime of a single Docker host.

Before we build, store, install and manage the plugin, we need to dig deeper into the newer Docker Volume API design.

Understanding Docker Volume API Design:

As per the official Docker Volume Plugin page…

“The new Plugin API is RPC-style JSON over HTTP, much like webhooks. Requests flow from the Docker daemon to the plugin. So the plugin needs to implement an HTTP server and bind this to the UNIX socket mentioned in the “plugin discovery” section. All requests are HTTP POST requests. The API is versioned via an Accept header, which currently is always set to application/vnd.docker.plugins.v1+json.”
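As a rough illustration of that RPC style (the socket path and volume name below are placeholders, not values from the original post), a VolumeDriver.Create call boils down to a JSON POST against the plugin’s UNIX socket:

$ curl -s -XPOST --unix-socket /run/docker/plugins/myplugin.sock \
    -H "Content-Type: application/json" \
    -H "Accept: application/vnd.docker.plugins.v1+json" \
    -d '{"Name": "demo-volume", "Opts": {}}' \
    http://localhost/VolumeDriver.Create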

How Does Docker Volume Orchestration Work?

[Diagram: Docker volume plugin orchestration]

 

Playing around with RexRay Volume Plugin:

In my previous blog post, I talked about RexRay as a volume plugin. Let us look at the various CLI commands which can be used to play around with this plugin (a consolidated command sketch follows the list below):

  • Listing the RexRay Volume Plugin:


  • Disabling or Enabling the RexRay Volume Plugin:


  • Verify that “Enabled=true” is listed once the plugin is enabled again:


  • If the Volume Plugin is in the form of a Docker image, then there should be a way to enter this container. Right? Yes, it is possible. You can enter a shell of the RexRay Volume Plugin using the docker-runc command.


  • Checking the Plugin logs:

[Screenshot: plugin log output]

  • It’s time to use this plugin and create a volume for your application:

[Screenshot: docker volume create output]

 

  • Inspecting the volume:

[Screenshot: docker volume inspect output]
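A rough sketch of the commands behind the bullets above (the rexray/ebs plugin name, volume name and runtime ID are placeholders, not values from the original post):

$ docker plugin ls                              # list installed plugins
$ docker plugin disable rexray/ebs              # disable the plugin
$ docker plugin enable rexray/ebs               # enable it again; ENABLED should now show "true"
$ docker-runc list                              # find the plugin's runtime ID
$ docker-runc exec -t <plugin-runtime-id> sh    # enter a shell inside the plugin
$ journalctl -fu docker.service                 # managed-plugin logs land in the daemon logs
$ docker volume create -d rexray/ebs --name demovol --opt size=10
$ docker volume inspect demovol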

 

I hope you found this blog useful. In my future blog posts, I will talk further about how volume plugin orchestration works in a Swarm Mode cluster.


Introducing new RexRay 0.8 with Docker 17.03 Managed Plugin System for Persistent Storage on Cloud Platforms

The DellEMC REX-Ray 0.8 final release was announced last week. Graduated to a top-level project within the {code} community, the RexRay 0.8 release is considered one of its largest releases to date. The new release introduces support for a long list of new storage platforms like S3FS, EBS, EFS, GCEPD & ScaleIO shown below:

[Image: storage platforms supported by REX-Ray 0.8]

Public cloud storage is one of the fastest growing sectors in storage, with leaders like Amazon AWS, Google Cloud Storage and Microsoft Azure. With the release of RexRay 0.8, the {code} community took the right approach in targeting the first community-contributed driver, starting with the Amazon EFS driver and then quickly adding additional community-contributed drivers like Digital Ocean, FittedCloud, Google Cloud Engine (GCEPD) & the Microsoft Azure Unmanaged Disk driver.

Introducing New Docker 17.03 Volume Plugin System

With the Docker 17.03 release, a new managed plugin system has been introduced. This is quite different from the old Docker plugin system. Plugins are now distributed as Docker images and can be hosted on Docker Hub or on a private registry. A volume plugin enables Docker volumes to persist across multiple Docker hosts.


In case you are very new to Docker plugins, they basically extend Docker’s functionality. A plugin is a process running on the same or a different host as the docker daemon, which registers itself by placing a file on the docker host in one of the plugin directories: .sock files (UNIX domain sockets placed under /run/docker/plugins), .spec files (text files containing a URL, such as unix:///other.sock or tcp://localhost:8080, placed under /etc/docker/plugins or /usr/lib/docker/plugins) or .json files (text files containing a full JSON specification for the plugin, placed under /etc/docker/plugins). You can refer to this in case you want to develop your own Docker volume plugin.
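For instance, a .json discovery file for a hypothetical plugin reachable over TCP could look like this (the name and address are made up for illustration):

$ cat /etc/docker/plugins/myplugin.json
{
  "Name": "myplugin",
  "Addr": "https://localhost:8080"
}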

Running RexRay inside Docker container

Yes, you read that correctly! With the introduction of the Docker 17.03 managed plugin system, you can now run RexRay inside a Docker container flawlessly. The REX-Ray volume plugin is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

You can list the available Docker volume plugins for various storage platforms using docker search rexray as shown below:

[Screenshot: docker search rexray output]

Let us test-drive the RexRay volume plugin on a Swarm Mode cluster for the first time. I have a 4-node Swarm Mode cluster running on Google Cloud Platform as shown below:

[Screenshot: 4-node Swarm Mode cluster]

Verify that all the cluster nodes are running the latest 17.03.0-ce (Community Edition).

Installing the RexRay volume plugin is just a one-liner command:

[Screenshot: docker plugin install output]
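A minimal sketch of that one-liner, mirroring the arguments used with swarm-exec later in this post:

$ docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray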

 

You can inspect the Rex-Ray Volume plugin using docker plugin inspect command:

[Screenshots: docker plugin inspect output]

It’s time to create a volume using the docker volume create utility:

$ sudo docker volume create --driver rexray/gcepd --name storage1 --opt=size=32

[Screenshot: docker volume create output]

You can verify that it is visible under the GCE console window:

[Screenshot: GCE console showing the new disk]

Let us try running a few applications which use the RexRay volume plugin as shown:

$ docker run -dit --name mydb -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --volume-driver=rexray/gcepd -v dbdata:/var/lib/mysql mysql:5.7

Verify that the MySQL service is up and running using the docker logs <container-id> command as shown below:

[Screenshot: docker logs output]

By now, we should be able to see a new volume called “dbdata” created and listed under the docker volume ls command:

[Screenshot: docker volume ls output]

Under GCE console, it should get displayed too:

[Screenshot: GCE console showing the dbdata volume]

Using Rex-Ray Volume Plugin under Docker 17.03 Swarm Mode

This is the most interesting section of this blog post. The RexRay volume plugin has worked great for us so far, especially for a single Docker host running a number of services. But what if I want a RexRay volume to persist across multiple Docker hosts (a Swarm Mode cluster)? Yes, there is one possible way to achieve this – using Swarm Executor. It executes a docker command across the swarm cluster. Credits to Madhu Venugopal of the Docker team for assisting me in testing this tool.


Please remember that this is an UNOFFICIAL way of achieving volume plugin implementation across a swarm cluster. I found this tool really cool and hope that it gets integrated into the official Docker repository.

First, we need to clone this repository:

$ git clone https://github.com/mavenugo/swarm-exec

Run the below command to push the plugin across the swarm cluster:

$ cd swarm-exec

$ ./swarm-exec.sh docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray

[Screenshot: swarm-exec output]

First, let’s quickly verify the plugin on the master node as shown below:

[Screenshot: docker plugin ls on the master node]

While verifying it on the worker nodes:

[Screenshots: docker plugin ls on the worker nodes]

The docker volume inspect <volname> command should display rexray as the volume driver for this particular volume, as shown below:

[Screenshot: docker volume inspect output]

Creating a MySQL service which uses Rex-Ray volume under Swarm Mode cluster:

[Screenshot: docker service create command for the MySQL service]
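A plausible equivalent of that command, reusing the dbdata volume and the MySQL image from the single-host example above (replica count and environment values are assumptions):

$ docker service create --name mydb --replicas 1 \
    -e MYSQL_ROOT_PASSWORD=wordpress \
    --mount type=volume,source=dbdata,target=/var/lib/mysql,volume-driver=rexray/gcepd \
    mysql:5.7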

Verifying that the service is up and running:

[Screenshot: docker service ls output]

To conclude, the new version of RexRay looks promising and brings support for various cloud storage platforms. It continues to be a leading open source storage orchestration engine, and now, with the inclusion of the Docker 17.03 managed plugin architecture, it will definitely reduce the pain of implementing a persistent storage solution.


Walkthrough: Building distributed Docker persistent storage platform for Microservices using DellEMC RexRay & ScaleIO

Today, enterprise IT looks for a secure, scalable, out-of-the-box, elastic, portable and integrated solution platform which can span from a highly dense data center to the hybrid cloud. There is strong demand for data center architectures that can cope with the growth of applications, compute and storage requirements through higher-capacity infrastructure and a higher degree of utilization through virtualization. That’s just one part of the story. Let’s not disagree with the fact that even though we have complex architectural designs and protocols today, the rise of a simplified approach towards application life cycle management, enterprise-grade resiliency, provisioning tools, security control and management has become a subtle requirement.

Open source containerization is foreseen as one of the hot storage technology trends in 2017. Key areas like Docker persistent storage, data protection, storage consumption and portability of microservices are expected to grow on demand in the coming years. “Storage growth automatically aligned with application needs” is emerging as the next bridge to cross for enterprise IT. The integration of Docker, ScaleIO & RexRay is a perfect answer and solution for enterprise IT and cloud vendors who have been wrestling with the immaturity of Docker persistent storage.


Why is ScaleIO a perfect choice?

DellEMC ScaleIO is software that creates a server-based SAN from local storage to deliver performance and capacity on demand. In simple language, it turns your DAS into a server SAN. ScaleIO integrates storage and compute resources, scaling to thousands of servers (also called nodes). As an alternative to traditional SAN infrastructures, ScaleIO combines hard disk drives (HDD), solid state disks (SSD), and Peripheral Component Interconnect Express (PCIe) flash cards to create a virtual pool of block storage with varying performance tiers.

One attractive feature of ScaleIO is that it is hardware agnostic and supports either physical or virtual application servers. This gives you the flexibility of implementing it on bare metal, on VMs and on the cloud. ScaleIO brings various enterprise-ready features like elasticity (add, move or remove storage and compute resources “on the fly” with no downtime), no capacity planning, massive I/O parallelism, no need for arrays, switching or HBAs, storage auto-rebalancing, a simple management UI and much more.

Elements of ScaleIO:

1. ScaleIO Meta Data Manager (MDM) – It configures and monitors the ScaleIO system.
2. ScaleIO Data Server (SDS) — It manages the capacity of a single server and acts as a back-end for data access. This component  is installed on all servers contributing storage devices to the ScaleIO system.
3. ScaleIO Data Client (SDC) — It is a lightweight device driver situated in each host whose applications or file system requires access to the ScaleIO virtual SAN block devices. The SDC exposes block devices representing the ScaleIO volumes that are currently mapped to that host.

Generally, the MDM can be configured as a three-node cluster (Master MDM, Slave MDM and Tie-Breaker MDM) or as a five-node cluster (Master MDM, 2 Slave MDMs and 2 Tie-Breaker MDMs) to provide greater resiliency.

In my previous blog post, I talked about how to achieve Docker persistent storage using NFS & RexRay on AWS EC2. In this blog, I am going to demonstrate how to achieve an enterprise-grade Docker persistent storage platform with ScaleIO & RexRay.

Infrastructure Setup:

I am going to leverage three CentOS 7.2 VMs already running in my VMware ESXi 6 environment. (Please note that I am NOT using a ScaleIO Virtual SAN setup; this is just a simple VM setup used for demonstration purposes.)

[Screenshots: the three CentOS 7.2 VMs in VMware ESXi]

 

Pre-requisite:

As a pre-requisite, I added a 2nd hard disk (secondary disk) of size 100GB on all the VMs. (Please note that ScaleIO expects at least 90GB of raw disk on all the nodes.) I ensured that all the nodes are reachable and are on a common network.

[Screenshots: secondary 100GB disks attached to the VMs]

Setting up ScaleIO Gateway Server on MDM1 using Docker:

The ScaleIO Gateway server can be installed on any of the listed servers or on a separate server too. (Please DO NOT install the Gateway on a server on which Xcache will be installed.)

Let us install Docker 1.13 on all the 3 nodes using the below command:

$ sudo curl -sSL https://test.docker.com/ | sh

You can verify the version using docker version command as shown below:

[Screenshot: docker version output]

Good news for Docker enthusiasts: the Docker team has enabled the long-awaited logging feature in the upcoming Docker 1.13.0 release. Under Docker 1.13.0-rc3, one can use docker service logs <service name> to view the logs for a particular service spanned across the cluster.

Service logging doesn’t come enabled with the default Docker 1.13.0 installation; one has to set experimental: true to get it working. Let us enable the experimental feature so that we can leverage the latest features of Docker 1.13.

[Screenshot: enabling the experimental flag]
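A minimal sketch of enabling it, assuming the daemon reads /etc/docker/daemon.json and is managed by systemd:

$ cat /etc/docker/daemon.json
{
  "experimental": true
}
$ sudo systemctl restart docker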

Ensure that you see “experimental: true” under docker version as shown below:

[Screenshot: docker version showing Experimental: true]

Next, let’s install the ScaleIO Gateway server using a Docker container. The Gateway basically includes the Installation Manager (IM), which is used to deploy the system.

root@mdm1:~# docker run -d -p 80:80 \
-p 443:443 \
--restart=always \
-e MDM1_IP_ADDRESS=10.94.214.180 \
-e MDM2_IP_ADDRESS=10.94.214.181 \
-e TRUST_MDM_CRT=true \
cduchesne/scaleio-gateway:2.0.1
[Credit to Chris Duchesne for helping me out with this Docker image]

Verify that the gateway container gets started as shown below:

[Screenshot: docker ps showing the scaleio-gateway container]

Cool. It was so simple to set up, wasn’t it?

Next, it’s time to open the browser at https://10.94.214.180 and you will see a fancy ScaleIO UI as shown below:

[Screenshot: ScaleIO Gateway login page]

Supply admin as the username and Scaleio123 as the password to enter the Gateway server UI.

[Screenshot: ScaleIO Gateway UI]

Next, you will need to download the ScaleIO related packages from this link. Click on Browse and upload all the packages. These packages will be pushed to all the nodes in the next window. (I am planning to automate this step using a Docker container in my next blog post.)
 
[Screenshots: uploading the ScaleIO installation packages]

As we have a 3-node setup, let’s go ahead and select the right option. You have the option of importing a .CSV file too.

[Screenshots: ScaleIO Installation Manager wizard steps]

Ensure that the status shows “Completed” without any error message.

Please note: in case you are trying this on the cloud, you will need to ensure that public keys are in place.

Finally, you should see that the operation completes:

[Screenshot: installation marked as completed]

One can install the ScaleIO UI on a Windows machine so as to have a clear view of the overall volumes as shown below:

[Screenshot: ScaleIO UI dashboard]

As shown above, there is a total of 297GB of capacity, which is the aggregate of all the secondary HDDs added to the VMs during the initial setup.

[Screenshot: ScaleIO UI capacity view]

One can use command-line tool to verify the capacity:

[root@localhost ~]# scli --query_all_volumes

Query-all-volumes returned 1 volumes

Protection Domain 2c732cf300000000 Name: default

Storage Pool 0dbf62e900000000 Name: default

Volume ID: bd2a1a1500000000 Name: dock_vol Size: 24.0 GB (24576 MB) Mapped to 3 SDC Thin-provisioned

 

#/opt/emc/scaleio/sdc/bin/drv_cfg --query_guid

447DB343-B774-4706-B581-B1DC7B44352D

 

Integrating DellEMC RexRay with ScaleIO:

If you are very new to RexRay, I suggest you read this. The ScaleIO driver registers a storage driver named scaleio with the libStorage driver manager and is used to connect to and manage ScaleIO storage. To integrate RexRay with the ScaleIO driver, follow the steps below:

First, install RexRay with just a one-liner command on the MDM2 system as shown below:

[Screenshot: REX-Ray installation output]
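For reference, the one-liner is the same curl installer used later in this post:

$ curl -sSL https://dl.bintray.com/emccode/rexray/install | sh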

In order to make RexRay talk to ScaleIO driver, edit /etc/rexray/config.yml to look as shown below:

[Screenshot: /etc/rexray/config.yml]
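Based on the REX-Ray/libStorage documentation, a ScaleIO section of config.yml roughly takes the shape below; every value here is a placeholder rather than the one used in this setup:

libstorage:
  service: scaleio
scaleio:
  endpoint: https://<gateway-ip>/api
  insecure: true
  userName: admin
  password: <gateway-password>
  systemID: <system-id>
  systemName: <system-name>
  protectionDomainName: default
  storagePoolName: default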

PLEASE NOTE: System ID and System Name are important entries and must not be omitted for RexRay to talk to the ScaleIO driver.

Now start the RexRay service:

[Screenshot: starting the rexray service]
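Depending on how REX-Ray was installed, starting it is typically one of the following (treat these as a sketch):

$ sudo rexray service start
# or, when installed as a systemd unit:
$ sudo systemctl start rexray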

You can verify the environment variables for RexRay using the rexray env command as shown below:

[Screenshot: rexray env output]

I created a default volume through the ScaleIO UI. Let us verify that RexRay detected the volume successfully.

[Screenshot: rexray volume listing]

Once RexRay is well integrated with ScaleIO, one can directly use the docker volume command to create a volume instead of manually going to the ScaleIO UI and mapping it to the particular server. Here is how you can achieve it:

[Screenshot: docker volume create and inspect output]
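With the integration in place, volume creation and inspection become plain Docker commands; a sketch (the volume name mirrors the one used later in this post, and the size is illustrative):

$ docker volume create -d rexray --name mongodb1 --opt size=16
$ docker volume inspect mongodb1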

As shown above, the docker volume inspect command shows that RexRay is using ScaleIO service.

Running a Docker container on a RexRay volume which uses the ScaleIO driver as a backend:

Let us set up a 2-node Swarm Mode cluster (MDM2 and TB) and see how microservices running inside Docker containers use a ScaleIO + RexRay enabled volume. I assume that you have a 2-node Swarm Mode cluster already built and running.

[Screenshot: docker node ls output]

As shown above, I have added labels to the nodes so that the containers or tasks are scheduled to MDM2 and TB accordingly.

Creating a network called “collabnet”:

[Screenshot: docker network create collabnet]

Creating Docker Volume using RexRay Driver:

[Screenshot: docker volume create with the rexray driver]

Listing out the volumes:

[Screenshot: docker volume ls output]

We will be using the “mongodb1” volume for our microservices.

[Screenshots: deploying the microservices]

Shown below is a glimpse of how the services under Swarm Mode look:

[Screenshot: docker service ls output]

The entries below show that the container is using the RexRay driver for persistent storage.

[Screenshot: container inspect showing the rexray volume driver]

In the next blog post, I will talk further about how one can containerize all the ScaleIO components and use tools like Puppet/Chef/Vagrant to bring up RexRay + ScaleIO + Docker 1.13 Swarm Mode – all integrated to provide persistent storage for a microservice architecture.



Docker 1.12.1 Swarm Mode & Persistent Storage with DellEMC RexRay on AWS Platform

“Does Docker Engine 1.12 have storage discovery similar to the service discovery and orchestration features? What is the volume/persistent storage story in 1.12 Swarm Mode? Which will be the best solution for my DB persistence?” – these are a few common questions which I faced in almost every online meetup, blog and Docker webinar. The Docker 1.12 release was totally focused on the orchestration feature, but there have been subtle changes in regards to volumes, volume drivers and storage. In this blog post, I will be answering the following queries:


  1. What’s new in Docker 1.12 Persistent Storage?
  2. How does Persistent storage work in case of new service API?
  3. How to deploy a simple Web Application for Docker 1.12 using RexRay?

In case you’re new to Docker storage, persistent storage refers to storage volumes, usually associated with stateful applications such as databases. In layman’s terms, these are places to store data that lives outside the life cycle of the container. A long-lived service like a database needs persistent storage which exists outside the container space and has a life span longer than the container which uses it.

Docker offers a basic persistent storage solution for containers in the form of Docker data volumes. There has been a tremendous amount of focus on OverlayFS, which is a modern union filesystem similar to AUFS. It has a simpler design, is potentially faster and has been in the mainline Linux kernel since version 3.18. It is rapidly gaining popularity in the Docker community and is seen by many as a natural successor to AUFS. If interested, you can refer to this to learn more about overlay2. Let us accept the fact that persistent storage is still an active area of development for Docker. Under Docker 1.12.1, there have been a number of improvements around volumes which can be tracked here.

Let us accept another truth – Docker enthusiasts who are looking to run Docker in production still count on ecosystem partners like DellEMC (RexRay), ClusterHQ (Flocker), Portworx, CoreOS and Nutanix to simplify persistent storage in one way or another. DellEMC RexRay and Flocker are the two most popular persistent storage solutions and have been appreciated by a large crowd of Docker users. Out of curiosity, I decided to first start looking at RexRay and see how it works with Docker 1.12 Swarm Mode.

What is RexRay?


RexRay is an open source storage orchestration engine which delivers persistent storage access for container runtimes, such as Docker, and provides an easy interface for enabling advanced storage functionality across common storage, virtualization and cloud platforms. It implements the back-end for a Docker volume driver, providing persistent storage to containers backed by a long list of storage providers. It is essentially a distributed toolset to manage storage across multiple platforms. REX-Ray locally advertises consistent methods to create, remove, map, and copy volumes regardless of which storage platform is serving the operating system.

RexRay (prior to 0.4.0) is available as a standalone process, while starting with the 0.4.0 version it works as a distributed client-server model. The client performs a level of abstraction of local host processes (requests for volume attachment, discovery, format, and mounting of devices) while the server provides the necessary abstraction of the control plane for multiple storage platforms.

Let us try installing RexRay for Docker 1.12 Swarm Mode and see how it achieves persistent storage for us. I will be using a two-node Swarm Mode cluster on Amazon AWS.

[Screenshot: two-node Swarm Mode cluster on AWS]

Want to set up RexRay in 1 minute?

Yes, you surely can. Run RexRay inside a container.

docker run -d \
  -e AWS_ACCESS_KEY_ID=<access-key> \
  -e AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  -v /run/docker/plugins:/run/docker/plugins \
  -v /var/run/rexray:/var/run/rexray \
  -v /dev:/dev \
  -v /var/run:/var/run \
  joskfg/docker-rexray

The official way is simple too. RexRay is written in Go, so there are typically no dependencies that must be installed alongside its single binary file. The manual methods can be extremely simple through tools like curl. Installing RexRay is just a simple one-liner command:

curl -sSL https://dl.bintray.com/emccode/rexray/install | sh

[Screenshot: REX-Ray installation output]

RexRay 0.5.0 is the latest release and setting it up is a matter of a few seconds. We can check the RexRay version information through the below command:

[Screenshot: rexray version output]

The RexRay CLI is feature-rich and there are various options available to play around with its storage volume management capabilities, as shown below:

[Screenshot: rexray CLI help output]

One of the compelling features of RexRay is that it can be run as an interactive CLI to perform volume management capabilities, plus it can be run as a service to support Docker and other platforms that can communicate through HTTP/JSON. For example, one can create a config.yml file as shown below:

root@ip-172-31-31-235:~# cat /etc/rexray/config.yml
rexray:
  logLevel: warn
libstorage:
  - service: <>
osDrivers:
  - linux
volumeDrivers:
  - docker
storageDrivers:
  - ec2
aws:
  accesskey: <aws-access-key>
  secretkey: <aws-secret-access-key>

Initializing RexRay:

[Screenshots: starting the rexray service]

To retrieve information about the storage volumes, one can issue the below command:

[Screenshot: rexray volume listing]
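With REX-Ray 0.5.x, this is typically the volume listing sub-command (shown here as a sketch):

$ rexray volume ls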

How does Persistent storage work in case of new service API?

Docker 1.12 comes with 3 new APIs – service, swarm and node. I found users complaining about why the -v option has been dropped in the newer Docker 1.12 Swarm Mode. The reason: we are no longer talking about a single host which runs the docker container; here we are talking about an orchestration feature which spans across hundreds of cluster nodes. The -v option was dropped because services don’t orchestrate volumes. It is important to note that under Docker 1.12.1 we specifically use the term mount because that is what services do. You can have a service that has a template for creating a volume that the service then mounts in for you, but it does not itself handle volumes, so it’s intentionally left out for now. The syntax looks like the one shown below for an NFS mount:


$ sudo docker service create --mount type=volume,volume-opt=o=addr=<source-host>,volume-opt=device=:<NFS-directory>,volume-opt=type=nfs,source=<volume-name>,target=/<path-inside-container> --replicas 3 --name <service-name> <docker-image> <command>

Want to see how Docker Swarm Mode 1.12 & persistent storage work with NFS? Check out my recent blog post:

http://collabnix.com/archives/2001

 

How to deploy a simple Web Application for Docker 1.12 using RexRay?

Now this is an interesting topic and I was curious to implement it. I already had RexRay 0.3.0 installed on my AWS instances and hence wanted to quickly try it and see how Swarm Mode handles the persistent storage functionality.

I assume you have RexRay up and running in your environment. If not, setting up RexRay 0.3.0 is a matter of a few seconds. The DellEMC {code} team did a great job in providing a “REX-Ray Configuration Generator” through this link to create a config.yml file for your RexRay configuration. In case you want to keep it simple, you can export the below variables too:

$ export REXRAY_STORAGEDRIVERS=ec2

$ export AWS_ACCESSKEY=access_key

$ export AWS_SECRETKEY=secret_key

$ rexray service start

Done. Next, let’s create a RexRay volume which the Docker container is going to use:

[Screenshot: rexray volume create and mount output]

I have created a RexRay volume called collabray of a certain size (for demonstration purposes). Also, you need to mount the volume so that Docker can use it, as shown above.

Now, run the below docker volume create utility:

[Screenshot: docker volume create output]
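Based on the description that follows (a 7GiB volume named collabray on the rexray driver), the command is roughly:

$ docker volume create --driver=rexray --name=collabray --opt=size=7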

You can check that the above command created an EBS volume of 7GiB named “collabray” as shown below:

[Screenshot: AWS console showing the 7GiB EBS volume]

You can easily verify that the /dev/xvdh gets mounted on /var/lib/rexray/volumes/collabray automatically as shown below:

[Screenshot: mount output for /dev/xvdh]

Let’s verify that docker volume detects the particular volume:

[Screenshot: docker volume ls output]

Great! Now comes the most important part of the whole blog. We will be creating a WordPress application as a service where the MySQL DB is mounted on the RexRay volume so as to make the DB persistent. The actual command is shown below:

[Screenshot: docker service create command for the MySQL service]
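Based on the description below, the command is roughly of this form (service name, replica count, image tag and password are assumptions):

$ docker service create --name wordpressdb1 --replicas 1 \
    -e MYSQL_ROOT_PASSWORD=<password> -e MYSQL_DATABASE=wordpress \
    --mount type=volume,source=collabray,target=/var/lib/mysql,volume-driver=rexray \
    mysql:5.7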

In the above example, we are using the --mount option with “collabray” as the source RexRay volume, targeting /var/lib/mysql inside the container, using the volume driver called “rexray”.

[Screenshot: service creation output]

We can use the docker inspect command on the particular service to verify the RexRay volume being used by the container running the service.

[Screenshot: docker inspect output showing the collabray mount]

Let’s check if it dumps the DB related files under /var/lib/rexray/volumes/collabray/data or not.

[Screenshot: listing of /var/lib/rexray/volumes/collabray/data]

Wow! The MySQL database files are present under the mounted location, which will be our persistent store for the cluster, and RexRay handles that for us very safely.

Next, run the wordpress frontend service which is going to discover the backend wordpressdb1 using the unqualified domain name as shown below:

[Screenshot: docker service create command for the WordPress frontend]
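Reconstructing from the description (image tag, published port and environment values are assumptions):

$ docker service create --name wordpressapp --publish 80:80 \
    -e WORDPRESS_DB_HOST=wordpressdb1:3306 \
    -e WORDPRESS_DB_PASSWORD=<password> \
    wordpress:latest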

As shown in the example above, we are not using a storage volume for wordpressapp, but you might want to back up /var/www/html in a different volume created using RexRay.

[Screenshot: running services]

Finally, I have the WordPress application ready to be installed in Swarm Mode using a RexRay volume. You can quickly view it using the lynx text UI as shown below:

[Screenshots: WordPress setup page viewed via lynx]

RexRay provides an efficient snapshot capability too. First, let us find out the volume ID for “collabray” through the below command:

[Screenshot: rexray volume listing showing the volume ID]

# rexray snapshot create \
  --description="My DB First Snapshot" \
  --snapshotname="Collabnix WordPress" \
  --volumeid=vol-06b67054cd80fe9cb

[Screenshot: rexray snapshot create output]

You can verify the snapshot by looking at the AWS Management Console too.

[Screenshot: AWS console showing the snapshot]

In a future post, I will be covering more on libStorage and the newer RexRay 0.5.0 implementation with ScaleIO for Docker 1.12 Swarm Mode.
