Docker 1.12 Swarm Mode & Persistent Storage using NFS

Estimated Reading Time: 5 minutes

Containers are stateless by nature and likely to be short-lived; they are far more ephemeral than VMs. What does that actually mean? If the data or logs generated inside a container are disposable and you don't care about losing them no matter how many times you spin the container up and down (plain HTTP request handling, for example), then the default stateless behaviour is good enough. BUT if you are looking for a solution for "stateful" applications, such as storing databases or retaining logs, you definitely need persistent storage. This is achieved by leveraging Docker's volume mounts to create either a data volume or a data volume container that can be used and shared by other containers.

[Image: the /var/lib/docker directory structure]

In case you're new to Docker storage, Docker volumes are the logical building blocks for shared storage when combined with plugins. They let us store state from the application at locations outside the Docker image layer. Whenever you run a Docker command with -v and provide it with a name or path, the data gets managed within /var/lib/docker, or, in case you're using a host mount, at a path that exists on the host. The problem with this implementation is that the data is pretty inflexible: anything you write to that specific location will survive the container's life cycle, but only on that host. If you lose that host, the data is gone, which makes the situation very prone to data loss. Within Docker, it looks very similar to what is shown in the picture above: the /var/lib/docker directory structure. Let's talk about how to manage data with external storage instead. This could be anything from NFS to a distributed file system to block storage.
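As a quick illustration, the two flavours look like this (the volume name, host path and image are arbitrary picks):

$ docker run -it -v mydata:/data busybox sh      # named volume, managed under /var/lib/docker
$ docker run -it -v /opt/data:/data busybox sh   # host mount, lives only on this host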

In my previous blog, we discussed Persistent Storage implementation with DellEMC RexRay for Docker 1.12 Swarm Mode. In this blog, we will look at how NFS works with Swarm Mode. I assume that you have an existing NFS server running in your infrastructure; if not, you can set one up in just a few minutes. I have Docker 1.12 Swarm Mode initialized with 1 master node and 4 worker nodes. As an example, I am going to leverage an Ubuntu 16.04 node (outside the Swarm Mode cluster) as the NFS server and the rest of the nodes (1 master and 4 workers) as NFS clients.

Setting up the NFS environment:

There are two ways to set up an NFS server: use an available Docker image, or manually set up the NFS server on the Docker host machine. As I already have an NFS server working on one of my Docker hosts, an Ubuntu 16.04 system, I will just verify that the configuration looks good.

Let us ensure that the NFS server packages are properly installed, as shown below:

raina_ajeet@master1:~$ sudo dpkg -l | grep nfs-kernel-server

ii  nfs-kernel-server              1:1.2.8-9ubuntu12              amd64        support for NFS kernel server

raina_ajeet@master1:~$

I created the following NFS directory, which I want to share across the containers running in the Swarm cluster:

$sudo mkdir /var/nfs

$sudo chown nobody:nogroup /var/nfs

It's time to configure the NFS shares. For this, let's edit the exports file to look like the one shown below:

raina_ajeet@master1:~$ cat /etc/exports

# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/var/nfs    *(rw,sync,no_subtree_check)

raina_ajeet@master1:~$

As shown above, we will be sharing the /var/nfs directory with all the worker nodes in the Swarm cluster.

Let's not forget to run the commands below to set the proper permissions and activate the export:

$sudo chown nobody:nogroup /var/nfs

$sudo exportfs -a

$sudo service nfs-kernel-server start
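On the client side, each node that needs direct access can mount the export. A typical client-side mount, matching the mount point in the df output below, looks like this (the nfs-common package is assumed to be installed):

$sudo apt-get install -y nfs-common
$sudo mkdir -p /mnt/nfs/var/nfs
$sudo mount 10.140.0.7:/var/nfs /mnt/nfs/var/nfs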

Great! Let us cross-check that the configuration is good:

raina_ajeet@master:~$ sudo df -h

Filesystem           Size  Used Avail Use% Mounted on

udev                 1.9G     0  1.9G   0% /dev

tmpfs                370M   43M  328M  12% /run

/dev/sda1             20G  6.6G   13G  35% /

tmpfs                1.9G     0  1.9G   0% /dev/shm

tmpfs                5.0M     0  5.0M   0% /run/lock

tmpfs                1.9G     0  1.9G   0% /sys/fs/cgroup

tmpfs                100K     0  100K   0% /run/lxcfs/controllers

tmpfs                370M     0  370M   0% /run/user/1001

10.140.0.7:/var/nfs   20G   19G  1.3G  94% /mnt/nfs/var/nfs

As shown above, we have the NFS server at IP 10.140.0.7, ready to share the volume with all the worker nodes.

Running an NFS service in Swarm Mode


In case you are new to the --mount option introduced in Docker 1.12 Swarm Mode, here is an easy explanation:

$sudo docker service create \
   --replicas 3 \
   --name <service-name> \
   --mount type=volume,source=<volume-name>,target=/<insideContainer>,volume-opt=type=nfs,volume-opt=device=:<NFS directory>,volume-opt=o=addr=<NFS server address> \
   dockerimage <command>
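To make the template concrete, here is a hedged example wired to the NFS server configured above (10.140.0.7 exporting /var/nfs); the service name, volume name, target path and image are arbitrary picks for illustration:

$sudo docker service create \
   --replicas 3 \
   --name collab-nfs \
   --mount type=volume,source=nfsvol,target=/data,volume-opt=type=nfs,volume-opt=device=:/var/nfs,volume-opt=o=addr=10.140.0.7 \
   alpine ping docker.com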

Let's verify that the NFS service is up and running across the Swarm cluster:

[Screenshot: the NFS-backed service running across the Swarm cluster]

We can verify the NFS volume driver through the docker volume utility, as shown below:

[Screenshot: docker volume output showing the NFS volume]

Inspecting the NFS Volume on the nodes:

[Screenshot: inspecting the NFS volume on the nodes]

Running docker volume inspect <vol_name> displays the Mountpoint, Labels and Scope of the NFS-backed volume.

[Screenshot: docker volume inspect output for the NFS volume]
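For reference, docker volume inspect output on a volume like this takes roughly the following shape (volume name hypothetical, values trimmed):

$ docker volume inspect mynfsvol
[
    {
        "Name": "mynfsvol",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/mynfsvol/_data",
        "Labels": null,
        "Scope": "local"
    }
]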

Verifying that worker nodes are able to see the volume

Let's verify by logging into one of the worker nodes and checking whether the volume is shared across the Swarm cluster:

[Screenshot: the volume visible from a worker node]

Let us create a file under the /var/nfs directory to confirm that persistent storage is actually working and data is shared across the Swarm cluster:

[Screenshot: creating a file under /var/nfs]

Let's verify that one of the worker nodes can see the created file.

[Screenshot: the file visible from a worker node]
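Without the screenshots, the round trip boils down to the following (file name arbitrary; the in-container path depends on the target you chose in the service definition):

# On the NFS server
$ echo "Hello from NFS" | sudo tee /var/nfs/test.txt

# On a worker node, inside one of the service's containers
$ docker exec <container-id> cat /<insideContainer>/test.txt
Hello from NFS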

Hence, we have implemented persistent storage for Docker 1.12 Swarm Mode using NFS. In a future post, we will talk further about the available storage plugins and their implementations.


Docker 1.12.1 Swarm Mode & Persistent Storage with DellEMC RexRay on AWS Platform

Estimated Reading Time: 9 minutes

“Does Docker Engine 1.12 have storage discovery similar to its Service Discovery and Orchestration features? What is the volume/persistent storage story in 1.12 Swarm Mode? Which will be the best solution for my DB persistence?” These are a few common questions I have faced in almost every online meetup, blog and Docker webinar. The Docker 1.12 release was totally focused on orchestration features, but there have been subtle changes in regards to volumes, volume drivers and storage. In this blog post, I will be answering the following queries:


  1. What’s new in Docker 1.12 Persistent Storage?
  2. How does Persistent storage work in case of new service API?
  3. How to deploy a simple Web Application for Docker 1.12 using RexRay?

In case you're new to Docker storage, persistent storage refers to storage volumes, usually associated with stateful applications such as databases. In layman's terms, these are places to store data that lives outside the life cycle of the container. A long-lived service like a database needs persistent storage which exists outside the container space and has a life span longer than the container which uses it.

Docker offers a basic persistent storage solution for containers in the form of Docker data volumes. There has been a tremendous amount of focus on OverlayFS, a modern union filesystem similar to AUFS: it has a simpler design, is potentially faster, and has been in the mainline Linux kernel since version 3.18. It is rapidly gaining popularity in the Docker community and is seen by many as a natural successor to AUFS. If interested, you can refer to this to learn more about Overlay2. Let us accept the fact that persistent storage is still an active area of development for Docker. Under Docker 1.12.1, there have been a number of improvements to volumes, which can be tracked here.

Let us accept another truth: Docker enthusiasts who are looking to run Docker in production still count on the ecosystem's partners like DellEMC (RexRay), ClusterHQ (Flocker), Portworx, CoreOS and Nutanix to simplify persistent storage in one way or another. DellEMC RexRay and Flocker are the two most popular persistent storage solutions, appreciated by a large crowd of Docker users. Out of curiosity, I decided to start by looking at RexRay and seeing how it works with Docker 1.12 Swarm Mode.

What is RexRay?


RexRay is an open source storage orchestration engine which delivers persistent storage access for container runtimes, such as Docker, and provides an easy interface for enabling advanced storage functionality across common storage, virtualization and cloud platforms. It implements the back-end for a Docker volume driver, providing persistent storage to containers backed by a long list of storage providers. It is, in effect, a distributed toolset to manage storage across multiple platforms: REX-Ray locally advertises consistent methods to create, remove, map and copy volumes regardless of which storage platform serves the operating system.

RexRay (prior to 0.4.0) was available as a standalone process; starting with version 0.4.0 it works as a distributed client-server model. The client performs a level of abstraction over local host processes (requests for volume attachment, discovery, formatting and mounting of devices) while the server provides the necessary abstraction of the control plane for multiple storage platforms.

Let us try installing RexRay for Docker 1.12 Swarm Mode and see how it provides persistent storage for us. I will be using a two-node Swarm Mode cluster on Amazon AWS.

[Screenshot: the two-node Swarm Mode cluster on AWS]

Want to set up RexRay in 1 minute?

Yes, you surely can. Run RexRay inside a container:

docker run -d \
  -e AWS_ACCESS_KEY_ID=<access-key> \
  -e AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  -v /run/docker/plugins:/run/docker/plugins \
  -v /var/run/rexray:/var/run/rexray \
  -v /dev:/dev \
  -v /var/run:/var/run \
  joskfg/docker-rexray

The official way is simple too. RexRay is written in Go, so there are typically no dependencies that must be installed alongside its single binary file. The manual method can be extremely simple through tools like curl; installing RexRay is just a one-liner:

curl -sSL https://dl.bintray.com/emccode/rexray/install | sh

[Screenshot: RexRay install script output]

RexRay 0.5.0 is the latest release, and setting it up is a matter of a few seconds. We can check the RexRay version information with the command below:
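$ rexray version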

[Screenshot: rexray version output]

The RexRay CLI is feature-rich, and there are various options available to play around with its storage volume management capabilities, as shown below:

[Screenshot: rexray CLI volume management options]

One of the compelling features of RexRay is that it can be run as an interactive CLI to perform volume management, and it can also be run as a service to support Docker and other platforms that can communicate through HTTP/JSON. For example, one can create a config.yml file as shown below:

root@ip-172-31-31-235:~# cat /etc/rexray/config.yml
rexray:
  logLevel: warn
libstorage:
  service: <>
osDrivers:
- linux
volumeDrivers:
- docker
storageDrivers:
- ec2
aws:
  accesskey: <aws-access-key>
  secretkey: <aws-secret-access-key>

Initializing RexRay:

[Screenshot: initializing the rexray service]
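If you are following along without the screenshots, starting the service from a binary install is the same one-liner used again later in this post:

$ sudo rexray service start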

To retrieve information about storage volumes, one can issue the command below:
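On 0.5.x this is, for instance, the volume listing subcommand (older 0.3.x releases exposed the same information through rexray volume get):

$ rexray volume ls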

[Screenshot: rexray volume listing]

How does Persistent storage work in case of new service API?

Docker 1.12 comes with three new APIs: service, swarm and node. I found users complaining about why the -v option has been dropped in the newer Docker 1.12 Swarm Mode. The reason: we are no longer talking about one single host which runs the Docker container; we are talking about an orchestration feature which spans hundreds of cluster nodes. The -v option was dropped because services don't orchestrate volumes. It is important to note that under Docker 1.12.1 we specifically use the term mount, because that is what services do. You can have a service that has a template for creating a volume that the service then mounts in for you, but the service does not itself handle volumes, so -v is intentionally left out for now. The syntax looks like the one shown below for an NFS mount:


$sudo docker service create \
   --replicas 3 \
   --name <service-name> \
   --mount type=volume,source=<volume-name>,target=/<insideContainer>,volume-opt=type=nfs,volume-opt=device=:<NFS directory>,volume-opt=o=addr=<NFS server address> \
   dockerimage <command>

Want to see how Docker 1.12 Swarm Mode and persistent storage work with NFS? Check out my recent blog post:

http://collabnix.com/archives/2001

How to deploy a simple Web Application for Docker 1.12 using RexRay?

Now this is an interesting topic, and I was curious to implement it. I already had RexRay 0.3.0 installed on my AWS instances, so I wanted to quickly try it and see how Swarm Mode handles the persistent storage functionality.

I assume you have RexRay up and running in your environment. If not, setting up RexRay 0.3.0 is a matter of a few seconds. The DellEMC {code} team did a great job in providing a “REX-Ray Configuration Generator” through this link to create the config.yml file for your RexRay configuration. In case you want to keep it simple, you can export the variables below instead:

$ export REXRAY_STORAGEDRIVERS=ec2

$ export AWS_ACCESSKEY=access_key

$ export AWS_SECRETKEY=secret_key

$ rexray service start

Done. Next, let's create a RexRay volume which the Docker container is going to use:

[Screenshot: creating and mounting the collabray volume with rexray]

I have created a RexRay volume called collabray of a certain size (for demonstration purposes). You also need to mount the volume for Docker to pick it up, as shown above.

Now, run the docker volume create utility:

[Screenshot: docker volume create with the rexray driver]
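In command form, that is the standard Docker-side invocation for the rexray volume driver (the name and the 7 GiB size match the EBS volume described below; the exact --opt keys supported depend on the RexRay release):

$ docker volume create --driver=rexray --name=collabray --opt=size=7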

You can check that the above command created an EBS volume of 7 GiB named “collabray”, as shown below:

[Screenshot: the 7 GiB EBS volume named collabray in AWS]

You can easily verify that /dev/xvdh gets mounted on /var/lib/rexray/volumes/collabray automatically, as shown below:

[Screenshot: /dev/xvdh mounted on /var/lib/rexray/volumes/collabray]

Let's verify that docker volume detects the volume:

[Screenshot: docker volume listing showing collabray]

Great! Now comes the most important part of the whole blog. We will be creating a WordPress application as a service, where the MySQL DB is mounted on the RexRay volume so as to make the database persistent. The actual command is shown below:

[Screenshot: docker service create for the MySQL service with the collabray mount]

In the above example, we are using the --mount option with the source “collabray” (our RexRay volume), targeting /var/lib/mysql as the directory to persist, using the volume driver called “rexray”.
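For readers who cannot see the screenshot, a minimal sketch of such a service definition follows; the overlay network name, credentials and image are placeholders I picked for illustration, while the mount matches what is described above:

$sudo docker service create \
   --replicas 1 \
   --name wordpressdb1 \
   --network collabnet \
   --mount type=volume,source=collabray,target=/var/lib/mysql,volume-driver=rexray \
   -e MYSQL_ROOT_PASSWORD=collab123 \
   -e MYSQL_DATABASE=wordpress \
   mysql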

[Screenshot: the MySQL service running with the RexRay volume]

We can use the docker inspect command on the particular service to verify the RexRay volume being used by the container running the service.

[Screenshot: docker inspect output showing the RexRay volume mount]

Let's check whether it dumps the DB-related files under /var/lib/rexray/volumes/collabray/data:

[Screenshot: MySQL data files under /var/lib/rexray/volumes/collabray/data]

Wow! The MySQL database files are present under the mounted location, which will be our persistent store for the cluster; RexRay handles that for us very safely.

Next, run the WordPress frontend service, which is going to discover the backend wordpressdb1 using its unqualified domain name, as shown below:

[Screenshot: docker service create for the WordPress frontend]

As shown in the example above, we are not using a storage volume for wordpressapp, but you might want to back up /var/www/html in a different volume created using RexRay.
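A hedged sketch of that frontend service (the published port, network name and credentials are again placeholders; the official wordpress image reaches the DB through its WORDPRESS_DB_HOST environment variable):

$sudo docker service create \
   --replicas 1 \
   --name wordpressapp \
   --network collabnet \
   --publish 80:80 \
   -e WORDPRESS_DB_HOST=wordpressdb1 \
   -e WORDPRESS_DB_PASSWORD=collab123 \
   wordpress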

[Screenshot: the WordPress service up and running]

Finally, I have the WordPress application ready to be installed in Swarm Mode using a RexRay volume. You can quickly view it using the lynx text browser, as shown below:

[Screenshot: the WordPress install page viewed through lynx]

RexRay provides an efficient snapshot capability too. First, let us find the volume ID for “collabray” with the command below:

[Screenshot: retrieving the volume ID for collabray]

# rexray snapshot create \
    --description="My DB First Snapshot" \
    --snapshotname="Collabnix WordPress" \
    --volumeid=vol-06b67054cd80fe9cb

[Screenshot: rexray snapshot create output]

You can verify the snapshot by looking at the AWS Management Console too.

[Screenshot: the snapshot in the AWS Management Console]

In a future post, I will cover more on libstorage and the newer RexRay 0.5.0 implementation with ScaleIO for Docker 1.12 Swarm Mode.