
Docker 1.12 Swarm Mode & Persistent Storage using NFS

4 min read

Containers are stateless by nature and likely to be short-lived; they are far more ephemeral than VMs. What does that actually mean? If the data or logs generated inside a container are disposable, and you don't care about losing them no matter how many times you spin the container up and down (HTTP requests are a good example), then the default stateless behavior is good enough. BUT if you want to run "stateful" applications, such as databases or anything that has to keep records and logs, you definitely need persistent storage. This is achieved by leveraging Docker's volume mounts to create either a data volume or a data volume container that can be used and shared by other containers.
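As a quick sketch of both approaches (the image and names below are illustrative, not part of this setup):

$ docker volume create --name appdata              # a named data volume
$ docker run -d -v appdata:/data --name web nginx  # mount it into a container
$ docker create -v /dbdata --name dbstore busybox  # a data volume container
$ docker run -it --volumes-from dbstore busybox sh # another container sharing /dbdata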

[Image: the /var/lib/docker directory structure]

In case you're new to Docker storage, Docker volumes are the logical building blocks for shared storage when combined with plugins. They let us store application state outside the Docker image layers. Whenever you run a Docker command with -v and provide a name or a path, the data is managed under /var/lib/docker, or, if you're using a host mount, it lives at a path on that host. The problem with this implementation is that the data is quite inflexible: anything you write to that location will survive the container's life cycle, but only on that host. If you lose the host, you lose the data, so the setup is clearly prone to data loss. Within Docker, it looks very similar to what is shown in the picture above: the /var/lib/docker directory structure. Let's talk about how to manage data with external storage instead. This could be anything from NFS to a distributed file system to block storage.
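To make the -v distinction concrete, here is a minimal sketch (the image and paths are arbitrary examples):

$ docker run -d -v /opt/webdata:/usr/share/nginx/html nginx  # host mount: data stays at /opt/webdata on this host only
$ docker run -d -v webdata:/usr/share/nginx/html nginx       # named volume: managed under /var/lib/docker/volumes/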

In my previous blog, we discussed persistent storage implementation with DellEMC RexRay for Docker 1.12 Swarm Mode. In this blog, we will look at how NFS works with Swarm Mode. I assume that you have an existing NFS server running in your infrastructure; if not, you can set one up in just a few minutes. I have Docker 1.12 Swarm Mode initialized with 1 master node and 4 worker nodes. As an example, I am going to use an Ubuntu 16.04 node (outside the swarm cluster) as the NFS server and the rest of the nodes (1 master & 4 workers) as NFS clients.

Setting up NFS environment:

There are two ways to set up an NFS server: using an available Docker image, or manually setting it up on the Docker host machine. As I already have an NFS server running on one of my Docker hosts (an Ubuntu 16.04 system), I will just verify that the configuration looks good.
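If the packages were missing, installing them on Ubuntu 16.04 would be a matter of:

$ sudo apt-get update
$ sudo apt-get install -y nfs-kernel-server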

Let us ensure that NFS server packages are properly installed as shown below:

raina_ajeet@master1:~$ sudo dpkg -l | grep nfs-kernel-server
ii  nfs-kernel-server    1:1.2.8-9ubuntu12    amd64    support for NFS kernel server
raina_ajeet@master1:~$

I created the following NFS directory, which I want to share across the containers running in the swarm cluster.

$ sudo mkdir /var/nfs
$ sudo chown nobody:nogroup /var/nfs

It's time to configure the NFS shares. For this, let's edit the exports file to look as shown below:

raina_ajeet@master1:~$ cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/var/nfs    *(rw,sync,no_subtree_check)
raina_ajeet@master1:~$

As shown above, we will be sharing the /var/nfs directory with all the worker nodes in the Swarm cluster.

Let's not forget to run the commands below to set the proper permissions and export the share:

$ sudo chown nobody:nogroup /var/nfs
$ sudo exportfs -a
$ sudo service nfs-kernel-server start

Great! Let us cross-check that the configuration is working:

raina_ajeet@master:~$ sudo df -h
Filesystem           Size  Used Avail Use% Mounted on
udev                 1.9G     0  1.9G   0% /dev
tmpfs                370M   43M  328M  12% /run
/dev/sda1             20G  6.6G   13G  35% /
tmpfs                1.9G     0  1.9G   0% /dev/shm
tmpfs                5.0M     0  5.0M   0% /run/lock
tmpfs                1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs                100K     0  100K   0% /run/lxcfs/controllers
tmpfs                370M     0  370M   0% /run/user/1001
10.140.0.7:/var/nfs   20G   19G  1.3G  94% /mnt/nfs/var/nfs

As shown in the last line, the NFS server (IP: 10.140.0.7) is up and ready to share the volume with all the worker nodes.
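For reference, the mount shown in that last line can be reproduced on any client node with something like:

$ sudo mkdir -p /mnt/nfs/var/nfs
$ sudo mount 10.140.0.7:/var/nfs /mnt/nfs/var/nfs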

Running the NFS service in Swarm Mode


In case you are new to the --mount option introduced in Docker 1.12 Swarm Mode, here is a quick explanation:

$ sudo docker service create \
   --mount type=volume,volume-opt=o=addr=<NFS-server-IP>,volume-opt=device=:<NFS-directory>,volume-opt=type=nfs,source=<volume-name>,target=/<path-inside-container> \
   --replicas 3 \
   --name <service-name> <docker-image> <command>
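For instance, with the NFS server at 10.140.0.7 exporting /var/nfs as configured above, the command could look like this (the volume name nfsvol, target /mnt, service name mynfs and the nginx image are illustrative choices, not the exact ones from this setup):

$ sudo docker service create \
   --mount type=volume,volume-opt=o=addr=10.140.0.7,volume-opt=device=:/var/nfs,volume-opt=type=nfs,source=nfsvol,target=/mnt \
   --replicas 3 --name mynfs nginx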

Let's verify that the NFS service is up and running across the swarm cluster:

[Screenshot: the NFS-backed service running across the swarm cluster]
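Behind that screenshot, the verification amounts to commands like these (using the hypothetical service name from above):

$ sudo docker service ls
$ sudo docker service ps mynfs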

We can verify the NFS volume through the 'docker volume' utility, as shown below:

[Screenshot: docker volume ls output]
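In plain terms, that check is the following; note that the volume is served by the local driver with NFS options (output is representative, with the hypothetical volume name from above):

$ sudo docker volume ls
DRIVER              VOLUME NAME
local               nfsvol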

Inspecting the NFS Volume on the nodes:

[Screenshot: docker volume inspect run on the nodes]

Running docker volume inspect <vol_name> rightly displays the Mountpoint, Labels and Scope of the NFS volume.

[Screenshot: docker volume inspect output showing Mountpoint, Labels and Scope]
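A representative output for the hypothetical nfsvol volume would look like this (values will differ on your hosts):

$ sudo docker volume inspect nfsvol
[
    {
        "Name": "nfsvol",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/nfsvol/_data",
        "Labels": null,
        "Scope": "local"
    }
]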

Verifying that worker nodes can see the volume

Let's log into one of the worker nodes and check whether the volume is shared across the swarm cluster:

[Screenshot: the volume listed on a worker node]

Let us create a file under the /var/nfs directory to see if persistent storage is actually working and the data is shared across the swarm cluster:

[Screenshot: creating a file under /var/nfs]
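In command form, that step boils down to something like this on the NFS server (file name and contents are arbitrary):

$ echo "Hello from the NFS share" | sudo tee /var/nfs/hello.txt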

Let's verify that one of the worker nodes can see the newly created file:

[Screenshot: the file visible from a worker node]
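On the worker, that check would look something like this, reading the file through the container's mount target (service name and target path follow the hypothetical example above):

$ sudo docker exec $(sudo docker ps -q --filter name=mynfs) cat /mnt/hello.txt
Hello from the NFS share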

Hence, we implemented persistent storage for Docker 1.12 Swarm Mode using NFS. In a future post, we will talk more about the available storage plugins and their implementations.

