Docker Swarm is 1 year old now. Docker Inc. announced Docker Swarm at DockerCon EU in December 2014, alongside Docker Machine, a CLI tool for provisioning Docker hosts. Docker Swarm is a clustering manager for Docker. With these tools, Docker Inc. intended to provide a complete and integrated solution for running containers rather than restricting itself to providing only the Docker engine.
Docker Swarm is the native clustering tool for Docker. Swarm uses the standard Docker API, i.e., containers can be launched using normal docker run commands and Swarm will take care of selecting an appropriate host to run the container on. What this actually means is that other tools that use the Docker API, such as Compose and bespoke scripts, can use Swarm without any changes and take advantage of running on a cluster rather than a single host.
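As a quick illustration of that API compatibility, the same docker CLI command works against a single engine or against a Swarm manager; only the -H endpoint changes (the addresses below are placeholders, not part of the cluster built later in this post):

# Talking to a single Docker engine
docker -H tcp://127.0.0.1:2375 run -d nginx

# Talking to a Swarm manager: same command and API, but Swarm
# chooses which host in the cluster actually runs the container
docker -H tcp://<swarm-manager-ip>:<manager-port> run -d nginx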
The basic architecture of Swarm is fairly straightforward:
i. Each host runs a Swarm agent and one host runs a Swarm manager (on small test clusters this host may also run an agent).
ii. The manager is responsible for the orchestration and scheduling of containers on the hosts.
iii. Swarm can be run in a high-availability mode where etcd, Consul, or ZooKeeper is used to handle failover to a back-up manager (a quick sketch of this follows the list).
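For reference, high-availability mode roughly looks like the following. This is only a sketch: the Consul endpoint 10.240.0.10:8500 and port 4000 are hypothetical values, not part of the cluster built in this post.

# Primary manager, advertising itself via Consul
docker run -d -p 4000:4000 swarm manage -H :4000 --replication \
  --advertise 10.240.0.5:4000 consul://10.240.0.10:8500

# Back-up manager on another host; it takes over if the primary fails
docker run -d -p 4000:4000 swarm manage -H :4000 --replication \
  --advertise 10.240.0.6:4000 consul://10.240.0.10:8500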
There are several different methods for finding hosts and adding them to a cluster, a process known in Swarm as discovery. By default, token-based discovery is used, where the addresses of hosts are kept in a list stored on the Docker Hub.
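The discovery backend is selected by the URL passed to the swarm commands. Token-based discovery is what this post uses; the other forms below are only illustrative examples of alternative backends (addresses and paths are placeholders):

# Default: token-based discovery, backed by Docker Hub
swarm manage token://<cluster-token>

# Static list of nodes, no external service required
swarm manage nodes://10.240.0.3:2375,10.240.0.4:2375

# A key-value store such as Consul (etcd:// and zk:// work similarly)
swarm manage consul://<consul-ip>:8500/swarm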
Let's start with a practical implementation of Docker Swarm. I quickly spun up a 4-node cluster on Google Compute Engine: 1 Swarm master node, 2 agent nodes, and 1 Swarm manager node to manage the overall cluster. Here are my environment details:
Setting up Swarm Master Node:
Ensure that Docker 1.9.1 is installed on all the nodes of the Docker Swarm cluster. You can run the command below to update all the Docker hosts to the latest version:
wget -qO- https://get.docker.com/ | sh
[Remember it's a capital O, not a zero]
Processing triggers for systemd (225-1ubuntu9) …
Processing triggers for man-db (2.7.4-1) …
Setting up docker-engine (1.9.1-0~wily) …
Installing new version of config file /etc/bash_completion.d/docker …
Installing new version of config file /etc/init.d/docker …
Installing new version of config file /etc/init/docker.conf …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for systemd (225-1ubuntu9) …
+ sh -c docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:20:08 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:20:08 UTC 2015
OS/Arch: linux/amd64
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:
sudo usermod -aG docker your-user
Remember that you will have to log out and back in for this to take effect!
Starting Docker Daemon
[root@dockerhost-1 ~]# docker -H tcp://0.0.0.0:2375 -d &
[1] 11516
[root@dockerhost-1 ~]# Warning: '-d' is deprecated, it will be removed soon. See usage.
WARN[0000] please use 'docker daemon' instead.
WARN[0000] /!\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\
INFO[0000] Listening for HTTP on tcp (0.0.0.0:2375)
ERRO[0000] WARNING: No --storage-opt dm.thinpooldev specified, using loopback; this configuration is strongly discouraged for production use
INFO[0000] [graphdriver] using prior storage driver "devicemapper"
INFO[0000] Option DefaultDriver: bridge
INFO[0000] Option DefaultNetwork: bridge
WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: modprobe: WARNING: Module br_netfilter not found., error: exit status 1
INFO[0000] Firewalld running: true
INFO[0000] Loading containers: start.
INFO[0000] Loading containers: done.
INFO[0000] Daemon has completed initialization
INFO[0000] Docker daemon commit=a01dc02/1.8.2 execdriver=native-0.2 graphdriver=devicemapper version=1.8.2-el7.centos
[root@dockerhost-1 ~]#
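Note the warnings above: the -d flag is deprecated in favour of docker daemon, and binding to 0.0.0.0 without TLS is only acceptable on a trusted network. A minimal sketch of the non-deprecated form, keeping the local unix socket available alongside the TCP endpoint:

docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock &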
Setting up Docker Swarm:
root@docker-1:~# docker run --rm swarm create
7733f838d176809cb2f2d24eb34ce78c
This created a token ID, which is at the heart of the Docker Swarm cluster configuration.
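Since the same token has to be supplied on every node that joins the cluster, a handy (purely optional) convenience is to capture it in a shell variable when you create it:

# Capture the cluster token for reuse in the join/manage commands
TOKEN=$(docker run --rm swarm create)
echo $TOKEN   # e.g. 7733f838d176809cb2f2d24eb34ce78c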
Setting up Swarm Agent Node 1: [10.240.0.3]
Ensure that Docker 1.9.x is installed on Agent Node 1. Also, the Docker daemon needs to be running, started the same way as during the master node setup.
#docker run -d swarm join --addr=10.240.0.3:2375 token://7733f838d176809cb2f2d24eb34ce78c
560e58a76ef235c8d74fffb2c680149111b8bbe687f1e0e164cd7c05dad59f33
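If you want to confirm that the agent registered properly, you can check the logs of the join container using the ID returned above; it should show the agent periodically registering with the discovery service:

docker logs 560e58a76ef235c8d74fffb2c680149111b8bbe687f1e0e164cd7c05dad59f33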
Setting up Swarm Manager Node: [10.240.0.5]
root@docker-3:~# docker run -d -p 7000:2375 swarm manage token://7733f838d176809cb2f2d24eb34ce78c
b26a2bbb336e26e0cd6ac5d61b90d7a9d9f74a2d072f573b6890bd9b0f6e470f
Please remember that 7000 is the port the Swarm manager is being exposed on.
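As an aside, instead of passing -H on every command, you can export DOCKER_HOST so that the plain docker CLI talks to the Swarm manager; this is equivalent to the -H form used in the commands below:

export DOCKER_HOST=tcp://10.240.0.5:7000
docker info   # now served by the Swarm manager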
Now you can see the swarm cluster details:
root@docker-3:~# docker -H tcp://10.240.0.5:7000 info
Containers: 0
Images: 2
Storage Driver:
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 1
docker-1: 10.240.0.2:2375
+ Status: Healthy
+ Containers: 0
+ Reserved CPUs: 0 / 1
+ Reserved Memory: 0 B / 3.794 GiB
+ Labels: executiondriver=native-0.2, kernelversion=4.2.0-18-generic, operatingsystem=Ubuntu 15.10, storagedriver=devicemapper
Execution Driver:
Kernel Version:
Operating System:
CPUs: 1
Total Memory: 3.794 GiB
Name: b26a2bbb336e
ID:
Http Proxy:
Https Proxy:
No Proxy:
As only one node has been added to the cluster so far, the output shows just docker-1 in the swarm cluster.
Setting up Swarm Agent Node 2: [10.240.0.4]
root@docker-2:~# docker run -d swarm join --addr=10.240.0.4:2375 token://7733f838d176809cb2f2d24eb34ce78c
fe5b81d57f6394db655d15ce097f8ca537d6c74c2de6a477ac38544a7db0a5d8
root@docker-2:~#
Finally, you can list all the nodes added to the swarm cluster by running the command below on the Swarm Manager node:
#docker run --rm swarm list token://7733f838d176809cb2f2d24eb34ce78c
10.240.0.4:2375
10.240.0.2:2375
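With both agents registered, re-running docker info against the manager (as shown earlier) should now report Nodes: 2, with docker-2 listed alongside docker-1:

docker -H tcp://10.240.0.5:7000 info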
Wow !!! Here you go…Your Multi-Node Swarm Cluster is Ready to rock !!!
Let's create a container and run it through the Swarm Manager:
sudo docker -H tcp://10.240.0.5:7000 run -dt --name swarm-test nginx /bin/sh
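Swarm picks the node automatically, but because the manager reported constraint and affinity filters in its docker info output, you can also steer placement when needed. The following is only a sketch using the standard classic-Swarm filter syntax; the container names are made up for illustration:

# Pin a container to a specific agent node
docker -H tcp://10.240.0.5:7000 run -d -e constraint:node==docker-1 --name pinned-nginx nginx

# Schedule a container next to an existing one via an affinity filter
docker -H tcp://10.240.0.5:7000 run -d -e affinity:container==swarm-test --name sidekick nginx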
root@docker-1:~# docker images
INFO[0999] GET /v1.18/images/json
INFO[0999] +job images()
INFO[0999] -job images() = OK (0)
REPOSITORY               TAG      IMAGE ID       CREATED         VIRTUAL SIZE
nginx                    latest   813e3731b203   4 days ago      133.8 MB
centos                   latest   14dab3d40372   5 days ago      194.7 MB
swarm                    latest   e9ff33e7e5b9   11 days ago     17.15 MB
ajeetraina/dell-syscfg   v1.0     d121b6e6dba4   12 weeks ago    1.449 GB
tduzan/docker-omsa       latest   ffcbdafb4aa6   16 months ago   806.4 MB
root@docker-1:~#
Let's see what containers are running on each Swarm agent machine:
root@docker-3:~# docker -H tcp://10.240.0.5:7000 ps
CONTAINER ID   IMAGE                         COMMAND       CREATED             STATUS              PORTS             NAMES
d0ee6dad4b39   nginx:latest                  "/bin/sh"     58 minutes ago      Up 58 minutes       80/tcp, 443/tcp   docker-1/swarm-test
cd9bc8c96e45   ajeetraina/dell-syscfg:v1.0   "/bin/bash"   About an hour ago   Up About an hour                      docker-1/jolly_curie
d0bcc8da090e   bad926a6fb50:latest           "/bin/bash"   About an hour ago   Up About an hour                      docker-2/hopeful_tesla
root@docker-3:~#
What does this mean? Sitting on the Swarm Manager node, you can simply create containers, and interestingly you don't need to worry about which node a container will land on. Your Swarm cluster is intelligent enough to pick the right resource for your container. Happy Swarming !!!