The final release of Docker 17.05.0 went public exactly two weeks back. This community release was the first one cut as part of the new Moby project. It includes numerous interesting features, such as multi-stage build support in the builder, build-time ARG in FROM, and DEB packaging for Ubuntu 17.04 (Zesty Zapus). With this latest release, the Docker team also brought several new features and improvements to Swarm Mode, for example synchronous service commands, automatic service rollback on failure, improvements to the raft transport package, and service logs formatting.
Placement Preference under Swarm Mode:
One of the prominent new features introduced in 17.05.0-CE Swarm Mode is placement preferences. A placement preference lets you divide tasks evenly over different categories of nodes, so you can balance tasks across multiple datacenters or availability zones and make a service more resilient in the face of a localized outage. You can use additional placement preferences to further divide tasks over groups of nodes. In this blog, we will set up a 5-node Swarm Mode cluster on the play-with-docker platform and see how to balance services across racks in multiple datacenters. (Note: this is not a real-world scenario; we simply assume that the nodes are placed in 3 different racks.)
Assumption: There are 3 datacenter Racks holding respective nodes as shown:
{Rack-1=> Node1, Node2 and Node3},
{Rack-2=> Node4} &
{Rack-3=> Node5}
Creating Swarm Master Node:
Open up Docker Playground to build up Swarm Cluster.
[simterm]
$ docker swarm init --advertise-addr 10.0.116.3
[/simterm]
Adding Worker Nodes to Swarm Cluster
[simterm]
$ docker swarm join --token <token-id> 10.0.116.3:2377
[/simterm]
Create 3 more instances and add them as worker nodes as well. This builds up a 5-node Swarm Mode cluster.
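Once the workers have joined, a quick sanity check from the manager node should list all 5 nodes (the hostnames and IDs will differ on your playground session):
[simterm]
$ docker node ls
[/simterm]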
Setting up Visualizer Tool
To showcase this demo, I will leverage the popular Swarm Visualizer tool.
[simterm]
$ git clone https://github.com/ajeetraina/docker101
$ cd docker101/play-with-docker/visualizer
[/simterm]
All you need is to run the docker-compose command below to bring up the visualizer container:
[simterm]
$ docker-compose up -d
[/simterm]
Click on port "8080", which gets displayed at the top centre of the page, and it should open the visualizer depicting the Swarm Mode cluster nodes.
Creating an Overlay Network:
[simterm]
$ docker network create -d overlay collabnet
[/simterm]
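As a quick check, the new overlay network should now show up in the network list (the network ID will differ on your setup):
[simterm]
$ docker network ls --filter name=collabnet
[/simterm]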
Let us first try creating services with no placement preference and no node labels.
Setting up WordPress DB service:
[simterm]
$ docker service create --replicas 10 --name wordpressdb1 --network collabnet --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest
[/simterm]
When you run the above command, the swarm spreads the containers evenly, node by node. Hence, you will see 2 containers per node, as shown below:
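You can also confirm the task-to-node assignment from the CLI rather than the visualizer (the exact node names depend on your playground session):
[simterm]
$ docker service ps wordpressdb1
[/simterm]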
Setting up WordPress Web Application:
[simterm]
$ docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest
[/simterm]
Visualizer:
As per the visualizer, you might end up with an uneven distribution of services. For example, Rack-1 (holding node1, node2 and node3) shows an almost equal distribution of services, whereas Rack-2 (which holds node4) lacks the WordPress frontend application.
Here comes placement preference to the rescue…
Under the latest release, the Docker team has introduced a new feature called "placement preference scheduling". Let us spend some time understanding what it actually means. You can set up a service to divide its tasks evenly over different categories of nodes. One example of where this can be useful is balancing tasks over a set of datacenters or availability zones.
This uses --placement-pref with a spread strategy (currently the only supported strategy) to spread tasks evenly over the values of the datacenter node label. In this example, we assume that every node has a datacenter node label attached to it. If there are three different values of this label among nodes in the swarm, one third of the tasks will be placed on the nodes associated with each value. This is true even if there are more nodes with one value than another. For example, consider the following set of nodes:
- Three nodes with node.labels.datacenter=india
- One node with node.labels.datacenter=uk
- One node with node.labels.datacenter=us
Considering the example above, since we are spreading over the values of the datacenter label and the service has 5 replicas, at least 1 replica should land in each datacenter. There are three nodes associated with the value "india", so the replicas reserved for this value are divided among them. There is only one node with the value "uk", so it receives all the replicas reserved for that value. Finally, the single node labelled "us" will likewise receive at least one replica of the service.
To understand this more clearly, let us assign node labels to the rack nodes as shown below:
Rack-1 (node1, node2, node3):
[simterm]
$ docker node update --label-add datacenter=india node1
[/simterm]
[simterm]
$ docker node update --label-add datacenter=india node2
[/simterm]
[simterm]
$ docker node update --label-add datacenter=india node3
[/simterm]
Rack-2 (node4):
[simterm]
$ docker node update --label-add datacenter=uk node4
[/simterm]
Rack-3 (node5):
[simterm]
$ docker node update --label-add datacenter=us node5
[/simterm]
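To verify that the labels landed where we expect, we can inspect each node. This is just a quick sanity check; the output formatting may vary slightly across Docker versions:
[simterm]
$ for n in node1 node2 node3 node4 node5; do docker node inspect $n --format '{{ .Description.Hostname }} => {{ .Spec.Labels }}'; done
[/simterm]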
Removing both the services:
[simterm]
$ docker service rm wordpressdb1 wordpressapp
[/simterm]
Let us now pass the placement preference parameter to the docker service create command:
[simterm]
$ docker service create --replicas 10 --name wordpressdb1 --network collabnet --placement-pref "spread=node.labels.datacenter" --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest
[/simterm]
Visualizer:
Rack-1 (node1+node2+node3) has 4 copies, Rack-2 (node4) has 3 copies and Rack-3 (node5) has 3 copies.
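The same distribution can be cross-checked from the CLI by listing the running tasks and comparing the node column against the datacenter labels we assigned:
[simterm]
$ docker service ps wordpressdb1 --filter desired-state=running
[/simterm]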
Let us run WordPress Application service likewise:
[simterm]
$ docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --placement-pref "spread=node.labels.datacenter" --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest
[/simterm]
Visualizer: As shown below, we have used the placement preference feature to ensure that the service containers get distributed across all the racks in the swarm cluster.
As shown above, --placement-pref ensures that the tasks are spread evenly over the values of the datacenter node label. Currently, spread is the only supported strategy. Both engine labels and node labels are supported by placement preferences.
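You can also pass --placement-pref multiple times to build a hierarchy of preferences that are applied in order. The command below is only a hypothetical sketch: it assumes an additional rack label has been added to the nodes, which we have not done in this demo, and uses a made-up service name:
[simterm]
$ docker service create --replicas 9 --name cache --network collabnet --placement-pref "spread=node.labels.datacenter" --placement-pref "spread=node.labels.rack" redis:alpine
[/simterm]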
Please note: if you want to try this feature with Docker Compose, you will need Compose file format v3.3, which is slated to arrive with the 17.06 release.
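For reference, the equivalent service definition under the Compose v3.3 file format is expected to look roughly like the sketch below; treat it as an illustration of the placement preferences syntax rather than a tested file:

version: "3.3"
services:
  wordpressdb1:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: collab123
      MYSQL_DATABASE: wordpress
    networks:
      - collabnet
    deploy:
      replicas: 10
      placement:
        preferences:
          - spread: node.labels.datacenter
networks:
  collabnet:
    driver: overlay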
Did you find this blog helpful? Feel free to share your experience. Get in touch @ajeetsraina.
If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.
Know more about the latest Docker releases by clicking on this link.