
What’s new in Docker 1.12 Scheduling? – Part-I


In our previous posts, we spent a considerable amount of time deep-diving into Swarm Mode, the built-in orchestration engine in the Docker 1.12 release. The Swarm Mode orchestration engine comprises desired-state reconciliation, replicated and global services, rolling configuration updates in the form of parallelism/delay, and restart policies, to name a few. Docker Engine 1.12 is not just about multi-host and multi-container orchestration; there are numerous improvements in terms of scheduling, cluster management and security. In this post, I am going to talk about the scheduling aspect (primarily Engine and Swarm labels) in terms of the new service API introduced with the 1.12 engine.


Looking at Engine 1.12, scheduling can be thought of as a subset of orchestration. Orchestration is a broader term that refers to container scheduling, cluster management, and possibly the provisioning of master and worker nodes. When applications are scaled out across multiple Swarm nodes, the ability to manage each node and abstract away the complexity of the underlying platform becomes more important. In a Docker 1.12 Swarm Mode cluster, we talk more about docker service than docker run. In terms of the new service API, "scheduling" refers to the ability of an operations team to deploy application services onto a Swarm cluster, establishing how a specific group of tasks/containers should run. While scheduling refers to the specific act of placing the application service, in a more general sense schedulers are responsible for hooking into a node's init system (dockerd, the Docker daemon) to manage services.
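To make the difference concrete, here is a minimal sketch (the service name web and the nginx image are placeholders, and it assumes a Swarm Mode cluster is already initialized). With docker run you start a single container on one engine; with docker service create you declare a desired state and let the Swarm scheduler decide where the tasks run:

$ docker run -d -p 80:80 nginx                                  # one container on the local engine
$ docker service create --name web --replicas 3 -p 80:80 nginx  # three tasks placed across the cluster
$ docker service ls                                             # shows declared vs. running replicas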

Under Docker 1.12, scheduling refers to resource awareness, constraints and strategies. Resource awareness means the scheduler knows which resources are available on each node and places tasks/containers accordingly; Swarm Mode handles that quite effectively. As of today, 1.12 ships with a spread strategy, which attempts to schedule tasks on the least loaded nodes, provided they meet the constraints and resource requirements. With constraints, the operations team can limit the set of nodes on which a task/container can be scheduled by defining constraint expressions. Multiple constraints find nodes that satisfy every expression, i.e., an AND match. Constraints can match the node attributes shown in the following table.

A Few Important Tips:

  • The engine.labels are collected from the Docker Engine, with information like operating system, drivers, etc.
  • The node.labels are added by the operations team for operational purposes. For example, some nodes carry security-compliance labels so that tasks with compliance requirements run only on them.

Below is a snippet of the constraints used under 1.12 Swarm:

node attribute  | matches                                | example
node.id         | node's ID                              | node.id == 5ivku8v2gvtg4
node.hostname   | node's hostname                        | node.hostname != node-1
node.role       | node's manager or worker role          | node.role == manager
node.labels     | node's labels added by cluster admins  | node.labels.security == high
engine.labels   | Docker Engine's labels                 | engine.labels.operatingsystem == ubuntu 16.04
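Both resource requirements and constraints are passed on docker service create. Below is a sketch only (the service name payments, the image and the reservation values are placeholders); note that the two --constraint flags are ANDed together, so tasks land only on worker nodes that also carry the node.labels.security == high label from the table above:

$ docker service create --name payments \
    --reserve-cpu 1 --reserve-memory 512mb \
    --constraint 'node.role == worker' \
    --constraint 'node.labels.security == high' \
    nginx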

Let's take a look at Docker 1.12 labels and constraints in detail. A label is a key-value pair that serves a wide range of uses, such as identifying the right set of nodes. A label is metadata that can be attached to dockerd (the Docker daemon). Labels, along with the semantics of constraints, help services land on the intended worker nodes. For example, payment-related application services can be targeted at nodes which are better secured, and some database read/write workloads can be limited to a specific set of SSD-equipped worker nodes.

Under 1.12, there are two types of labels: Swarm labels and Engine labels. Swarm labels add a layer of security to scheduling decisions on top of Engine labels. It is important to note that Engine labels can't be trusted for security-sensitive scheduling decisions, since any worker can report any label up to a manager. However, they are useful for scenarios like scheduling containers on SSD-equipped nodes, running application services based on available resources, and so on.

Swarm labels, on the other hand, add an additional layer of trust because they are explicitly defined by the operations folks on the manager. One can label worker nodes as "production" or "secure" to ensure that payment-related applications are scheduled primarily on those nodes, keeping malicious workers away from sensitive workloads.
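As a minimal sketch (assuming a worker named node-2 in your cluster), Swarm (node) labels are added from a manager with docker node update, which is exactly why a worker cannot forge them:

$ docker node update --label-add security=high node-2
$ docker node inspect --format '{{ .Spec.Labels }}' node-2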


To get started, let us set up a 5-node Swarm cluster running Docker 1.12 on Google Cloud Engine. I will be running Ubuntu 16.04 so as to show what changes have to be made on Ubuntu/Debian-based OSes to make it work. Before I start setting up the Swarm cluster, let us pick two nodes (node-2 and node-3) to which we will be adding labels and constraints.

Log in to the node-3 instance and add the following lines under the [Service] section of the Docker systemd unit:

[Service]
EnvironmentFile=/etc/default/docker
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
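One way to apply this is through a systemd drop-in rather than editing the shipped unit file directly. This is only a sketch (the drop-in file name docker-opts.conf is arbitrary); note that overriding ExecStart in a drop-in requires clearing it with an empty ExecStart= line first:

$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/docker-opts.conf <<'EOF'
[Service]
EnvironmentFile=/etc/default/docker
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
EOF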

Your file should look like the one shown below:

[Screenshot: docker.service unit file with the updated [Service] section]

Next, open up /etc/default/docker and add the highlighted line as shown below:

[Screenshot: /etc/default/docker with the DOCKER_OPTS line highlighted]

As shown above, I have added a label named "com.example.environment" with the value "production" so as to differentiate this node from the other nodes.
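For reference, the highlighted line amounts to passing the --label flag to dockerd through DOCKER_OPTS; a sketch of what it would look like in /etc/default/docker:

DOCKER_OPTS="--label com.example.environment=production"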

PLEASE NOTE: These are systemd-specific changes which work well for Debian/Ubuntu distributions.

Once you have made those changes, follow the sequence shown below:

$sudo systemctl stop docker.service

Ensure that the dockerd process no longer shows up with sudo ps -aef | grep dockerd

$sudo systemctl daemon-reload

$sudo systemctl restart docker.service

To ensure that the $DOCKER_OPTS variable has been picked up by the Docker daemon, run the command below:

[Screenshot: verifying that dockerd is running with the label supplied via $DOCKER_OPTS]

As shown in the screenshot, the label is now attached to the dockerd daemon.
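A couple of ways to double-check this yourself (a sketch, assuming the com.example.environment label added in the previous step):

$ sudo ps -aef | grep dockerd   # the --label flag should appear on the daemon's command line
$ sudo docker info              # engine labels are listed under the "Labels" section of the output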

Follow the same steps on node-2 before we start building the Swarm cluster.

Once done, let's set up a Swarm cluster as shown below:

[Screenshot: initializing the Swarm cluster on the manager node]
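For reference, the commands behind this step look roughly as follows (a sketch; the advertise address and the join token are placeholders specific to your own cluster):

manager$ docker swarm init --advertise-addr <manager-ip>
worker$  docker swarm join --token <worker-token> <manager-ip>:2377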

Set up the worker nodes by joining them one by one. With that, our 5-node Swarm cluster is ready:

[Screenshot: the 5-node Swarm cluster]

It's time to create a service which schedules its tasks/containers only on node-2 and node-3, based on the labels we defined earlier. This is how the docker service command should look:

[Screenshot: docker service create with the engine.labels constraint]
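In case the screenshot is hard to read, the command is along these lines. This is a sketch only: the service name and image are placeholders, while the label is the one we attached to the engines on node-2 and node-3:

$ docker service create --name collabapp --replicas 2 \
    --constraint 'engine.labels.com.example.environment == production' \
    nginx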

If you look at the docker service command shown above, a new prefix, engine.labels, has been added, which is specific to the service API introduced in this release. Once you pass this constraint along with the service specification, the scheduler ensures that these tasks run only on the matching set of nodes (node-2 and node-3).

[Screenshot: tasks scheduled only on node-2 and node-3]

Even though we have a 5-node cluster, the manager placed the tasks only on node-2 and node-3, based on the constraint we supplied against the Engine labels.
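To verify the placement yourself (assuming the hypothetical service name from the sketch above):

$ docker service ps collabapp   # the NODE column should list only node-2 and node-3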

Demonstrating Node Label constraints:

Let us pick node-1 and node-4 to demonstrate node label constraints. We will use the docker node update command to add labels to the nodes directly. (Please remember that this doesn't require any Engine-level label changes.)

[Screenshot: adding the ostype=ubuntu label to node-1 and node-4 with docker node update]

As shown above, we added ostype=ubuntu to both nodes individually. Now create a service named collabtest1, passing the label through the --constraint option. You can easily verify the labels on each individual node using docker node inspect with a format filter, as shown below:

[Screenshot: docker node inspect output showing the ostype label on node-1 and node-4]
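Put together, the steps above look roughly like this (a sketch; the image is a placeholder, while the label and the service name are the ones used above):

$ docker node update --label-add ostype=ubuntu node-1
$ docker node update --label-add ostype=ubuntu node-4
$ docker service create --name collabtest1 \
    --constraint 'node.labels.ostype == ubuntu' nginx
$ docker node inspect --format '{{ .Spec.Labels }}' node-1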

Now, if you scale the service to 4 replicas, the tasks will be restricted to node-1 and node-4 as per the node label constraint we supplied earlier.

[Screenshot: scaling collabtest1 to 4 replicas, with all tasks placed on node-1 and node-4]
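The scaling step itself (using the collabtest1 service created above):

$ docker service scale collabtest1=4
$ docker service ps collabtest1   # all four tasks should land on node-1 and node-4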

This raises an important consideration: if two containers should always run on the same host because they operate as a unit, that affinity can often be declared at scheduling time. Conversely, if two containers should not be placed on the same host, for example to ensure high availability of two instances of the same service, that too can be expressed through scheduling. In my next post, I will cover affinity and additional filters in Swarm Mode.
