
What’s new in Docker 1.12.0 Load-Balancing feature?


In the previous blog post, we deep-dived into the Service Discovery aspects of Docker. A service is now a first-class citizen in Docker 1.12.0, which allows replication, image updates and dynamic load-balancing. With Docker 1.12, services can be exposed on ports on all Swarm nodes and load-balanced internally by Docker using either a virtual IP (VIP) based or DNS round-robin (RR) based load-balancing method, or both.


In case you are new to the concept, a load balancer assigns workload to a set of networked computer servers or components in such a manner that the computing resources are used optimally. A load balancer also provides high availability by detecting server or component failure and re-configuring the system appropriately. In this post, I will try to answer the following queries:

  • Is Load-Balancing new to Docker?
  • What’s new in Load-balancing feature under Docker 1.12.0?
  • Why IPVS?
  • Is Routing Mesh a Load-balancer?
  • Is it possible to integrate an external LB with the services in the cluster? Can I use HAProxy in Docker Swarm Mode?

Let’s get started –

Is Load-Balancing new to Docker?

The load-balancing (LB) feature is not at all new to Docker. It was first introduced in the Docker 1.10 release, where Docker Engine implemented an embedded DNS server for containers in user-defined networks. In particular, containers that are run with a network alias (--net-alias) are resolved by this embedded DNS to the IP address of the container when the alias is used.
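As a quick illustration of that behaviour, here is a minimal sketch (the network name demo-net and the use of nginx/busybox are just assumptions for the example): on recent engine versions, containers sharing the same alias are all returned by the embedded DNS, in round-robin order.

    $ docker network create demo-net

    # Two containers sharing the alias "web" on the same user-defined network
    $ docker run -d --net demo-net --net-alias web nginx
    $ docker run -d --net demo-net --net-alias web nginx

    # Resolve the alias from a third container on the same network;
    # the embedded DNS returns the IP addresses of both containers
    $ docker run --rm --net demo-net busybox nslookup web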

No doubt, DNS round robin is extremely simple to implement and is an excellent mechanism to increase capacity in certain scenarios, provided that you take into account the default address selection bias, but it has certain limitations and issues. Some applications cache the DNS host name to IP address mapping, which causes them to time out when the mapping gets changed. Also, a non-zero DNS TTL value causes a delay before DNS entries reflect the latest details. Finally, DNS-based load balancing does not do proper load balancing, since it depends on the client implementation. To learn more about DNS RR, which is sometimes called the poor man's load balancing, you can refer here.

What’s new in Load-balancing feature under Docker 1.12.0?

  • Docker 1.12.0 comes with a built-in load-balancing feature. LB is designed as an integral part of the Container Network Model (CNM) and works on top of CNM constructs like network, endpoints and sandbox. Docker 1.12 comes with VIP-based load balancing: VIP-based services use Linux IPVS load balancing to route to the backend containers.
  • There is no more centralized load balancer; it is distributed and hence scalable. LB is plumbed into each individual container. Whenever a container wants to talk to another service, the LB embedded in that container does the work. LB is more powerful now and just works out of the box.


  • The new Docker 1.12.0 swarm mode uses IPVS (the kernel module called “ip_vs”) for load balancing. It is a load-balancing module integrated into the Linux kernel (see the short check after this list).
  • Docker 1.12 introduces the Routing Mesh for the first time. With IPVS routing packets inside the kernel, swarm's routing mesh delivers high-performance, container-aware load balancing. Docker Swarm Mode includes a Routing Mesh that enables multi-host networking. It allows containers on two different hosts to communicate as if they are on the same host. It does this by creating a Virtual Extensible LAN (VXLAN), designed for cloud-based networking. We will talk more about the Routing Mesh at the end of this post.
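Before digging in, a quick and optional check that the IPVS kernel module is present on a swarm node; this is just a sketch, since Docker loads the module itself when it needs it:

    $ lsmod | grep ip_vs

    # If nothing shows up, the module can be loaded manually
    $ sudo modprobe ip_vs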

Whenever you create a new service in the Swarm cluster, the service gets a virtual IP (VIP) address. Whenever you make a request to that VIP, the swarm load balancer distributes the request to one of the containers of that service. In effect, the built-in service discovery resolves the service name to a virtual IP, and the VIP-to-container-IP load balancing is achieved using IPVS. It is important to note that the VIP is only useful within the cluster. It has no meaning outside the cluster because it is a private, non-routable IP.

I have a 6-node cluster running Docker 1.12.0 on Google Cloud Engine. Let's examine the VIP address through the steps below:

  1. Create a new overlay network:

    $ docker network create --driver overlay \
        --subnet 10.0.3.0/24 \
        --opt encrypted \
        collabnet


2. Let’s create a new service called collabweb which is a simple Nginx server as shown:

        $ docker service create \
            --replicas 3 \
            --name collabweb \
            --network collabnet \
            nginx

3. As shown below, there are 3 nodes on which the 3 replica containers of the service are running, attached to the swarm overlay network called “collabnet”.

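A quick way to verify that from a manager node (a minimal sketch, assuming the service name collabweb from step 2):

    $ docker service ls

    # Lists the tasks (replicas) of the service and the node each one runs on
    $ docker service ps collabweb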

4. Use the docker inspect command to look into the service internals as shown below:

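For example, on the manager node (assuming the service name collabweb):

    # The "Endpoint" section of the JSON output contains the VirtualIPs
    # attached to the service on each of its networks
    $ docker service inspect collabweb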

 

It shows the “VirtualIPs” added to the service. There is also a single command which can help us get just the virtual IP address, as shown below:

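A one-liner along these lines does the trick (assuming your Docker version supports --format on docker service inspect):

    # Prints only the virtual IPs attached to the collabweb service
    $ docker service inspect \
        --format '{{json .Endpoint.VirtualIPs}}' collabweb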

5. You can use the nsenter utility to enter the service's sandbox and check the iptables configuration:

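A rough sketch of the approach (the namespace name below is illustrative; on the node, Docker keeps its sandbox network namespaces under /var/run/docker/netns):

    # List the sandbox network namespaces created by Docker on this node
    $ sudo ls /var/run/docker/netns

    # Enter one of them and dump the mangle table used for firewall marking
    $ sudo nsenter --net=/var/run/docker/netns/d67cb9f0ca36 \
        iptables -nvL -t mangle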

In iptables, a packet usually enters the mangle table chains first and then the NAT table chains. Mangling refers to modifying the IP packet, whereas NAT refers only to address translation. As shown above in the mangle table, the 10.0.3.2 service IP gets a marking of 0x10c via the iptables OUTPUT chain. IPVS uses this marking and load balances to the containers 10.0.3.3, 10.0.3.5 and 10.0.3.6, as shown:

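Something along these lines shows that table (again, the namespace name is illustrative, and ipvsadm needs to be installed on the host):

    # Inspect the IPVS table inside the same sandbox namespace; the entry for
    # firewall mark 0x10c (268 in decimal) lists the backend container IPs
    $ sudo nsenter --net=/var/run/docker/netns/d67cb9f0ca36 ipvsadm -ln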

As shown above, you can use ipvsadm to set up, maintain or inspect the IP virtual server table in the Linux kernel. This tool can be installed on any Linux machine through apt or yum, depending on the distribution.

A typical DNS RR setup and IPVS LB can be differentiated as follows: with DNS RR, we see a different IP address from the list each time we access the service (for example through curl or dig), while with a VIP the same virtual IP is always returned and the traffic is load balanced across the containers (i.e. 10.0.0.1, 10.0.0.2 and 10.0.0.3).


 

6. Let's create a new service called collab-box on the same network. As shown below, a new virtual IP (10.0.3.4) is automatically attached to this service:

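A sketch of the commands (the collab-box image is not important for the example, so busybox is just an assumption here):

    $ docker service create \
        --replicas 1 \
        --name collab-box \
        --network collabnet \
        busybox sleep 3600

    # Check the virtual IP automatically assigned to the new service on collabnet
    $ docker service inspect \
        --format '{{json .Endpoint.VirtualIPs}}' collab-box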

Also, service discovery works as expected: the service name resolves to its virtual IP from any container on the same network.

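For instance, you can resolve one service's name from inside a container of the other (a minimal sketch; the container ID below is illustrative):

    # Find the collab-box task container running on this node
    $ docker ps --filter name=collab-box

    # Resolve the collabweb service name from inside it; it resolves to the
    # service VIP, not to an individual container IP
    $ docker exec -it 5f3a2b1c0d9e nslookup collabweb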

Why IPVS?

IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so-called Layer-4 switching. It is a load-balancing module integrated into the Linux kernel and is based on Netfilter. It supports TCP, SCTP and UDP, over both IPv4 and IPv6. IPVS running on a host acts as a load balancer in front of a cluster of real servers; it can direct requests for TCP/UDP-based services to the real servers and makes the services of the real servers appear as a virtual service on a single IP address.

It is important to note that IPVS is not a proxy: it is a forwarder that runs at Layer 4. IPVS forwards traffic from clients to back-ends, meaning you can load balance anything, even DNS! Its feature set includes:

  • UDP support
  • Dynamic configurability
  • 8+ balancing methods
  • Health checking

IPVS has lots of interesting features and has been in the kernel for more than 15 years. The chart below differentiates IPVS from other LB tools:

[Comparison chart: IPVS vs. other load-balancing tools]

 
Is Routing Mesh a Load-balancer?

The Routing Mesh is not a load balancer; it makes use of LB concepts. It provides a global published port for a given service. The routing mesh uses port-based service discovery and load balancing. So, to reach any service from outside the cluster, you need to expose its port and reach it via that published port.

In simple words, if you had 3 swarm nodes A, B and C, and a service running on nodes A and C with published port 30000, it would be accessible on port 30000 via any of the 3 swarm nodes, regardless of whether the service is running on that machine, and requests are automatically load balanced between the 2 running containers. I will talk about the Routing Mesh in a separate blog if time permits; a minimal example is sketched below.
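A minimal sketch of that behaviour (the service name, image and published port are illustrative):

    $ docker service create \
        --name web \
        --replicas 2 \
        --publish 30000:80 \
        nginx

    # Port 30000 answers on every swarm node, even nodes running no replica
    $ curl http://<any-node-ip>:30000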

It is important to note that Docker 1.12 Engine creates an “ingress” overlay network to achieve the routing mesh. Usually the frontend web service and the sandbox are part of the “ingress” network and take part in the routing mesh. All nodes become part of the “ingress” overlay network by default, using a sandbox network namespace created inside each node. You can refer to this link to learn more about the internals of the Routing Mesh.
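You can see this on any node; a short sketch (the ingress_sbox namespace name may vary between engine versions, so treat it as illustrative):

    # The ingress overlay network exists on every node of the swarm
    $ docker network ls --filter driver=overlay
    $ docker network inspect ingress

    # The ingress sandbox shows up as a dedicated network namespace on the node
    $ sudo ls /var/run/docker/netns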

Is it possible to integrate an external LB with the services in the cluster? Can I use HAProxy in Docker Swarm Mode?

You can expose the service ports to an external load balancer, while internally the swarm decides how to distribute the service containers between nodes. If you would like to use an L7 LB that cannot be made part of the cluster, you need to point it at any (or all, or some) of the node IPs and the PublishedPort. If the L7 LB can be made part of the cluster, by running the L7 LB itself as a service, then it can simply point at the service name, which resolves to a VIP.

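As a rough sketch of the second approach, HAProxy itself can be run as a service on the same overlay network, with its backend pointing at the service name collabweb (which resolves to the VIP). The file names, image tag and config below are assumptions, and the bind-mounted config must exist on every node that may run the task (or be baked into a custom image):

    # haproxy.cfg (backend points at the service name, which resolves to the VIP;
    # IPVS then balances across the nginx containers behind it)
    #
    #   defaults
    #       mode http
    #       timeout connect 5s
    #       timeout client  30s
    #       timeout server  30s
    #
    #   frontend http-in
    #       bind *:80
    #       default_backend collabweb
    #
    #   backend collabweb
    #       server app collabweb:80

    $ docker service create \
        --name collab-lb \
        --network collabnet \
        --publish 80:80 \
        --mount type=bind,source=$PWD/haproxy.cfg,target=/usr/local/etc/haproxy/haproxy.cfg \
        haproxy:1.6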

In my next blog, I am going to elaborate primarily on external load balancers. Keep reading!

Have Queries? Join https://launchpass.com/collabnix

Ajeet Singh Raina is a former Docker Captain, Community Leader and Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 570 blogs on Docker, Kubernetes and cloud-native technology. He runs a community Slack with 8,900+ members and a Discord server with close to 2,200 members. You can follow him on Twitter (@ajeetsraina).