Docker 1.12 Networking Model Overview

“The Best way to orchestrate Docker is Docker”

In our previous post, we talked about Swarm Mode's built-in orchestration and distribution engine. Docker 1.12 introduces new deployment API objects such as Service and Node, multi-host and multi-container cluster management built into the Docker Engine, a decentralized design, a declarative service model, scaling and resiliency, desired-state reconciliation, service discovery, load balancing, out-of-the-box security by default and rolling updates. Together, these features make Docker 1.12 an all-in-one tool for automated deployment and management of Dockerized distributed applications and microservices at scale in production.


In this post, we are going to deep-dive into the Docker 1.12 networking model. I have a 5-node Swarm cluster test environment, as shown below:

[Image: 5-node Swarm cluster test environment]

If you SSH to test-master1 and check the default network layout:

[Image: `docker network ls` output on test-master1]

A container in the swarm can have an IP address on up to three networks:

  1. Ingress
  2. docker_gwbridge
  3. user-defined overlay

Ingress Networking:

The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort, or you can configure one yourself; automatically assigned ports come from the 30000-32767 range by default. What this means in practice is that network ingress into the cluster follows a node-port model: each published service gets a cluster-wide reserved port, every node in the cluster listens on that port, and traffic arriving on it is routed to the service. This is true irrespective of whether a particular worker node is actually running a task of that service.

It is important to note that only services that publish a port (using the -p option) require the ingress network. For backend services that don't publish ports, the corresponding containers are NOT attached to the ingress network.

External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster, whether or not that node is currently running a task for the service. All nodes in the swarm cluster route ingress connections to a running task instance. Hence, ingress follows a node-port model in which each service is exposed on the same port on every node in the cluster.
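
To make this concrete, here is a minimal sketch of publishing a service; the service name, image, replica count and port below are my own assumptions, not taken from the cluster above:

$ docker service create --name web --replicas 2 -p 8080:80 nginx
# Port 8080 is now reserved cluster-wide; a request to any node's IP
# on 8080 is routed to one of the web tasks:
$ curl http://<any-node-ip>:8080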

docker_gwbridge:

The docker_gwbridge network is added only for non-internal networks; internal networks can be created with the "--internal" option. Containers connected to a multi-host (overlay) network are automatically connected to docker_gwbridge as well. This network allows the containers to have external connectivity outside of their cluster, and it is created on each worker node.
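
For example, an internal overlay network (whose containers get no docker_gwbridge attachment and hence no external connectivity) could be created like this; the name "backend" is just an assumption for illustration:

$ docker network create -d overlay --internal backend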

Docker Engine also gives you the flexibility to create docker_gwbridge by hand instead of letting the daemon create it automatically. If you want docker_gwbridge to use a subnet of your choice, you can create it yourself as shown below:

$ docker network create --subnet=<your preferred subnet> -o com.docker.network.bridge.enable_icc=false -o com.docker.network.bridge.name=docker_gwbridge docker_gwbridge
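
You can then verify the subnet and bridge options that were applied:

$ docker network inspect docker_gwbridge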

User-defined Overlay:

This is the overlay network that the user has specified the container should be on. In our upcoming example, we will call it mynet. A container can be attached to multiple user-defined overlays.

 

Enough with the theory! Let's get hands-on with the networking model.

As shown below, I have a 3-node Swarm cluster with 1 master and 2 worker nodes.

[Image: 3-node Swarm cluster with 1 master and 2 worker nodes]

I created a user-defined overlay network with the command below:

$ sudo docker network create -d overlay mynet

I can see that the new overlay network gets listed under the "swarm" scope (as we are already in Swarm Mode), as shown below:

[Image: docker network ls showing mynet under swarm scope]
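
You can also inspect the new network directly. At this point mynet exists only as a definition in the swarm; no containers are attached to it yet:

$ docker network ls
$ docker network inspect mynet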

I have a service “frontier” running tasks on node1, node2 and master1 as shown below:

[Image: the frontier service and its tasks on master1, node1 and node2]
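
For reference, a service like frontier attached to mynet could have been created along these lines; the image, replica count and published port are assumptions, not necessarily what was used above:

$ docker service create --name frontier --network mynet --replicas 3 -p 80:80 nginx
$ docker service ls
$ docker service ps frontier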

We can check the containers running on node-1 and node-2 respectively:

[Image: containers running on node-1 and node-2]

Meanwhile, I added a new node-3 to the cluster and then scaled the service to 10 replicas.

[Image: the frontier service scaled to 10 replicas]
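
The scaling itself is a single command, and listing the tasks afterwards shows how they are spread across the nodes:

$ docker service scale frontier=10
$ docker service ps frontier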

Now I can see that the containers are scaled across the swarm cluster.

To look into how overlay networking works, let us bring up a 4th node and add it to the Swarm cluster.

[Image: node-4 joining the Swarm cluster]
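
For completeness, joining a node is done with a join token; the manager address below is a placeholder:

# On the manager, print the worker join command:
$ docker swarm join-token worker
# On node-4, run the command it prints, e.g.:
$ docker swarm join --token <worker-token> <manager-ip>:2377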

Now, the node list gets updated as shown below:

[Image: docker node ls showing the updated node list]

When you add a new node to the swarm cluster, the mynet overlay network does not automatically show up on it, as you can see below:

[Image: docker network ls on node-4, without mynet]

The overlay network only appears on a node once a task is assigned to it; this happens on demand.

Let us try to scale our service and see whether node-4's network layout picks up the mynet network.

Earlier, we had 10 replicas running, scaled across master1, node1, node2 and node3. Once we scale the service to 20, the swarm engine spreads the tasks across all the nodes, including node-4, as shown below:

[Image: the frontier service scaled to 20 replicas across all nodes]

Let us now check the network layout at node-4:
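
A quick way to confirm this on node-4, once a frontier task has landed there:

$ docker network ls
# mynet should now be listed with the overlay driver
$ docker network inspect mynet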

Hence, the overlay network gets created on a node on demand, whenever a new task is assigned to that node.

Self-Healing:

Swarm nodes are “self-organizing and self-healing”. What does that mean? Whenever a node or a container crashes or goes through a sudden unplanned shutdown, the swarm engine attempts to correct it and bring things back to the desired state. Let us look into this aspect in detail:

As we saw above, here is how the tasks are distributed across the nodes:

Master-1 running 4 tasks

Node-1 running 4 tasks

Node-2 running 4 tasks

Node-3 running 4 tasks

Node-4 running 4 tasks

Now let’s bring down node-4.

[Image: containers on node-4 being stopped]

As soon as all the containers running on node-4 are stopped, the swarm engine starts another 4 containers with different IDs on the same node.

[Image: 4 new containers with different IDs running on node-4]

This shows the self-healing aspect of the Docker Swarm engine.
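
A simple way to reproduce this yourself (a sketch; it forcibly stops every container on node-4, which is fine for a disposable test node):

# On node-4, stop all running containers:
$ docker stop $(docker ps -q)
# On a manager, watch the swarm start replacement tasks:
$ docker service ps frontier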

Self-Organizing:

Let's try bringing down node-4 completely. As soon as node-4 goes down, the containers that were running on it get started on the other nodes automatically.

[Image: tasks from node-4 rescheduled onto the remaining nodes]

 

Master-1 running 5 tasks

Node-1 running 5 tasks

Node-2 running 5 tasks

Node-3 running 5 tasks

Hence, this is how the 20 tasks get re-organized, now spread across master1, node1, node2 and node3.
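
If you would rather not power off the machine, draining the node has a similar effect: its tasks get rescheduled onto the remaining nodes. A sketch, assuming the node name matches what docker node ls shows:

$ docker node update --availability drain node-4
$ docker service ps frontier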

Global Services:

This mode places one service task on every node. You can create a service with the --mode global option to enable this functionality, as shown below:

[Image: creating a service with --mode global]
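
A minimal sketch of a global service; the service name and image are assumptions. One task lands on every node, including nodes that join later:

$ docker service create --name agent --mode global nginx
$ docker service ps agent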

Constraints:

There are scenarios where you want to segregate workloads on your cluster, so that specific workloads go only to a certain set of nodes. One example, pulled from a DockerCon slide, shows SSD-based constraints being applied as shown below:

[Image: DockerCon slide showing SSD-based constraints]
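
In practice, this is done by labelling the nodes and constraining the service to those labels; the label key/value, node name, service name and image below are assumptions:

# On a manager, label the SSD-backed node:
$ docker node update --label-add disk=ssd node-1
# Constrain a service to SSD-labelled nodes:
$ docker service create --name db --constraint 'node.labels.disk == ssd' redis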

 

Routing Mesh:

We have reached the last topic of this blog post, and it wouldn't be complete without talking about the routing mesh. I have pulled out a slide from the DockerCon 2016 presentation which clearly shows how the routing mesh works.

[Image: Routing mesh diagram ~ Source: DockerCon 2016]

To understand how the routing mesh works, suppose there is one manager node and 3 worker nodes serving myapp:80. When a user accesses myapp:80 on the exposed port, the external load balancer might happen to hit worker-2, and that works fine: worker-2 has 2 copies of the frontend container and can serve the request without any issue. Now imagine the user accesses myapp:80 and gets directed to worker-3, which currently has no copy of the container. This is where the routing mesh comes into the picture. Even though worker-3 has no copy of the container, the Docker swarm engine re-routes the traffic to worker-2, which has the copies needed to serve it. Your external load balancer doesn't need to know where the container is running; the routing mesh takes care of that automatically.

In short, the container-aware routing mesh transparently re-routes traffic from worker-3 to a node that is running the container (worker-2 in the picture above). Internally, Docker Engine allocates a cluster-wide port, maps that port to the containers of the service, and the routing mesh routes traffic to those containers by exposing the port on every node in the swarm.
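
You can see this for yourself by curling a node that runs no task of a published service. A minimal sketch, reusing the assumed web service from the ingress example earlier (published on port 8080):

# On a manager, find a node that is not running a web task:
$ docker service ps web
# Curl that node directly; the routing mesh forwards the request
# to a node that does run a task:
$ curl http://<ip-of-node-without-a-web-task>:8080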

In our next post, we will talk about the routing mesh in detail and cover the volume aspects of Swarm Mode. Till then, Happy Swarming!!!

25 thoughts on “Docker 1.12 Networking Model Overview”

  1. Madhu

    Nice Post. Few comments though.
    1. Only those services that has a port published (using the -p option) require the ingress network. But for those backend services which doesnt publish ports, the corresponding containers are NOT attached to the ingress network. Pls update the post.

    2. Also the `default_gwbridge` network is added only for non-internal networks. Internal networks can be created with “–internal” option.

    1. Thanks Madhu for those important points. I totally agree on your statement and have included it in the post. Thanks for your time in improving it.

  2. zz

    well explained the network model with example in detail. thank you for sharing

  3. Santiago

    You wrote that Every container has an IP address on three overlay networks, how does the container know which interface to use ? Ie where is the default gateway from the container perceptive ? You described the 3 networks, can you attach to a container and explain and show those 3 networks in detail ? The screenshots focus on the user overlay, how about the other network types ?

  4. Great post. However, one question come up to me.

    Regarding the routing mesh: What happens if worker 3 has no direct network connection to worker 2 (according to your example). Is the request routed via the manager? Is the overlay network capable to handle such kind of network partitions?

    Thank

  5. Nice article.
    The comment under the pic11 illustration say that the overlay network is created on-demand, but there is no “mynet” network on the screenshot. It is kind of confusing.

    1. I have put the same image for mynet. Thanks for improving the doc.



  8. Great overview. I will be including it my next newsletter. I do have a question about how the DNS is working in the Swarm. Since every container is given a unique DNS name how are they being resolved between containers (Which network) and between hosts?

    1. Thanks Brian. I am planning to publish a next blog on “Service Discovery” which is coming very soon..Hope it will help you with your query.

