2 Minutes to Docker MacVLAN Networking – A Beginner's Guide

Estimated Reading Time: 3 minutes

 

Scenario: Say you have built Docker applications (legacy in nature, such as network traffic monitoring or system management tools) that are expected to be directly connected to the underlying physical network. In this type of situation, you can use the macvlan network driver to assign a MAC address to each container’s virtual network interface, making it appear to be a physical network interface directly connected to the physical network.

Last year, I wrote a blog post on “How does MacVLAN work under Docker Swarm?” for users who want to assign addresses from the underlying physical network to Docker containers running various Swarm services. Do check it out.

Docker 17.06 Swarm Mode: Now with built-in MacVLAN & Node-Local Networks support

In case you’re completely new to Docker networking: when Docker is installed, a default bridge network named docker0 is created, and each new Docker container is automatically attached to this network. Besides docker0, two other networks get created automatically by Docker: host (no isolation between the host and containers on this network; to the outside world they are on the same network) and none (attached containers run on a container-specific network stack).

Assume you have a clean Docker host system with just 3 networks available – bridge, host and none:

root@ubuntu:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
871f1f745cc4        bridge              bridge              local
113bf063604d        host                host                local
2c510f91a22d        none                null                local
root@ubuntu:~#

My Network Configuration is quite simple. It has eth0 and eth1 interface. I will just use eth0.

root@ubuntu:~# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:7d:83:13:8e
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fe:05:ce:a1:2d:5d
          inet addr:100.98.26.43  Bcast:100.98.26.255  Mask:255.255.255.0
          inet6 addr: fe80::fc05:ceff:fea1:2d5d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:923700 errors:0 dropped:367 overruns:0 frame:0
          TX packets:56656 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:150640769 (150.6 MB)  TX bytes:5125449 (5.1 MB)
          Interrupt:31 Memory:ac000000-ac7fffff

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:45 errors:0 dropped:0 overruns:0 frame:0
          TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3816 (3.8 KB)  TX bytes:3816 (3.8 KB)

Creating MacVLAN network on top of eth0.

docker network create -d macvlan --subnet=100.98.26.0/24 --gateway=100.98.26.1 -o parent=eth0 pub_net

Verifying MacVLAN network

root@ubuntu:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
871f1f745cc4        bridge              bridge              local
113bf063604d        host                host                local
2c510f91a22d        none                null                local
bed75b16aab8        pub_net             macvlan             local
root@ubuntu:~#

Let us run a sample container and assign it a static IP (ensure that it is from the free pool):

root@ubuntu:~# docker  run --net=pub_net --ip=100.98.26.47 -itd alpine /bin/sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f
Status: Downloaded newer image for alpine:latest
493a9566c31c15b1a19855f44ef914e7979b46defde55ac6ee9d7db6c9b620e0

Important Point: When using macvlan, you cannot ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host’s eth0, it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.

Enabling Container to Host Communication

It’s simple. Just run the below command:

Example: ip link add mac0 link $PARENTDEV type macvlan mode bridge

So, in our case, it will be:

ip link add mac0 link eth0 type macvlan mode bridge
ip addr add 100.98.26.38/24 dev mac0
ifconfig mac0 up

Let us try creating a container and pinging it from the host:

root@ubuntu:~# docker run --net=pub_net -d --ip=100.98.26.53 -p 81:80 nginx
10146a39d7d8839b670fc5666950c0e265037105e61b0382575466cc62d34824
root@ubuntu:~# ping 100.98.26.53
PING 100.98.26.53 (100.98.26.53) 56(84) bytes of data.
64 bytes from 100.98.26.53: icmp_seq=1 ttl=64 time=1.00 ms
64 bytes from 100.98.26.53: icmp_seq=2 ttl=64 time=0.501 ms

Wow ! It just worked.
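To double-check connectivity in the reverse direction as well, you can ping the host’s mac0 address from inside one of the containers. A minimal sketch, reusing the alpine container started earlier (substitute your own container ID):

docker exec -it 493a9566c31c ping -c 2 100.98.26.38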

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Walkthrough: Enabling IPv6 Functionality for Docker & Docker Compose

Estimated Reading Time: 5 minutes

By default, Docker assigns IPv4 addresses to containers. Does Docker support the IPv6 protocol too? If yes, how complicated is it to get it enabled? Can I use docker-compose to build microservices which use IPv6 addresses? What if I work for a company where our services run natively in an IPv6-only environment? How shall I build a multi-node cluster setup using IPv6? Does Docker 17.06 Swarm Mode support IPv6?

I have been reading numerous queries and GitHub issues around IPv6 configuration breaking while upgrading the Docker version, issues related to IPv6 changes with host configuration, etc., and just thought to share a few findings around the ongoing IPv6 effort in upcoming Docker releases.

Does Docker support IPv6 Protocol?

Yes. Support for IPv6 addresses has been there since the Docker Engine 1.5 release. As of Docker 17.06 (the latest stable release as of August 2017), the Docker daemon configures the container network for IPv4 only by default. You can enable IPv4/IPv6 dual-stack support by adding the below entries to the daemon.json file as shown:

 

File: /etc/docker/daemon.json

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}

This is very similar to old way of running the Docker daemon with the --ipv6 flag. Docker will set up the bridge docker0 with the IPv6 link-local address fe80::1.

Why did we add the "fixed-cidr-v6": "2001:db8:1::/64" entry?

By default, containers that are created will only get a link-local IPv6 address. To assign globally routable IPv6 addresses to your containers you have to specify an IPv6 subnet to pick the addresses from. Setting the IPv6 subnet via the --fixed-cidr-v6 parameter when starting Docker daemon will help us achieve globally routable IPv6 address.

The subnet for Docker containers should at least have a size of /80. This way an IPv6 address can end with the container’s MAC address and you prevent NDP neighbor cache invalidation issues in the Docker layer.

With the --fixed-cidr-v6 parameter set, Docker will add a new route to the routing table and IPv6 routing will be enabled (you may prevent this by starting dockerd with --ip-forward=false).
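To see the effect once the daemon has been restarted with these settings, a quick sanity check on the host (a sketch; the exact route output depends on your chosen subnet) could be:

# the fixed-cidr-v6 subnet should now be routed via docker0
ip -6 route | grep 2001:db8:1::
# IPv6 forwarding should now be enabled (value 1)
sysctl net.ipv6.conf.all.forwarding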

Let us closely examine the changes which Docker Host undergoes before & after IPv6 Enablement:

A Typical Host Network Configuration – Before IPv6 Enablement 

 

As shown above, before IPv6 protocol is enabled, the docker0 bridge network shows IPv4 address only.

Let us enable IPv6 on the host system. In case you find daemon.json already created under the /etc/docker directory, don’t delete the old entries; rather, just add the two entries below to the file as shown:

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}

 

 

Restarting the docker daemon to reflect the changes:

sudo systemctl restart docker

 

A Typical Host Network Configuration – After IPv6 Enablement 

Did you see anything new? Yes, docker0 now gets populated with an IPv6 configuration (shown below):

docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:06:62:82:4d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:1::1/64 scope global tentative
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link tentative
       valid_lft forever preferred_lft forever

Not only this, the docker_gwbridge network interface too received IPv6 changes:

docker_gwbridge: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:bc:0b:2a:84 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 scope global docker_gwbridge
valid_lft forever preferred_lft forever
inet6 fe80::42:bcff:fe0b:2a84/64 scope link
valid_lft forever preferred_lft forever

 

PING TEST – Verifying IPv6 Functionalities For Docker Host containers

Let us try bringing up two containers on the same host and see if they can ping each other using IPv6 addresses:

 

Setting up Ubuntu Container:

mymanager1==>sudo docker run -itd ajeetraina/ubuntu-iproute bash

Setting up CentOS Container:

mymanager1==>sudo docker run -itd ajeetraina/centos-iproute bash

[Please note: if you are using the default Ubuntu or CentOS Docker image, you will be surprised to find that the ip or ifconfig command doesn’t work. You might need to install the iproute package for the ip command to work and the net-tools package for ifconfig to work. If you want to save time, use ajeetraina/ubuntu-iproute for Ubuntu or ajeetraina/centos-iproute for CentOS directly.]
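If you prefer to build such an image yourself, a minimal sketch of a Dockerfile for the Ubuntu variant (package names assume Ubuntu 16.04; the actual ajeetraina images may be built differently) would be:

FROM ubuntu:16.04
# iproute2 provides the ip command, net-tools provides ifconfig,
# and iputils-ping provides ping/ping6 for the connectivity tests below
RUN apt-get update && \
    apt-get install -y iproute2 net-tools iputils-ping && \
    rm -rf /var/lib/apt/lists/*
CMD ["bash"]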

Now let us initiate the quick ping test:

 

In this example the Docker container is assigned a link-local address with the network suffix /64 (here: fe80::42:acff:fe11:3/64) and a globally routable IPv6 address (here: 2001:db8:1:0:0:242:ac11:3/64). The container will create connections to addresses outside of the 2001:db8:1::/64 network via the link-local gateway at fe80::1 on eth0.

mymanager1==>sudo docker exec -it 907 ping6 fe80::42:acff:fe11:2
PING fe80::42:acff:fe11:2(fe80::42:acff:fe11:2) 56 data bytes
64 bytes from fe80::42:acff:fe11:2%eth0: icmp_seq=1 ttl=64 time=0.153 ms
64 bytes from fe80::42:acff:fe11:2%eth0: icmp_seq=2 ttl=64 time=0.100 ms
^C
--- fe80::42:acff:fe11:2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.100/0.126/0.153/0.028 ms

So the two containers are able to reach out to each other using IPv6 address.

Does Docker Compose support IPv6 protocol?

The answer is yes. Let us verify it using docker-compose version 1.15.0 and compose file format 2.1. I faced an issue while using the latest 3.3 file format: as Docker Swarm Mode doesn’t support IPv6, it is not included under the 3.3 file format. Till then, let us try to bring up a container with an IPv6 address using the 2.1 file format:

docker-compose version
version 1.15.0, build e12f3b9
docker-py version: 2.4.2
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016

Let us first verify the network available in the host machine:

 

File: docker-compose.yml

version: '2.1'
services:
  app:
    image: busybox
    command: ping www.collabnix.com
    networks:
      app_net:
        ipv6_address: 2001:3200:3200::20
networks:
  app_net:
    enable_ipv6: true
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 2001:3200:3200::/64
          gateway: 2001:3200:3200::1

 

The above docker-compose file will create a new network called testping_app_net based on the IPv6 subnet 2001:3200:3200::/64, and the container should get an IPv6 address automatically assigned.

Let us bring up the services using docker-compose up and see if they communicate over the IPv6 protocol:
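A minimal way to bring the stack up and read back the assigned IPv6 address (the service name app comes from the compose file above; the inspect format string assumes the container is attached to a single network):

docker-compose up -d
docker inspect -f '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}}{{end}}' $(docker-compose ps -q app)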

 

Verifying the IPv6 address for each container:

As shown above, the new container gets the IPv6 address 2001:3200:3200::20 – and hence the containers are able to reach each other flawlessly.

What’s next? In the next blog post, I am going to showcase how IPv6 works across multiple host machines and will talk about the ongoing effort to bring IPv6 support to Swarm Mode.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Docker 17.06 Swarm Mode: Now with built-in MacVLAN & Node-Local Networks support

Estimated Reading Time: 5 minutes

Docker 17.06.0-ce-RC5 got announced 5 days back and is available for testing. It brings numerous new features & enablements under this upcoming release. A few of my favourites include support for Secrets on Windows, the ability to specify a secret location within the container, a --format option for the docker system df command, support for placement preferences in docker stack deploy, monitored resource type metadata for the GCP logging driver, and build & engine info Prometheus metrics, to list a few. But one of the most notable and awaited features is support for swarm-mode services with node-local networks such as macvlan, ipvlan, bridge and host.

Under the upcoming 17.06 release, Docker provides support for local scope networks in Swarm. This includes any local scope network driver. Some examples of these are bridge, host, and macvlan, though any local scope network driver, built-in or plug-in, will work with Swarm. Previously only swarm scope networks like overlay were supported. This is great news for all Docker networking enthusiasts.

A Brief Intro to MacVLAN:


In case you’re new to it, the MACVLAN driver provides direct access between containers and the physical network. It also allows containers to receive routable IP addresses that are on the subnet of the physical network.

MACVLAN offers a number of unique features and capabilities. It has positive performance implications by virtue of having a very simple and lightweight architecture. Its use cases include very low latency applications and network designs that require containers to be on the same subnet as, and using IPs from, the external host network. The macvlan driver uses the concept of a parent interface. This interface can be a physical interface such as eth0, a sub-interface for 802.1q VLAN tagging like eth0.10 (.10 representing VLAN 10), or even a bonded host adaptor which bundles two Ethernet interfaces into a single logical interface.

To test-drive MacVLAN under Swarm Mode, I will leverage the existing 3-node Swarm Mode cluster on my VMware ESXi system. I have tested it on bare metal systems and VirtualBox and it works equally well.

[Updated: 9/27/2017 – I have added docker-stack.yml at the end of this guide to show you how to build services out of docker-compose.yml file. DO NOT FORGET TO CHECK IT OUT]

Installing Docker 17.06 on all the Nodes:

curl -fsSL https://test.docker.com > install-docker.sh
sh install-docker.sh

 

Verifying the latest Docker version:


 

Setting up 2 Node Swarm Mode Cluster:

 

 

Attention VirtualBox users: in case you are using VirtualBox, the MACVLAN driver requires the network and interfaces to be in promiscuous mode.

A local network config is created on each host. The config holds host-specific information, such as the subnet allocated for this host’s containers. --ip-range is used to specify a pool of IP addresses that is a subset of IPs from the subnet. This is one method of IPAM to guarantee unique IP allocations.

Manager:

manager1==>sudo docker network create --config-only --subnet 100.98.26.0/24 -o parent=ens160.60 --ip-range 100.98.26.100/24 collabnet

 

Worker-1:

worker1==>sudo docker network create --config-only --subnet 100.98.26.0/24 -o parent=ens160.60 --ip-range 100.98.26.100/24 collabnet

 

 

Instantiating the macvlan network globally

Manager:

manager1==> $sudo docker network create -d macvlan --scope swarm --config-from collabnet swarm-macvlan

 

Deploying a service to the swarm-macvlan network:

Let us go ahead and deploy the WordPress application. We will be creating 2 services – wordpressapp and wordpressdb1 – and attach them to the “swarm-macvlan” network as shown below:

Creating Backend Service:

docker service create --replicas 1 --name wordpressdb1 --network swarm-macvlan --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql

Let us verify if MacVLAN network scope holds this container:

 

Creating Frontend Service

Next, it’s time to create wordpress application i.e. wordpressapp

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network swarm-macvlan --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Verify if both the services are up and running:

 

Verifying if all the containers on the master node picks up desired IP address from the subnet:

 

Docker Compose File showcasing MacVLAN Configuration

Ensure that you run the below commands to set up the MacVLAN configuration for your services before you execute the docker stack deploy CLI further below:

root@ubuntu-1610:~# docker network create --config-only --subnet 100.98.26.0/24 --gateway 100.98.26.1 -o parent=ens160.60 --ip-range 100.98.26.120/24 collabnet
da2912d762cbf5f5ea412e6e4d69352a3285f720e23740529af9e533c7168729
 
root@ubuntu-1610:~#docker network create -d macvlan --scope swarm --config-from collabnet swarm-macvlan
jp76lts6hbbheqlbbhggumujd
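For reference, a minimal sketch of what the docker-stack.yml could look like (service names, images and passwords follow the WordPress/MySQL example used throughout; the swarm-macvlan network is declared external because it was created with the CLI above, and the actual file in my repository may differ slightly):

version: '3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: collab123
      MYSQL_DATABASE: wordpress
    networks:
      - swarm-macvlan
  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: collab123
    ports:
      - "80:80"
    networks:
      - swarm-macvlan
networks:
  swarm-macvlan:
    external: true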

 

Verify that the network inspection shows the correct information:

root@ubuntu-1610:~/docker101/play-with-docker/wordpress/example1# docker network inspect swarm-macvlan
[
    {
        "Name": "swarm-macvlan",
        "Id": "jp76lts6hbbheqlbbhggumujd",
        "Created": "2017-09-27T02:12:00.827562388-04:00",
        "Scope": "swarm",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "100.98.26.0/24",
                    "IPRange": "100.98.26.120/24",
                    "Gateway": "100.98.26.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": "collabnet"
        },
        "ConfigOnly": false,
        "Containers": {
            "3c3f1ec48225ef18e8879f3ebea37c2d0c1b139df131b87adf05dc4d0f4d8e3f": {
                "Name": "myapp2_wordpress.1.nd2m62alxmpo2lyn079x0w9yv",
                "EndpointID": "a15e96456870590588b3a2764da02b7f69a4e63c061dda2798abb7edfc5e5060",
                "MacAddress": "02:42:64:62:1a:02",
                "IPv4Address": "100.98.26.2/24",
                "IPv6Address": ""
            },
            "d47d9ebc94b1aa378e73bb58b32707643eb7f1fff836ab0d290c8b4f024cee73": {
                "Name": "myapp2_db.1.cxz3y1cg1m6urdwo1ixc4zin7",
                "EndpointID": "201163c233fe385aa9bd8b84c9d6a263b18e42893176271c585df4772b0a2f8b",
                "MacAddress": "02:42:64:62:1a:03",
                "IPv4Address": "100.98.26.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "parent": "ens160"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "ubuntu-1610-1633ea48e392",
                "IP": "100.98.26.60"
            }
        ]
    }
]

Docker Stack Deploy CLI:

docker stack deploy -c docker-stack.yml myapp2
Ignoring unsupported options: restart
Creating service myapp2_db
Creating service myapp2_wordpress

Verifying if the services are up and running:

root@ubuntu-1610:~/# docker stack ls
NAME SERVICES
myapp2 2
root@ubuntu-1610:~/# docker stack ps myapp2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
nd2m62alxmpo myapp2_wordpress.1 wordpress:latest ubuntu-1610 Running Running 15 minutes ago
cxz3y1cg1m6u myapp2_db.1 mysql:5.7 ubuntu-1610 Running Running 15 minutes ago

Looking for Docker Compose file for Single Node?
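Here is a minimal single-node sketch that puts both services directly on a macvlan network (subnet, gateway and parent interface are assumptions matching the environment above – adjust them to your own network):

version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: collab123
      MYSQL_DATABASE: wordpress
    networks:
      - macvlan_net
  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: collab123
    ports:
      - "80:80"
    networks:
      - macvlan_net
networks:
  macvlan_net:
    driver: macvlan
    driver_opts:
      parent: ens160.60
    ipam:
      config:
        - subnet: 100.98.26.0/24
          gateway: 100.98.26.1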

 

Cool! I am going to leverage this for my Apache JMeter setup so that I can push load from different IPs using Docker containers.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about what’s coming in the Docker 17.06 CE release by clicking on this link.

Topology Aware Scheduling under Docker v17.05.0 Swarm Mode Cluster

Estimated Reading Time: 6 minutes

 

Docker 17.05.0 Final release went public exactly 2 weeks back. This community release was the first release as part of the new Moby project. With this release, numerous interesting features like multi-stage build support in the builder, using build-time ARG in FROM, and DEB packaging for Ubuntu 17.04 (Zesty Zapus) have been included. With this latest release, the Docker team also brought numerous new features and improvements in terms of Swarm Mode – for example, synchronous service commands, automatic service rollback on failure, improvements to the raft transport package, service logs formatting, etc.


 

Placement Preference under Swarm Mode:

One of the prominent new features introduced under 17.05.0-CE Swarm Mode is placement preferences. The placement preference feature allows you to divide tasks evenly over different categories of nodes. It allows you to balance tasks between multiple datacenters or availability zones. One can use a placement preference to spread tasks out over multiple datacenters and make the service more resilient in the face of a localized outage. You can use additional placement preferences to further divide tasks over groups of nodes. Under this blog, we will set up a 5-node Swarm Mode cluster on the play-with-docker platform and see how to balance tasks over multiple racks within each datacenter. (Note: this is not a real-world scenario but is based on the assumption that the nodes are placed in 3 different racks.)

Assumption:  There are 3 datacenter Racks holding respective nodes as shown:

{Rack-1=> Node1, Node2 and Node3},

{Rack-2=> Node4}  &

{Rack-3=> Node5}

 

Creating Swarm Master Node:

Open up Docker Playground to build up Swarm Cluster.

docker swarm init --advertise-addr 10.0.116.3


 

Adding Worker Nodes to Swarm Cluster

docker swarm join --token <token-id> 10.0.116.3:2377


Create 3 more instances and add those nodes as worker nodes. This should build up 5 node Swarm Mode cluster.


 

Setting up Visualizer Tool 

To showcase this demo, I will leverage a fancy popular Visualizer tool.

 

git clone https://github.com/ajeetraina/docker101
cd docker101/play-with-docker/visualizer

 

All you need is to execute docker-compose command to bring up visualizer container:

 

docker-compose up -d

 


 

Click on port “8080” which gets displayed on top centre of this page and it should display a fancy visualiser depicting Swarm Mode cluster nodes.


Creating an Overlay Network:

 $docker network create -d overlay collabnet


 

Let us try to create a service with no placement preference and no node labels.

Setting up WordPress DB service:

docker service create --replicas 10 --name wordpressdb1 --network collabnet --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest


When you run the above command, the swarm will spread the containers evenly node by node. Hence, you will see 2 containers per node as shown below:


Setting up WordPress Web Application:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest


Visualizer:


As per the visualizer, you might end up with an uneven distribution of services. For example, Rack-1 (holding node-1, node-2 and node-3) looks to have an almost equal distribution of services, while Rack-2, which holds node-4, lacks the WordPress frontend application.

Here comes placement preference to the rescue…

Under the latest release, Docker team has introduced a new feature called “Placement Preference Scheduling”. Let us spend some time to understand what it actually means. You can set up the service to divide tasks evenly over different categories of nodes. One example of where this can be useful is to balance tasks over a set of datacenters or availability zones. 

This uses --placement-pref with a spread strategy (currently the only supported strategy) to spread tasks evenly over the values of the datacenter node label. In this example, we assume that every node has a datacenter node label attached to it. If there are three different values of this label among nodes in the swarm, one third of the tasks will be placed on the nodes associated with each value. This is true even if there are more nodes with one value than another. For example, consider the following set of nodes:

  • Three nodes with node.labels.datacenter=india
  • One node with node.labels.datacenter=uk
  • One node with node.labels.datacenter=us

Considering the example above, since we are spreading over the values of the datacenter label, the tasks will be divided roughly equally among the three label values, irrespective of how many nodes carry each value. The three nodes with the value “india” will share the replicas reserved for that value among themselves, while the single “uk” node receives all of the replicas reserved for “uk” and the single “us” node receives all of the replicas reserved for “us”.

To understand more clearly, let us assign node labels to Rack nodes as shown below:

Rack-1 (Node-1, Node-2, Node-3):

docker node update --label-add datacenter=india node1
docker node update --label-add datacenter=india node2
docker node update --label-add datacenter=india node3

Rack-2

docker node update --label-add datacenter=uk node4

Rack-3

docker node update --label-add datacenter=us node5


 

Removing both the services:

docker service rm wordpressdb1 wordpressapp

Let us now pass placement preference parameter to the docker service command:

docker service create --replicas 10 --name wordpressdb1 --network collabnet --placement-pref "spread=node.labels.datacenter" --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest


 

Visualizer:


Rack-1(node1+node2+node3) has 4 copies, Rack-2(node4) has 3 copies and Rack-3(node5) has 3 copies.

Let us run WordPress Application service likewise:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --placement-pref "spread=node.labels.datacenter" --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest


Visualizer: As shown below, we have used the placement preference feature to ensure that the service containers get distributed across the swarm cluster on all the racks.


 

As shown above, --placement-pref ensures that the tasks are spread evenly over the values of the datacenter node label. Currently, spread is the only supported strategy. Both engine labels and node labels are supported by placement preferences.
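Placement preferences can also be stacked to build a hierarchy of spreads. A sketch (the rack label is illustrative) that spreads tasks first across datacenters and then across racks within each datacenter:

docker service create --replicas 9 --name web \
  --placement-pref 'spread=node.labels.datacenter' \
  --placement-pref 'spread=node.labels.rack' \
  nginx:latest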

Please note: if you want to try this feature with Docker Compose, you will need compose file format v3.3, which is slated to arrive with the 17.06 release.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about the latest Docker releases by clicking on this link.

 

Docker Service Inspection Filtering & Template Engine under Swarm Mode

Estimated Reading Time: 4 minutes

The Go programming language has really helped in shaping Docker into a powerful piece of software and in enabling fast development of distributed systems. It has been helping developers and operations teams quickly construct programs and tools for cloud computing environments. Go offers built-in support for JSON (JavaScript Object Notation) encoding and decoding, including to and from built-in and custom data types.

golang

 

In the last year, Docker Swarm Mode has matured enough to become production-ready. The orchestration platform is quite stable, with numerous features like logging, secrets management, security scanning, improvements to scheduling, networking, etc., making it simpler to use and scale out with just a few one-liner commands. With the introduction of new APIs like swarm, node, volume plugins, services, etc., Swarm Mode brings dozens of features to control every aspect of the swarm cluster from the master node. But when you start building services in the range of hundreds and thousands, distributed across hundreds and thousands of nodes, there arises a need for a quick and handy way of filtering the output, especially when you are interested in capturing one specific piece of data out of the whole cluster-wide configuration. Here the filtering flag comes to the rescue.

swarm11

The filtering flag (-f or --filter) format is a key=value pair, which is actually a very powerful weapon for developers & system administrators. If you have ever played around with the Docker CLI, you must have used the docker inspect command to get metadata on a container or image. Using it with -f (the format flag) provides more specific information like the IP address, network, etc. There are numerous guides on how to use filters and format templates with a standalone host holding Docker images, but I found a lack of guides talking about them under Swarm Mode.

Under this blog post, I have prepared a quick list of consolidated filtering commands and outputs in tabular format for Swarm Mode Cluster(shown below).

I have a 3-node Swarm Mode cluster running on Google Cloud Platform. Let us first create a simple WordPress application as shown below:

 

swarm-1

 

We will create a simple wordpress service as shown:

master==>docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Below is a consolidated list of the filtering queries you can run with the docker service inspect command:

For the "wordpress" service:

  • To retrieve the network information for the specific service
  • To list out the port which wordpress is using for the specific service
  • To list out the protocol detail for the specific service
  • To list out the type of published port
  • To verify whether it is DNS RR or Virtual IP (VIP) based for the specific service
  • To list out the type of published port for the specific service
  • To list out the replication factor for the specific service
  • To list out the environment variables passed to the specific service
  • To list out the image detail for the specific service
  • To list out the complete networking details for the specific service
  • To list out detailed consolidated metadata information for the specific service
  • To list out further detailed information of the service

Let us create another service called "wordpressdb1" which is actually a database service under Docker Swarm Mode:

$docker service create --replicas 1 --name wordpressdb1 --network collabnet --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest

The same kinds of queries apply to this service as well:

  • Inspecting the service "wordpressdb1"
  • Displaying the output in a human-friendly way
  • Listing out the VIP details for the specific service
  • Retrieving the last UpdateStatus details
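As a quick reference, here is a sketch of docker service inspect queries using the --format template engine that answer several of the items listed above (field paths follow the service JSON of Docker 17.x and may vary slightly between versions):

# Full metadata / human-friendly output
docker service inspect wordpressapp
docker service inspect --pretty wordpressapp

# Networks, published ports (port, protocol, publish mode) and endpoint mode (vip or dnsrr)
docker service inspect -f '{{json .Spec.TaskTemplate.Networks}}' wordpressapp
docker service inspect -f '{{json .Endpoint.Ports}}' wordpressapp
docker service inspect -f '{{.Spec.EndpointSpec.Mode}}' wordpressapp

# Replication factor, environment variables and image
docker service inspect -f '{{.Spec.Mode.Replicated.Replicas}}' wordpressapp
docker service inspect -f '{{json .Spec.TaskTemplate.ContainerSpec.Env}}' wordpressapp
docker service inspect -f '{{.Spec.TaskTemplate.ContainerSpec.Image}}' wordpressapp

# Virtual IPs and the status of the last service update
docker service inspect -f '{{json .Endpoint.VirtualIPs}}' wordpressdb1
docker service inspect -f '{{json .UpdateStatus}}' wordpressdb1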

I have plans to add more examples during the course of time based on experience and exploration. I hope you will find it handy.

 

 

Running Apache JMeter 3.1 Distributed Load Testing Tool using Docker Compose v3.1 on Swarm Mode Cluster

Estimated Reading Time: 6 minutes

Apache JMeter is popular open source software used as a load testing tool for analyzing and measuring the performance of a web application or a multitude of services. It is a 100% pure Java application, designed to test performance on both static and dynamic resources and dynamic web applications. It simulates a heavy load on one or many servers, networks or objects to test their strength/capability or to analyze overall performance under different load types. The supported applications/servers/protocols include HTTP, HTTPS (Java, NodeJS, PHP, ASP.NET), SOAP/REST web services, FTP, databases via JDBC, LDAP, message-oriented middleware (MOM) via JMS, mail – SMTP(S), POP3(S) and IMAP(S), native commands or shell scripts, TCP, Java objects and many more.

Logo

JMeter is extremely easy to use and doesn’t take long to get familiar with. It allows concurrent and simultaneous sampling of different functions by separate thread groups. JMeter is highly extensible, which means you can write your own tests, and it supports visualization plugins that allow you to extend your testing. JMeter supports many testing strategies such as load testing, distributed testing, and functional testing, and it can simulate multiple users with concurrent threads to create a heavy load against the web application under test.
 
 

Arch

 
 
Architecture of JMeter Distributed Load Testing:
 
JMeter distributed load testing uses a client-server model. It is a kind of testing which uses multiple systems to perform stress testing. Distributed testing is applied for testing web sites and server applications when they are working with multiple clients simultaneously.
It includes:
 
1. Master – the system running the JMeter GUI, which controls the test.
2. Slave – the system running jmeter-server, which takes commands from the GUI and sends requests to the target system(s).
3. Target (System Under Test) – the web server which undergoes the stress test.

 

What Docker has to offer for Apache JMeter?

Good question! With every new installation of Apache JMeter, you need to download/install JMeter on every new node participating in distributed load testing. Installing JMeter requires dependencies to be installed first, like Java (default-jre-headless), iputils, etc. The complexity begins when you have a multitude of OS distributions running on your infrastructure. You can always use automation tools or scripts to enable this solution, but again there is an extra cost of maintenance and the skills required to troubleshoot the logs when something goes wrong in the middle of the testing.

With Docker, it is just a matter of one Docker Compose file and a one-liner command to get the entire infrastructure ready. Under this blog, I am going to show how a single Docker Compose v3.1 format file can help you set up the entire JMeter distributed load testing tool – all working on a Docker 17.03 Swarm Mode cluster.

I will leverage a 4-node Docker 17.03 Swarm cluster running on the Google Cloud Platform to showcase this solution, as shown below:

gcloud

Under this setup, I will be using the instance “master101” as the master/client node and the rest of the worker nodes as server/slave nodes. All of these instances are running Ubuntu 17.04 with Docker 17.03 installed. You are free to choose any recent Linux distribution which supports Docker executables.

First, let us ensure that you have the latest Docker 17.03 running on your machine:

$curl -sSL https://get.docker.com/ | sh

Version

 

Ensure that the latest Docker Compose is installed, which supports the v3.1 file format:

 $curl -L https://github.com/docker/compose/releases/download/1.11.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
 $chmod +x /usr/local/bin/docker-compose

Compose_Version

Docker Compose for Apache JMeter Distributed Load Testing

One can use the docker stack deploy command to quickly set up the JMeter distributed testing environment. This command requires a docker-compose.yml file which holds the declaration of the services (apache-jmeter-master and apache-jmeter-server). Let us clone the repository as shown below to get started –

Run the below commands on the Swarm Master node –

$git clone https://github.com/ajeetraina/jmeter-docker/

$cd jmeter-docker

Under this directory, you will find the docker-compose.yml file.
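Its overall shape (a minimal sketch based on the description below – image names, replica counts and options in the actual repository file may differ) is roughly:

version: '3.1'
services:
  master:
    image: ajeetraina/jmeter-master     # assumed image name
    tty: true
    networks:
      - jm-network
    deploy:
      placement:
        constraints:
          - node.role == manager
  server:
    image: ajeetraina/jmeter-server     # assumed image name
    tty: true
    networks:
      - jm-network
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == worker
networks:
  jm-network:
    driver: overlay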


 

In the above docker-compose.yml, there are two service definitions – master and server. Through this compose file format, we can push the master/client service definition to the master node and the server-specific service definition to all the slave nodes. The constraints section takes care of this functionality. This compose file will automatically create an overlay network called jm-network across the cluster.

Let us start the required services out of the docker-compose file as shown below:

$sudo docker stack deploy -c docker-compose.yml myjmeter


Checking if the services are up and running through docker service ls command:

service_ls

Let us verify if constraints worked correctly or not as shown below:

1_service

2_service

As shown above, the constraints worked well pushing the containers to the right node.

It’s time to enter into one of the container running on the master node as shown below:

exec_cont

Using docker exec command, one can enter into the container and browse to /jmeter/apache-jmeter-3.1/bin directory.

exec_cont2

Pushing the JMX file into the container

   $docker exec -i <container-running-on-master-node> sh -c 'cat > /jmeter/apache-jmeter-3.1/bin/jmeter-docker.jmx' < jmeter-docker.jmx

Starting the Load testing

  $docker exec -it <container-on-master-node> bash
  root@daf39e596b93:/# cd /jmeter/apache-jmeter-3.1/bin
  $./jmeter -n -t jmeter-docker.jmx -R <list of containers running on slave nodes, separated by commas>

Planning to move from Apache JMeter running on VMs to Containers??

This is a very important topic which I don’t want to miss, even though we are at the end of this blog post. You might be using virtual infrastructure for setting up the Apache JMeter distributed load tool. Generally, each VM is assigned a static IP, and that makes sense as your system under test will see load coming from different static IPs. The above docker-compose file uses an overlay network, and IPs in the 172.16.x.x range get assigned automatically and are taken care of by Docker Swarm networking. In case you are looking for a static IP to be assigned to each container automatically, then you need macvlan to be implemented. Follow https://github.com/ajeetraina/jmeter-docker/blob/master/static-README.md to achieve this.

In my next blog post, I will cover further interesting stuffs related to JMeter Distributed Load Testing tool. Drop me a message through tweet (@ajeetsraina) if you face any issue with the above solution.

Loved reading it? Feel free to share it.


References:

https://github.com/ajeetraina/jmeter-docker

http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf

https://docs.docker.com/compose/compose-file/

 

Introducing new RexRay 0.8 with Docker 17.03 Managed Plugin System for Persistent Storage on Cloud Platforms

Estimated Reading Time: 6 minutes

The DellEMC RexRay 0.8 final release was announced last week. Graduated to a top-level project within the {code} community, the RexRay 0.8 release has been considered one of the largest releases to date. The new release introduced support for a long list of new storage platforms like S3FS, EBS, EFS, GCEPD & ScaleIO, shown below:

rexray010

Public cloud storage is one of the fastest growing sectors in storage, with leaders like Amazon AWS, Google Cloud Storage and Microsoft Azure. With the release of RexRay 0.8, the {code} community took the right approach in targeting community-contributed drivers, starting with the Amazon EFS driver and then quickly adding additional community-contributed drivers like Digital Ocean, FittedCloud, Google Compute Engine (GCEPD) & the Microsoft Azure Unmanaged Disk driver.

Introducing New Docker 17.03 Volume Plugin System

With the Docker 17.03 release, a new managed plugin system has been introduced. This is quite different from the old Docker plugin system. Plugins are now distributed as Docker images and can be hosted on Docker Hub or on a private registry. A volume plugin enables Docker volumes to persist across multiple Docker hosts.

rex01

In case you are very new to Docker plugins, they basically extend Docker’s functionality. A plugin is a process running on the same or a different host as the docker daemon, which registers itself by placing a file on the docker host in one of the plugin directories: .sock files (UNIX domain sockets placed under /run/docker/plugins), .spec files (text files containing a URL, such as unix:///other.sock or tcp://localhost:8080, placed under /etc/docker/plugins or /usr/lib/docker/plugins) or .json files (text files containing a full JSON specification for the plugin, placed under /etc/docker/plugins). You can refer to this in case you want to develop your own Docker volume plugin.
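For example, a hypothetical volume plugin listening on a TCP socket could be registered by creating /etc/docker/plugins/myvolumedriver.spec (the plugin name and port are made up for illustration) containing a single line:

tcp://localhost:8080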

Running RexRay inside Docker container

Yes, you read that right! With the introduction of the Docker 17.03 managed plugin system, you can now run RexRay inside a Docker container flawlessly. The RexRay volume plugin is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

You can list out available Docker Volume Plugin for various storage platforms using docker search rexray as shown below:

rexray_plugin2

Let us test-drive RexRay Volume Plugin for Swarm Mode cluster for the first time. I have 4-node Swarm Mode cluster running on Google Cloud Platform as shown below:

rex1

Verify that all the cluster nodes are running the latest 17.03.0-ce (Community Edition).

Installing the RexRay Volume plugin is just one-liner command:

rexray_plugininstallation

 

You can inspect the Rex-Ray Volume plugin using docker plugin inspect command:

rexray_inspect    rexray_inspect22

It’s time to create a volume using docker volume create utility :

$sudo docker volume create --driver rexray/gcepd --name storage1 --opt=size=32

rexray_volume1

You can verify if it is visible under GCE Console window:

RexRay_GCE

Let us try running few application which uses RexRay volume plugin as shown:

{code}==>docker run -dit --name mydb -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --volume-driver=rexray/gcepd -v dbdata:/var/lib/mysql mysql:5.7

Verify that MySQL service is up and running using docker logs <container-id> command as shown below:

dbdata1

By now, we should be able to see new volume called “dbdata” created and shown under docker volume ls command:

rexray_dbdata

Under GCE console, it should get displayed too:

dbdata_gce

Using Rex-Ray Volume Plugin under Docker 17.03 Swarm Mode

This is the most interesting section of this blog post. The RexRay volume plugin worked great for us till now, especially for a single Docker host running a number of services. But what if I want RexRay volumes to persist across multiple Docker hosts (a Swarm Mode cluster)? Yes, there is one possible way to achieve this – using the swarm-exec tool. It executes a docker command across the swarm cluster. Credits to Madhu Venugopal of the Docker team for assisting me in testing this tool.

swarmexec_1

Please remember that this is UNOFFICIAL way of achieving Volume Plugin implementation across swarm cluster.  I found this tool really cool and hope that it gets integrated within Docker official repository.

First, we need to clone this repository:

$git clone https://github.com/mavenugo/swarm-exec

Run the below command to push the plugin across the swarm cluster:

$cd swarm-exec

$./swarm-exec.sh docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray

swarm_exec01

First, let’s quickly  verify the plugin on the master node  as shown below:

swarm_exec2

While verifying it on the worker nodes:

swarm_exec3

rexray_gce11

The docker volume inspect <volname> should display this particular volume as rexray volume driver as shown below:

rexray_gce222

Creating a MySQL service which uses Rex-Ray volume under Swarm Mode cluster:

up_service
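A sketch of what such a service definition could look like (the volume name dbdata2 and service name are assumptions; note that swarm services use --mount rather than -v):

docker service create --replicas 1 --name swarm-mydb \
  -e MYSQL_ROOT_PASSWORD=wordpress \
  --mount type=volume,source=dbdata2,destination=/var/lib/mysql,volume-driver=rexray/gcepd \
  mysql:5.7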

Verifying that the service is up and running:

rexray_100

To conclude, the new version of RexRay looks promising and brings support for various cloud storage platforms. It continues to be a leading open source storage orchestration engine, and now, with the inclusion of the Docker 17.03 managed plugin architecture, it will definitely reduce the pain of implementing a persistent storage solution.

Docker 1.12 Networking Model Overview

Estimated Reading Time: 8 minutes

“The Best way to orchestrate Docker is Docker”

In our previous post, we talked about Swarm Mode’s built-in orchestration and distribution engine. Docker’s new deployment API objects like Service and Node, built-in multi-host and multi-container cluster management integrated with Docker Engine, decentralized design, declarative service model, scaling and resiliency services, desired state reconciliation, service discovery, load balancing, out-of-the-box security by default, rolling updates, etc. make Docker 1.12 an all-in-one automated deployment and management tool for Dockerized distributed applications and microservices at scale in production.

1st

 

Under this post, we are going to deep-dive into Docker 1.12 Networking model. I have 5 node swarm cluster test environment as shown below:

Docker112

If you SSH to test-master1 and check the default network layout:

networkls

Every container in the swarm has an IP address on three networks:

  1. Ingress
  2. docker_gwbridge
  3. user-defined overlay

Ingress Networking:

The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort, or you can configure a PublishedPort for the service in the 30000-32767 range. What this actually means is that network ingress into the cluster is based on a node port model in which each service is automatically assigned a cluster-wide reserved port in the 30000-32767 range (the default range). This means that every node in the cluster listens on this port and routes traffic for that service to it. This is true irrespective of whether a particular worker node is actually running the specified service.

It is important to note that only those services that have a port published (using the -p option) require the ingress network. For backend services which don’t publish ports, the corresponding containers are NOT attached to the ingress network.

External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service. All nodes in the swarm cluster route ingress connections to a running task instance. Hence, ingress follows a node port model in which each service has the same port on every node in the cluster.
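To make this concrete, a quick sketch (service name and ports are arbitrary): publish a service on port 8080 and the ingress network will answer on that port on every node, whether or not that node is running a task of the service:

docker service create --name web --replicas 2 --publish 8080:80 nginx
curl http://<ip-of-any-swarm-node>:8080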

docker_gwbridge:

The docker_gwbridge network is added only for non-internal networks (internal networks can be created with the --internal option). Containers connected to a multi-host network are automatically connected to the docker_gwbridge network. This network allows the containers to have external connectivity outside of their cluster, and is created on each worker node.

Docker Engine provides you the flexibility to create the docker_gwbridge by hand instead of letting the daemon create it automatically. In case you want Docker to create a docker_gwbridge network with a desired subnet, you can tweak it as shown below:

$docker network create --subnet={your preferred subnet} -o com.docker.network.bridge.enable_icc=false -o com.docker.network.bridge.name=docker_gwbridge docker_gwbridge

User-defined Overlay:

This is the overlay network that the user has specified that the container should be on. In our upcoming example , we will call it mynet. A container can be on multiple user-defined overlays.

 

Enough with the theoretical aspects !! Let’s try hands on the networking model practically.

As shown below, I have 3 nodes swarm cluster with 1 master and 2 worker nodes.

pic1

I created a user-defined overlay through the below command:

$ sudo docker network create -d overlay mynet

I can see that the new overlay network gets listed under “swarm” scope(as we are already into swarm mode)  as shown below:

pic8

I have a service “frontier” running tasks on node1, node2 and master1 as shown below:

pic2

pic18

We can check the container running under node-1 and node-2 respectively:

pic3

pic4

Meanwhile, I added a new node-3 and then scaled the service to 10 replicas.

pic5

Now I can see that the containers are scaled across the swarm cluster.

To look into how overlay networking works, let us target the 4th Node and add it to the Swarm Cluster.

pic6

Now, the node list gets updated as shown below:

pic9

Whenever you add any node to the swarm cluster, it doesn’t automatically reflect the mynet overlay network as shown below:

pic7

The overlay network only gets reflected whenever the new task is assigned and this happens on-demand.

Let us try to scale our old service and see if node-4 network layout gets reflected with mynet network.

Earlier, we had 10 replicas running which is scaled across master1, node1, node2 and node4. Once we scaled it to 20, the swarm cluster engine will scale it across all the nodes as shown below:

pic10

Let us now check the network layout at node-4:

Hence, the overlay network gets created on-demand whenever the new task is assigned to this node.

Self-Healing:

Swarm nodes are “self-organizing and self-healing”. What does that mean? Whenever any node or container crashes or undergoes a sudden unplanned shutdown, the swarm engine attempts to correct it and make things right again. Let us look into this aspect in detail:

As we see above, here is an example of nodes and running tasks:

Master-1 running 4 tasks

Node-1 running 4 tasks

Node-2 running 4 tasks

Node-3 running 4 tasks

Node-4 running  4 instances

Now let’s bring down node-4.

pic13

As soon as all the containers running on node-4 are stopped, the swarm engine tries to start another 4 containers with different IDs on the same node.

pic15

So this shows self-healing aspect of Docker Swarm Engine.

Self-Organizing:

Let’s try bringing down node-4 completely. As soon as you bring down node-4, the containers running on it get started on the other nodes automatically.

pic16

 

Master-1 running 5 tasks

Node-1 running 5 tasks

Node-2 running 5 tasks

Node-3 running 5 tasks

Hence, this is how it organizes 20 tasks which is now scaled across master1, node1, node2 and node3.

Global Services:

This option enables service tasks to run on every node. You can create a service with the --mode global option to enable this functionality as shown below:

pic17
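A sketch of such a command (image and service name are arbitrary):

docker service create --mode global --name node-agent alpine:latest top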

Constraints:

There are scenarios when you sometimes want to segregate workloads on your cluster, where you specifically want workloads to go to only a certain set of nodes. One example I have pulled from a DockerCon slide shows SSD-based constraints, which can be applied as shown below:

pic24
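A sketch of how such a constraint is applied in practice (the disk=ssd label and the service are illustrative): first label the SSD-backed nodes, then constrain the service to them:

docker node update --label-add disk=ssd node-2
docker service create --name cache --replicas 2 --constraint 'node.labels.disk == ssd' redis:latest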

 

Routing Mesh 

We have reached the last topic of this blog post and that’s not complete without talking about Routing mesh. I have pulled out the presentation demonstrated at DockerCon 2016 which clearly shows us how routing mesh works.

                                                                                                                                                                                                        ~ Source: DockerCon 2016

To understand how the routing mesh works, suppose that there is one manager node and 3 worker nodes serving myapp:80. Whenever an operator tries to access myapp:80 on the exposed port, he might (because of an external load balancer) happen to hit worker-2, and that is fine because worker-2 has 2 copies of the frontend containers and is ready to serve the request without any issue. Now imagine a scenario where the user accesses myapp:80 and gets redirected to worker-3, which currently has no copies of the containers. This is where the routing mesh technology comes into the picture. Even though worker-3 has no copies of this container, the Docker swarm engine is going to re-route the traffic to worker-2, which has the necessary copies to serve it. Your external load balancer doesn’t need to understand where the container is running; the routing mesh takes care of that automatically. In short, the container-aware routing mesh is capable of transparently rerouting the traffic from node-3 to a node that is running the container (node-2), as shown above. Internally, Docker Engine allocates a cluster-wide port, maps that port to the containers of a service, and the routing mesh takes care of routing traffic to the containers of that service by exposing a port on every node in the swarm.

In our next post, we will talk about routing mesh in detail and cover volume aspect of Swarm Mode. Till then, Happy Swarming !!!

New Container Network Model @ Docker 1.9

Estimated Reading Time: 13 minutes

Docker 1.9’s new networking is Software Defined Networking (SDN) for containers. Pushing the experimental version to the public was the right thing for Docker Inc. to do a few months back, and now that it is production ready, it is surely going to make Docker an enterprise-ready product. With SDN, developers get the flexibility to network their apps as they want without having to wait on the network operations team.

Docker 1.9 brings a totally new way of getting started with networking straight away by using the new docker network command. In Docker 1.9, networking is ready to use in production and works with Swarm and Compose. Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don’t have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are – thus enabling true distributed applications.

Docker networking allows containers to connect to each other across different physical or virtual hosts. Interestingly, containers using networking can be easily stopped, started and restarted without disrupting their connections to other containers – you don’t need to create a container before you can link to it. With networking, containers can be created in any order and discover each other using their container names.
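As a quick taste of the workflow (container and network names are illustrative), the new command lets you create a network, attach containers to it by name, and inspect it:

docker network create -d bridge mynet
docker run -d --name db --net=mynet redis
docker run -d --name web --net=mynet busybox sleep 3600
# containers on the same user-defined network discover each other by name
docker exec web ping -c 2 db
docker network inspect mynet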

Libnetwork implements the Container Network Model (CNM), which offers networking for containers while providing an abstraction that can be used to support multiple network drivers. It relies on 3 major components:

network-2

Sandbox

A Sandbox contains the configuration of a container’s network stack. This includes management of the container’s interfaces, routing table and DNS settings. An implementation of a Sandbox could be a Linux Network Namespace, a FreeBSD Jail or other similar concept. A Sandbox may contain many endpoints from multiple networks.

Endpoint

An Endpoint joins a Sandbox to a Network. An implementation of an Endpoint could be a veth pair, an Open vSwitch internal port or similar. An Endpoint can belong to only one Network and to only one Sandbox.

Network

A Network is a group of Endpoints that are able to communicate with each-other directly. An implementation of a Network could be a Linux bridge, a VLAN, etc. Networks consist of many endpoints.

I spent couple of hours understanding how Docker network actually works.

Let me share my findings with you all:

Installing Docker 1.9 on Ubuntu 14.04.3

Below script will help you setup Docker 1.9 on Ubuntu 14.04.3 on the fly:

root@dell-virtual-machine:~# less script

# Check that HTTPS transport is available to APT
if [ ! -e /usr/lib/apt/methods/https ]; then
apt-get update
apt-get install -y apt-transport-https
fi

# Add the repository to your APT sources
echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list

# Then import the repository key
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

# Install docker
apt-get update
apt-get install -y lxc-docker

#
# Alternatively, just use the curl-able install.sh script provided at https://get.docker.com
#

Executing the script sets you up with Docker 1.9. I could see the lxc-docker package getting installed and the old docker.io package getting removed. Good going…

The following extra packages will be installed:
lxc-docker-1.9.0
The following packages will be REMOVED:
docker.io
The following NEW packages will be installed:
lxc-docker lxc-docker-1.9.0
0 upgraded, 2 newly installed, 1 to remove and 181 not upgraded.
Need to get 8,487 kB of archives.
After this operation, 1,236 kB of additional disk space will be used.
Get:1 https://get.docker.com/ubuntu/ docker/main lxc-docker amd64 1.9.0 [2,092 B]
Fetched 8,487 kB in 40s (211 kB/s)
(Reading database … 209652 files and directories currently installed.)
Removing docker.io (1.6.2~dfsg1-1ubuntu4~14.04.1) …
docker stop/waiting
Processing triggers for man-db (2.6.7.1-1ubuntu1) …
Selecting previously unselected package lxc-docker-1.9.0.
(Reading database … 209562 files and directories currently installed.)
Preparing to unpack …/lxc-docker-1.9.0_1.9.0_amd64.deb …
Unpacking lxc-docker-1.9.0 (1.9.0) …
Selecting previously unselected package lxc-docker.
Preparing to unpack …/lxc-docker_1.9.0_amd64.deb …
Unpacking lxc-docker (1.9.0) …
Processing triggers for man-db (2.6.7.1-1ubuntu1) …
Processing triggers for ureadahead (0.100.0-16) …
Setting up lxc-docker-1.9.0 (1.9.0) …
Installing new version of config file /etc/init.d/docker …
Installing new version of config file /etc/init/docker.conf …
docker start/running, process 1838
Processing triggers for ureadahead (0.100.0-16) …
Setting up lxc-docker (1.9.0) …
root@dell-virtual-machine:~#

Running Docker daemon

root@dell-virtual-machine:~# docker daemon -H unix:///var/run/docker.sock
INFO[0000] API listen on /var/run/docker.sock
INFO[0000] [graphdriver] using prior storage driver "aufs"
INFO[0000] Firewalld running: true
INFO[0000] Default bridge (docker0) is assigned with an IP address 172.17.42.1/16. Daemon option --bip can be used to set a preferred IP address
WARN[0000] Your kernel does not support swap memory limit.
INFO[0000] Loading containers: start.
………………………………………………………………………………………………………………………………….
INFO[0000] Loading containers: done.
INFO[0000] Daemon has completed initialization
INFO[0000] Docker daemon                                 commit=76d6bc9 execdriver=native-0.2 graphdriver=aufs version=1.9.0
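The log above hints at the --bip option; if the default docker0 subnet ever clashes with your LAN, the daemon could be started with a preferred bridge address instead. The subnet below is only an example:

docker daemon --bip=172.30.0.1/16 -H unix:///var/run/docker.sock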

Switch to a new terminal in case you are keen on seeing what goes on under the hood.

Hurray!! a new Docker 1.9 is right there on your Ubuntu box.

root@dell-virtual-machine:/home/dell# docker version
Client:
Version:      1.9.0
API version:  1.21
Go version:   go1.4.3
Git commit:   76d6bc9
Built:        Tue Nov  3 19:20:09 UTC 2015
OS/Arch:      linux/amd64

Server:
Version:      1.9.0
API version:  1.21
Go version:   go1.4.3
Git commit:   76d6bc9
Built:        Tue Nov  3 19:20:09 UTC 2015
OS/Arch:      linux/amd64
root@dell-virtual-machine:/home/dell#

I am keen on seeing what new commands have arrived right here:


root@dell-virtual-machine:/home/dell# docker
Usage: docker [OPTIONS] COMMAND [arg...]
docker daemon [ --help | ... ]
docker [ --help | -v | --version ]

A self-sufficient runtime for containers.

Options:

--config=~/.docker                 Location of client config files
-D, --debug=false                  Enable debug mode
--disable-legacy-registry=false    Do not contact legacy registries
-H, --host=[]                      Daemon socket(s) to connect to
-h, --help=false                   Print usage
-l, --log-level=info               Set the logging level
--tls=false                        Use TLS; implied by --tlsverify
--tlscacert=~/.docker/ca.pem       Trust certs signed only by this CA
--tlscert=~/.docker/cert.pem       Path to TLS certificate file
--tlskey=~/.docker/key.pem         Path to TLS key file
--tlsverify=false                  Use TLS and verify the remote
-v, --version=false                Print version information and quit

Commands:
attach    Attach to a running container
build     Build an image from a Dockerfile
commit    Create a new image from a container’s changes
cp        Copy files/folders between a container and the local filesystem
create    Create a new container
diff      Inspect changes on a container’s filesystem
events    Get real time events from the server
exec      Run a command in a running container
export    Export a container’s filesystem as a tar archive
history   Show the history of an image
images    List images
import    Import the contents from a tarball to create a filesystem image
info      Display system-wide information
inspect   Return low-level information on a container or image
kill      Kill a running container
load      Load an image from a tar archive or STDIN
login     Register or log in to a Docker registry
logout    Log out from a Docker registry
logs      Fetch the logs of a container
network   Manage Docker networks
pause     Pause all processes within a container
port      List port mappings or a specific mapping for the CONTAINER
ps        List containers
pull      Pull an image or a repository from a registry
push      Push an image or a repository to a registry
rename    Rename a container
restart   Restart a container
rm        Remove one or more containers
rmi       Remove one or more images
run       Run a command in a new container
save      Save an image(s) to a tar archive
search    Search the Docker Hub for images
start     Start one or more stopped containers
stats     Display a live stream of container(s) resource usage statistics
stop      Stop a running container
tag       Tag an image into a repository
top       Display the running processes of a container
unpause   Unpause all processes within a container
version   Show the Docker version information
volume    Manage Docker volumes
wait      Block until a container stops, then print its exit code

Run 'docker COMMAND --help' for more information on a command.
root@dell-virtual-machine:/home/dell#

Docker 1.9.0 Client Binary
Docker Machine 0.5.0
Docker Compose 1.5.0
Docker Toolbox 1.9.0
Docker Swarm 1.0.0

Wow…swarm, toolbox, compose, machine all with a new version. Awesome !!
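If the rest of the toolchain is installed alongside the engine, the matching versions can be checked quickly (assuming docker-machine and docker-compose are on your PATH):

docker-machine --version
docker-compose --version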

Let's start playing around with the docker network command. I am going to pull nginx first.

root@dell-virtual-machine:~# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
d0ca40da9e35: Pull complete
d1f66aef36c9: Pull complete
192997133528: Pull complete
c4b09a941684: Pull complete
4174aa7c7be8: Pull complete
0620b22b5443: Pull complete
87c3b9f58480: Pull complete
7d984375a5e7: Pull complete
e491c4f10eb2: Pull complete
edeba58b4ca7: Pull complete
a96311efcda8: Pull complete
914c82c5a678: Pull complete
Digest: sha256:b24651e86659a5d1e4103f8c1ea49567335528281c1678697783ae7569114e1e
Status: Downloaded newer image for nginx:latest
root@dell-virtual-machine:~#

Let’s see what Docker network has to say:

root@dell-virtual-machine:~# docker network ls
NETWORK ID          NAME                DRIVER
69b253ef50d1        bridge              bridge
a743bde2e8b9        web                 bridge
c03582079d99        none                null
9a7a4791e2ec        host                host
root@dell-virtual-machine:~#

Cool…
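A network named web already shows up in the listing above; it would have been created beforehand with something along these lines (the bridge driver is the default, shown explicitly here):

docker network create -d bridge web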

It's time to start nginx on the new network of my interest. I have named it "web" (sounds good, huh!!!).

root@dell-virtual-machine:~# docker run -itd --net=web --name web nginx
2012c42e577b0f0eb4da7cbe7955bd5137021a6851770578a791e4f32c2f677f
root@dell-virtual-machine:~#

Let me check the docker network again.
root@dell-virtual-machine:~# docker network ls
NETWORK ID          NAME                DRIVER
69b253ef50d1        bridge              bridge
a743bde2e8b9        web                 bridge
c03582079d99        none                null
9a7a4791e2ec        host                host
root@dell-virtual-machine:~#

Fair enough. I can see it listed here.

Let's play around with it again. This time let me call the container "newapp" and attach it to a new network named "myapp".

root@dell-virtual-machine:~# docker run -itd --net=myapp --name newapp nginx
23722f062a29e735d027c324968e732124215e028d72d416f601807c5e28d448
root@dell-virtual-machine:~#

Let’s check it again.

root@dell-virtual-machine:~# docker network ls
NETWORK ID          NAME                DRIVER
69b253ef50d1        bridge              bridge
a743bde2e8b9        web                 bridge
c03582079d99        none                null
9a7a4791e2ec        host                host
8fdc0ce3c468        myapp               bridge
root@dell-virtual-machine:~#

Yipee.. there goes my “myapp” listed.

Let us connect my web container to the myapp network as shown:

root@dell-virtual-machine:~# docker network connect myapp web
root@dell-virtual-machine:~# docker network ls
NETWORK ID          NAME                DRIVER
c03582079d99        none                null
9a7a4791e2ec        host                host
8fdc0ce3c468        myapp               bridge
69b253ef50d1        bridge              bridge
a743bde2e8b9        web                 bridge
root@dell-virtual-machine:~#

Good. Let us see whether inspect works for the network command too.

root@dell-virtual-machine:~# docker network inspect myapp
[
    {
        "Name": "myapp",
        "Id": "8fdc0ce3c468e8fccd513acc63171e168a823f80d61aca3529605961c5b96aab",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "2012c42e577b0f0eb4da7cbe7955bd5137021a6851770578a791e4f32c2f677f": {
                "EndpointID": "fa6efb254809007debb75ea3ce694624809452466a14a1306844eaf97ca2094a",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "23722f062a29e735d027c324968e732124215e028d72d416f601807c5e28d448": {
                "EndpointID": "35cfb86fe4c7bab266a1e671a37fc3ea28fb5382ccd7e1d032f6d1c53b50e509",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
root@dell-virtual-machine:~#

root@dell-virtual-machine:~# docker network inspect web
[
    {
        "Name": "web",
        "Id": "a743bde2e8b912838dc1216b338b367b0c8fc9f224c7625f1078fbf96a7990ef",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "2012c42e577b0f0eb4da7cbe7955bd5137021a6851770578a791e4f32c2f677f": {
                "EndpointID": "e3297e59f8613806ad1e4d9fb505f9636e581ad9986c3e5bbd2b1391d0d488ed",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]

This is super cool. I can see detailed information about all the containers tied to my web application.
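Since the web container is now attached to both networks, a quick reachability check could look like this (assuming the second container really was named newapp, as above, and that ping is available inside the image):

# the engine adds host entries for peers on the same user-defined network
docker exec web cat /etc/hosts

# then reach the peer by name
docker exec web ping -c 2 newapp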

How about the bridge network? Let's try it:

root@dell-virtual-machine:~# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "69b253ef50d1640934b467c9a1ced5dee1b187082fa95da9ed6c9e1e9eb972bb",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.42.1/16",
                    "Gateway": "172.17.42.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]
root@dell-virtual-machine:~#

root@dell-virtual-machine:~# docker network inspect host
[
    {
        "Name": "host",
        "Id": "9a7a4791e2ecc6b745721109d7d77c4ef5fa601e3b43c4b6415fd4851351d759",
        "Scope": "local",
        "Driver": "host",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    }
]
root@dell-virtual-machine:~#
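For completeness, tearing things down follows the same pattern; a minimal sketch:

# detach the container from the network, then remove the network itself
docker network disconnect myapp web
docker network rm myapp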

I think the new "docker network" command is simply awesome. Docker is all about applications, and the Docker folks have done the right thing by concentrating completely on application design. This is surely going to be an amazing tool for developers, as they no longer have to worry about the "network administrator" job. It's all purely a "DevOps re-org".

I will be back with more exploration of "Docker Networking".