Introducing RexRay 0.8 with the Docker 17.03 Managed Plugin System for Persistent Storage on Cloud Platforms

Estimated Reading Time: 5 minutes

DellEMC announced the final release of Rex-Ray 0.8 last week. Having graduated as a top-level project within the {code} community, RexRay 0.8 is considered one of its largest releases to date. The new release introduces support for a long list of new storage platforms such as S3FS, EBS, EFS, GCEPD & ScaleIO, shown below:

rexray010

Public cloud storage is one of the fastest growing sectors in storage, with leaders like Amazon AWS, Google Cloud Storage and Microsoft Azure. With the release of RexRay 0.8, the {code} community took the right approach, starting with the community-contributed Amazon EFS driver and then quickly adding further community-contributed drivers such as Digital Ocean, FittedCloud, Google Compute Engine (GCEPD) & the Microsoft Azure Unmanaged Disk driver.

Introducing New Docker 17.03 Volume Plugin System

The Docker 17.03 release introduces a new managed plugin system, which is quite different from the old Docker plugin system. Plugins are now distributed as Docker images and can be hosted on Docker Hub or on a private registry. A volume plugin enables Docker volumes to persist across multiple Docker hosts.
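If you have not used the managed plugin system yet, the entire lifecycle is driven by the new docker plugin sub-commands. Here is a minimal sketch, using the rexray/gcepd plugin that we install later in this post as the example:

$ docker plugin install rexray/gcepd     # pulls the plugin image and prompts you to grant the required permissions
$ docker plugin ls                       # lists installed plugins and shows whether they are enabled
$ docker plugin disable rexray/gcepd     # disables the plugin without removing it
$ docker plugin rm rexray/gcepd          # removes the plugin completely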

rex01

In case you are very new to Docker plugins, they basically extend Docker’s functionality. A plugin is a process, running on the same or a different host as the docker daemon, which registers itself by placing a file on the docker host in one of the plugin directories: a .sock file (a UNIX domain socket placed under /run/docker/plugins), a .spec file (a text file containing a URL, such as unix:///other.sock or tcp://localhost:8080, placed under /etc/docker/plugins or /usr/lib/docker/plugins), or a .json file (a text file containing a full JSON specification for the plugin, placed under /etc/docker/plugins). You can refer to the official plugin documentation in case you want to develop your own Docker volume plugin.
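For instance, a minimal .spec discovery file for a hypothetical volume plugin (the plugin name myvolumes and its address are made up purely for illustration) contains nothing but the URL the plugin listens on:

$ cat /etc/docker/plugins/myvolumes.spec
tcp://localhost:8080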

Running RexRay inside a Docker container

Yes, you read that right! With the introduction of the Docker 17.03 managed plugin system, you can now run RexRay inside a Docker container flawlessly. The Rex-Ray volume plugin is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

You can list the available Docker volume plugins for various storage platforms using docker search rexray as shown below:

rexray_plugin2

Let us test-drive the RexRay volume plugin on a Swarm Mode cluster for the first time. I have a 4-node Swarm Mode cluster running on Google Cloud Platform as shown below:

rex1

Verify that all the cluster nodes are running the latest 17.03.0-ce (Community Edition).

Installing the RexRay volume plugin is just a one-liner:

rexray_plugininstallation
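For reference, the one-liner looks like this (the same command is pushed across the whole cluster via swarm-exec later in this post; GCEPD_TAG is simply the tag RexRay applies to the GCE disks it manages):

$ docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray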

 

You can inspect the Rex-Ray volume plugin using the docker plugin inspect command:

rexray_inspect    rexray_inspect22
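If you only need a quick check rather than the full JSON dump, something along these lines (illustrative only) works too:

$ docker plugin ls
$ docker plugin inspect rexray/gcepd --format '{{ .Enabled }}'    # should print true once the plugin is active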

It’s time to create a volume using the docker volume create utility:

$ sudo docker volume create --driver rexray/gcepd --name storage1 --opt=size=32

rexray_volume1

You can verify if it is visible under GCE Console window:

RexRay_GCE

Let us try running an application which uses the RexRay volume plugin, as shown below:

$ docker run -dit --name mydb -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --volume-driver=rexray/gcepd -v dbdata:/var/lib/mysql mysql:5.7

Verify that the MySQL service is up and running using the docker logs <container-id> command as shown below:

dbdata1

By now, we should be able to see a new volume called “dbdata” created and listed under the docker volume ls command:

rexray_dbdata

Under GCE console, it should get displayed too:

dbdata_gce

Using Rex-Ray Volume Plugin under Docker 17.03 Swarm Mode

This is the most interesting section of this blog post. The RexRay volume plugin has worked great for us so far, especially for a single Docker host running a number of services. But what if I want RexRay volumes to persist across multiple Docker hosts (a Swarm Mode cluster)? Yes, there is one possible way to achieve this – using Swarm Executor, which executes a docker command across the swarm cluster. Credits to Madhu Venugopal @ Docker Team for assisting me in testing this tool.

swarmexec_1

Please remember that this is an UNOFFICIAL way of rolling out a volume plugin across a swarm cluster. I found this tool really cool and hope that it gets integrated into the official Docker repository.

First, we need to clone this repository:

$ git clone https://github.com/mavenugo/swarm-exec

Run the below command to push the plugin across the swarm cluster:

$ cd swarm-exec

$ ./swarm-exec.sh docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray

swarm_exec01

First, let’s quickly verify the plugin on the master node as shown below:

swarm_exec2

Next, verify it on the worker nodes:

swarm_exec3

rexray_gce11

The docker volume inspect <volname> command should show that this particular volume uses the rexray volume driver, as shown below:

rexray_gce222
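For comparison, the output should look roughly like this (an illustrative sketch: the ID, mountpoint and exact driver tag will differ on your setup):

$ docker volume inspect dbdata
[
    {
        "Driver": "rexray/gcepd:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/rexray/volumes/dbdata/data",
        "Name": "dbdata",
        "Options": {},
        "Scope": "global"
    }
]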

Creating a MySQL service which uses Rex-Ray volume under Swarm Mode cluster:

up_service
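In case the screenshot is hard to read, the service creation command looks roughly like this (a sketch: the service and volume names mirror the earlier example, and the --mount form with volume-driver is the Docker 17.03 syntax; replace the credentials with your own):

$ docker service create --name mydb --replicas 1 \
    -e MYSQL_ROOT_PASSWORD=wordpress \
    --mount type=volume,source=dbdata,target=/var/lib/mysql,volume-driver=rexray/gcepd \
    mysql:5.7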

Verifying that the service is up and running:

rexray_100

To conclude, the new version of RexRay looks promising and brings support for various cloud storage platforms. It continues to be a leading open source container storage orchestration engine, and now, with the inclusion of the Docker 17.03 managed plugin architecture, it will definitely reduce the pain of implementing a persistent storage solution.

Docker 1.12 Networking Model Overview

Estimated Reading Time: 8 minutes

“The Best way to orchestrate Docker is Docker”

In our previous post, we talked about Swarm Mode’s built-in orchestration and distribution engine. Docker’s new deployment API objects like Service and Node, built-in multi-host and multi-container cluster management integrated with Docker Engine, decentralized design, declarative service model, scaling and resiliency, desired state reconciliation, service discovery, load balancing, out-of-the-box security by default, rolling updates and more together make Docker 1.12 an all-in-one automated deployment and management tool for Dockerized distributed applications and microservices at scale in production.

1st

 

In this post, we are going to deep-dive into the Docker 1.12 networking model. I have a 5-node swarm cluster test environment as shown below:

Docker112

If you SSH to test-master1 and check the default network layout:

networkls

Every task container can end up with an IP address on three networks:

  1. Ingress
  2. docker_gwbridge
  3. user-defined overlay

Ingress Networking:

The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort from the 30000-32767 default range, or you can configure a PublishedPort for the service yourself. What this actually means is that network ingress into the cluster is based on a node port model in which each service, unless given an explicit port, is assigned a cluster-wide reserved port from that range. Every node in the cluster then listens on this port and routes traffic for that service, irrespective of whether that particular worker node is actually running the specified service.

It is important to note that only those services that have a port published (using the -p option) require the ingress network. For backend services which don’t publish ports, the corresponding containers are NOT attached to the ingress network.

External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster, whether or not the node is currently running a task for the service. All nodes in the swarm cluster route ingress connections to a running task instance. Hence, ingress follows a node port model in which each service has the same port on every node in the cluster.
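As a quick sketch of what that looks like in practice (the service name web and the port numbers are just examples):

$ docker service create --name web --replicas 2 -p 8080:80 nginx
# Port 8080 is now reserved cluster-wide; every swarm node answers on it:
$ curl http://<ip-of-any-swarm-node>:8080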

docker_gwbridge:

The docker_gwbridge network is added only for non-internal networks (internal networks can be created with the “--internal” option). Containers connected to a multi-host overlay network are automatically connected to the docker_gwbridge network. This network allows the containers to have external connectivity outside of their cluster, and is created on each worker node.

Docker Engine gives you the flexibility to create this docker_gwbridge by hand instead of letting the daemon create it automatically. In case you want Docker to create the docker_gwbridge network in a desired subnet, you can tweak it as shown below:

$ docker network create --subnet={your preferred subnet} -o com.docker.network.bridge.enable_icc=false -o com.docker.network.bridge.name=docker_gwbridge docker_gwbridge

User-defined Overlay:

This is the overlay network that the user has specified the container should be on. In our upcoming example, we will call it mynet. A container can be on multiple user-defined overlays.

 

Enough with the theoretical aspects!! Let’s try out the networking model hands-on.

As shown below, I have a 3-node swarm cluster with 1 master and 2 worker nodes.

pic1

I created a user-defined overlay through the below command:

$ sudo docker network create -d overlay mynet

I can see that the new overlay network gets listed under the “swarm” scope (as we are already in swarm mode), as shown below:

pic8

I have a service “frontier” running tasks on node1, node2 and master1 as shown below:

pic2

pic18

We can check the containers running on node-1 and node-2 respectively:

pic3

pic4

Meanwhile, I added a new node-3 and then scaled the service to 10 replicas.

pic5

Now I can see that the containers are scaled across the swarm cluster.

To look into how overlay networking works, let us target the 4th Node and add it to the Swarm Cluster.

pic6

Now, the node list gets updated as shown below:

pic9

Whenever you add a new node to the swarm cluster, the mynet overlay network does not automatically show up on it, as shown below:

pic7

The overlay network only shows up on a node once a new task is assigned to it, and this happens on demand.

Let us try to scale our old service and see if node-4’s network layout gets updated with the mynet network.

Earlier, we had 10 replicas running, scaled across master1, node1, node2 and node3. Once we scale the service to 20, the swarm engine will spread the tasks across all the nodes, as shown below:

pic10

Let us now check the network layout at node-4:

Hence, the overlay network gets created on-demand whenever the new task is assigned to this node.

Self-Healing:

Swarm nodes are “self-organizing and self-healing”. What does that mean? Whenever a node or a container crashes or suffers a sudden unplanned shutdown, the swarm engine attempts to correct things and make them right again. Let us look into this aspect in detail:

As we see above, here is an example of nodes and running tasks:

Master-1 running 4 tasks

Node-1 running 4 tasks

Node-2 running 4 tasks

Node-3 running 4 tasks

Node-4 running 4 tasks

Now let’s bring down node-4.

pic13

As soon as all the containers running on node-4 are stopped, the swarm engine starts another 4 containers with different IDs on the same node.

pic15

This demonstrates the self-healing aspect of the Docker swarm engine.

Self-Organizing:

Let’s try bringing down node-4 completely. As soon as you bring down node-4, the containers that were running on it get started on the other nodes automatically.

pic16

 

Master-1 running 5 tasks

Node-1 running 5 tasks

Node-2 running 5 tasks

Node-3 running 5 tasks

Hence, this is how the 20 tasks get re-organized, now scaled across master1, node1, node2 and node3.

Global Services:

This option runs a task of the service on every node in the cluster. You can create a service with the --mode global option to enable this functionality, as shown below:

pic17
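For reference, a global service is created roughly like this (the service name and image are only examples):

$ docker service create --mode global --name node-agent alpine ping docker.com
$ docker service ps node-agent     # one task per node, including nodes that join later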

Constraints:

There are scenarios when you want to segregate workloads on your cluster, where you specifically want workloads to go only to a certain set of nodes. One example I have pulled from a DockerCon slide shows an SSD-based constraint, which can be applied as shown below:

pic24
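Here is a minimal sketch of that pattern (the label disk=ssd, the node name and the service are assumptions for illustration): label the SSD-backed nodes, then constrain the service to them:

$ docker node update --label-add disk=ssd node-1
$ docker service create --name cache --constraint 'node.labels.disk == ssd' redis:3.0.6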

 

Routing Mesh 

We have reached the last topic of this blog post, and it wouldn’t be complete without talking about the routing mesh. I have pulled out the presentation demonstrated at DockerCon 2016, which clearly shows how the routing mesh works.

~ Source: DockerCon 2016

To understand how the routing mesh works, suppose there is one manager node and 3 worker nodes serving myapp:80. Whenever a user tries to access myapp:80 on the exposed port, they might, because of an external load balancer, happen to hit worker-2, and that works out fine because worker-2 has 2 copies of the frontend container and can serve the request without any issue. Now imagine a scenario where the user accesses myapp:80 and gets redirected to worker-3, which currently has no copy of the container. This is where the routing mesh technology comes into the picture. Even though worker-3 has no copy of the container, the Docker swarm engine re-routes the traffic to worker-2, which has the necessary copies to serve it. Your external load balancer doesn’t need to understand where the container is running; the routing mesh takes care of that automatically. In short, the container-aware routing mesh transparently reroutes the traffic from node-3 to a node that is running the container (node-2), as shown above. Internally, Docker Engine allocates a cluster-wide port, maps that port to the containers of a service, and the routing mesh routes traffic to the containers of that service by exposing that port on every node in the swarm. A quick way to observe this yourself is sketched below.
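A possible walk-through (a sketch: myapp and the published port are only examples) is to publish a port and then curl a node that runs no task of the service:

$ docker service create --name myapp --replicas 2 -p 80:80 nginx
$ docker service ps myapp          # note which nodes actually run a task
$ curl http://<ip-of-a-node-without-a-task>:80     # the routing mesh still forwards the request to a node that does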

In our next post, we will talk about the routing mesh in detail and cover the volume aspects of Swarm Mode. Till then, Happy Swarming!!!

New Container Network Model @ Docker 1.9

Estimated Reading Time: 8 minutes

Docker 1.9’s new networking is Software Defined Networking (SDN) for containers. Pushing the experimental version to the public was the right thing Docker Inc. did a few months back, and now that it is production ready, it is surely going to make Docker an enterprise-ready product. With SDN, developers get the flexibility to network their apps the way they want without having to wait on the network operations team.

Docker 1.9 brings a totally new way of getting started with networking straight away by using the new docker network command. In Docker 1.9, networking is ready to use in production and works with Swarm and Compose. Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don’t have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are – thus enabling true distributed applications.

Docker networking allows containers to connect to each other across different physical or virtual hosts. An interesting point is that networked containers can be easily stopped, started and restarted without disrupting their connections to other containers – you don’t need to create a container before you can link to it. With networking, containers can be created in any order and discover each other using their container names, as the short sketch below shows.
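For instance (a minimal sketch: the network and container names app-net, db and client are made up for illustration):

$ docker network create app-net
$ docker run -itd --net=app-net --name db redis
$ docker run --rm --net=app-net --name client alpine ping -c 3 db    # db is resolved by its container name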

Libnetwork implements the Container Network Model (CNM), which offers networking for containers while providing an abstraction that can be used to support multiple network drivers. It is built on 3 major components:

network-2

Sandbox

A Sandbox contains the configuration of a container’s network stack. This includes management of the container’s interfaces, routing table and DNS settings. An implementation of a Sandbox could be a Linux Network Namespace, a FreeBSD Jail or other similar concept. A Sandbox may contain many endpoints from multiple networks.

Endpoint

An Endpoint joins a Sandbox to a Network. An implementation of an Endpoint could be a veth pair, an Open vSwitch internal port or similar. An Endpoint can belong to only one Network and only one Sandbox.

Network

A Network is a group of Endpoints that are able to communicate with each-other directly. An implementation of a Network could be a Linux bridge, a VLAN, etc. Networks consist of many endpoints.

I spent a couple of hours understanding how Docker networking actually works.

Let me share my findings with you all:

Installing Docker 1.9 on Ubuntu 14.04.3

The below script will help you set up Docker 1.9 on Ubuntu 14.04.3 on the fly:

root@dell-virtual-machine:~# less script

# Check that HTTPS transport is available to APT
if [ ! -e /usr/lib/apt/methods/https ]; then
apt-get update
apt-get install -y apt-transport-https
fi

# Add the repository to your APT sources
echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list

# Then import the repository key
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

# Install docker
apt-get update
apt-get install -y lxc-docker

#
# Alternatively, just use the curl-able install.sh script provided at https://get.docker.com
#

Executing the script will get you Docker 1.9. I could see lxc-docker getting installed and the docker.io package being removed. Good going…

The following extra packages will be installed:
lxc-docker-1.9.0
The following packages will be REMOVED:
docker.io
The following NEW packages will be installed:
lxc-docker lxc-docker-1.9.0
0 upgraded, 2 newly installed, 1 to remove and 181 not upgraded.
Need to get 8,487 kB of archives.
After this operation, 1,236 kB of additional disk space will be used.
Get:1 https://get.docker.com/ubuntu/ docker/main lxc-docker amd64 1.9.0 [2,092 B]
Fetched 8,487 kB in 40s (211 kB/s)
(Reading database … 209652 files and directories currently installed.)
Removing docker.io (1.6.2~dfsg1-1ubuntu4~14.04.1) …
docker stop/waiting
Processing triggers for man-db (2.6.7.1-1ubuntu1) …
Selecting previously unselected package lxc-docker-1.9.0.
(Reading database … 209562 files and directories currently installed.)
Preparing to unpack …/lxc-docker-1.9.0_1.9.0_amd64.deb …
Unpacking lxc-docker-1.9.0 (1.9.0) …
Selecting previously unselected package lxc-docker.
Preparing to unpack …/lxc-docker_1.9.0_amd64.deb …
Unpacking lxc-docker (1.9.0) …
Processing triggers for man-db (2.6.7.1-1ubuntu1) …
Processing triggers for ureadahead (0.100.0-16) …
Setting up lxc-docker-1.9.0 (1.9.0) …
Installing new version of config file /etc/init.d/docker …
Installing new version of config file /etc/init/docker.conf …
docker start/running, process 1838
Processing triggers for ureadahead (0.100.0-16) …
Setting up lxc-docker (1.9.0) …
root@dell-virtual-machine:~#

Running Docker daemon

root@dell-virtual-machine:~# docker daemon -H unix:///var/run/docker.sock
INFO[0000] API listen on /var/run/docker.sock
INFO[0000] [graphdriver] using prior storage driver "aufs"
INFO[0000] Firewalld running: true
INFO[0000] Default bridge (docker0) is assigned with an IP address 172.17.42.1/16. Daemon option --bip can be used to set a preferred IP address
WARN[0000] Your kernel does not support swap memory limit.
INFO[0000] Loading containers: start.
………………………………………………………………………………………………………………………………….
INFO[0000] Loading containers: done.
INFO[0000] Daemon has completed initialization
INFO[0000] Docker daemon                                 commit=76d6bc9 execdriver=native-0.2 graphdriver=aufs version=1.9.0

Switch to a new terminal in case you want to keep an eye on what goes on under the hood.

Hurray!! A new Docker 1.9 is right there on your Ubuntu box.

root@dell-virtual-machine:/home/dell# docker version
Client:
Version:      1.9.0
API version:  1.21
Go version:   go1.4.3
Git commit:   76d6bc9
Built:        Tue Nov  3 19:20:09 UTC 2015
OS/Arch:      linux/amd64

Server:
Version:      1.9.0
API version:  1.21
Go version:   go1.4.3
Git commit:   76d6bc9
Built:        Tue Nov  3 19:20:09 UTC 2015
OS/Arch:      linux/amd64
root@dell-virtual-machine:/home/dell#

I am keen on seeing what new commands have arrived:

docker-network

root@dell-virtual-machine:/home/dell# docker
Usage: docker [OPTIONS] COMMAND [arg...]
docker daemon [ --help | ... ]
docker [ --help | -v | --version ]

A self-sufficient runtime for containers.

Options:

--config=~/.docker                 Location of client config files
-D, --debug=false                  Enable debug mode
--disable-legacy-registry=false    Do not contact legacy registries
-H, --host=[]                      Daemon socket(s) to connect to
-h, --help=false                   Print usage
-l, --log-level=info               Set the logging level
--tls=false                        Use TLS; implied by --tlsverify
--tlscacert=~/.docker/ca.pem       Trust certs signed only by this CA
--tlscert=~/.docker/cert.pem       Path to TLS certificate file
--tlskey=~/.docker/key.pem         Path to TLS key file
--tlsverify=false                  Use TLS and verify the remote
-v, --version=false                Print version information and quit

Commands:
attach    Attach to a running container
build     Build an image from a Dockerfile
commit    Create a new image from a container’s changes
cp        Copy files/folders between a container and the local filesystem
create    Create a new container
diff      Inspect changes on a container’s filesystem
events    Get real time events from the server
exec      Run a command in a running container
export    Export a container’s filesystem as a tar archive
history   Show the history of an image
images    List images
import    Import the contents from a tarball to create a filesystem image
info      Display system-wide information
inspect   Return low-level information on a container or image
kill      Kill a running container
load      Load an image from a tar archive or STDIN
login     Register or log in to a Docker registry
logout    Log out from a Docker registry
logs      Fetch the logs of a container
network   Manage Docker networks
pause     Pause all processes within a container
port      List port mappings or a specific mapping for the CONTAINER
ps        List containers
pull      Pull an image or a repository from a registry
push      Push an image or a repository to a registry
rename    Rename a container
restart   Restart a container
rm        Remove one or more containers
rmi       Remove one or more images
run       Run a command in a new container
save      Save an image(s) to a tar archive
search    Search the Docker Hub for images
start     Start one or more stopped containers
stats     Display a live stream of container(s) resource usage statistics
stop      Stop a running container
tag       Tag an image into a repository
top       Display the running processes of a container
unpause   Unpause all processes within a container
version   Show the Docker version information
volume    Manage Docker volumes
wait      Block until a container stops, then print its exit code

Run 'docker COMMAND --help' for more information on a command.
root@dell-virtual-machine:/home/dell#

Docker 1.9.0 Client Binary
Docker Machine 0.5.0
Docker Compose 1.5.0
Docker Toolbox 1.9.0
Docker Swarm 1.0.0

Wow…swarm, toolbox, compose, machine all with a new version. Awesome !!

Let’s start playing around with the docker network command. I am going to pull nginx first.

root@dell-virtual-machine:~# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
d0ca40da9e35: Pull complete
d1f66aef36c9: Pull complete
192997133528: Pull complete
c4b09a941684: Pull complete
4174aa7c7be8: Pull complete
0620b22b5443: Pull complete
87c3b9f58480: Pull complete
7d984375a5e7: Pull complete
e491c4f10eb2: Pull complete
edeba58b4ca7: Pull complete
a96311efcda8: Pull complete
914c82c5a678: Pull complete
Digest: sha256:b24651e86659a5d1e4103f8c1ea49567335528281c1678697783ae7569114e1e
Status: Downloaded newer image for nginx:latest
root@dell-virtual-machine:~#

Let’s see what Docker network has to say:

root@dell-virtual-machine:~# docker network ls
NETWORK ID          NAME                DRIVER
69b253ef50d1        bridge              bridge
a743bde2e8b9        web                 bridge
c03582079d99        none                null
9a7a4791e2ec        host                host
root@dell-virtual-machine:~#

Cool…

It’s time to start nginx on the new network of my interest. I will name it “web” (sounds good, huh!!!).

root@dell-virtual-machine:~# docker run -itd --net=web --name web nginx
2012c42e577b0f0eb4da7cbe7955bd5137021a6851770578a791e4f32c2f677f
root@dell-virtual-machine:~#

Let me check the docker network again.
root@dell-virtual-machine:~# docker network ls
NETWORK ID          NAME                DRIVER
69b253ef50d1        bridge              bridge
a743bde2e8b9        web                 bridge
c03582079d99        none                null
9a7a4791e2ec        host                host
root@dell-virtual-machine:~#

Fair enough. I can see it listed here.

Let’s play around with it again. This time I will use another network called “myapp”.

root@dell-virtual-machine:~# docker run -itd --net=myapp nginx
23722f062a29e735d027c324968e732124215e028d72d416f601807c5e28d448
root@dell-virtual-machine:~#

Let’s check it again.

root@dell-virtual-machine:~# docker network ls
NETWORK ID          NAME                DRIVER
69b253ef50d1        bridge              bridge
a743bde2e8b9        web                 bridge
c03582079d99        none                null
9a7a4791e2ec        host                host
8fdc0ce3c468        myapp               bridge
root@dell-virtual-machine:~#

Yipee.. there goes my “myapp” listed.

Let us connect my web container to the myapp network as shown:

root@dell-virtual-machine:~# docker network connect myapp web
root@dell-virtual-machine:~# docker network ls
NETWORK ID          NAME                DRIVER
c03582079d99        none                null
9a7a4791e2ec        host                host
8fdc0ce3c468        myapp               bridge
69b253ef50d1        bridge              bridge
a743bde2e8b9        web                 bridge
root@dell-virtual-machine:~#

Good. Let us try to see if inspect works for the network command too.

root@dell-virtual-machine:~# docker network inspect myapp
[
    {
        "Name": "myapp",
        "Id": "8fdc0ce3c468e8fccd513acc63171e168a823f80d61aca3529605961c5b96aab",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "2012c42e577b0f0eb4da7cbe7955bd5137021a6851770578a791e4f32c2f677f": {
                "EndpointID": "fa6efb254809007debb75ea3ce694624809452466a14a1306844eaf97ca2094a",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "23722f062a29e735d027c324968e732124215e028d72d416f601807c5e28d448": {
                "EndpointID": "35cfb86fe4c7bab266a1e671a37fc3ea28fb5382ccd7e1d032f6d1c53b50e509",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
root@dell-virtual-machine:~#

root@dell-virtual-machine:~# docker network inspect web
[
    {
        "Name": "web",
        "Id": "a743bde2e8b912838dc1216b338b367b0c8fc9f224c7625f1078fbf96a7990ef",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "2012c42e577b0f0eb4da7cbe7955bd5137021a6851770578a791e4f32c2f677f": {
                "EndpointID": "e3297e59f8613806ad1e4d9fb505f9636e581ad9986c3e5bbd2b1391d0d488ed",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]

This is super cool. I can see detailed information of all the containers tied to my web application.

How about the bridge network? Let’s try it:

root@dell-virtual-machine:~# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "69b253ef50d1640934b467c9a1ced5dee1b187082fa95da9ed6c9e1e9eb972bb",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.42.1/16",
                    "Gateway": "172.17.42.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]
root@dell-virtual-machine:~#

root@dell-virtual-machine:~# docker network inspect host
[
    {
        "Name": "host",
        "Id": "9a7a4791e2ecc6b745721109d7d77c4ef5fa601e3b43c4b6415fd4851351d759",
        "Scope": "local",
        "Driver": "host",
        "IPAM": {
            "Driver": "default",
            "Config": []
        },
        "Containers": {},
        "Options": {}
    }
]
root@dell-virtual-machine:~#

I think the new “docker network” command is simply awesome. Docker is all about applications, and the Docker folks have done the right job of concentrating completely on application design. This is surely going to be an amazing tool for developers, as they now don’t have to worry about the “Network Administrator” job. It’s all purely a “Dev-Ops Re-Org”.

Will be back with more exploration on “Docker Networking”.