Implementing Docker-Datacenter-In-A-Box (Container-as-a-Service) on VMware ESXi platform

Estimated Reading Time: 8 minutes

A critical capability for any enterprise IT organization, whether it runs multiple data centers, a hybrid cloud, or multiple cloud providers, is the ability to migrate workloads from one environment to another without causing application issues. With Docker Datacenter (CaaS in a box), you can abstract the infrastructure away from the application, so application containers can run anywhere and remain portable across any infrastructure, from on-premises datacenters to public clouds, across a vast array of network and storage providers.

As per Docker Inc.: “Docker Datacenter is an integrated solution including open source and commercial software, the integrations between them, full Docker API support, validated configurations and commercial support for your Docker Datacenter environment. A pluggable architecture allows flexibility in compute, networking and storage providers used in your CaaS infrastructure without disrupting the application code. Enterprises can leverage existing technology investments with Docker Datacenter. The open APIs allow Docker Datacenter CaaS to easily integrate into your existing systems like LDAP/AD, monitoring, logging and more.”

Docker_UCP

Basically, Docker CaaS is an IT Ops-managed and secured application environment of infrastructure and content that allows developers to build and deploy applications in a self-service manner.
With Docker Datacenter, IT ops teams can store their IP and keep their management plane on premises (datacenter or VPC). It's an equal treat for developers too: they can leverage trusted base content and build and ship applications freely as needed, without worrying about altering the code to deploy in production.

Docker Datacenter comes with the integrated components and new capabilities shown below:

NewDE1111


DDC_Image1

To get started with the implementation, I used three VMs (running on my ESXi 6.0 host), each running Docker 1.11.1.

DDC_ESXi

The whole idea is to install Universal Control Plane (UCP) on the master node and then join Docker Engines (client nodes) to the Swarm cluster with just a few commands in the CLI. Once it is up and running, use the intuitive web admin UI to configure your system settings, integrate with LDAP/AD, add users, or connect your Trusted Registry. End users can also use the web UI to interact with applications, containers, networks, volumes and images as part of their development process. By the end of this blog post, you will realize it's just a matter of a few commands to set up DDC-in-a-box.
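The flow described above boils down to three commands. Here is a minimal sketch of the sequence (the controller address 10.94.214.195 is the one used later in this walkthrough); since these commands need to run on the actual ESXi VMs, the script below only prints them:

```shell
# Sketch of the end-to-end DDC setup. Each step must be run on the
# appropriate host, so we assemble the sequence here and print it.
STEPS=$(cat <<'EOF'
# 1. On every node: install Docker Engine
curl -fsSL https://get.docker.com/ | sh

# 2. On the controller node: install UCP
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i --host-address 10.94.214.195

# 3. On each client node: join it to the cluster
docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join --interactive --url https://10.94.214.195
EOF
)
echo "$STEPS"
```

Each of these steps is walked through in detail below.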


DDC_35

Setting up the UCP Controller Node:

I picked an Ubuntu 14.04 system for the UCP controller node. First, install Docker 1.11.1 on this system using the command below:

# curl -fsSL https://get.docker.com/ | sh

My machine showed the following docker version information:

Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:30:23 2016
OS/Arch: linux/amd64

Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:30:23 2016
OS/Arch: linux/amd64

You install UCP by using the Engine CLI to run the ucp tool. The ucp tool is an image with subcommands to install a UCP controller or join a node to a UCP controller. Let's set up the UCP controller node first, as shown below:

root@ucp-client1:~# docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install -i --host-address 10.94.214.195

Once this command succeeds, add your user to the “docker” group as shown below (log out and back in, or run newgrp docker, for the change to take effect):

$ sudo usermod -aG docker <user>

I had added the user “dell” to this group, so it shows up in the group list:

dell@dell-virtual-machine:~$ id
uid=1000(dell) gid=1000(dell) groups=1000(dell),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),108(lpadmin),124(sambashare),999(docker)
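A quick way to confirm the group change took effect is to look for “(docker)” in the id output. A minimal sketch that parses the sample output above (on a real host you would feed in `id` directly):

```shell
# Check that the 'docker' group appears in the groups list.
# ID_OUTPUT is the (abridged) sample from this walkthrough.
ID_OUTPUT='uid=1000(dell) gid=1000(dell) groups=1000(dell),4(adm),27(sudo),999(docker)'
case "$ID_OUTPUT" in
  *'(docker)'*) STATUS=present ;;
  *)            STATUS=missing ;;
esac
echo "docker group: $STATUS"
```

If the group is missing even after usermod, remember that group membership is only re-read at login.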


UCP_Image01


Attaching UCP Nodes to the Controller Node

Browse to the UCP Controller web UI > Nodes > Add Node as shown below:

UCP_Node1

I assume Docker 1.11 is already installed on the UCP client node to be attached to the controller node.

Let's run the command below to join it to the cluster. The --fingerprint value is the UCP controller's CA certificate fingerprint, which you can print on the controller with the docker/ucp fingerprint subcommand:

root@ucp-client1:~# docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp join --admin-username admin --interactive --url https://10.94.214.195 --fingerprint 11:43:43:18:F2:82:D7:80:E7:8E:2C:2C:4A:F5:27:A0:C9:A2:FC:DC:E8:3E:62:56:15:BC:7F:FA:CE:0B:8D:C2

…..

This ends with the following success messages:

INFO[0011] Starting local swarm containers

INFO[0013] New configuration established.  Signalling the daemon to load it…

INFO[0014] Successfully delivered signal to daemon

UCP_Node12

You can check that this node is part of the UCP cluster as shown below:

root@ucp-client1:~# docker info | tail -3

WARNING: No swap limit support

Registry: https://index.docker.io/v1/

Cluster store: etcd://10.94.214.195:12379

Cluster advertise: 10.94.214.208:12376

Similarly, add the third node to the controller, and you will see it displayed in the UCP web UI:

UCP_Node3


If you check which containers are running on your client nodes:

[root@ucp-client2 ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                     NAMES
d862f8d89841        docker/ucp-swarm:1.1.0   "/swarm join --discov"   38 minutes ago      Up 38 minutes       2375/tcp                  ucp-swarm-join
a519b9a913d4        docker/ucp-proxy:1.1.0   "/bin/run"               38 minutes ago      Up 38 minutes       0.0.0.0:12376->2376/tcp   ucp-proxy

How UCP handles a newly created container is a natural question for anyone deploying DDC for the first time. Let's try creating a Nagios container and see how DDC actually handles it.

Browse to Dashboard > Containers > Deploy and let’s create a new container called nagios_ajeetraina as shown below:

DDC_1

Ensure that port 80 is listed under the exposed ports as shown below:


DDC_2


As shown below, the Nagios container is built from the ajeetraina/nagios image on Docker Hub.

DDC_3

You can see the complete status details of the Nagios container under the Containers section:

DDC-4

As you scroll down, the ports information is rightly shown:

DDC-5
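
The same deployment can also be driven from the CLI instead of the web UI, by pointing a Docker client at the UCP controller through a client certificate bundle downloaded from your UCP user profile page. A sketch of the steps (ucp-bundle-admin.zip is UCP's default bundle name); since these commands need the live cluster, the script below only prints them:

```shell
# Assemble and print the CLI-based deployment steps; they must be run
# on a machine with a UCP client bundle downloaded from the web UI.
CLI_STEPS=$(cat <<'EOF'
# Unpack the client bundle and load its environment:
unzip ucp-bundle-admin.zip -d ucp-bundle && cd ucp-bundle
. ./env.sh   # exports DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY
# Plain docker commands now go through UCP, and Swarm picks the node:
docker run -d --name nagios_ajeetraina -p 80:80 ajeetraina/nagios
docker ps --filter name=nagios
EOF
)
echo "$CLI_STEPS"
```

Either way, UCP and Swarm decide which node actually runs the container, as we will see next.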

Now let's check what shows up on the 10.94.214.210 box. If you go to that machine and check the running containers, you will find:


[root@ucp-client1 ~]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS                        NAMES

0f4fd5a286d2        ajeetraina/nagios:latest   “/usr/bin/supervisord”   12 hours ago        Up 12 hours         25/tcp, 0.0.0.0:80->80/tcp   ajeetraina-nagios

d862f8d89841        docker/ucp-swarm:1.1.0     “/swarm join --discov”   21 hours ago        Up 21 hours         2375/tcp                     ucp-swarm-join

a519b9a913d4        docker/ucp-proxy:1.1.0     “/bin/run”               21 hours ago        Up 21 hours         0.0.0.0:12376->2376/tcp      ucp-proxy

DDC_34

Yippee!!! UCP and Swarm got Nagios up and running on the available resource, i.e., 10.94.214.210.

What if I start another instance of Nagios? Shall we try that?  

I will create another container through the DDC web UI. Here is a snapshot of the container:

DDC_10

Let's check which port it is running on:

DDC_11


Well, I supplied port 80 when I created the new container. Let me see whether Nagios comes up fine.

DDC_13


This is just awesome!!! UCP and Swarm together pushed the new Nagios container to the other client node, which is now hosting Nagios on port 81 as expected.

Hence we saw that Docker Datacenter brings interesting features like UCP, the Docker API, and embedded Swarm integration, which allow application portability based on resource availability and make a “scale-out” architecture possible through an easy setup.
