Top 10 Cool New Features in Docker Datacenter for Dev & IT Operations Teams

Docker Datacenter (DDC) provides an integrated platform for developers and IT operations teams to collaborate securely on the application life cycle. It is an integrated, end-to-end platform for agile application development and management from the datacenter to the cloud. Basically, DDC brings container management and deployment services to enterprises with a production-ready Containers-as-a-Service (CaaS) platform supported by Docker and hosted locally behind an organization’s firewall. The Docker native tools are integrated to create an on-premises CaaS platform, allowing organizations to save time and seamlessly take applications built in dev to production.
 


 
In this blog, I will focus on the top 10 cool new features that come with DDC:
 


Let us get started with a fresh Docker Datacenter setup. I am going to leverage a 6-node cluster on Google Cloud Platform to share my experience with the DDC UI.


Built-in Docker 1.12 Swarm Mode Capabilities:

Run the below command on the first node where we are going to install Docker CS Engine.

$ sudo curl -SLf https://packages.docker.com/1.12/install.sh | sh


Next, it’s time to install UCP:

$ sudo docker run --rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp install \
--host-address 10.140.0.5 \
--interactive


This brings up the UCP UI as shown below. Kudos to the Docker UCP team for “a very lightweight UI” with the latest release.


Docker Inc. provides a 30-day trial license once you register for Docker Datacenter. Upload the license accordingly.


Once you log in, you will see that a Swarm Mode cluster has already been initialized.

I was interested to see how easy it is to add nodes to the cluster. Click on Add Nodes > select nodes as either Manager or Worker based on your requirement. The Docker UCP team has done a great job in exposing options like `--advertise-addr` to build up the cluster in a few seconds.
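For readers who prefer the CLI, this UI step corresponds roughly to the standard swarm join flow. This is only a sketch: the token placeholder comes from your own cluster, and 10.140.0.5 is the manager address used earlier in this post.

# on the manager: print the join command (and token) for worker nodes
$ docker swarm join-token worker

# on the new node: join the cluster and advertise its own address
$ docker swarm join --token <worker-token> --advertise-addr <node-ip> 10.140.0.5:2377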


It took just 5 minutes to bring up the 6-node cluster.

Please ensure that the ports required by UCP and Swarm are not blocked by a firewall.


 

HTTP Routing Mesh & Load Balancing

Let us try out another interesting new feature, the routing mesh. It builds on load-balancing concepts and provides a global published port for a given service. The routing mesh uses port-based service discovery and load balancing, so to reach any service from outside the cluster you need to publish a port and reach the service via that published port.

Docker 1.12 Engine creates an “ingress” overlay network to implement the routing mesh. The frontend web service and a sandbox on each node participate in the “ingress” network, and all nodes become part of this overlay network by default through the ingress sandbox network namespace created inside each node.

Let us try to setup a simple wordpress application and see how Routing Mesh works.

i. Create a network called “collabnet”. The UCP team has done a great job in covering almost all the options we normally use from the CLI.
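For reference, this UI form is roughly equivalent to the following CLI command run against a manager node (a sketch, not captured from the original setup):

$ docker network create --driver overlay collabnet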


As shown below, a network “collabnet” with the scope “swarm” gets created:

 


ii. Creating a wordpress application

Typically, to create a frontend service named “wordpressapp” we would run the command below. Passing the same parameters through the UCP UI is a matter of just a few seconds:

$ sudo docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest
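Note that WORDPRESS_DB_HOST above points to a backend database service named wordpressdb1, which this post does not show being created. A minimal sketch of such a service, reusing the password from the environment variables above, would look like:

$ sudo docker service create --env MYSQL_ROOT_PASSWORD=collab123 --network collabnet --name wordpressdb1 mysql:5.7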

You can easily browse to the master node and get the wordpress page working.


Let us enable Routing Mesh as shown below:


Once Routing Mesh is enabled, you can access the service from any node, even if that particular node is not running any container serving the wordpress application. Let us try accessing it from worker-5:
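A quick way to verify this from a shell is to hit the published port on that worker (the IP below is a placeholder for whatever address worker-5 has); the routing mesh forwards the request to one of the wordpressapp replicas:

$ curl -I http://<worker-5-ip>:80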


Cool.. Routing Mesh just works flawlessly.

 

Integrating Notary Installation and HA as part of DTR:

Installing DTR is a matter of a single one-liner command:
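The exact command was captured in a screenshot in the original post; a representative sketch of the DTR installer (the UCP URL, node name and credentials below are placeholders for your own environment) looks like this:

$ sudo docker run -it --rm docker/dtr install --ucp-url https://<ucp-manager-ip> --ucp-node <node-name> --ucp-username admin --ucp-insecure-tls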


Setting up Centralized Logging through UCP

Under admin settings > Logs section, one can supply the rsyslog server details to push all the cluster logs to a centralized location.


TLS Mutual Authentication and Encryption:

The UCP team has done another great job in including TLS mutual authentication and encryption to secure communications between UCP and all other nodes. There is also certificate rotation, which is awesome especially from a compliance point of view. The TLS encryption also ensures that nodes connect to the correct managers in the swarm.

Join tokens are secrets that allow a node to join the swarm. There are two different join tokens available, one for the worker role and one for the manager role. You usually pass the token using the --token flag when you run docker swarm join, and nodes use the join token only at join time. You can view or rotate the join tokens using docker swarm join-token. These features are now available under Docker Datacenter as shown below:
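On a manager node, the CLI side of this looks like the following:

# print the join command (including the current token) for worker nodes
$ docker swarm join-token worker

# rotate the worker token so that previously issued tokens can no longer be used
$ docker swarm join-token --rotate worker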


Raft Consensus, Orchestrator  & Dispatcher specific Changes:

One of the compelling features introduced in the latest Docker Datacenter is the capability to alter and control Raft consensus, orchestration and dispatcher parameters. These features were enabled in the recent 1.12.2 release and are exposed through the docker swarm update command as shown below:


--snapshot-interval is an important parameter for performance tuning, while --dispatcher-heartbeat controls the heartbeat period, which defaults to 5 seconds.
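As a quick illustration, the same knobs can be turned from the CLI on a manager node; the values below are arbitrary examples, not recommendations:

# snapshot the Raft log every 5000 entries and expect dispatcher heartbeats every 10 seconds
$ docker swarm update --snapshot-interval 5000 --dispatcher-heartbeat 10s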

In a future post, I am going to talk about DTR, Docker Compose v2 specific examples and cluster configuration under the latest Docker Datacenter.


Building Microservice applications on Docker-Datacenter (DDC) using Docker Compose

Docker Datacenter (DDC) targets both developers and IT operations teams, and gives them a common platform for managing containers. Developers create containers using the toolset they are comfortable with and deliver the resulting images to the Docker registry service. The operations team then creates rules that describe how resources can be used to run those containers, so that developers can perform self-service provisioning for their work.

DDC is an end-to-end Docker platform for enterprises. It is built as a tightly integrated product: Docker Swarm is the default orchestration engine, and Universal Control Plane (UCP) sits above Swarm and provides an on-premises container management solution for Docker apps and infrastructure, regardless of where your applications are running.

One of the most amazing features DDC provides is building microservice applications using Docker Compose.

If Docker Compose is completely new to you, please refer to https://docs.docker.com/compose/overview/ to get started. In this post, we will look at how DDC provides an easy way to build a wordpress application using Docker Compose.

I assume Docker Datacenter is already up and running. If not, please refer to http://collabnix.com/archives/1149 for a step-by-step guide to setting up DDC.

Let’s open up the DDC Web UI as shown below:


 

Click on the Applications tab on the left, then click on the Compose Application tab.


DDC provides the capability to upload docker-compose.yml right from your desktop. Alternatively, you can copy-paste it from your editor or Linux machine. Click on Create Application and that’s all you need to get your app up and running. Please refer to https://github.com/ajeetraina/collabnix-wordpress/blob/master/docker-compose.yml for the complete docker-compose.yml content:
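If you don’t want to open the link, a wordpress compose file of this shape looks roughly like the sketch below. The service names, port mapping and password here are illustrative assumptions, not the exact contents of that repository:

version: '2'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: collab123
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: collab123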


In the compose file, we didn’t specify a “networks” section and hence Compose chose the default “collabwebapp_default” network. There are various ways to specify the network. In case you created an overlay network for multi-host networking, you can attach this application’s containers to the pre-existing network through the following entry in docker-compose.yml:

networks:
  default:
    external:
      name: CollabNet

In case you want to specify a custom network driver, the entry should look as shown below:

networks:
  frontend:
    driver: custom-driver-1
  backend:
    driver: custom-driver-2
    driver_opts:
      collabNet1: "1"
      collabNet2: "2"

 

While your application is being built, you can have a close look at the images being pulled and the containers being created, as shown below:


Let’s check the logs of application creation:


Yippee !! Your first application is up and running using Docker Compose. In a future post, we will look at how we can restrict containers to run on particular UCP client nodes using the affinity feature of Docker Swarm.


Implementing Multi-Host Docker Networking with Docker-Datacenter (DDC)

In our previous post, we looked at a detailed implementation of Docker-Datacenter-In-A-Box (Container-as-a-Service) on the VMware ESXi platform. DDC’s pluggable architecture and open APIs allow flexibility in the compute, networking and storage providers used in your CaaS infrastructure without disrupting the application code. In this blog post, we will talk about multi-host Docker networking. We will see how overlay networking lets hundreds or thousands of containers reach each other, even when they are running across different host machines, and resolve each other’s DNS names, making service discovery a breeze.

A Quick brief about Overlay:

Docker’s overlay network driver supports multi-host networking natively out-of-the-box. This support is accomplished with the help of libnetwork, a built-in VXLAN-based overlay network driver, and Docker’s libkv library. Libnetwork provides a network overlay that can be used by your containers so that they appear on the same subnet. The huge bonus is that they can reach each other and resolve each other’s DNS names, making service discovery a breeze.

The overlay network requires a valid key-value store service. Currently, Docker’s libkv supports Consul, Etcd, and ZooKeeper (Distributed store). Before creating a network you must install and configure your chosen key-value store service. The Docker hosts that you intend to network and the service must be able to communicate.
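Outside of DDC, the engines are typically pointed at the key-value store with daemon flags along these lines (a sketch; the etcd endpoint and interface name are placeholders for your own environment):

# run on each Docker host that should participate in the overlay network
$ sudo dockerd --cluster-store=etcd://<kv-store-ip>:2379 --cluster-advertise=eth0:2376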

If you have Docker Datacenter installed, you already have an environment to play around with overlay networking.

Setting up Overlay Networking:

Machine #1: 10.94.214.195

Machine #2: 10.94.214.210

I assume that Docker Datacenter is properly configured and running in your setup. I have the DDC web UI running on Machine #1. Browse to the Networks section and click on “Create Network” as shown below:

 


Suppose you have multiple teams on your floor, like a DB team, OS team, VMware team and HPC team, and you want to create a respective network for each of them. You have multiple Docker hosts running as VMs on VMware ESXi 6. Let’s go ahead and create a network “DB-network” for the DB team first:


This is equivalent to the command below if you run it on your UCP host:

# docker network create --driver overlay DB-network


Ensure that the Driver option is set to “overlay” and NOT bridge, none or null. Once you click on “Create Network”, it should succeed with an acknowledgment as shown below:


We now have an overlay network set up for the DB team. Interestingly, thanks to the Docker Swarm and UCP integration, one can see this network from any of the UCP client nodes. All you need to do is run “docker network ls” to see the list of networks.

Creating container in Overlay Network:

Click on Containers > Deploy Container to create a container under the DB-team overlay network.


We choose “mysql” as the image name, which will be fetched automatically from Docker Hub. I have named this new container “db-mysql-new” to keep it simple.

As we are going to put this newly created container on DB-network (overlay), I have specified the hostname and the required port (3306) and bound it to the Docker host accordingly.
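For reference, this UI deployment corresponds roughly to the following docker run command (a sketch; the password is a placeholder and the UI form may set additional options):

$ docker run -d --name db-mysql-new --net DB-network -p 3306:3306 -e MYSQL_ROOT_PASSWORD=<password> mysql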

 


Though the above looks like the minimal set of parameters needed to bring up the MySQL container, it might still throw a warning if you miss out on required parameters (such as the environment variables the mysql image expects):


It will take some time to get this container running, depending on your network speed. Once done, it shows up as below:


You can log in to any Docker host machine in the cluster and see the details of this newly created container through the docker network inspect utility. From Machine #2, the overlay network and the containers attached to it can be viewed clearly as shown below:
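The command run on Machine #2 is simply:

$ docker network inspect DB-network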


Let’s create a mediawiki container under the same network:

Once you deploy the mediawiki container, it will show up under the Containers section as shown below:

 


You can check which UCP client node this mediawiki container is running on:


As shown above, this container is running on Machine #2. Browse to its IP to see the mediawiki default page:


It’s just cool.. the mediawiki page comes up magically.


You can go ahead and configure MediaWiki to work with MySQL. If you want to use Docker Compose, it is even simpler: click on “Compose the application” and it just works flawlessly.

In a forthcoming post, we will look at Docker volumes from a Docker Datacenter (DDC) perspective.

 

 

 


Implementing Docker-Datacenter-In-A-Box (Container-as-a-Service) on VMware ESXi platform

A critical capability for any enterprise IT organization, whether it has multiple data centers, a hybrid cloud or multiple cloud providers, is the ability to migrate workloads from one environment to another without causing application issues. With Docker Datacenter (CaaS in a box), you can abstract the infrastructure away from the application, allowing application containers to run anywhere and stay portable across any infrastructure, from on-premises datacenters to public clouds, across a vast array of network and storage providers.

As per Docker Inc. “Docker Datacenter is an integrated solution including open source and commercial software, the integrations between them, full Docker API support, validated configurations and commercial support for your Docker Datacenter environment. A pluggable architecture allows flexibility in compute, networking and storage providers used in your CaaS infrastructure without disrupting the application code. Enterprises can leverage existing technology investments with Docker Datacenter. The open APIs allow Docker Datacenter CaaS to easily integrate into your existing systems like LDAP/AD, monitoring, logging and more.”


Basically, Docker CaaS is an IT-Ops-managed and secured application environment of infrastructure and content that allows developers to build and deploy applications in a self-service manner.
With Docker Datacenter, IT ops teams can store their IP and maintain their management plane on premises (datacenter or VPC). It is an equal treat for developers too: they are able to leverage trusted base content and build and ship applications freely as needed, without worrying about altering the code to deploy in production.

Docker Datacenter ships with a set of integrated components: Universal Control Plane (UCP), Docker Trusted Registry (DTR) and the commercially supported (CS) Docker Engine.


To get started with the implementation, I leveraged 3 VMs (running on my ESXi 6.0 host) running Docker 1.11.1.


The whole idea is to install Universal Control Plane (UCP) on the master node and start joining Docker Engines (client nodes) to the Swarm cluster with just a few commands in the CLI. Once up and running, use the intuitive web admin UI to configure your system settings, integrate with LDAP/AD, add users or connect your Trusted Registry. End users can also use the web UI to interact with applications, containers, networks, volumes and images as part of their development process. At the end of this blog post, you will realize it is just a matter of a few commands to set up DDC-In-A-Box.

 


Setting up UCP Controller Node:

I picked an Ubuntu 14.04 system for setting up the UCP controller node. First, install Docker 1.11.1 on this system using the command below:

#curl -fsSL https://get.docker.com/ | sh

My machine showed the below docker information:

Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:30:23 2016
OS/Arch: linux/amd64

Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:30:23 2016
OS/Arch: linux/amd64

You install UCP by using the Engine CLI to run the ucp tool. The ucp tool is an image with subcommands to install a UCP controller or join a node to a UCP controller. Let’s set up a UCP controller node first:

root@ucp-client1:~# docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install -i --host-address 10.94.214.195

Once this command is successful, you will need to add your user to “docker” group as shown below:

$ sudo usermod -aG docker <user>

I had a user “dell” added to this group, hence it will show up:

dell@dell-virtual-machine:~$ id
uid=1000(dell) gid=1000(dell) groups=1000(dell),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),108(lpadmin),124(sambashare),999(docker)

 


 

Attaching UCP Nodes to the Controller Node

Browse to the UCP Controller web UI > Nodes > Add Node as shown below:


I assume Docker 1.11 is already installed on the UCP client node to be attached to the controller node.

Let’s run the below command to join it to the cluster:

root@ucp-client1:~# docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp join --admin-username admin --interactive --url https://10.94.214.195 --fingerprint 11:43:43:18:F2:82:D7:80:E7:8E:2C:2C:4A:F5:27:A0:C9:A2:FC:DC:E8:3E:62:56:15:BC:7F:FA:CE:0B:8D:C2

…..

This will end up with the following SUCCESS message:

INFO[0011] Starting local swarm containers

INFO[0013] New configuration established.  Signalling the daemon to load it…

INFO[0014] Successfully delivered signal to daemon


You can check that this node is part of the UCP cluster as shown below:

root@ucp-client1:~# docker info | tail -3

WARNING: No swap limit support

Registry: https://index.docker.io/v1/

Cluster store: etcd://10.94.214.195:12379

Cluster advertise: 10.94.214.208:12376

Similarly, add the 3rd node to the controller node, and you will see it displayed under the UCP web UI:


 

If you try to see what containers are running on your client nodes:

[root@ucp-client2 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d862f8d89841 docker/ucp-swarm:1.1.0 “/swarm join --discov” 38 minutes ago Up 38 minutes 2375/tcp ucp-swarm-join
a519b9a913d4 docker/ucp-proxy:1.1.0 “/bin/run” 38 minutes ago Up 38 minutes 0.0.0.0:12376->2376/tcp ucp-proxy

How UCP handles a newly created container might be a curious question for anyone who deploys DDC for the first time. Let’s try creating a Nagios container and see how DDC actually handles it.

Browse to Dashboard > Containers > Deploy and let’s create a new container called nagios_ajeetraina as shown below:
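With your Docker client pointed at the UCP controller (for example through a UCP client bundle), this UI form is roughly equivalent to the following command; this is a sketch, and the image is the one used later in this post:

$ docker run -d --name nagios_ajeetraina -p 80:80 ajeetraina/nagios:latest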


Ensure that port 80 is listed under the exposed ports as shown below:

 

 


 

As shown below, the nagios container is built from the ajeetraina/nagios image from Docker Hub.


You can see the complete status details of the Nagios container under the Containers section:


As you scroll down, the ports information is rightly shown:


Now let’s check what shows up on the 10.94.214.210 box. If you go to the 10.94.214.210 machine and check the running containers, you will find:

 

[root@ucp-client1 ~]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS                        NAMES

0f4fd5a286d2        ajeetraina/nagios:latest   “/usr/bin/supervisord”   12 hours ago        Up 12 hours         25/tcp, 0.0.0.0:80->80/tcp   ajeetraina-nagios

d862f8d89841        docker/ucp-swarm:1.1.0     “/swarm join --discov”   21 hours ago        Up 21 hours         2375/tcp                     ucp-swarm-join

a519b9a913d4        docker/ucp-proxy:1.1.0     “/bin/run”               21 hours ago        Up 21 hours         0.0.0.0:12376->2376/tcp      ucp-proxy


Yipee !!! UCP and Swarm got Nagios up and running on the available resource, i.e. 10.94.214.210.

What if I start another instance of Nagios? Shall we try that?  

I will create another container under the DDC web UI. Here is a snapshot of the container:


Let’s check which port it is running on:


 

Well, I supplied port 80 while creating the new container. Let me see whether Nagios comes up fine or not.


 

This is just awesome !!! UCP and Swarm together pushed the new Nagios container to the other client node, which is now hosting Nagios on port 81 as expected.

Hence we saw that Docker Datacenter brings interesting features like UCP, the Docker API and embedded Swarm integration, which allow application portability based on resource availability and make a “scale-out” architecture possible through an easy setup.

 

 

 


Adding new Host to Docker Swarm & Universal Control Plane through Docker Machine

In our previous post, we spent considerable time understanding what Universal Control Plane (UCP) is and how to add hosts to UCP. For UCP to work, a swarm cluster is a prerequisite, so let’s understand how to quickly set up a Swarm agent node through Docker Machine.
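The exact commands were captured in the screenshots of the original post; a representative sketch of creating a swarm agent with Docker Machine (the driver, project ID and discovery token below are placeholders for your own environment) looks like:

$ docker-machine create --driver google \
    --google-project <project-id> \
    --swarm \
    --swarm-discovery token://<cluster-token> \
    docker-8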


By this time, you will be able to see that a new Docker-8 instance has been created.


Let’s open up the Docker-8 instance we created by clicking on SSH at the right end, though you can also use docker-machine ssh to log in and run commands.


Finally, I am going to add this new machine to the existing swarm cluster as shown below:


Ensure that the right token is added as shown above.

You will be able to see the new node added to the swarm cluster as shown below:


Finally, add this new node to UCP as shown below:


Let’s verify that UCP has detected the 3rd Node in the swarm cluster.


Hurray !! We have added a new node to the swarm cluster and then got it reflected under Universal Control Plane.


Docker Container Management using Universal Control Plane

Docker recently announced Universal Control Plane (UCP) beta availability to the public. This tool delivers enterprise-ready capabilities and is meant to run in companies’ on-premises data centers as well as public cloud environments. The beta access is a Christmas gift for both developers and operations engineers.

UCP looks promising for managing the entire lifecycle of Docker-based applications, automating the workflow between development and production, regardless of whether the containers run on hosted or in-house platforms. Developers can deploy applications in containers with the new tool, while operations people can use it to determine which data center infrastructure gets used.

UCP is meant as one central system that cuts across any cloud and any infrastructure and lets you easily provision anything you need from a compute, network, storage and policy standpoint.

Docker Inc. states its capabilities in terms of:

  • Enterprise-ready capabilities such as LDAP/AD integration, on-premises deployment and high availability, scale, and load balancing.
  • For developers and IT ops, a quick and easy way to build, ship, and run distributed apps from a single Docker framework.
  • A Docker-native solution using core Docker tools, the API and the vast ecosystem.
  • Fastest time to value with an easy-to-deploy and easy-to-use solution for Docker management.

The control plane takes the native Docker tools (Engine, Compose and Swarm) and integrates them behind a graphical front end. I enrolled for beta access a few days back and tried my hand at setting up UCP on Google Cloud Engine, where I already had a 4-node swarm cluster running. Here is the Docker Swarm setup I used to explore Universal Control Plane and demonstrate how easily it manages and configures containers, hosts and networks.

Setup: the 4-node swarm cluster on Google Cloud Engine described above.

Remember: UCP requires a minimum of 1.50 GB of RAM, so ensure that you don’t choose micro instances for setting up UCP.

1. Ensure that Docker 1.9.1 is installed on the nodes:

#wget -qO- https://get.docker.com/ | sh

Processing triggers for systemd (225-1ubuntu9) …
Processing triggers for man-db (2.7.4-1) …
Setting up docker-engine (1.9.1-0~wily) …
Installing new version of config file /etc/bash_completion.d/docker …
Installing new version of config file /etc/init.d/docker …
Installing new version of config file /etc/init/docker.conf …
Processing triggers for ureadahead (0.100.0-19) …
Processing triggers for systemd (225-1ubuntu9) …
+ sh -c docker version
Client:
Version:      1.9.1
API version:  1.21
Go version:   go1.4.2
Git commit:   a34a1d5
Built:        Fri Nov 20 13:20:08 UTC 2015
OS/Arch:      linux/amd64
Server:
Version:      1.9.1
API version:  1.21
Go version:   go1.4.2
Git commit:   a34a1d5
Built:        Fri Nov 20 13:20:08 UTC 2015
OS/Arch:      linux/amd64
If you would like to use Docker as a non-root user, you should now consider
adding your user to the “docker” group with something like:
sudo usermod -aG docker your-user
Remember that you will have to log out and back in for this to take effect!

The above single command is enough to install the latest 1.9.1 version on the nodes. Repeat the command on all the nodes to get UCP working.

2. Run the command below on one of the available nodes to set up UCP.

Machine: 10.240.0.5

root@docker-3:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock --name ucp dockerorca/ucp install -i
INFO[0000] Verifying your system is compatible with UCP
Please choose your initial Orca admin password:
Confirm your initial password:
INFO[0009] Pulling required images
Please enter your Docker Hub username: ajeetraina
Please enter your Docker Hub password:
Please enter your Docker Hub e-mail address: ajeetraina@gmail.com
INFO[0045] Pulling required images
WARN[0147] None of the hostnames we’ll be using in the UCP certificates [docker-3 127.0.0.1 172.17.42.1 10.240.0.5] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases
You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0221] Installing UCP with host address 10.240.0.5 - If this is incorrect, please use the '--host-address' flag to specify a different address
WARN[0000] None of the hostnames we’ll be using in the UCP certificates [docker-3 127.0.0.1 172.17.42.1 10.240.0.5 10.240.0.5] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases
INFO[0005] Generating Swarm Root CA
INFO[0024] Generating UCP Root CA
INFO[0032] Deploying UCP Containers
INFO[0074] UCP instance ID: MKBT:XJMI:63OD:PKUY:BH7F:OCZL:7S6V:OIGV:4OAB:U2Y3:TYBF:EWN7
INFO[0074] UCP Server SSL: SHA1 Fingerprint=85:07:66:3B:D3:46:9D:3F:FE:4D:4A:22:59:D1:80:41:2A:57:DE:70
INFO[0074] Login as “admin”/(your admin password) to UCP at https://10.240.0.5:443
root@docker-3:~#

That’s it. You can now browse to the UCP URL in a web browser to see UCP working.


Once logged in, you will see the single host machine (as we still haven’t added any further nodes) as shown:


Adding Nodes to Docker UCP:

Machine: 10.240.0.2

Run the below command to add more nodes:

root@docker-1:~# docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock dockerorca/ucp join --url https://10.240.0.5:443 --san 10.240.0.2 --host-address 10.240.0.2 --interactive
Please enter the URL to your Orca Server: https://10.240.0.5:443
Orca server https://10.240.0.5:443
Subject: ucp
Issuer: UCP Root CA
SHA1 Fingerprint=85:07:66:3B:D3:46:9D:3F:FE:4D:4A:22:59:D1:80:41:2A:57:DE:70
Do you want to trust this server and proceed with the join? (y/n): y
Please enter your UCP Admin username: admin
Please enter your UCP Admin password:
INFO[0024] Pulling required images
Please enter your Docker Hub username: ajeetraina
Please enter your Docker Hub password:
Please enter your Docker Hub e-mail address: ajeetraina@gmail.com
INFO[0047] Pulling required images
WARN[0121] None of the hostnames we’ll be using in the UCP certificates [docker-1 127.0.0.1 172.17.42.1 10.240.0.2 10.240.0.2] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases
You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
WARN[0000] None of the hostnames we’ll be using in the UCP certificates [docker-1 127.0.0.1 172.17.42.1 10.240.0.2 10.240.0.2 10.240.0.2] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases
INFO[0000] This engine will join UCP and advertise itself with host address 10.240.0.2
INFO[0000] Verifying your system is compatible with UCP
INFO[0017] Starting local swarm containers
root@docker-1:~#

Wow !! You now have the node added and displayed under UCP as shown:


Universal Control Plane is feature-rich and offers a set of GUI-based workflows. The control plane can reach into an enterprise registry, pull out specific containers and then allow the administrator to choose which infrastructure to run them on; that could be AWS, Azure, Google Cloud or on-premises OpenStack clouds.

In the future, I am going to explore UCP further in terms of its integration with Docker Compose, Docker Engine and Docker Machine. Till then, keep reading !!
