LinuxKit 101: Getting Started with LinuxKit for Google Cloud Platform

 

“…LinuxKit? A New Beast?

     What problem does it solve for us?..”

 


In case you missed DockerCon 2017 and have no idea what LinuxKit is all about, you have arrived at the right place. Over the next 30 minutes, I will talk about an open source container toolkit that Docker Inc. recently released to the public, and I will help you get started with it in a very easy and precise way.

What is LinuxKit?

LinuxKit sits alongside Docker's other open-source toolkits such as InfraKit and VPNKit. It is a container-native toolkit that allows organizations to build their own containerized operating systems that are secure, lean, modular and portable. Essentially, it is more of a developer kit than an end-user product. The project is completely open source and is hosted on GitHub under an Apache 2 licence.

What problem does it solve?

Last year Docker Inc. started shipping Docker for Mac, Docker for Windows, Docker for Azure & Docker for GCP, which brought a Docker-native experience to these various platforms. One of the common problems the community faced was the non-standard Linux OS running on each of those platforms. In particular, cloud platforms do not ship with a standard Linux, which raised lots of concerns around portability, security and incompatibility. This led Docker Inc. to bundle Linux into the Docker platform so that it runs uniformly in all of these places.

Talking about portability, Docker Inc. has always focused on products that run anywhere. Hence, they worked with partners like HP, Intel, ARM and Microsoft to ensure that the LinuxKit toolkit runs flawlessly on the desktop, server, cloud, ARM, x86, virtual environments and bare metal. LinuxKit was built as optimized tooling for portability that can accommodate a new architecture or a new system very easily.

What does LinuxKit hold?

LinuxKit includes the tooling to build custom Linux subsystems that include only the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. The toolkit works with Docker's containerd. All components can be substituted with ones that match specific needs. You can optimize LinuxKit images for specific hardware platforms and host operating systems with just the drivers and other dependencies you need, and nothing more, rather than using a full-fat generic base. The toolkit basically tries to help you create your own slimline containerized operating system as painlessly as possible. The size of a LinuxKit image is in the tens of MBs (around 35-50 MB).


A LinuxKit YAML file specifies a kernel and base init system, a set of containers that are built into the generated image and started at boot time, and the formats to output, such as bootable ISOs and images for various platforms. Interestingly, system services are sandboxed in containers, with only the privileges they need. The configuration is designed for the container use case. The whole system is built to be used as immutable infrastructure, so it can be built and tested in your CI pipeline, deployed, and redeployed as new versions when you wish to upgrade. To know more about the YAML specification, check this out.

What tools does LinuxKit use?

There are two basic tools that LinuxKit uses – linuxkit and moby.

In short, the moby tool converts the YAML specification into one or more bootable images.
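As a quick sketch of the workflow (file and image names here are placeholders, and exact flags may differ slightly between LinuxKit releases), you first build an image from the YAML specification and then push and run it on the target platform:

$ moby build myos.yml                                                        # read the YAML spec and produce boot images
$ linuxkit push gcp -project my-project -bucket my-bucket myos.img.tar.gz    # upload the image to GCP
$ linuxkit run gcp -project my-project myos                                  # boot the image as a GCP instance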

Let us get started with LinuxKit to understand how it builds customized ISO images that run uniformly across various platforms. For this blog post, I have chosen Google Cloud Platform. We will build a LinuxKit-based customized ISO image locally on my MacBook Air and push it to Google Cloud Platform to run as a VM instance. I will be using my forked linuxkit repository, which also runs a Docker container (for example, the Portainer container) inside the VM instance.

Steps:

  1. Install LinuxKit & Moby tool on macOS
  2. Building a LinuxKit ISO Image with Moby 
  3. Create a bucket under Google Cloud Platform
  4. Upload the LinuxKit ISO image to a GCP bucket using LinuxKit tool
  5. Initiate the GCP instance from the LinuxKit ISO image placed under GCP bucket
  6. Verifying Docker running inside LinuxKitOS 
  7. Running Portainer as Docker container

 

Pre-requisite:

– Install Google Cloud SDK on your macOS system through this link. You will need to verify your google account using the below command:

$gcloud auth login

– Ensure that build essential tools like make are working properly

– Ensure that Go packages are installed on macOS.

Steps:

  1. Clone the repository:

 

sudo git clone https://github.com/ajeetraina/linuxkit


2.  Change directory to linuxkit and run make which builds “moby” and “linuxkit” for us

cd linuxkit && sudo make

 

3.  Verify that these tools are built and placed under /bin:

cd bin/
ls
moby         linuxkit

4.  Copy these tools into system PATH:

 
sudo cp bin/* /usr/local/bin/

5. Use moby tool to build the customized ISO image:

 

cd examples/
sudo moby build gcpwithdocker.yml

 


 

[Update: 6/21/2017 – With the latest release of LinuxKit, the output section is no longer allowed inside the YAML file. This means that whenever you use the moby build command to build an image, you must specify -output gcp to build an image in a format that GCP will understand. For example:

moby build -output gcp example/gcpwithdocker.yml

This will create a local gcpwithdocker.img.tar.gz compressed image file.]

 

6.  Create a GCE bucket “mygcp” under your Google Cloud Platform:

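If you prefer the command line over the console, the bucket can also be created with the gsutil tool that ships with the Google Cloud SDK (the bucket name matches the one used in this walkthrough):

$ gsutil mb gs://mygcp/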

7. Run the linuxkit push command to push it to GCP:

 

sudo linuxkit push gcp -project synthetic-diode-161714 -bucket mygcp gcpwithdocker.img.tar.gz

 


[Note: “synthetic-diode-161714” is my GCP project name and “mygcp” is the bucket name which I created in earlier step. Please input as per your environment.]

Please note that you might need to enable Google Cloud API using this link in case you encounter “unable to connect GCP”  error. 

8. You can now run the image you created, and it should show up as a VM instance on Google Cloud Platform:


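For reference, launching the instance from the pushed image looked roughly like the command below (the project name is from the earlier step; exact flags may vary across LinuxKit releases):

$ sudo linuxkit run gcp -project synthetic-diode-161714 gcpwithdocker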

This boots a LinuxKit OS, which you can verify below:


You can also verify that this brings up a VM instance on the GCP console:


9. You can use the runc command to list all the services that were defined in the gcpwithdocker.yml file:

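A minimal way to do that from the LinuxKit console is shown below (assuming the runc binary is available on the PATH inside the instance):

$ runc list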

10. As shown above, one of the services I am interested in is called “docker”. You can use the command below to enter the docker service:

 

runc exec -t docker sh

Wow ! It is running the latest Docker 17.04.0-ce version.

11. Let us try to run the Portainer application and check whether it works well.

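In case you want to reproduce it, a typical way to launch Portainer at the time looked roughly like this (9000 is Portainer's default UI port; mounting the Docker socket lets it manage the local engine):

$ docker run -d --name portainer \
    -p 9000:9000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    portainer/portainer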

You can verify the IP address by running ifconfig; in my case it is 35.187.162.100:


Now this is what I call “the coolest stuff on earth”. LinuxKit allows you to build your own secure, modular, portable, lean-and-mean containerized OS, and that too in just minutes. I am currently exploring LinuxKit as a bare-metal OS and will share that in my next blog post.

Did you find this blog helpful? Are you planning to explore LinuxKit? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking to contribute, join me on the Docker Community Slack channel.


Introducing new RexRay 0.8 with Docker 17.03 Managed Plugin System for Persistent Storage on Cloud Platforms

The DellEMC REX-Ray 0.8 final release was announced last week. Having graduated as a top-level project within the {code} community, the 0.8 release is considered one of its largest releases to date. The new release introduces support for a long list of new storage platforms like S3FS, EBS, EFS, GCEPD & ScaleIO, shown below:


Public cloud storage is one of the fastest growing sectors in storage, with leaders like Amazon AWS, Google Cloud Storage and Microsoft Azure. With the release of REX-Ray 0.8, the {code} community took the right approach in targeting community-contributed drivers, starting with the Amazon EFS driver and then quickly adding additional community-contributed drivers like Digital Ocean, FittedCloud, Google Compute Engine (GCEPD) & the Microsoft Azure Unmanaged Disk driver.

Introducing New Docker 17.03 Volume Plugin System

With the Docker 17.03 release, a new managed plugin system has been introduced. This is quite different from the old Docker plugin system. Plugins are now distributed as Docker images and can be hosted on Docker Hub or on a private registry. A volume plugin enables Docker volumes to persist across multiple Docker hosts.


In case you are very new to Docker plugins, they basically extend Docker's functionality. A plugin is a process running on the same or a different host as the docker daemon, which registers itself by placing a file on the docker host in one of the plugin directories: .sock files (UNIX domain sockets placed under /run/docker/plugins), .spec files (text files containing a URL, such as unix:///other.sock or tcp://localhost:8080, placed under /etc/docker/plugins or /usr/lib/docker/plugins) or .json files (text files containing a full JSON specification for the plugin, placed under /etc/docker/plugins). You can refer to this in case you want to develop your own Docker volume plugin.

Running RexRay inside Docker container

Yes, you read that correctly! With the introduction of the Docker 17.03 managed plugin system, you can now run REX-Ray inside a Docker container flawlessly. The REX-Ray volume plugin is written in Go and provides advanced storage functionality for many platforms, including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

You can list the available Docker volume plugins for various storage platforms using docker search rexray, as shown below:


Let us test-drive the REX-Ray volume plugin on a Swarm Mode cluster for the first time. I have a 4-node Swarm Mode cluster running on Google Cloud Platform, as shown below:


Verify that all the cluster nodes are running the latest 17.03.0-ce (Community Edition).

Installing the REX-Ray volume plugin is just a one-liner:

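For reference, that one-liner is along these lines (GCEPD_TAG is a plugin setting; adjust it for your environment):

$ docker plugin install rexray/gcepd GCEPD_TAG=rexray --grant-all-permissions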

 

You can inspect the Rex-Ray Volume plugin using docker plugin inspect command:

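In other words:

$ docker plugin inspect rexray/gcepd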

It's time to create a volume using the docker volume create utility:

$sudo docker volume create --driver rexray/gcepd --name storage1 --opt=size=32


You can verify if it is visible under GCE Console window:


Let us try running a few applications that use the REX-Ray volume plugin, as shown:

$ docker run -dit --name mydb -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --volume-driver=rexray/gcepd -v dbdata:/var/lib/mysql mysql:5.7

Verify that MySQL service is up and running using docker logs <container-id> command as shown below:


By now, we should be able to see the new volume called “dbdata” created and listed under the docker volume ls command:


Under GCE console, it should get displayed too:


Using Rex-Ray Volume Plugin under Docker 17.03 Swarm Mode

This is the most interesting section of this blog post. The REX-Ray volume plugin has worked great for us so far, especially for a single Docker host running a number of services. But what if I want a REX-Ray volume to persist across multiple Docker hosts (a Swarm Mode cluster)? Yes, there is one possible way to achieve this: using Swarm Executor. It executes a docker command across the swarm cluster. Credits to Madhu Venugopal of the Docker team for assisting me in testing this tool.


Please remember that this is an UNOFFICIAL way of implementing a volume plugin across a swarm cluster. I found this tool really cool and hope it gets integrated into the official Docker repository.

First, we need to clone this repository:

$git clone https://github.com/mavenugo/swarm-exec

Run the below command to push the plugin across the swarm cluster:

$cd swarm-exec

$./swarm-exec.sh docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray


Let's quickly verify the plugin on the master node, as shown below:


While verifying it on the worker nodes:



Running docker volume inspect <volname> should show this particular volume with the rexray volume driver, as shown below:


Creating a MySQL service which uses Rex-Ray volume under Swarm Mode cluster:

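The screenshot captured the exact command; a sketch of what such a service creation can look like is shown below (service and volume names are illustrative, and the --mount options follow the standard docker service create syntax):

$ docker service create --name mydb --replicas 1 \
    --mount type=volume,source=dbdata,target=/var/lib/mysql,volume-driver=rexray/gcepd \
    -e MYSQL_ROOT_PASSWORD=wordpress \
    mysql:5.7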

Verifying that the service is up and running:


To conclude, the new version of REX-Ray looks promising and brings support for various cloud storage platforms. It continues to be a leading open source storage orchestration tool, and with the inclusion of the Docker 17.03 managed plugin architecture, it will definitely reduce the pain of implementing persistent storage solutions.


What’s new in Docker 1.12.0 Load-Balancing feature?

In the previous blog post, we deep-dived into the service discovery aspects of Docker. A service is now a first-class citizen in Docker 1.12.0, which allows replication, image updates and dynamic load balancing. With Docker 1.12, services can be exposed on ports on all Swarm nodes and load balanced internally by Docker using either a virtual IP (VIP) based or DNS round robin (RR) based load-balancing method, or both.


In case you are very new to the load-balancing concept: a load balancer assigns workload to a set of networked computer servers or components in such a manner that the computing resources are used optimally. A load balancer provides high availability by detecting server or component failure and re-configuring the system appropriately. Under this post, I will try to answer the following queries:

  • Is load balancing new to Docker?
  • What's new in the load-balancing feature under Docker 1.12.0?
  • Why IPVS?
  • Is Routing Mesh a load balancer?
  • Is it possible to integrate an external LB with the services in the cluster? Can I use HAProxy in Docker Swarm Mode?

Let’s get started –

Is Load-Balancing new to Docker?

The load-balancing (LB) feature is not at all new to Docker. It was first introduced in the Docker 1.10 release, where Docker Engine implemented an embedded DNS server for containers in user-defined networks. In particular, containers that are run with a network alias (--net-alias) were resolved by this embedded DNS to the IP address of the container when the alias is used.

No doubt, DNS round robin is extremely simple to implement and is an excellent mechanism to increase capacity in certain scenarios, provided that you take into account the default address selection bias, but it has certain limitations and issues: some applications cache the DNS host name to IP address mapping, which causes them to time out when the mapping changes. Also, a non-zero DNS TTL value causes a delay before DNS entries reflect the latest details. DNS-based load balancing also does not do proper load balancing, depending on the client implementation. To learn more about DNS RR, which is sometimes called the poor man's protocol, you can refer here.

What’s new in Load-balancing feature under Docker 1.12.0?

  • Docker 1.12.0 comes with a built-in load-balancing feature. LB is designed as an integral part of the Container Network Model (rightly called CNM) and works on top of CNM constructs like networks, endpoints and sandboxes. Docker 1.12 comes with VIP-based load balancing. VIP-based services use Linux IPVS load balancing to route to the backend containers.
  • There is no more centralized load balancer; it's distributed and hence scalable. LB is plumbed into each individual container. Whenever a container wants to talk to another service, the LB is embedded in the container where it happens. LB is more powerful now and just works out of the box.


  • The new Docker 1.12.0 swarm mode uses IPVS (a kernel module called “ip_vs”) for load balancing. It's a load-balancing module integrated into the Linux kernel.
  • Docker 1.12 introduces the routing mesh for the first time. With IPVS routing packets inside the kernel, swarm's routing mesh delivers high-performance container-aware load balancing. Docker Swarm Mode includes a routing mesh that enables multi-host networking. It allows containers on two different hosts to communicate as if they were on the same host. It does this by creating a Virtual Extensible LAN (VXLAN), designed for cloud-based networking. We will talk more about the routing mesh at the end of this post.

Whenever you create a new service in the Swarm cluster, the service gets a Virtual IP (VIP) address. Whenever you make a request to that VIP, the swarm load balancer distributes the request to one of the containers of the specified service. The built-in service discovery actually resolves the service name to the Virtual IP. Finally, the service-VIP-to-container-IP load balancing is achieved using IPVS. It is important to note that the VIP is only useful within the cluster. It has no meaning outside the cluster because it is a private, non-routable IP.

I have a 6-node cluster running Docker 1.12.0 on Google Compute Engine. Let's examine the VIP address through the steps below:

  1. Create a new overlay network:

    $docker network create --driver overlay \
      --subnet 10.0.3.0/24 \
      --opt encrypted \
      collabnet


 

2. Let’s create a new service called collabweb which is a simple Nginx server as shown:

        $ docker service create \
          --replicas 3 \
          --name collabweb \
          --network collabnet \
          nginx

3. As shown below, there are 3 nodes where 3 replicas of containers are running the service under the swarm overlay network called “collabnet”.


4. Use docker inspect command to look into the service internally as shown below:


 

It shows the “VIP” address added to the service. There is a single command which can fetch the Virtual IP address, as shown below:

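That one-liner can be approximated with a Go-template filter like the one below (the JSON path comes from docker service inspect output; treat this as a sketch):

$ docker service inspect collabweb \
    -f '{{ (index .Endpoint.VirtualIPs 0).Addr }}'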

5. You can use the nsenter utility to enter the service's sandbox namespace and check the iptables configuration:


In iptables, a packet usually enters the mangle table chains first and then the NAT table chains. Mangling refers to modifying the IP packet, whereas NAT refers only to address translation. As shown above in the mangle table, the 10.0.3.2 service IP gets a marking of 0x10c via the iptables OUTPUT chain. IPVS uses this marking and load balances the traffic to containers 10.0.3.3, 10.0.3.5 and 10.0.3.6, as shown:


As shown above, you can use ipvsadm to set up, maintain or inspect the IP virtual server table in the Linux kernel. This tool can be installed on any Linux machine through apt or yum, depending on the Linux distribution.
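For example, on a Debian/Ubuntu host you could install it and dump the virtual server table as follows:

$ sudo apt-get install -y ipvsadm
$ sudo ipvsadm -L -n        # lists virtual services and their real servers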

A typical DNS RR and IPVS LB setup can be differentiated as shown in the diagram below, where DNS RR returns a rotating list of IP addresses each time we access the service (either through curl or dig), while the VIP load balances across the containers (i.e. 10.0.0.1, 10.0.0.2 and 10.0.0.3).


 

6. Let's create a new service called collab-box under the same network. As shown in the diagram below, a new Virtual IP (10.0.3.4) is automatically attached to this service:


Also, service discovery works as expected:


Why IPVS?

IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so-called Layer-4 switching. It's a load-balancing module integrated into the Linux kernel and is based on Netfilter. It supports TCP, SCTP & UDP, over IPv4 and IPv6. IPVS running on a host acts as a load balancer in front of a cluster of real servers; it can direct requests for TCP/UDP-based services to the real servers, and it makes the services of the real servers appear as a virtual service on a single IP address.

It is important to note that IPVS is not a proxy; it's a forwarder that runs at Layer 4. IPVS forwards traffic from clients to back-ends, meaning you can load balance anything, even DNS! Its notable capabilities include:

  • UDP support
  • Dynamically configurable
  • 8+ balancing methods
  • Health checking

IPVS has lots of interesting features and has been in the kernel for more than 15 years. The chart below differentiates IPVS from other LB tools:


 
Is Routing Mesh a Load-balancer?

Routing Mesh is not a load balancer. It makes use of LB concepts. It provides a globally published port for a given service. The routing mesh uses port-based service discovery and load balancing. So, to reach any service from outside the cluster, you need to expose its port and reach it via that published port on any node.

In simple words, if you had 3 swarm nodes A, B and C, and a service running on nodes A and C assigned node port 30000, it would be accessible via any of the 3 swarm nodes on port 30000, regardless of whether the service is running on that machine, and automatically load balanced between the 2 running containers. I will talk about the routing mesh in a separate blog if time permits.
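To make that concrete, here is a sketch: publish a port for a service and then curl any node in the cluster on that port (service name, image and port are illustrative):

$ docker service create --name web --replicas 2 --publish 30000:80 nginx
$ curl http://<any-swarm-node-ip>:30000     # answered even by nodes not running a task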

It is important to note that the Docker 1.12 engine creates an “ingress” overlay network to achieve the routing mesh. Usually the frontend web service and the sandbox are part of the “ingress” network and participate in the routing mesh. All nodes become part of the “ingress” overlay network by default, using a sandbox network namespace created inside each node. You can refer to this link to learn more about the internals of the routing mesh.

Is it possible to integrate an external LB with the services in the cluster? Can I use HAProxy in Docker Swarm Mode?

You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes. If you would like to use an L7 LB, you need to point it to any (or all, or some) node IPs and the PublishedPort. This is only if your L7 LB cannot be made part of the cluster. If the L7 LB can be made part of the cluster by running the L7 LB itself as a service, then it can just point to the service name (which will resolve to a VIP). A typical architecture would look like this:


In my next blog, I am going to elaborate on External Load balancer primarily. Keep Reading !


Docker 1.12 Swarm Mode – Under the hood

Today Docker Inc. released Engine 1.12 Release Candidate 4 with numerous improvements and added security features. With the optional “Swarm Mode” feature integrated right into the core Docker Engine, native management of a cluster of Docker Engines, orchestration, decentralized design, service and application deployment, scaling, desired-state reconciliation, multi-host networking, service discovery and routing mesh implementation is just a matter of a few commands.

In the previous posts, we introduced Swarm Mode, implemented a simple service application and went through the 1.12 networking model. In this post, we will deep-dive into Swarm Mode and study what kind of communication takes place between the master and worker nodes in the Swarm cluster.

Setting up Swarm Master Node

Let’s start setting up Swarm Mode cluster and see how underlying communication takes place. I will be using docker-machine to setup master and worker nodes on my Google Cloud Engine.

$docker-machine create -d google --google-project <project-id> --engine-install-url https://test.docker.com test-master1

If you have less time to set up the Swarm cluster, do refer to https://github.com/ajeetraina/google-cloud-swarm. I have forked it from here.

As you can see below, the Docker host machines get created through docker-machine, with all the nodes running Docker Engine 1.12-rc4.

Let’s initialize the swarm mode on the first master node as shown below:

I have used a one-liner docker-machine command to keep it clean and simple. The docker-machine command will SSH to the master node and initialize swarm mode.
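The one-liner was along these lines (the node name matches the machine created above; the advertise address is the master's IP):

$ docker-machine ssh test-master1 'sudo docker swarm init --advertise-addr <master-ip>'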

The newly released RC4 version holds improvements in terms of security, which is enabled by default. In earlier releases, one had to pass the --secret parameter to secure and control which worker nodes could join and which couldn't. Going forward, swarm mode automatically generates a random secret key. This is just awesome!!!

[Under the hood] – Whenever we do “docker swarm init”, a TLS root CA (Certificate Authority) gets created as shown below.

Then a key-pair is issued for the first node and signed by root CA.

Let’s add the first worker node as shown below:
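Joining a worker is a similar one-liner; the join token comes from the output of docker swarm init (the worker node name here is illustrative):

$ docker-machine ssh test-node1 'sudo docker swarm join --token <worker-join-token> <master-ip>:2377'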

Looking at inotify output:

When further nodes join the swarm, they are issued their own key pair, signed by the root CA, and they also receive the root CA's public key and certificate. All communication is encrypted over TLS.

The node keys and certificates are automatically renewed at regular intervals (90 days by default), but one can tune this with the docker swarm update command.
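For example, the certificate rotation period can be stretched like this (720h is just an example value):

$ docker swarm update --cert-expiry 720h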

Let us spend some time understanding the master and worker architecture in detail.

 

Every node in Swarm Mode has a role, which can be categorized as Manager or Worker. A manager node has the responsibility to actually orchestrate the cluster, perform health checks, run containers serving the API, and so on. A worker node just executes the tasks, which are actually containers. It cannot decide to schedule containers on a different machine. It cannot change the desired state. Workers only take work and report back their status. You can promote or demote a node easily through a one-liner command, as shown below.
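The promotion/demotion one-liners look like this (the node name is illustrative):

$ docker node promote test-node1
$ docker node demote test-node1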

Managers and workers use two different communication models. Managers have a built-in Raft system that allows them to share information for leader election. At any time, only one manager (the leader) actually performs the orchestration, and they use a leader-follower model to figure out which one is supposed to be what. No external key-value store is required, as a built-in internal distributed state store is available.

Workers, on the other hand, use a gossip network protocol, which is quite fast and consistent. Whenever a new container/task gets created in the cluster, gossip broadcasts to the other containers in that specific overlay network that this new container has started. Please remember that ONLY the containers running in that specific overlay network are notified, NOT the whole cluster. Gossip is optimized for heavy traffic.

Let us go one level deeper to understand how the underlying service is created and dispatched to the worker nodes. Before creating the service, let us first create a new overlay network called mynetwork.

The inotify triggers the relevant output accordingly:

Let’s create our first service:

$sudo docker-machine ssh test-master1 'sudo docker service create --name collabnix --replicas 3 \
   --network mynetwork dockercloud/hello-world'

Once you run the above command, 3 replicas of the service get created and distributed across the cluster nodes.

[Under the hood] – Let’s understand what happens whenever a new service is created.

 

Whenever we create an overlay network through the “docker network create -d overlay” command, the request basically goes to a manager. The manager is built up of multiple pipeline stages. One of them is the allocator. The allocator takes the network creation request and chooses a particular pre-defined subnetwork that is available. Allocation happens purely in memory and hence is quick. Once the network is created, it's time to connect a service to that network. Say you start with service creation: the orchestrator is involved and tries to generate the requisite number of tasks, which are nothing but containers in the real world. But the tasks need IP addresses and VXLAN IDs, because the overlay network needs those too. That allocation happens on the manager nodes. Once allocation is completed, tasks are created and the state is preserved in the Raft store. Only once allocation is done can the scheduler move a particular task into the assigned state, after which it is dispatched to one of the worker nodes. A manager can also be a worker. Every task goes through multiple stages: New, Allocated, Assigned, etc. If a task has not passed the allocator stage, it will not be assigned to a worker node. With the help of the network control plane (the gossip protocol), multiple tasks distributed across multiple worker nodes are tracked and managed effectively.

I hope you liked reading this deep-dive article. In a future blog post, I will try to cover a deep dive into Docker networking and volume aspects. Till then, Happy Swarming !!!

 

 

 


Docker 1.12 Networking Model Overview

“The Best way to orchestrate Docker is Docker”

In our previous post, we talked about Swarm Mode's built-in orchestration and distribution engine. Docker's new deployment API objects like Service and Node, built-in multi-host and multi-container cluster management integrated with Docker Engine, decentralized design, declarative service model, scaling and resiliency services, desired-state reconciliation, service discovery, load balancing, out-of-the-box security by default, rolling updates and more make Docker 1.12 an all-in-one automated deployment and management tool for Dockerized distributed applications and microservices at scale in production.


 

Under this post, we are going to deep-dive into the Docker 1.12 networking model. I have a 5-node swarm cluster test environment as shown below:


If you SSH to test-master1 and check the default network layout:


Every container has an IP address on three networks:

  1. Ingress
  2. docker_gwbridge
  3. user-defined overlay

Ingress Networking:

The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort, or you can configure a PublishedPort for the service in the 30000-32767 range. What this actually means is that network ingress into the cluster is based on a node port model in which each service is randomly assigned a cluster-wide reserved port in the default 30000-32767 range. This means that every node in the cluster listens on this port and routes traffic for that service to it. This is true irrespective of whether a particular worker node is actually running the specified service.

It is important to note that only those services that have a port published (using the -p option) require the ingress network. For backend services which don't publish ports, the corresponding containers are NOT attached to the ingress network.

External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster, whether or not the node is currently running a task for the service. All nodes in the swarm cluster route ingress connections to a running task instance. Hence, ingress follows a node port model in which each service has the same port on every node in the cluster.

docker_gwbridge:

The docker_gwbridge network is added only for non-internal networks; internal networks can be created with the --internal option. Containers connected to a multi-host network are automatically connected to the docker_gwbridge network. This network allows the containers to have external connectivity outside of their cluster, and is created on each worker node.

Docker Engine gives you the flexibility to create the docker_gwbridge by hand instead of letting the daemon create it automatically. In case you want Docker to create the docker_gwbridge network in a desired subnet, you can tweak it as shown below:

$docker network create --subnet={your preferred subnet} -o com.docker.network.bridge.enable_icc=false -o com.docker.network.bridge.name=docker_gwbridge docker_gwbridge

User-defined Overlay:

This is the overlay network that the user has specified for the container. In our upcoming example, we will call it mynet. A container can be on multiple user-defined overlays.

 

Enough with the theoretical aspects!! Let's try out the networking model practically.

As shown below, I have a 3-node swarm cluster with 1 master and 2 worker nodes.


I created a user-defined overlay through the below command:

$ sudo docker network create -d overlay mynet

I can see that the new overlay network gets listed under the “swarm” scope (as we are already in swarm mode), as shown below:


I have a service “frontier” running tasks on node1, node2 and master1 as shown below:



We can check the container running under node-1 and node-2 respectively:



Meanwhile, I added a new node-3 and then scaled the service to 10.

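Scaling is a one-liner; in this case it would have been something like:

$ docker service scale frontier=10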

Now I can see that the containers are scaled across the swarm cluster.

To look into how overlay networking works, let us target the 4th Node and add it to the Swarm Cluster.


Now, the node list gets updated as shown below:


When you add a node to the swarm cluster, it doesn't automatically show the mynet overlay network, as shown below:


The overlay network only appears on a node when a new task is assigned to it, and this happens on demand.

Let us try to scale our old service and see if node-4 network layout gets reflected with mynet network.

Earlier, we had 10 replicas running, scaled across master1, node1, node2 and node4. Once we scale it to 20, the swarm engine spreads the tasks across all the nodes, as shown below:


Let us now check the network layout at node-4:

Hence, the overlay network gets created on-demand whenever the new task is assigned to this node.

Self-Healing:

Swarm nodes are “self-organizing and self-healing.” What does that mean? Whenever a node or container crashes or undergoes a sudden unplanned shutdown, the swarm engine attempts to correct things and make them right again. Let us look into this aspect in detail:

As we see above, here is an example of nodes and running tasks:

Master-1 running 4 tasks

Node-1 running 4 tasks

Node-2 running 4 tasks

Node-3 running 4 tasks

Node-4 running 4 tasks

Now let’s bring down node-4.


As soon as all the containers running on node-4 are stopped, the swarm tries to start another 4 containers with different IDs on the same node.


So this shows the self-healing aspect of the Docker Swarm engine.

Self-Organizing:

Let's try bringing down node-4 completely. As soon as you bring down node-4, the containers that were running on node-4 get started on other nodes automatically.


 

Master-1 running 5 tasks

Node-1 running 5 tasks

Node-2 running 5 tasks

Node-3 running 5 tasks

Hence, this is how it reorganizes the 20 tasks, which are now scaled across master1, node1, node2 and node3.

Global Services:

This option enables a service task to run on every node. You can create a service with the --mode global option to enable this functionality, as shown below:

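A sketch of such a global service (the image and service name are illustrative):

$ docker service create --mode global --name agent nginx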

Constraints:

There are scenarios when you want to segregate workloads on your cluster so that specific workloads go only to a certain set of nodes. One example I have pulled from a DockerCon slide shows SSD-based constraints, which can be applied as shown below:

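A sketch of an SSD-based constraint, assuming the nodes carry a disk label (label key/value and image are illustrative):

$ docker node update --label-add disk=ssd node-1
$ docker service create --name db --constraint 'node.labels.disk == ssd' mysql:5.7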

 

Routing Mesh 

We have reached the last topic of this blog post, and it wouldn't be complete without talking about the routing mesh. I have pulled out the presentation demonstrated at DockerCon 2016, which clearly shows how the routing mesh works.

~ Source: DockerCon 2016

To understand how the routing mesh works, suppose that there is one manager node and 3 worker nodes serving myapp:80. Whenever an operator tries to access myapp:80 on the exposed port, they might, because of an external load balancer, happen to hit worker-2, and that sounds good because worker-2 has 2 copies of the frontend containers and is ready to serve it without any issue. Now imagine a scenario in which a user accesses myapp:80 and gets redirected to worker-3, which currently has no copies of the containers. This is where the routing mesh technology comes into the picture. Even though worker-3 has no copies of this container, the Docker Swarm engine re-routes the traffic to worker-2, which has the necessary copies to serve it. Your external load balancer doesn't need to understand where the container is running; the routing mesh takes care of that automatically. In short, the container-aware routing mesh is capable of transparently rerouting the traffic from worker-3 to a node that is running the container (worker-2 above). Internally, Docker Engine allocates a cluster-wide port, maps that port to the containers of a service, and the routing mesh takes care of routing traffic to the containers of that service by exposing a port on every node in the swarm.

In our next post, we will talk about the routing mesh in detail and cover the volume aspects of Swarm Mode. Till then, Happy Swarming !!!


Docker Engine 1.12 comes with built-in Distribution & Orchestration System

Docker Engine 1.12 can rightly be called “a next-generation Docker clustering & distributed system”. Though the Docker Engine 1.12 final release is around the corner, the recent RC2 already brings lots of improvements and exciting features. One of the major highlights of this release is Docker Swarm Mode, which provides a powerful yet optional ability to create coordinated groups of decentralized Docker Engines. Swarm Mode combines your engines into swarms of any scale. It's self-organizing and self-healing. It enables an infrastructure-agnostic topology. The newer version democratizes orchestration with out-of-the-box capabilities for multi-container, multi-host app deployments, as shown below:

Built on the Engine as a uniform building block for a self-organizing and self-healing group of Engines, Docker ensures that orchestration is accessible to every developer and operations user. The new Swarm Mode adopts a decentralized architecture rather than the centralized one (key-value store) seen in earlier Swarm releases. Swarm Mode uses the Raft consensus algorithm to perform leader election and maintain the cluster's state.

In Swarm Mode, all Docker Engines unite into a cluster with a management tier. It is basically a master-slave system, but all Docker Engines stay united and maintain a cluster state. Instead of running a single container, you declare a desired state for your application, which means multiple containers, and the engines themselves maintain that state. Additionally, a new “docker service” feature has been added in the new release. “docker service create” is expected to be an evolution of “docker run”. docker run is an imperative command, and all it does is get a container up and running. The new “docker service create” command declares that you want to set up a service that runs one or more containers, and those containers will keep running as long as the state you declared for the service is maintained by the Engine, inside the distributed store based on the Raft consensus protocol. That brings the notion of desired-state reconciliation. Whenever any node in the cluster goes down, the swarm itself recognizes the deviation from the desired state and brings up a new instance to reconcile it. I highly recommend visualizing http://thesecretlivesofdata.com/raft/ to understand what that means.

Docker Swarm Mode is used for orchestrating distributed systems at any scale. It includes primitives for node discovery, Raft-based consensus, task scheduling and much more. Let's see what features Docker Swarm Mode adds to Docker cluster functionality:

Looking at the above features, Docker Swarm Mode brings the following benefits:

  • Distributed: Swarm Mode uses the Raft Consensus Algorithm in order to coordinate and does not rely on a single point of failure to perform decisions.
  • Secure: Node communication and membership within a Swarm are secure out of the box. Swarm Mode uses mutual TLS for node authentication, role authorization and transport encryption, automating both certificate issuance and rotation.
  • Simple: Swarm Mode is operationally simple and minimizes infrastructure dependencies. It does not need an external database to operate; it uses an internal distributed state store.

The picture below depicts the Swarm Mode cluster architecture. Fundamentally it is a master-slave architecture. Every node in a swarm is a Docker host running Docker Engine. Some of the nodes have a privileged role called Manager. The manager nodes participate in the “raft consensus” group. As shown below, the components in blue share the internal distributed state store of the cluster, while the green-colored components/boxes are worker nodes. The worker nodes receive work instructions from the manager group, and this is clearly shown with dashed lines.

The picture below shows how Docker Engine Swarm Mode nodes work together:

 

For the operations team, this might come as a relief: there is no need for an external key-value store like etcd or Consul. Docker Engine 1.12 has an internal distributed state store to coordinate, and hence no single point of failure. Additionally, Docker security is no longer an add-on; secure mode is enabled by default.

Getting started with Docker Engine 1.12

Under this blog post, I will cover the following aspects:

  1. Initializing the Swarm Mode
  2. Creating the services and Tasks
  3. Scaling the Service
  4. Rolling Updates
  5. Promoting the node to Manager group

To test-drive Swarm Mode, I used a 4-node cluster on Google Compute Engine, all running the latest stable Ubuntu 16.04, as shown below:


Setting up docker 1.12-rc2 on all the nodes should be simple enough with the below command:

# curl -fsSL https://test.docker.com/ | sh

Run the below command to initialize Swarm Mode under the master node:

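Against the 1.12 CLI, the initialization itself is a single command, and it also prints the docker swarm join command that workers should run (flag names settled during the release candidates, so treat this as a sketch):

$ sudo docker swarm init --advertise-addr <master-ip>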

Let’s look at docker info command:


Listing the Docker Swarm Master node:

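For reference, that is simply:

$ sudo docker node ls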

Let us add the first Swarm agent node (worker node), as shown below:


Let’s go back to Swarm Master Node to see the latest Swarm Mode status:


Similarly, we can add the 2nd Swarm agent node to Swarm Mode list:


Finally, we see all the nodes listed:


Let’s add 3rd Swarm Agent node in the similar fashion as shown above:


Finally, the list of worker and master nodes gets displayed as shown below:

Let’s try creating a single service:


As of now, we don't have any service created. Let's start by creating a service called collab, which uses the busybox image from Docker Hub, and all it does is ping the collabnix.com website.

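The command likely looked something like this, matching the description above:

$ sudo docker service create --name collab busybox ping collabnix.com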

Verifying and inspecting the service is done through the commands below:

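Namely, something along these lines:

$ sudo docker service ls
$ sudo docker service inspect collab --pretty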

 

 

Quick Look at Scaling !!!

A task is the atomic unit of a service. We actually create tasks whenever we add a new service. For example, as shown below, we created tasks for the collab service.


Let’s scale this service to 5:

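That is:

$ sudo docker service scale collab=5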

Now you can see that the service has been scaled to 5.


Rolling Updates Made Easy

Updating a service is pretty simple. The “docker service update” command is feature-rich and provides loads of options to play around with the service.


Let's try updating the redis container from 3.0.6 to 3.0.7 with a 10s delay and a parallelism count of 2.

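A sketch of such a rolling update, assuming the service is named redis:

$ sudo docker service update \
    --update-delay 10s \
    --update-parallelism 2 \
    --image redis:3.0.7 redis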

Wow!!! The rolling update went flawlessly.

Time to promote the Agent Node to Manager Node

Let’s try to promote Swarm Agent Node-1 to Manager group as shown below:


In short, Swarm Mode is definitely a neat and powerful feature which provides an easy way to orchestrate Docker containers and replication of services. In our next post, we will look at how overlay networking works under Swarm Mode.


Setting up Docker Hosts on Google Compute Engine using Docker Machine

Docker Machine enables a simplified approach to setting up Docker hosts on supported platforms, including Linux, Windows, OS X, and various cloud providers, in a standard way. As per Docker Inc., “It automatically creates hosts, installs Docker on them, then configures the docker client to talk to them. A ‘machine’ is the combination of a Docker host and a configured client.”

Docker Inc., in its official documentation, lists Docker Machine compatibility with the following providers:

1. AWS
2. Digital Ocean
3. Google Compute Engine
4. IBM Softlayer
5. Microsoft Azure & Hyper-V
6. OpenStack
7. VirtualBox
8. Rackspace
9. Ubuntu server over SSH – generic driver
10. VMware Fusion/vCloud Air/vSphere

You can play around with different releases and features of Docker Machine at https://github.com/docker/machine/releases

I had a Windows 8.1 machine on which I wanted to set up Docker Machine to work with my Google Cloud instances. Rather than logging into the Google Cloud console and trying to install everything manually, I preferred having things handled automatically through Docker Machine. Just a few commands and I am ready to create Docker host instances on the fly. Here is how I got Docker Machine working with Google Compute Engine.

Setting up Google Cloud SDK Platform

1. Download Google Cloud SDK from https://dl.google.com/dl/cloudsdk/channels/rapid/GoogleCloudSDKInstaller.exe
2. As a pre-requisite, you will need Python 2.7 to install it. Select All Users to have it installed automatically.




As shown above, Python 2.7 is required to be installed.


Let the installer take care of the Python 2.7 installation.



Great !! The Google Cloud SDK is successfully installed.

Next, it's time to authenticate your local Windows machine with Google Compute Engine.



Ensure that the right project ID is entered while providing the following command: gcloud config set project PROJECT_ID.
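In other words (the project ID is an example):

$ gcloud auth login
$ gcloud config set project my-project-id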

Setting up New Container Host through Docker Machine:

Finally, one can use Docker Machine to set up a new Docker container host through the following command:


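The command in the screenshot follows the Docker Machine google driver syntax; a sketch (project, zone and machine name are placeholders):

$ docker-machine create --driver google \
    --google-project my-project-id \
    --google-zone us-central1-a \
    docker-host-1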

You can easily see the Google Compute Engine instances, as shown below:


That's all. You can easily set up multiple Docker hosts on Google Compute Engine from your local Windows machine through Docker Machine.
