2 Minutes to a Kubernetes Cluster on Docker for Mac 18.01 using the Swarm CLI

Docker for Mac 18.01.0 CE is now available to the general public. It ships with an experimental Kubernetes release running on Linux kernel 4.9.75, along with Docker Compose 1.18.0 and Docker Machine 0.13.0. It is available only under the Edge release channel and has not yet landed in the Stable branch. This release brought major fixes around insecure registries, VPNKit ports, DNS timeout issues and more, which you can find under the Release Notes section.

In my previous blog, I talked about how to build a Kubernetes cluster in 3 minutes using the kubectl tool which comes by default with this release. But what if you are a die-hard fan of the Docker Swarm CLI like me? Here is the good news – you can now use the Swarm CLI to bring up a Kubernetes cluster. In this post, I will show you how the Swarm CLI can be used to bring up a Kubernetes cluster in just 2 minutes.

 

Pre-requisites:

  • Docker for Mac 18.01.0 CE Edge Release
  • Enable Kubernetes under Preferences > Kubernetes tab
  • Select the checkbox under Show system containers

A Quick 2-Minute ASCIINEMA Video:

Here is a 2-minute video which shows how to get started, from zero to an NGINX web server setup. It starts with 0 pods, 0 external services and 0 deployments, in Kubernetes terminology. In this video, we use the familiar docker stack CLI to bring up a K8s cluster and then clean it all up in seconds.
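
As a rough sketch of the flow in the video (assuming a Compose file named docker-compose.yml and a stack named webapp, as used in the demos below):

kubectl get pods,deployments,services          # everything starts out empty
DOCKER_ORCHESTRATOR=kubernetes docker stack deploy --compose-file docker-compose.yml webapp
kubectl get pods                               # the NGINX pods appear
docker stack rm webapp                         # clean up in seconds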

I hope it was quick but informative. I am planning to bring more interesting content on Kubernetes for Docker Swarm users. Till then, feel free to refer to the links below:

Getting Started with Kubernetes Concepts & Architecture

Building Kubernetes Dashboard on Docker for Mac in 1 min

Demystifying Kubernetes Namespace

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

 

 


When Kubernetes Meets Docker Swarm for the First Time under the Docker for Mac 17.12 Release

Docker for Mac 17.12 GA is the first release which includes both orchestrators – Docker Swarm & Kubernetes – under the same Docker platform. As of 1/7/2018, experimental Kubernetes has been released under the Edge release (still not available under the D4M Stable release). Experimental Kubernetes is still not available for the Docker for Windows & Linux platforms. It is slated to be available for Docker for Windows next month (mid-February) and then for Linux by March or April.

Now you might ask why Docker Inc. is making this announcement. What is the fun of having 2 orchestrators under the same roof? To answer this, let me step back a little and look at how the Docker platform is layered:

~ Source – Docker Inc.

 

The Docker platform is like a stack with various layers. The base layer is called containerd. containerd is an industry-standard core container runtime with an emphasis on simplicity, robustness and portability. Based on the Docker Engine's core container runtime, it is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc. containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users. It includes a daemon exposing a gRPC API over a local UNIX socket. The API is a low-level one designed for higher layers to wrap and extend. It also includes a barebones CLI (ctr) designed specifically for development and debugging purposes. It uses runC to run containers according to the OCI specification. The code can be found on GitHub, along with the contribution guidelines. Let us accept the fact that over the last few years there has been a lot of iteration around this layer, but Docker Inc. has now stabilised it into a robust, popular and widely accepted container runtime.
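
To give a feel for how low-level ctr is compared to the Docker CLI, here is a minimal sketch using containerd 1.0 syntax (the image reference and the task ID "demo" are just examples):

ctr images pull docker.io/library/alpine:latest
ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"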

On top of containerd there is an orchestration layer, rightly called Docker Swarm. Docker Swarm ties all of your individual machines, each running the container runtime, together. It allows you to deploy an application not onto a single machine at a time but into a whole system, thereby making your application distributed.

To take advantage of these layers, as a developer you need tools & environments which can build & package your application, hence Docker Inc. provides Community Edition products like Docker for Mac, Docker for Windows etc. If you are considering moving your application to production, Docker Enterprise Edition is the right choice.

If the stack looks really impressive, why again the change in architecture?

The reason is – Not everybody uses Swarm.

~ Source – Docker Inc.

Before Swarm & Kubernetes integration – if you are a developer and you are using Docker, the workflow looks something like the one shown below. A developer typically uses Docker for Mac or Docker for Windows. Using the familiar docker build and docker-compose build tooling, you build your environment and ensure that it gets deployed across a single-node cluster, or use docker stack deploy to deploy it across multiple cluster nodes.
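
As a sketch, that inner loop looks something like the below (image and stack names are placeholders):

docker build -t myapp .                            # build the image
docker-compose up -d                               # run it locally on a single node
docker stack deploy -c docker-compose.yml myapp    # deploy it across the cluster nodes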

~ Source – Docker Inc.

 

If your production runs on Swarm, then you can test it locally on Swarm, as it is already built into the Docker platform. But if your production environment runs Kubernetes, then surely there is a lot of work to be done, like translating Compose files etc. using 3rd-party open source tools and negotiating with their offerings. Though it is possible today, it is still not as smooth as the Swarm Mode CLI.

With the newer Docker platform, you can seamlessly use both Swarm and Kubernetes. Interestingly, you use the same familiar tools like docker stack ls, docker stack deploy, docker ps and docker stack ps to display both Swarm and Kubernetes containers. Isn’t it cool? You don’t need to learn new tools to play around with a Kubernetes cluster.

~ Source – Docker Inc.

 

The new Docker platform includes both Kubernetes and Docker Swarm side by side and at the same level, as shown below. Please note that it is a real Kubernetes sitting next to Docker Swarm and NOT A FORK OR WRAPPER.

~ Source – Docker Inc.

Still not convinced why this announcement?

 

~ Source – Docker Inc.

How does the Swarm CLI build a Kubernetes cluster side by side?

Docker analyses the Compose input file format and converts it into pods, along with creating replica sets as per the instruction set. With the newer Docker for Mac 17.12 release, a new stack command has been added as a first-class citizen to the Kubernetes CLI.

 

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get stacks -o wide
NAME      AGE
webapp    1h

 

 

Important Points –

 

  • Future releases of the Docker platform will include both orchestration options – Kubernetes and Swarm
  • The Swarm CLI will be used for cluster management, while for orchestration you have a choice of Kubernetes & Swarm
  • The full Kubernetes API is exposed in the stack, hence support for the overall Kubernetes ecosystem is possible
  • docker stack deploy will be able to target either Swarm or Kubernetes
  • Kubernetes is recommended for the production environment
  • Running both Swarm & Kubernetes together is not recommended for the production environment
  • AND by now, you must be convinced – “SWARM MODE CLI is NOT GOING ANYWHERE”

Let us test drive the latest Docker for Mac 17.12 beta release and see how the Swarm CLI can be used to bring up both Swarm and Kubernetes clusters seamlessly.

  • Ensure that you have the Docker for Mac 17.12 Edge release running on your Mac system. If you still don’t see the 17.12-kube_beta client version, I suggest you go through my last blog post.

A First Look at Kubernetes Integrated Docker For Mac Platform

 

 

Please note that Kubernetes/kubectl comes by default with the Docker for Mac 17.12 beta release. YOU DON’T NEED TO INSTALL KUBERNETES. A single-node cluster is already set up for you by default.

As we have Kubernetes & Swarm orchestration already present, let us head over and build NGINX services as a demonstration on this single-node cluster.

Writing a Docker Compose File for NGINX containers

Let us write a Docker Compose file for the nginx image and deploy 3 containers of that image. This is how my docker-compose.yml looks:
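
A minimal sketch of such a file, consistent with the service name, replica count and published ports that appear in the outputs below:

version: "3"

services:
  nginx:
    image: nginx
    ports:
      - "82:80"
      - "444:443"
    deploy:
      replicas: 3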

 

Deploying Application Stack using docker stack deploy 

Ajeets-Air:mynginx ajeetraina$ DOCKER_ORCHESTRATOR=kubernetes docker stack deploy --compose-file docker-compose.yml webapp
Stack webapp was created
Waiting for the stack to be stable and running…
-- Service nginx has one container running
Stack webapp is stable and running

 

Verifying the NGINX replica sets through the below command:
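
Assuming standard kubectl usage, the replica set and its pods can be listed with:

kubectl get replicasets
kubectl get pods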

 

As shown above, there are 3 replicas of the same NGINX image running as containers.

Verify the cluster using Kubectl CLI displaying the stack information:

Ajeets-MacBook-Air:mynginx ajeetraina$ kubectl get stack -o wide
NAME      AGE
webapp    8h

As you see, kubectl and docker stack deploy display the same cluster information.

Verifying the cluster using kubectl CLI displaying YAML file:

You can verify that Docker analyses the docker-compose.yaml input file format and  convert it to pods along with creating replicas set as per the instruction set which can be verified using the below YAML output format.
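
Assuming the stack resource supports the usual kubectl output flags, the YAML form can be dumped with:

kubectl get stack webapp -o yaml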

 

 

We can use the same familiar docker stack CLI to verify the cluster information:

 

Managing Docker Stack

Ajeets-MacBook-Air:mynginx ajeetraina$ docker stack services webapp
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
20e31598-e4c        nginx               replicated          3/3                 nginx               *:82->80/tcp,*:444->443/tcp

 

It’s time to verify if the NGINX webpage comes up well:
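
A quick check from the terminal, assuming the published port 82 shown above:

curl -I http://localhost:82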

 

Hence, we saw that the NGINX service is running on Kubernetes, side by side with Swarm.

Cleaning up

Run the below Swarm CLI commands to clean up the NGINX service and confirm the pods are gone:

docker stack ls
docker stack rm webapp
kubectl get pods

Output:

Want to see this in action?

https://asciinema.org/a/8lBZqBI3PWenBj6mSPzUd6i9Y

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Top 10 Reasons why LinuxKit is better than a traditional OS distribution

 

“LinuxKit is NOT designed with an intention to replace any traditional OS like Alpine, Ubuntu, Red Hat etc. It is an open-source toolbox for building fine-tuned Linux-based operating systems that run in containers and hence are lean, portable & secure out of the box.”

~ Collabnix

 

How is LinuxKit different from other available GNU operating systems? How is it different from HashiCorp Packer & cloud instances? Why should I consider LinuxKit? I came across these queries in various community forums and discussions, and thought to come up with a write-up which talks about the factors that differentiate LinuxKit from other available operating systems, tools & utilities.

In case you’re new, LinuxKit is a toolkit for building minimal Linux distributions that are lean, secure and portable. LinuxKit is built with the intention of producing a distribution that has just enough Linux to accomplish a task, and doing it in a way that is more secure. It uses Moby to build the distribution images and the linuxkit tool to run them on local hypervisors, in the cloud and on bare metal environments.

In this blog post, I will go through 10 good reasons why LinuxKit is better than other available GNU / open source distributions:

1. Smaller in size (< 200MB) compared to the GB-scale size of a traditional OS

The idea behind LinuxKit is that you start with a minimal Linux kernel — the base distro is only 35MB — and add literally only what you need. Once you have that, you can build your application on it, and run it wherever you need to. 

~ source

LinuxKit is not a full host operating system. It primarily has two jobs to accomplish: first, to run containerd containers, and second, to be secure. LinuxKit provides the ability to build an extremely small distribution (~50MB) stripped of everything but what is required to run containers. The init system and all other userland system components (e.g. dhcp, sshd) are containers themselves and as such can be swapped out, or others plugged in. The whole OS is immutable (unless data volumes are configured), and will run anywhere one finds silicon or a hypervisor: Windows, macOS, IoT devices, the cloud etc. The system does not contain extraneous packages or drivers by default. Because LinuxKit is customizable, it is up to individual operators to include any additional bits they may require.

Let us try building a LinuxKit ISO out of the minimal.yml file (shown below), which is also available under the LinuxKit repository, and verify its size.
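
A sketch of minimal.yml along the lines of the examples shipped in the LinuxKit repository (the image tags are illustrative placeholders – the real file pins every image to a specific tag or digest):

kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0 console=ttyS0 console=ttyAMA0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty
    image: linuxkit/getty:<tag>
    env:
      - INSECURE=true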

git clone https://github.com/linuxkit/linuxkit
cd linuxkit && make
cp bin/linuxkit /usr/local/bin/

Building LinuxKit ISO

Ajeets-MacBook-Air:examples ajeetraina$  linuxkit build -format iso-bios -name kitiso minimal.yml

This will build a minimal LinuxKit ISO.

Let us verify its size as shown below:

Ajeets-MacBook-Air:examples ajeetraina$ du -sh * | grep linuxkit.iso 
133M linuxkit.iso
Ajeets-MacBook-Air:examples ajeetraina$ file linuxkit.iso 
linuxkit.iso: DOS/MBR boot sector; partition 1 : ID=0x17, active, start-CHS (0x0,0,1), end-CHS (0x84,63,32), startsector 0, 272384 sectors
Ajeets-MacBook-Air:examples ajeetraina$

 

A traditional OS, on the other hand, is a complete operating system and includes hundreds or thousands of application programs.

2. Minimal Provisioning/Boot Time (1.5 – 3 minutes)

LinuxKit takes hardly 1.5 to 3 minutes to get up and running on a local hypervisor, in the cloud or on a bare metal system. This is way better than the 3 to 5 minutes of provisioning time of a traditional OS distribution. LinuxKit is an entirely immutable system, coming in at around 50MB with nothing extraneous to what is needed to run containers. The root filesystem is read-only, making it stateless and tamper-proof. LinuxKit runs almost everything – onboot processes and continuous services – in a container, which makes the boot time quite fast. Even the init phase – where the OS image is booted – is configured by copying files from OCI images.

While booting up the minimal LinuxKit ISO, it took just 0.034515 sec for containerd to boot up (shown below).

 

3. Build-time Package Updates (LinuxKit) vs Run-time Package Updates (OS)

LinuxKit uses the Moby tool, which assembles a set of containerized components into an image. It uses a YAML configuration which specifies the components used to build up an image. All components are downloaded at build time to create an image. The image is self-contained and immutable, so it can be tested reliably for continuous delivery. Most importantly, the build itself takes a matter of seconds and is eminently reproducible, making it an ideal candidate to pass through a CI system. This is definitely an advantage over a traditional GNU operating system.

4. Built-in Security (LinuxKit) vs Base Security (Traditional OS)

LinuxKit allows users to create very secure Linux subsystems because it is designed around containers. All of the processes, including system daemons, run in containers, enabling users to assemble a Linux subsystem with only the needed services. As a result, systems created with LinuxKit have a smaller attack surface than general purpose systems.

LinuxKit is architected to be secure by default. LinuxKit’s build process leverages Alpine Linux’s hardened userspace tools such as Musl libc, and compiler options that include -fstack-protector and position-independent executable output. 

Most importantly, LinuxKit uses modern kernels, and updates frequently following new releases.  LinuxKit tracks new kernel releases very closely, and also follows best practice settings for the kernel configuration from the Kernel Self Protection Project and elsewhere.

The core system components included in the LinuxKit userspace are written in type-safe languages, such as Rust, Go and OCaml, and run with maximum privilege separation and isolation. LinuxKit’s build process heavily leverages Docker images for packaging. Of note, all intermediate build images are referenced by digest to ensure reproducibility across LinuxKit builds. Tags are mutable, and thus subject to override (intentionally or maliciously) – referencing by digest mitigates classes of registry-poisoning attacks in LinuxKit’s buildchain. Certain images, such as the kernel image, are usually signed by LinuxKit maintainers using Docker Content Trust, which guarantees authenticity, integrity, and freshness of the image.

If you compare it with the traditional OS, many kernel bugs usually lurk in the codebase for years. Therefore, it is imperative to not only patch the kernel to fix individual vulnerabilities but also benefit from the upstream security measures designed to prevent classes of kernel bugs.

5. Container & Cloud Native at the same time

It is important to note that LinuxKit is built with containers, for running containers. LinuxKit today is supported on local hypervisors, in the cloud and on bare metal systems. The same minimal LinuxKit ISO image, which is container native, runs flawlessly on cloud platforms like Google Cloud Platform, Amazon Web Services & Microsoft Azure.

If we talk about the traditional OSes available, the distribution is usually customized by cloud vendors to fit into the cloud, and is quite different from the one available for bare metal systems – for example, an Amazon Machine Image (AMI) or preemptible Google Cloud instances.

6. Batteries included but removable/swappable

LinuxKit is built on the philosophy of “batteries included but swappable”. Everything is replaceable and customizable under LinuxKit. That’s one of the unique features of LinuxKit. The YAML format specifies the components used to build an ISO image. This is made possible via the moby tool, which assembles a set of containerized components into an image. Let us look at the YAML files which show how this philosophy really works:

  • Minimal YAML file to build LinuxKit OS
  • YAML file which builds LinuxKit OS with SSH Enabled

If you compare the SSH-enabled YAML file with the minimal YAML, just an sshd service container has been added, and hence a new LinuxKit ISO can be booted up. You can find various other YAML files under this link.
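
A sketch of that delta, following the sshd example in the LinuxKit repository (tag and key path are illustrative), is simply one extra service plus the public key baked in as a file:

services:
  - name: sshd
    image: linuxkit/sshd:<tag>
files:
  - path: root/.ssh/authorized_keys
    source: ~/.ssh/id_rsa.pub
    mode: "0600"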

7. Immutable in Production

Lots of companies – small or very large – have used immutability as a core concept of their infrastructure. Immutable infrastructure basically consists of immutable components which are replaced for every deployment, rather than being updated. These components are usually started from a common image that is built once per deployment and can be tested and validated. The common image can be built through automation, but doesn’t have to be. It is important to note that immutability is independent of any tool or workflow for building the images. Its best use case is in a cloud or virtualized environment.

Why is it important?

Immutable components as part of your infrastructure are a way to reduce inconsistency in your infrastructure and improve trust in your deployment process.

Toolkits like LinuxKit have made building immutable components very easy. Together with existing cloud infrastructure, it is a powerful concept to help you build better and safer infrastructure.

8. Designed to be managed by external tooling

LinuxKit today can be managed flawlessly with popular tooling like InfraKit, Terraform or CloudFormation. It uses external tooling such as InfraKit or CloudFormation templates to manage the update process externally, including doing rolling cluster upgrades to make sure distributed applications stay up and responsive.

Updates may preserve the state disk used by applications if needed, either on the same physical node, or by reattaching a virtual cloud volume to a new node.

Soon after Dockercon 2017 Austin, I wrote a blog post on how Infrakit & LinuxKit work together for building immutable infrastructure.

Why Infrakit & LinuxKit are better together for Building Immutable Infrastructure?

If you want to know how LinuxKit & Terraform can work together, you should look at this link too.

 

9. Enough to bootstrap distributed applications

LinuxKit is designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes. It is obvious that in the production environment, most users will use a cluster of instances, usually running distributed applications such as etcd, Docker, Kubernetes or distributed databases. LinuxKit provides examples of how to run these effectively, largely using tooling like InfraKit, although other machine orchestration systems can equally well be used, for example Terraform or VMware.

LinuxKit is gaining momentum as a toolkit for building custom minimal, immutable Linux distributions. Integration of InfraKit with LinuxKit helps users to build and deploy custom OS images to a variety of targets – from a single VM instance on the Mac (via xhyve / HyperKit, no VirtualBox) to a cluster of them, as well as booting a remote ARM host on Packet.net from the local laptop via an ngrok tunnel.

 

10. Eliminates cloud-provider-specific base image variance

If we talk about the cloud, one serious challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision n servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required. LinuxKit solves this base software configuration issue by allowing users to build a stripped-down version of a traditional OS based on their own requirements.

As said earlier, LinuxKit is not here to replace Alpine OS. LinuxKit uses the minimalist Alpine Linux distribution by default as the foundation of its official container images, and will continue to do so in future. Alpine is seen as an ideal all-purpose generic base for running heavy-lifting software like Redis and Nginx within containers. You can also switch Alpine for Debian/Ubuntu/CentOS, if you wish.

Interested in learning more about LinuxKit? Head over to this series of articles.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


A First Look at Kubernetes Integrated Docker For Mac Platform

Docker support for Kubernetes is now in private beta. As a Docker Captain, I was able to be part of the first group to get my hands on the update and can show you what to expect. Kubernetes is ONLY available to those users who are part of the private beta for Docker for Mac 17.12. To access beta builds, you must be signed in within Docker for Mac using your Docker ID. Please remember that this won’t be available to those users who missed registering for the Docker Beta Program.

[Updated: 1/7/2018] – Docker for Mac 17.12 GA is available and is the first release which includes both orchestrators – Docker Swarm & Kubernetes – under the same Docker platform. As of 1/7/2018, experimental Kubernetes has been released under the Edge release (still not available under the D4M Stable release). Experimental Kubernetes is still not available for the Docker for Windows & Linux platforms. It is slated to be available for Docker for Windows next month (mid-February) and then for Linux by March or April.

 

 

 

 

What’s new in Docker 17.12 CE Final Release?

The fresh new Docker 17.12 Community Edition final release includes a standalone Kubernetes server & client, plus Docker CLI integration. The Kubernetes server runs locally within your Docker instance as a single-node cluster. It is specifically meant for development & testing purposes only.

Docker 17.12 CE includes the newer Docker Compose 1.18.0-rc2 release, along with the below new features & further improvements:

New Feature:

  •  VM disk size can be changed in settings. (See docker/for-mac#1037).

Bug fixes and minor changes

  • Avoid VM reboot when changing host proxy settings.
  • Don’t break HTTP traffic between containers by forwarding them via the external proxy (docker/for-mac#981)
  • File sharing settings are now stored in settings.json
  • The daemon restart button has been moved to Settings / Reset tab
  • Display various component versions in the About box
  • Better VM state handling & error messages in case of VM crashes

Important Information:

  • The beta features are being released in a controlled manner: not all users who signed up for the beta will be able to access the features right away. You must be signed in within Docker for Mac using your Docker ID to access the beta builds.

  • The Kubernetes features are only accessible on macOS for now; Windows will follow at a later date.

  • Because this feature is still in beta, it can only be accessed using the latest Docker for Mac release, more precisely on the Edge channel. 

How to get this Beta Release?

You need to install the Docker 17.12 Edge release (it is NOT available under the Stable release yet). Once you install the latest beta release, all you need to do is log in with your Docker ID: select whale menu -> Sign in / Create Docker ID from the menu bar.

 

 

To enable Kubernetes support and install a standalone instance of Kubernetes running as a Docker container, select Enable Kubernetes and click the Apply and restart button.

 

Once Kubernetes support is enabled, you are able to deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server is not going to affect other workloads.

This new beta includes Docker CLI integration for Kubernetes too, along with a standalone Kubernetes server & client, and this can be verified as shown below.

 

 

By default, Kubernetes containers are hidden from commands like docker service ls, because managing them manually is not supported. To make them visible, select Show system containers (advanced) and click Apply and restart.

 

This means that you can now use the same old favourite docker ps command for displaying Kubernetes internal containers. Isn’t it cool?

 

How to get Kubectl working?

kubectl comes out of the box by default with this beta release, hence you don’t need to install it. kubectl is a command line interface for running commands against Kubernetes clusters. It has a lot of functional similarity with docker CLI commands like docker run, docker attach, docker logs and so on. kubectl controls the Kubernetes cluster manager.

The Docker for Mac Kubernetes integration provides the Kubernetes CLI command at /usr/local/bin/kubectl. By default, kubectl might not work as expected, as we still need to choose the correct context.

 

Follow the below steps to get started with kubectl:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config get-contexts

Let us pick up the docker-for-desktop context:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Now the kubectl should work fine and display a single node K8s cluster as shown below:
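
Assuming the context above is active, the node can be listed with:

kubectl get nodes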

The above output shows us that we were able to connect to our Kubernetes cluster and display the status of our master node. In case you are new to Kubernetes terminology, a Kubernetes Node is a physical or virtual machine used to host application containers.

Verifying the kubectl version:

Ajeets-MacBook-Air:~ ajeetraina$ sudo kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying the Kubernetes Containers
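
With Show system containers enabled, the Kubernetes system containers (their names are prefixed with k8s_) can be narrowed down with a name filter:

docker ps --filter "name=k8s"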

Displaying the Kubernetes Cluster Information

Ajeets-MacBook-Air:~ ajeetraina$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'

 

Test Drive WordPress Application on a single Node Kubernetes Cluster

With our Kubernetes cluster ready, let us now start deploying application containers. I have picked up my all time favourite WordPress application.

Let us test and deploy WordPress application on top of this single node Kubernetes cluster. We will follow the below steps:

  1. Creating a Persistent Volume
  2. Creating a Secret
  3. Deploying MySQL
  4. Deploying WordPress

Creating a Persistent Volume

We will first create two Persistent Volumes from the local-volumes.yaml file shown below:
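
A sketch of local-volumes.yaml, modelled on the upstream Kubernetes WordPress tutorial this walkthrough follows (the capacity and host paths are assumptions), defines two hostPath volumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-2
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-2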

Copy the code into a file called local-volumes.yaml and then run the below command:

kubectl create -f local-volumes.yaml

You can verify the details below:
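
Assuming standard kubectl usage, the freshly created volumes show up with:

kubectl get pv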

Creating a Secret

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create secret generic mysql-pass --from-literal=password=collab123
secret "mysql-pass" created

Verifying the secrets:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-fwflq   kubernetes.io/service-account-token   3         1h
mysql-pass            Opaque                                1         15s
Ajeets-MacBook-Air:~ ajeetraina$

Deploying MySQL

The below YAML describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password from the Secret.
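
A sketch of that manifest, modelled on the upstream WordPress tutorial (the image tag and API version are assumptions appropriate for the v1.8 cluster shown earlier):

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # the database password comes from the mysql-pass Secret created above
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            # the PersistentVolume is mounted at /var/lib/mysql
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim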

Create a file called "mysql-deployment.yaml" with the above content. Once you save the file, run the below command to bring up the MySQL container:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create -f mysql-deployment.yaml
service "wordpress-mysql" created
persistentvolumeclaim "mysql-pv-claim" created

Verifying the MySQL Pod:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get pods
NAME                               READY     STATUS              RESTARTS   AGE
wordpress-mysql-7b4ffb6fb4-gfk8n   0/1       ContainerCreating   0          34s

 

 

Deploying WordPress

The below YAML describes a single-instance WordPress Deployment and Service. It uses a PVC for persistent storage & a Secret for the password, as shown in the content. Did you notice the type: NodePort entry? This setting exposes WordPress to traffic from outside of the cluster.
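
Again a sketch modelled on the upstream tutorial (the image tag and API version are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: NodePort         # exposes WordPress to traffic from outside the cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim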

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create -f wordpress-deployment.yaml
service "wordpress" created
persistentvolumeclaim "wp-pv-claim" created
deployment "wordpress" created

Let us verify the pods using kubectl get pods command:

 

 

Verify that the Services are up and running by running the following command:
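
Assuming standard kubectl usage, that command is:

kubectl get services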

 

 

You can fetch overall status of the pods and running containers using the below command:
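
A broad view, assuming standard kubectl flags, can be obtained with:

kubectl get pods -o wide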

 

By now, you should be able to visit http://localhost and see WordPress page as shown below:

 

 

While you access the WordPress page, you can see logs under the WordPress container as shown below:
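
Assuming standard kubectl usage (the pod name is a placeholder – substitute the one reported by kubectl get pods):

kubectl logs -f wordpress-<pod-id>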

 

Hence, this is how you can get your WordPress page up and running flawlessly.

 

In my next blog post, I will deep dive into how Docker Swarm and Kubernetes are going to work together under the same cluster environment.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Running LinuxKit locally on Oracle VirtualBox Platform Made Easy

The LinuxKit GitHub repository has already crossed 1800 commits and 3600+ stars, and has been forked 420+ times since April 2017, when it was open sourced by Docker Inc. for the first time. LinuxKit today supports dozens of platforms which fall under the cloud, local hypervisor & bare metal categories. Recently, arm64 support was added, and I published a blog post which talks about building LinuxKit for the minimalist Raspberry Pi 3. Improved Kubernetes support has been another enhancement, and you can follow my blog to build a multi-node Kubernetes cluster using LinuxKit. In case you want to build a multi-node Kubernetes cluster using the newly added CRI-containerd & Moby, don’t miss out on this blog post.

 

 

In case you’re very new to LinuxKit, it is a toolkit for building secure, portable & lean operating systems for containers. It uses Moby tooling to build system images, and everything runs in a container. LinuxKit uses the linuxkit tool for building, pushing and running virtual machine images. The moby tool assembles a set of containerized components into an image; the simplest type of image is just a tar file of the contents. The YAML configuration specifies the components used to build up an image. All components are downloaded at build time to create an image. The image is self-contained and immutable, so it can be tested reliably for continuous delivery.

A Look at LinuxKit Architecture

At the base of LinuxKit there is a modern Linux kernel; the configuration specifies a kernel Docker image, containing a kernel and a filesystem tarball, e.g. containing modules. The minimal init is the base init process Docker image, which is unpacked as the base system, containing init, containerd and a few tools. It is basically built from pkg/init/. The onboot containers are the system containers, executed sequentially in order; they should terminate quickly when done. The services are the system services, which normally run for the whole time the system is up. The files section lists additional files to add to the image.

 

What’s New in LinuxKit?

Below is the list of new features introduced in LinuxKit recently –

 

 

Early this year, I wrote a blog post which talks about how to manually create a LinuxKit ISO image and then mount and run it under Oracle VirtualBox. The method was complicated, as it required converting the VMDK file into .VDI format first and then registering the VM using the VBoxManage CLI.

Test-Drive LinuxKit OS on Oracle VirtualBox running on macOS Sierra

Now, with the introduction of the linuxkit run vbox CLI, it is just a matter of 2-3 minutes to get it up and running on VirtualBox.

In this blog post, we will see how a LinuxKit OS can be built and run on Oracle VirtualBox in just 2 minutes.

Pre-requisites:

  • MacOS Sierra 
  • Docker for Mac installed on MacOS
  • Docker Up and Running
  • Oracle VirtualBox 

Clone the LinuxKit Repository:

git clone https://github.com/linuxkit/linuxkit

Building the LinuxKit Tool

cd linuxkit
make

Place the linuxkit binary on your executable PATH:

cp bin/linuxkit /usr/local/bin/

Building the ISO image for VirtualBox:

Before we go ahead and build the ISO for VirtualBox, let us look at the newly introduced command line option:

 

Now you can use the linuxkit build option to build the ISO image. Let us look into this sub-command:

 

Let’s run the below command to build the iso-bios format of docker.yml, which can be found under the linuxkit/examples directory in the LinuxKit repository.

linuxkit build -format iso-bios --name testbox docker.yml

This builds the ISO image as shown below:

 

Running the ISO for VirtualBox

Justin Cormack, a LinuxKit maintainer, did a great job introducing the new CLI option linuxkit run vbox, as shown below:

Run the below command to boot the LinuxKit OS on VirtualBox as a VM:

linuxkit run vbox --iso testbox.iso

This will create a VM called testbox under VirtualBox as shown below:

You can verify under VirtualBox Manager:

 

Open up Console to see LinuxKit running under this new VM:

 

So, you can access it either through the terminal or directly under the console, but NOT both at the same time.

Accessing Docker Service Container

To access the Docker service container, first list out the running service containers:

ctr tasks ls

This will list out the running service containers as shown below:

 

Let us enter the docker service container and verify the Docker release version:

ctr tasks exec -t --exec-id 502 docker sh

This drops us into a shell as shown. Run the docker version command to verify the currently available Docker release.

 

 

Please note that networking doesn’t get enabled by default for these service containers. You will need to manually enable the “Cable Connected” option under VirtualBox > Settings > Network > Advanced to get an IP address assigned to the network interface.

 

I have raised this issue with LinuxKit Team and you can track it here.

Let us go back to the terminal and try to pull a few Docker images as shown below:

 

Wow! So we have the Docker service container running inside a LinuxKit OS on top of the Oracle VirtualBox platform flawlessly.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

Interested to read more on LinuxKit? Check this out – 

Building a secure Docker Host VM on VMware ESXi using LinuxKit & Moby

Building a Secure VM based on LinuxKit on Microsoft Azure Platform

Building Docker For Mac 17.06 Community Edition using Moby & LinuxKit

Running LinuxKit on AWS Platform made easy

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Docker, Prometheus & Pushgateway for NVIDIA GPU Metrics & Monitoring

In my last blog post, I talked about how to get started with NVIDIA Docker & its interaction with NVIDIA GPU systems. I demonstrated the NVIDIA Deep Learning GPU Training System, a.k.a. DIGITS, by running it inside a Docker container. ICYMI – DIGITS is essentially a webapp for training deep learning models and is used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. The currently supported frameworks are Caffe, Torch, and TensorFlow. It simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment.

 

In a typical HPC environment where you run hundreds of NVIDIA GPU-equipped cluster nodes, it becomes important to monitor those systems to gain insight into the performance metrics, memory usage, temperature and utilization. Tools like Ganglia & Nagios are very popular due to their scalable & distributed monitoring architecture for high-performance computing systems such as clusters and grids. Ganglia leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. But with the advent of container technology, there is a need for modern monitoring tools and solutions which work well with Docker & microservices.

It’s all about the modern world of the Prometheus stack…

Prometheus is a 100% open-source service monitoring system and time series database written in Go. It is a full monitoring and trending system that includes built-in and active scraping, storing, querying, graphing, and alerting based on time series data. It has knowledge about what the world should look like (which endpoints should exist, what time series patterns mean trouble, etc.), and actively tries to find faults.

How is it different from Nagios?

Though both serve the purpose of monitoring, Prometheus wins this debate on the below major points –

  • Nagios is host-based. Each host can have one or more services, and each service has one check. There is no notion of labels or a query language. Prometheus, on the other hand, comes with its robust query language called “PromQL”. Prometheus provides a functional expression language that lets the user select and aggregate time series data in real time. The result of an expression can either be shown as a graph, viewed as tabular data in Prometheus’s expression browser, or consumed by external systems via the HTTP API.
  • Nagios is suitable for basic monitoring of small and/or static systems where blackbox probing is sufficient. But if you want to do whitebox monitoring, or have a dynamic or cloud-based environment, then Prometheus is a good choice.
  • Nagios is primarily just about alerting based on the exit codes of scripts. These are called “checks”. There is silencing of individual alerts, however no grouping, routing or deduplication.

Let’s talk about Prometheus Pushgateway..

Occasionally you will need to monitor components which cannot be scraped. They might live behind a firewall, or they might be too short-lived to expose data reliably via the pull model. The Prometheus Pushgateway allows you to push time series from these components to an intermediary job which Prometheus can scrape. Combined with Prometheus’s simple text-based exposition format, this makes it easy to instrument even shell scripts without a client library.

The Prometheus Pushgateway allows ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus. It is important to understand that the Pushgateway is explicitly not an aggregator or distributed counter but rather a metrics cache. It does not have statsd-like semantics. The metrics pushed are exactly the same as you would present for scraping in a permanently running program. For machine-level metrics, the textfile collector of the Node exporter is usually more appropriate. The Pushgateway is intended for service-level metrics. It is not an event store.

In this blog post, I will showcase how NVIDIA Docker, Prometheus & Pushgateway come together to push NVIDIA GPU metrics to the Prometheus stack.

Infrastructure Setup:

  • Docker Version: 17.06
  • OS: Ubuntu 16.04 LTS
  • Environment : Managed Server Instance with GPU
  • GPU: GeForce GTX 1080 Graphics card

Cloning the GITHUB Repository

Run the below command to clone the repository to your Ubuntu 16.04 system equipped with a GPU card:

git clone https://github.com/ajeetraina/nvidia-prometheus-stats

Script to bring up the Prometheus Stack (includes Grafana)

Change to the nvidia-prometheus-stats directory, set execute permission, and then run the ‘start_containers.sh’ script as shown below:

cd nvidia-prometheus-stats
sudo chmod +x start_containers.sh
sudo sh start_containers.sh

This script will bring up 3 containers in sequence – Pushgateway, Prometheus & Grafana

Executing GPU Metrics Script:

NVIDIA provides a python module for monitoring NVIDIA GPUs using the newly released Python bindings for NVML (NVIDIA Management Library). These bindings are under BSD license and allow simplified access to GPU metrics like temperature, memory usage, and utilization.

Next, under the same directory, you will find a python script called “test.py”.

Execute the script (after updating the IP address at line 124 of the script to match your host machine) as shown below:

sudo python test.py

That’s it. It is time to open up the Prometheus UI at http://<IP-address>:9090

Just type gpu under the Expression section and you will see the list of GPU metrics automatically turned up as shown below:

Accessing the targets

Go to Status > Targets to see which targets are accessible. The state should show UP.

Click on the Pushgateway endpoint to access the GPU metrics in detail as shown:

Accessing Grafana

You can access Grafana through the below link:

http://<IP-address>:3000

 

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Running NVIDIA Docker in the GPU-Accelerated Data Center

It’s time to look at GPUs inside Docker containers…

Docker is the leading container platform which provides both hardware and software encapsulation by allowing multiple containers to run on the same system at the same time, each with their own set of resources (CPU, memory, etc.) and their own dedicated set of dependencies (library versions, environment variables, etc.). Docker can now be used to containerize GPU-accelerated applications. In case you’re new to GPU-accelerated computing, it is basically the use of a graphics processing unit to accelerate high performance computing workloads and applications. This means you can easily containerize and isolate an accelerated application without any modifications and deploy it on any supported GPU-enabled infrastructure.

Docker does not natively support NVIDIA GPUs within containers. Though there are workarounds available, like fully installing the NVIDIA drivers inside the container and mapping in the character devices corresponding to the NVIDIA GPUs (e.g. /dev/nvidia0) on launch, this is still not recommended.

Here comes the nvidia-docker plugin to the rescue…

nvidia-docker is an open source project hosted on GitHub. It provides driver-agnostic CUDA images & a docker command line wrapper that mounts the user mode components of the driver and the GPUs (character devices) into the container at launch. With this enablement, the NVIDIA Docker plugin enables deployment of GPU-accelerated applications across any Linux GPU server with NVIDIA Docker support. What does this mean? Using Docker, we can develop and prototype GPU applications on a workstation, and then ship and run those applications anywhere that supports GPU containers. Earlier this year, the nvidia-docker 1.0.1 release announced support for both Docker 17.03 Community & Enterprise Edition.

Some of the key notable benefits include –

  • Legacy accelerated compute apps can be containerized and deployed on newer systems, on premise, or in the cloud.
  • Ease of Deployment
  • Isolation of Resource
  • Bare Metal Performance
  • Facilitate Collaboration
  • Run and access heterogeneous CUDA toolkit environments (sharing the host driver)
  • Specific GPU resources can be allocated to container for better isolation and performance.
  • You can easily share, collaborate, and test applications across different environments.
  • Portable and reproducible builds

~ source: NVIDIA

 

Let’s talk about libnvidia-container a bit..

libnvidia-container is the NVIDIA container runtime library. The repository provides a library and a simple CLI utility to automatically configure GNU/Linux containers leveraging NVIDIA hardware. The implementation relies on kernel primitives and is designed to be agnostic of the container runtime. Basic features include –

  • Integrates with the container internals
  • Agnostic of the container runtime
  • Drop-in GPU support for runtime developers
  • Better stability, follows driver releases
  • Brings features seamlessly (Graphics, Display, Exclusive mode, VM, etc.)

~ source: NVIDIA

In this blog post, I will show you how to get started with nvidia-docker to interact with NVIDIA GPU systems, and then look at a few interesting applications which can be built for the GPU-accelerated data center. Let us get started –

Infrastructure Setup:

Docker Version: 17.06

OS: Ubuntu 16.04 LTS

Environment: Managed Server Instance with GPU

GPU: GeForce GTX 1080 Graphics card

 

  • Verify that GPU card is equipped in your hardware:

 

  • Install nvidia-docker & nvidia-docker-plugin under Ubuntu 16.04 using wget as shown below:
sudo wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

Initializing nvidia-docker service:

ajit@Ubuntu-1604-xenial-64-minimal:~$ systemctl status nvidia-docker
nvidia-docker.service - NVIDIA Docker plugin
   Loaded: loaded (/lib/systemd/system/nvidia-docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2017-08-20 10:52:43 CEST; 6 days ago
     Docs: https://github.com/NVIDIA/nvidia-docker/wiki
 Main PID: 19921 (nvidia-docker-p)
    Tasks: 13
   Memory: 12.3M
      CPU: 5.046s
   CGroup: /system.slice/nvidia-docker.service
           └─19921 /usr/bin/nvidia-docker-plugin -s /var/lib/nvidia-docker

Whenever nvidia-docker is installed, it creates a Docker volume and mounts the devices into a docker container automatically.

Did you know?

It is possible to avoid relying on the nvidia-docker wrapper to launch GPU containers using ONLY docker, and that can be done by using the REST API directly as shown below:

docker run -ti --rm `curl -s http://localhost:3476/docker/cli` nvidia/cuda nvidia-smi

NVIDIA’s System Management Interface

If you want to know the status of your NVIDIA GPU, then nvidia-smi is the handy command, which can be run using the nvidia/cuda container. This is generally useful when you’re having trouble getting your NVIDIA GPUs to run GPGPU code.

nvidia-docker run --rm nvidia/cuda nvidia-smi

 

Listing all NVIDIA Devices:

nvidia-docker run --rm nvidia/cuda nvidia-smi -L
GPU 0: GeForce GTX 1080 (UUID: GPU-70ecf884-c4fb-159b-a67e-26b4ce96681d)

Listing all available data on the particular GPU:

nvidia-docker run --rm nvidia/cuda nvidia-smi -i 0 -q

Listing details for each GPU:

nvidia-docker run --rm nvidia/cuda nvidia-smi --query-gpu=index,name,uuid,serial --format=csv
index, name, uuid, serial
0, GeForce GTX 1080, GPU-70ecf884-c4fb-159b-a67e-26b4ce96681d, [Not Supported]

Listing the available clock speeds:

nvidia-docker run --rm nvidia/cuda nvidia-smi -q -d SUPPORTED_CLOCKS

Building & Testing NVIDIA-Docker Images

If you look at the samples/ folder under the nvidia-docker repository, there are a couple of images that can be used to quickly test nvidia-docker on your machine. Unfortunately, the samples are not available on Docker Hub, hence you will need to build the images locally. I have built a few of them which I am going to showcase:

cd /nvidia-docker/samples/ubuntu-16.04/deviceQuery/
docker build -t ajeetraina/nvidia-devicequery .

 

Running the DeviceQuery container

You can leverage ajeetraina/nvidia-devicequery container directly as shown below:

 

 

Listing the current GPU clock speed, default clock speed & maximum possible clock speed:

nvidia-docker run --rm nvidia/cuda nvidia-smi -q -d CLOCK

Retrieving the System Topology:

The topology refers to how the PCI-Express devices (GPUs, InfiniBand HCAs, storage controllers, etc.) connect to each other and to the system’s CPUs. This can be retrieved as follows:
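
Assuming standard nvidia-smi usage, the topology matrix can be printed with:

nvidia-docker run --rm nvidia/cuda nvidia-smi topo -m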

 

A Quick Look at NVIDIA Deep Learning..

The NVIDIA Deep Learning GPU Training System, a.k.a. DIGITS, is a webapp for training deep learning models. It puts the power of deep learning into the hands of engineers & data scientists. It can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. The currently supported frameworks are Caffe, Torch, and TensorFlow.

DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment. DIGITS is completely interactive so that data scientists can focus on designing and training networks rather than programming and debugging.

To test-drive DIGITS, you can get it up and running in a single Docker container:

ajit@Ubuntu-1604-xenial-64-minimal:~$ NV_GPU=0 nvidia-docker run --name digits -d -p 5000:5000 nvidia/digits
f0e5d1f78b810037a039b34420ee4848e5809effc1c73752eb5d0ced89b1835f

In the above command, NV_GPU is a method of assigning GPU resources to a container, which is critical for leveraging Docker in a multi-GPU system. This passes GPU ID 0 from the host system to the container as a resource. Note that if you passed GPU IDs 2,3 for example, the container would still see the GPUs as IDs 0,1 inside the container, with the PCI IDs of 2,3 from the host system. As I have a single GPU card, I just passed it as NV_GPU=0.

You can open up a web browser and verify that it’s running at the below address:

w3m http://<dockerhostip>:5000

The below is the snippet from my w3m text browser:

 

How about Docker Compose? Is it supported?

Yes, of course.  

Let us see how Docker compose works for nvidia-docker. 

  • First we need to figure out the nvidia driver version 
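
One way to do that, assuming standard nvidia-smi query fields, is:

nvidia-docker run --rm nvidia/cuda nvidia-smi --query-gpu=driver_version --format=csv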

 

As shown above, the nvidia driver version displays 375.66.

  • Create a docker volume that uses the nvidia-docker plugin.
docker volume create --name=nvidia_driver_375.66 -d nvidia-docker
nvidia_driver_375.66

Verify it with the below command:

sudo docker volume ls
DRIVER          VOLUME NAME
local           15dd59ba1017ca5b822473fb3ed8a341b4e29e59f3725565c00810ddd5619878
nvidia-docker   nvidia_driver_375.66

Now let us look at the docker-compose YAML file shown below:
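
A sketch consistent with the description below (a single GPU device /dev/nvidia0 plus the driver volume created above; the service name is an assumption):

version: '2'

services:
  nvidia-smi-test:
    image: nvidia/cuda
    command: nvidia-smi
    devices:
      # pass through the GPU and its control devices
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-uvm
    volumes:
      # driver volume created via the nvidia-docker plugin in the last step
      - nvidia_driver_375.66:/usr/local/nvidia:ro

volumes:
  nvidia_driver_375.66:
    external: true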

If you have ever worked with docker-compose, you can easily understand what each line specifies. I specified /dev/nvidia0 as I had a single GPU card, and captured the correct volume driver name which we specified in the last step.

Just initiate the docker-compose as shown below:

docker-compose up

This will start a container which runs nvidia-smi and then exits immediately. To keep it running, one can add tty: true inside the Compose file.

Let us see another interesting topic…TensorFlow

I have a sample TensorFlow-based docker-compose file as shown below:
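
Again as a sketch under the same assumptions (GPU devices and driver volume as above; the tensorflow/tensorflow:latest-gpu image and published Jupyter port are illustrative):

version: '2'

services:
  tensorflow:
    image: tensorflow/tensorflow:latest-gpu
    ports:
      - "8888:8888"   # Jupyter notebook
    devices:
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-uvm
    volumes:
      - nvidia_driver_375.66:/usr/local/nvidia:ro

volumes:
  nvidia_driver_375.66:
    external: true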

Verify if the container is up and running –

 

In the next blog post, I will showcase how to push the GPU metrics to prometheus & cAdvisor.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel

 


Hybrid Docker Swarm Mode Cluster with Multi-OS Application Deployment

Here comes the most awaited feature of 2017 – building a Docker Swarm cluster that is all Windows, or a hybrid cluster of Linux and Windows machines, or all Linux systems. With the inclusion of the recent Docker 17.03 overlay networking feature for Windows Server 2016, it’s now possible to create swarm clusters that include both Windows and Linux environments across both public and private cloud infrastructure.

Today most enterprises manage a diverse set of applications or workloads that might include both traditional applications & microservices, built on either or both of the Linux and Windows platforms. Docker now provides a way to modernize all these different applications by packaging them in a standard format which does not require software development teams to change their code. Organizations can containerize traditional apps and microservices and deploy them in the same cluster, which includes both Linux & Windows environments.

Building Hybrid Docker Swarm Mode Cluster Environment

In this blog post, I will showcase how to build a hybrid Docker Swarm Mode cluster. I will be leveraging Google Cloud Platform, where I have 2 Windows Server 2016 instances & 1 Ubuntu 16.04 instance up and running.

Setting up Windows Server 2016 with Docker 17.03 Enterprise Edition

  • Bring up a Windows Server 2016 instance on Google Cloud Platform
  • Set up the Windows password using the below command and then proceed to download the RDP file
gcloud beta compute --project "psychic-cascade-175904" reset-windows-password "windows2016" --zone "asia-east1-a"
ip_address: 35.194.X.X
password: XXXXXXX
username: <yourusername>
  • Open up the RDP file downloaded in step 2 and connect to the instance remotely.
  • Just 3 commands and Docker should be up and running on Windows Server 2016:
Install-Module -Name DockerMsftProvider -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

 

This should restart the Windows instance and you should be able to see Docker up and running once it comes back.

One can easily verify that Docker 17.03.2 EE is up and running using the below command:
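For example, from PowerShell (the build suffix shown below is illustrative; yours may differ):

PS C:\> docker version --format '{{ .Server.Version }}'
17.03.2-ee-4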

 

Configuring Windows Server 2016 as Docker Swarm Manager Node:

Run the below command to make this node the Swarm manager node:

docker swarm init --listen-addr 10.140.0.2:2377 --advertise-addr 10.140.0.2

 

Listing the Swarm Mode cluster nodes:

As of now, there is only 1 manager node which gets listed.
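The output should look roughly like this (the node ID below is illustrative):

docker node ls
ID                            HOSTNAME      STATUS   AVAILABILITY   MANAGER STATUS
j9x3zmawrrzuuspmyhviompbx *   windows2016   Ready    Active         Leader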

Joining Windows Server 2016 as the first Worker Node

Install Docker 17.03 EE on the 2nd Windows Server 2016 instance and issue the below command to join it as a worker node:

docker swarm join --token SWMTKN-1-4ia5wbutzfoimx5xm7eujadpa6vuivksmgijk4dm56ppw5u3ib-6hvdrvee3vlnlg8oftnnj80dw 10.140.0.2:2377

 

Listing out the Swarm Mode cluster:

 

Adding Linux System to the Cluster

Log in to the Ubuntu 16.04 instance, install Docker 17.06 and then issue the below command to join it as the 3rd node in the existing cluster:

worker@ubuntu1604:~$ sudo docker swarm join --token SWMTKN-1-4ia5wbutzfoimx5xm7eujadpa6vuivksmgijk4dm56ppw5u3ib-6hvdrvee3vlnlg8oftnnj80dw 10.140.0.2:2377
This node joined a swarm as a worker.

Wow! This builds up our first hybrid Swarm Mode cluster which includes 2 Windows nodes & 1 Linux node.

A Quick way of verifying the OS type:
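One option is to query each node's platform via docker node inspect, using the node names from this setup:

docker node inspect windows2016 --format '{{ .Description.Platform.OS }}'
windows

docker node inspect ubuntu1604 --format '{{ .Description.Platform.OS }}'
linux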

 

Create an Overlay Network For Swarm Cluster

Before we start deploying the application, we need to create an overlay network which spans across the cluster nodes. Listing the networks on the Windows manager node shows the defaults:
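Something along these lines (network IDs below are illustrative):

docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5e9d4c3b2a1f        ingress             overlay             swarm
8c7b6a5d4e3f        nat                 nat                 local
1a2b3c4d5e6f        none                null                local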

As shown above, it displays 3 default networks – nat, none and ingress. Let us create a new overlay network 'collabnet' using the below command:

docker network create -d overlay collabnet

 

Creating our First Windows-based Service container

It's time to create our first service. I will pick the MSSQL container, which will use the 'collabnet' overlay network and should get deployed only on the Windows platform based on the constraint applied as shown below:

docker service create \
--network collabnet --endpoint-mode dnsrr \
--constraint 'node.platform.os == windows' \
--env ACCEPT_EULA=Y \
--env-file db-credentials.env \
--name db \
microsoft/mssql-server-windows

 

Ensure that you have db-credentials.env under the same directory with the below content:

sa_password=collabnix123
SA_PASSWORD=collabnix123
DB_CONNECTION_STRING=Server=db;Database=SignUp;User Id=sa;Password=collabnix123

Once you create the service, you can verify it is up and running with the below command:

docker service ps db

Scaling the DB service 

Let us scale the DB service and see if it still applies only to the Windows platform, as shown below:
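The scaling itself is a one-liner; running docker service ps db afterwards shows where the replicas landed:

docker service scale db=3
db scaled to 3

docker service ps db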

As shown above, there are now 3 instances of the same application up and running. MS SQL is running only on Windows systems, based on the constraint specified.

Creating Linux specific applications 

Let us create a web application with WordPress as the frontend and a MySQL container as the backend, with a constraint that it should get deployed only on Linux nodes.

docker service create \
--replicas 1 \
--name wordpressdb1 \
--network collabnet \
--constraint 'node.platform.os == linux' \
--env MYSQL_ROOT_PASSWORD=collab123 \
--env MYSQL_DATABASE=wordpress \
mysql:latest

docker service create \
--env WORDPRESS_DB_HOST=wordpressdb1 \
--env WORDPRESS_DB_PASSWORD=collab123 \
--network collabnet --constraint 'node.platform.os == linux' \
--replicas 4 \
--name wordpressapp \
--publish 80:80/tcp \
wordpress:latest

 

Hence, we saw that the MS SQL service runs on the Windows instances while the Linux-specific WordPress application is up and running on the Linux node. This proves that the newer Swarm Mode can intelligently orchestrate across mixed clusters of Windows & Linux worker nodes.

Adding Linux Worker Node and promoting to Manager
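Promoting the Ubuntu worker (named ubuntu1604 in this setup) is a single command issued from the existing manager:

docker node promote ubuntu1604
Node ubuntu1604 promoted to a manager in the swarm.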

You should be able to view the list of nodes from the Ubuntu manager node too:

In case you are looking for an application whose frontend runs on the Windows platform while the backend runs on Linux, here is a quick example.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


Walkthrough: Enabling IPv6 Functionality for Docker & Docker Compose

By default, Docker assigns IPv4 addresses to containers. Does Docker support the IPv6 protocol too? If yes, how complicated is it to get enabled? Can I use docker-compose to build microservices which use IPv6 addresses? What if I work for a company where our services run natively under an IPv6-only environment? How shall I build a Multi-Node Cluster setup using IPv6? Does Docker 17.06 Swarm Mode support IPv6?

I have been reading numerous queries and GitHub issues around IPv6 configuration breaking while upgrading the Docker version, issues related to IPv6 changes in host configuration, etc., and thought to share a few findings around the ongoing IPv6 effort in upcoming Docker releases.

Does Docker support IPv6 Protocol?

Yes. Support for IPv6 addresses has been there since the Docker Engine 1.5 release. As of Docker 17.06 (the latest stable release as of August 2017), the Docker daemon configures the container network for IPv4 only by default. You can enable IPv4/IPv6 dual-stack support by adding the below entries to the daemon.json file as shown below:

 

File: /etc/docker/daemon.json

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}

This is very similar to the old way of running the Docker daemon with the --ipv6 flag. Docker will set up the bridge docker0 with the IPv6 link-local address fe80::1.
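For reference, the flag-based equivalent looks like this:

dockerd --ipv6 --fixed-cidr-v6="2001:db8:1::/64"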

Why did we add the "fixed-cidr-v6": "2001:db8:1::/64" entry?

By default, containers that are created will only get a link-local IPv6 address. To assign globally routable IPv6 addresses to your containers you have to specify an IPv6 subnet to pick the addresses from. Setting the IPv6 subnet via the --fixed-cidr-v6 parameter when starting Docker daemon will help us achieve globally routable IPv6 address.

The subnet for Docker containers should at least have a size of /80. This way an IPv6 address can end with the container’s MAC address and you prevent NDP neighbor cache invalidation issues in the Docker layer.
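As a worked example (addresses here are illustrative): with the subnet 2001:db8:1::/80, a container whose MAC address is 02:42:ac:11:00:02 would receive the IPv6 address 2001:db8:1::242:ac11:2 – the last 48 bits of the address mirror the MAC address.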

With the --fixed-cidr-v6 parameter set, Docker will add a new route to the routing table and IPv6 routing will be enabled (you may prevent this by starting dockerd with --ip-forward=false).

Let us closely examine the changes which Docker Host undergoes before & after IPv6 Enablement:

A Typical Host Network Configuration – Before IPv6 Enablement 

 

As shown above, before IPv6 protocol is enabled, the docker0 bridge network shows IPv4 address only.

Let us enable IPv6 on the Host system. In case you find daemon.json already created under /etc/docker directory, don’t delete the old entries, rather just add these two below entries into the file as shown:

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}

 

 

Restarting the docker daemon to reflect the changes:

sudo systemctl restart docker

 

A Typical Host Network Configuration – After IPv6 Enablement 

Did you see anything new? Yes, the docker0 bridge now gets populated with IPv6 configuration (shown below):

docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:06:62:82:4d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:1::1/64 scope global tentative
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link tentative
       valid_lft forever preferred_lft forever

Not only this, the docker_gwbridge network interface too received IPv6 changes:

docker_gwbridge: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:bc:0b:2a:84 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 scope global docker_gwbridge
       valid_lft forever preferred_lft forever
    inet6 fe80::42:bcff:fe0b:2a84/64 scope link
       valid_lft forever preferred_lft forever

 

PING TEST – Verifying IPv6 Functionalities For Docker Host containers

Let us try bringing up two containers on the same host and see if they can ping each other using IPv6 addresses:

 

Setting up Ubuntu Container:

mymanager1==>sudo docker run -itd ajeetraina/ubuntu-iproute bash

Setting up CentOS Container:

mymanager1==>sudo docker run -itd ajeetraina/centos-iproute bash

[Please Note: If you are using default Ubuntu or CentOS Docker Image, you will be surprised to find that ip or ifconfig command doesn’t work. You might need to install iproute package for ip command to work & net-tools package for ifconfig to work. If you want to save time, use ajeetraina/ubuntu-iproute for Ubuntu OR ajeetraina/centos-iproute for CentOS directly.]

Now let us initiate the quick ping test:

 

In this example the Docker container is assigned a link-local address with the network suffix /64 (here: fe80::42:acff:fe11:3/64) and a globally routable IPv6 address (here: 2001:db8:1:0:0:242:ac11:3/64). The container will create connections to addresses outside of the 2001:db8:1::/64 network via the link-local gateway at fe80::1 on eth0.

mymanager1==>sudo docker exec -it 907 ping6 fe80::42:acff:fe11:2
PING fe80::42:acff:fe11:2(fe80::42:acff:fe11:2) 56 data bytes
64 bytes from fe80::42:acff:fe11:2%eth0: icmp_seq=1 ttl=64 time=0.153 ms
64 bytes from fe80::42:acff:fe11:2%eth0: icmp_seq=2 ttl=64 time=0.100 ms
^C
--- fe80::42:acff:fe11:2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.100/0.126/0.153/0.028 ms

So the two containers are able to reach out to each other using IPv6 addresses.

Does Docker Compose support IPv6 protocol?

The answer is Yes. Let us verify it using docker-compose version 1.15.0 and compose file format 2.1. I faced an issue while using the latest 3.3 file format: since Docker Swarm Mode doesn't support IPv6, it is not included under the 3.3 file format. Till then, let us try to bring up a container with an IPv6 address using the 2.1 file format:

docker-compose version
version 1.15.0, build e12f3b9
docker-py version: 2.4.2
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016

Let us first verify the networks available on the host machine:

 

File: docker-compose.yml

version: '2.1'
services:
  app:
    image: busybox
    command: ping www.collabnix.com
    networks:
      app_net:
        ipv6_address: 2001:3200:3200::20
networks:
  app_net:
    enable_ipv6: true
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 2001:3200:3200::/64
          gateway: 2001:3200:3200::1

 

The above docker-compose file will create a new IPv6-enabled network called testping_app_net under the subnet 2001:3200:3200::/64, and the container should get its IPv6 address automatically assigned.

Let us bring up the services using docker-compose up and see if they communicate over the IPv6 protocol:

 

Verifying the IPv6 address for each container:
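One way is via docker inspect; the container name below assumes the compose project directory was named testping (consistent with the testping_app_net network name):

docker inspect --format '{{ range .NetworkSettings.Networks }}{{ .GlobalIPv6Address }}{{ end }}' testping_app_1
2001:3200:3200::20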

As shown above, this new container gets the IPv6 address – 2001:3200:3200::20 – and hence the containers are able to reach each other flawlessly.

What's Next? Under the next blog post, I am going to showcase how IPv6 works across multiple host machines and will talk about the ongoing effort to bring IPv6 support to Swarm Mode.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


Building a Secure VM based on LinuxKit on Microsoft Azure Platform

The LinuxKit GitHub repository recently crossed 3,000 stars, has been forked around 300+ times and counts 60+ contributors. Just a 5-month-old project, it has already gained a lot of momentum across the Docker community. Built to enable the community to create secure, immutable, and minimal Linux distributions, LinuxKit is mature enough to support a number of cloud platforms like Azure, AWS, Google Cloud Platform, VMware, Packet.net and many more.

 

In my recent blogs, I showcased how to get a LinuxKit OS built for Google Cloud Platform, Amazon Web Services and VirtualBox. ICYMI, I recently published a few videos on LinuxKit too. Check them out.

 

Under this blog post, I will walk through how to build a secure and portable VM based on a LinuxKit image on the Microsoft Azure Platform.

Pre-requisite:

I will be leveraging macOS Sierra running Docker version 17.06.1-ce-rc1-mac20. I tested it on Ubuntu 16.04 LTS too, running on one of my Azure VMs, and it went fine. Prior knowledge of Microsoft Azure / Azure CLI 2.0 will be required to configure the Service Principal so that the VHD image gets uploaded to Azure smoothly.

 

Step-1: Pulling the latest LinuxKit repository

Pull the LinuxKit repository using the below command:

git clone https://github.com/linuxkit/linuxkit

 

Step-2: Build Moby & LinuxKit tool

cd linuxkit
make

 

Step-3: Copying the tools into the right PATH

cp -rf bin/moby /usr/local/bin/
cp -rf bin/linuxkit /usr/local/bin/

 

Step-4: Preparing Azure CLI tool

curl -L https://aka.ms/InstallAzureCli | bash

 

Step-5: Run the below command to restart your shell

exec -l $SHELL

 

Step-6: Building LinuxKit OS for Azure Platform

cd linuxkit/examples/
moby build -output vhd azure.yml

This will build a VHD image which now has to be pushed to the Azure Platform.

In order to push the VHD image to Azure, you need to authenticate LinuxKit with your Azure subscription, hence you will need to set up the following environment variables:

   export AZURE_SUBSCRIPTION_ID=43b263f8-XXXX--XXXX--XXXX--XXXXXXXX
   export AZURE_TENANT_ID=633df679-XXXX--XXXX--XXXX--XXXXXXXX
   export AZURE_CLIENT_ID=c7e4631a-XXXX--XXXX--XXXX--XXXXXXXX
   export AZURE_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXX=

Alternatively, the easy way to get all the above details is through the below command:

az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code XXXXXX to authenticate.

The above command lists out the subscription ID and tenant ID, which can then be exported as shown above.

Next, follow this link to create an Azure Active Directory application and service principal that can access resources. If you want to stick to the CLI rather than the UI, you can follow the below steps:
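A rough sketch using Azure CLI 2.0 is shown below; the service principal name is a placeholder, and the appId/password values it returns map to AZURE_CLIENT_ID/AZURE_CLIENT_SECRET respectively:

# Subscription ID and tenant ID
az account show --query "{subscription: id, tenant: tenantId}"

# Create a service principal that can access your subscription's resources
az ad sp create-for-rbac --name linuxkit-sp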

Step-7: Pushing the VHD image to Azure Platform

linuxkit run azure --resourceGroupName mylinuxkit --accountName mylinuxkitstore --location eastasia azure.vhd
Creating resource group in eastasia
Creating storage account in eastasia, resource group mylinuxkit

The command will end up with the below message:

 

 Completed: 100% [     68.00 MB] RemainingTime: 00h:00m:00s Throughput: 0 Mb/sec
Creating virtual network in resource group mylinuxkitresource, in eastasia
Creating subnet linuxkitsubnet468 in resource group mylinuxkitresource, within virtual network linuxkitvirtualnetwork702
Creating public IP Address in resource group mylinuxkitresource, with name publicip159
Started deployment of virtual machine linuxkitvm941 in resource group mylinuxkitresource
Creating virtual machine in resource group mylinuxkitresource, with name linuxkitvm941, in location eastasia
NOTE: Since you created a minimal VM without the Azure Linux Agent, the portal will notify you that the deployment failed. After around 50 seconds try connecting to the VM:
ssh -i path-to-key root@publicip159.eastasia.cloudapp.azure.com

 

By this time, you should be able to see the LinuxKit VM coming up under the Azure Platform as shown below:

Wait another 2-3 minutes before SSHing into this Azure instance, and it is all set, up and running smoothly.

Known Issues:

  • Since the image currently does not contain the Azure Linux Agent, the Azure Portal will report the creation as failed.
  • The main workaround is the way the VHD is uploaded, specifically by using a Docker container based on Azure VHD Utils. This is mainly because the tool manages fast and efficient uploads, leveraging parallelism.
  • There is work in progress to specify what ports to open on the VM (more specifically on a network security group)
  • The metadata package does not yet support the Azure metadata.

 

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

