Under the Hood: Demystifying Docker Enterprise Edition 2.0 Architecture

Estimated Reading Time: 8 minutes

The Only Kubernetes Solution for Multi-Linux, Multi-OS and Multi-Cloud Deployments is right here…

Docker Enterprise Edition (EE) 2.0 Final GA release is available. One of the most promising features announced with this release is Kubernetes integration as an optional orchestration solution, running side-by-side with Docker Swarm. In addition, this release includes Swarm Layer 7 routing improvements, registry image mirroring, Kubernetes integration with Docker Trusted Registry and Kubernetes integration with Docker EE access controls. With this new release, organizations will be able to deploy applications with either Swarm or fully-conformant Kubernetes while maintaining a consistent developer-to-IT workflow.

Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested and certified platform for apps running on enterprise Linux or Windows operating systems and cloud providers. It is tightly integrated with the underlying infrastructure to provide a native, easy-to-install experience and an optimized Docker environment.

Docker EE 2.0 GA consists of 3 major components which together enable a full software supply chain, from image creation, to secure image storage, to secure image deployment.

  • Universal Control Plane 3.0.0 (application and cluster management) – Deploys applications from images, by managing orchestrators, like Kubernetes and Swarm. UCP is designed for high availability (HA). You can join multiple UCP manager nodes to the cluster, and if one manager node fails, another takes its place automatically without impact to the cluster.
  • Docker Trusted Registry 2.5.0 – The production-grade image storage solution from Docker.
  • EE Engine 17.06.2 – The commercially supported Docker engine for creating images and running them in Docker containers.

Tell me more about Kubernetes Support..

Kubernetes in Docker EE fully supports all Docker EE features, including role-based access control, LDAP/AD integration, scanning, signing enforcement, and security policies.

Kubernetes features on Docker EE include:

  • Kubernetes orchestration full feature set
  • CNCF Certified Kubernetes conformance
  • Kubernetes app deployment by using web UI or CLI
  • Compose stack deployment for Swarm and Kubernetes apps
  • Role-based access control for Kubernetes workloads
  • Pod-based autoscaling, to increase and decrease pod count based on CPU usage
  • Blue-Green deployments, for load balancing to different app versions
  • Ingress Controllers with Kubernetes L7 routing
  • Interoperability between Swarm and Kubernetes workloads for networking and storage.

But wait.. What type of workload shall I run on K8s and what on Swarm?

One of the questions raised multiple times in the last couple of Docker meetups was – it is good that we now have Kubernetes and Swarm as multiple orchestrators under the single Enterprise Engine, but what type of workload shall I run on Kubernetes and what on Docker Swarm?

 

As shown above, even though there is a big overlap in terms of microservices, Docker Inc. recommends specific types of workloads for both Swarm and Kubernetes.
Kubernetes works really well for large-scale workloads. It is designed to address some of the difficulties that are inherent in managing large-scale containerized environments: for example, blue-green deployments on a cloud platform, highly resilient infrastructure with zero-downtime deployment capabilities, automatic rollback, scaling, and self-healing of containers (which consists of auto-placement, auto-restart, auto-replication and scaling of containers on the basis of CPU usage). Hence, it can be used in a variety of scenarios, from simple ones like running WordPress instances on Kubernetes, to scaling Jenkins machines, to secure deployments with thousands of nodes.

Docker Swarm, on the other hand, provides much of the functionality that Kubernetes provides today, with notable features like being easy and fast to install and configure, quick container deployment and scaling even in very large clusters, high availability through container replication and service redundancy, automated internal load balancing, process scheduling, automatically configured TLS authentication and container networking, service discovery, routing mesh and so on. Docker Swarm is best suited for the MTA program – modernizing your traditional legacy applications (for example, a .NET app running on Windows Server 2016) by containerizing them on Docker Enterprise Edition. By doing this, you reduce the total resource requirements to run your application. This increases operational efficiency and allows you to consolidate your infrastructure.

In this blog post, I will deep dive into the Docker EE architecture, covering its individual components, how they work together and how the platform manages containers.

Let us begin with its architectural diagram –

 

At the base of the architecture, we have the Docker Enterprise Engine. All the nodes in the cluster run Docker Enterprise Edition as the base engine. Currently the stable release is 17.06 EE. It is a lightweight and powerful containerization technology combined with a workflow for building and containerizing your applications.

Sitting on top of the Docker Engine is what we call the “Enterprise Edition Stack”, where all of these components run as containers (shown below in the picture), except Swarm Mode. Swarm Mode is integrated into the Docker Engine and you can turn it on or off based on your choice. Swarm Mode is enabled when Universal Control Plane (UCP) is up and running.

 

UCP depicting ucp-etcd, ucp-hyperkube & ucp-agent containers

In case you’re new to UCP, Docker Universal Control Plane is a solution designed to deploy, manage and scale Dockerized applications and infrastructure. As a Docker-native solution, full support for the Docker API ensures a seamless transition of your app from development to test to production – no code changes required – along with support for the Docker tool-set and compatibility with the Docker ecosystem.

 

 

From infrastructure clustering and container scheduling with the embedded Docker Swarm Mode and Kubernetes, to multi-container application definition with Docker Compose and image management with Trusted Registry, Universal Control Plane simplifies the process of managing infrastructure, deploying and managing applications and monitoring their activity, with a mix of graphical dashboards and a Docker command-line-driven user interface.

There are 3 orchestrators sitting on top of the Docker Enterprise Engine – Docker Swarm Mode, Classic Swarm and Kubernetes. Both Classic Swarm and Kubernetes share the same underlying etcd instance, which is highly available depending on the setup you have in your cluster.

Docker EE provides access to the full API sets of three popular orchestrators:

  • Kubernetes: Full YAML object support
  • SwarmKit: Service-centric, Compose file version 3
  • “Classic” Swarm: Container-centric, Compose file version 2

Docker EE proxies the underlying API of each orchestrator, giving you access to all of the capabilities of each orchestrator, along with the benefits of Docker EE, like role-based access control and Docker Content Trust.

To manage the lifecycle of the orchestrators, we have components called the agent and the reconciler, which manage the promotion and demotion flows as well as certificate renewal and rotation, and deal with patching and upgrading.

The agent is the core component of UCP and is a globally-scheduled service. When you install UCP on a node, or join a node to a swarm that’s being managed by UCP, the agent service starts running on that node. Once this service is running, it deploys containers with other UCP components, and it ensures they keep running. The UCP components that are deployed on a node depend on whether the node is a manager or a worker. In a nutshell, it monitors the node and ensures the right UCP services are running.
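Because ucp-agent runs as an ordinary globally-scheduled swarm service, you can check its state from a manager node with the standard swarm CLI. A quick, illustrative check looks like this:

docker service ls --filter name=ucp-agent
docker service ps ucp-agent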

Another important component is the reconciler. When the UCP agent detects that the node is not running the right UCP components, it starts the reconciler container to converge the node to its desired state. It is expected for the UCP reconciler container to remain in an exited state when the node is healthy.

In order to integrate with Kubernetes and support authentication, the Docker team built an OpenID Connect provider. In case you’re completely new to OIDC, OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol. It allows clients to verify the identity of the end user based on the authentication performed by an authorization server, as well as to obtain basic profile information about the end user in an interoperable and REST-like manner.

As per the Swarm Mode design, there is a central certificate authority that manages all the certificates and the cryptographic node identities of the cluster. All components communicate over mutual TLS, using certificates issued by that certificate authority. Docker EE uses an unmodified version of Kubernetes.

Sitting on top of the stack, there is a GUI which uses the Swarm APIs and Kubernetes APIs. Right next to the GUI, we have Docker Trusted Registry, which is an enterprise-ready on-premises service for storing, distributing and securing images. It gives enterprises the ability to ensure secure collaboration between developers and sysadmins to build, ship and run applications. Finally, we have Docker and Kubernetes CLI capability integrated. Please note that this uses an unmodified version of the Kubernetes API.

Try Free Hosted Trial

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, sign up for the free hosted trial at https://trial.docker.com/

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0

Estimated Reading Time: 7 minutes

Say Bye to Kubectx!

I have been a great fan of kubectx and kubectl, which have been a fast way to switch between clusters and namespaces, until I came across Docker for Mac 18.02. With the newer Docker for Mac 18.02 RC build, it is just a matter of a “toggle”. Life has become really easy when switching between dev, QA and production environments.

[Updated (2-Feb-2018): Docker for Mac 18.02.0 CE RC2 build now comes with Kubernetes v1.9.2 for the first time. I upgraded my macOS High Sierra system to the RC2 build today and it just works flawlessly. Check it out]

 

New to Kubernetes Namespace vs. Context?

Generally, software development teams partition their development pipelines into discrete units. These units take various forms in a discrete layout –

          Dev  >> Testing|QA >> Staging >> Production

The resulting layouts are ideally suited to Kubernetes Namespaces. Each environment or stage in the pipeline becomes a unique namespace.

In Kubernetes terminology, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. Namespaces are a logical partitioning capability that enable one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications without concern for undesired interaction. Each user, team of users, or application may exist within its Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster.

A major benefit of applying namespaces to the development cycle is that the naming of software components (e.g. micro-services/endpoints) can be maintained without collision across the different environments. This is due to the isolation of the Kubernetes namespaces. The fact that each namespace is logically discrete allows the development teams to work within an isolated “development” namespace.
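Mapping the pipeline stages above to namespaces is a one-liner per stage. A minimal sketch (the namespace names are illustrative):

kubectl create namespace dev
kubectl create namespace qa
kubectl create namespace staging
kubectl create namespace production

Each stage of the pipeline then gets its own isolated namespace, and deployments are targeted at it with the -n flag, for example kubectl -n dev apply -f app.yaml (where app.yaml is any application manifest).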

Say, you have two clusters, one for development work and one for scratch work. In the development cluster, your frontend developers work in a namespace called frontend, and your storage developers work in a namespace called storage. In your scratch cluster, developers work in the default namespace, or they create auxiliary namespaces as they see fit. Access to the development cluster requires authentication by certificate. Access to the scratch cluster requires authentication by username and password.

Shown below is an example of a file called config-demo with this content:

apiVersion: v1
kind: Config
preferences: {}

clusters:
- cluster:
  name: development
- cluster:
  name: scratch

users:
- name: developer
- name: experimenter

contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch

As shown above, a configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.
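To fill in this framework, you add cluster, user and context details with kubectl config commands. A short sketch (the server address, credential files and namespace below are illustrative placeholders):

kubectl config --kubeconfig=config-demo set-cluster development --server=https://1.2.3.4 --certificate-authority=fake-ca-file
kubectl config --kubeconfig=config-demo set-credentials developer --client-certificate=fake-cert-file --client-key=fake-key-file
kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer

Each context simply names a (cluster, user, namespace) triple, which is what makes switching environments a single kubectl config use-context call.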

In this blog post, I will showcase how to create 3 different contexts – Google Cloud, Docker for Desktop and Minikube – and then show how easy it is to toggle between them on the Docker for Mac platform. Let’s get started –

Pre-requisite:

  • Docker For Mac 18.02 RC2 build

 

  • Enable Kubernetes under Preference Pane

Installing Minikube

Minikube requires that VT-x/AMD-v virtualization is enabled in BIOS. To check that this is enabled on OSX / macOS run:
sysctl -a | grep machdep.cpu.features | grep VMX

Installing Minikube via brew

Ajeets-MacBook-Air:~ ajeetraina$ brew update && brew install kubectl && brew cask install minikube

Starting Minikube

Ajeets-MacBook-Air:~ ajeetraina$ minikube start
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 162.41 MB / 162.41 MB [============================================] 100.00% 0s
 0 B / 65 B [----------------------------------------------------------]   0.00%
 65 B / 65 B [======================================================] 100.00% 0sSetting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Viewing Kubernetes context selection pane

By now, you should see Minikube context appear

Verifying it using CLI

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER                      AUTHINFO             NAMESPACE
         docker-for-desktop            docker-for-desktop-cluster   docker-for-desktop
         gce                                                        cluster-admin
         kubernetes-admin@kubernetes   kubernetes                   kubernetes-admin
*         minikube                      minikube                     minikube

Viewing Minikube Dashboard

You can just type the below command to bring up the Minikube dashboard in a second.

minikube dashboard

Initializing Docker Swarm

Ajeets-MacBook-Air:testenviron ajeetraina$ docker swarm init
Swarm initialized: current node (zfxiqqjpjmwbvhm1ahjwio3s7) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4vnxn6cbq4gtsjjvaluucncc8m71aexe11dhbm40aoxfqnr7s3-bevjmv2qpklluuhm6ufrfoas2 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Ajeets-MacBook-Air:testenviron ajeetraina$ docker node ls
ID                            HOSTNAME                STATUS              AVAILABILITY        MANAGER STATUS
zfxiqqjpjmwbvhm1ahjwio3s7 *   linuxkit-025000000001   Ready               Active              Leader
Ajeets-MacBook-Air:testenviron ajeetraina$ docker stack deploy -c docker-compose.yml myapp3
Creating network myapp3_default
Creating service myapp3_db1
Creating service myapp3_web1
Ajeets-MacBook-Air:testenviron ajeetraina$ docker stack ls
NAME                SERVICES
myapp3              2

Switching the context from Minikube to docker-for-desktop

Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER                      AUTHINFO             NAMESPACE
          docker-for-desktop            docker-for-desktop-cluster   docker-for-desktop
          gce                                                        cluster-admin
          kubernetes-admin@kubernetes   kubernetes                   kubernetes-admin
*         minikube                      minikube                     minikube

Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Verifying through Pane UI

Open up whale icon under D4M and see if the context switched successfully.

Enabling Kubernetes

Go to whale icon > Click on Preference > Click on Kubernetes > Enable Kubernetes > Show Systems Containers

It will take a few minutes to get Kubernetes up and running. Expect it to take longer if you are enabling Kubernetes for the first time, depending on your internet speed.

Clone the Repository

$git clone https://github.com/ajeetraina/docker101

Change to the right location

$cd docker101/play-with-kubernetes/examples/stack-deploy-on-mac/

Example-1 : Demonstrating a Simple Web Application

Building the Web Application Stack

$docker stack deploy -c docker-stack1.yml myapp1
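The docker-stack1.yml file comes from the repository cloned above; if you want a sense of its shape without opening it, a minimal compose v3 sketch along the same lines (the service name and image are illustrative placeholders; port 8083 matches the curl check below) would be:

version: "3.3"
services:
  web1:
    image: nginx:latest
    ports:
      - "8083:80"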

Verifying the Stack

$docker stack ls

Verifying using Kubectl

$kubectl get pods

Verifying if the web application is accessible

$curl localhost:8083

Cleaning up the Stack

$docker stack rm myapp1

Example-2 : Demonstrating ReplicaSet

Building the Web Application Stack

$docker stack deploy -c docker-stack2.yml myapp2
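Similarly, docker-stack2.yml lives in the cloned repository; a rough sketch consistent with the pods listed below (two replicas each of a web1 and a db1 service; the images here are only placeholders) would be:

version: "3.3"
services:
  web1:
    image: nginx:latest
    deploy:
      replicas: 2
  db1:
    image: redis:latest
    deploy:
      replicas: 2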

Verifying the Stack

$docker stack ls

Verifying using Kubectl

$kubectl get pods
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get stacks
NAME      AGE
myapp2    22m
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
db1-d977d5f48-l6v9d     1/1       Running   0          22m
db1-d977d5f48-mpd25     1/1       Running   0          22m
web1-6886bb478f-s7mvz   1/1       Running   0          22m
web1-6886bb478f-wh824   1/1       Running   0          22m

Adding Context for Google Cloud

Pre-requisites:

  • Install google-cloud-sdk on macOS
  • Enable Google Cloud Engine API
  • Authenticate Your Google Cloud using gcloud auth

Creating GKE Cluster Node

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw
WARNING: The behavior of --scopes will change in a future gcloud release: service-control and service-management scopes will no longer be added to what is specified in --scopes. To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster k8s-lab1...done.
Created [https://container.googleapis.com/v1/projects/spheric-temple-187614/zones/asia-east1-a/clusters/k8s-lab1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-east1-a/k8s-lab1?project=spheric-temple-187614
kubeconfig entry generated for k8s-lab1.
NAME      LOCATION      MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
k8s-lab1  asia-east1-a  1.7.11-gke.1    35.201.215.156  n1-standard-2  1.7.11-gke.1  3          RUNNING

Verify it on Google Cloud

Cluster details (as shown in the Google Cloud console):

  • Master version: 1.7.11-gke.1 (upgrade available)
  • Endpoint: 35.201.215.156
  • Client certificate: Enabled
  • Kubernetes alpha features: Disabled
  • Total size: 3
  • Master zone: asia-east1-a

Connecting to Your GKE Cluster

You can connect to your cluster via command-line or using a dashboard too.

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters get-credentials k8s-lab1 --zone asia-east1-a --project spheric-temple-187614

Fetching cluster endpoint and auth data. kubeconfig entry generated for k8s-lab1.

Listing the Nodes

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get nodes
NAME                                      STATUS    ROLES     AGE       VERSION
gke-k8s-lab1-default-pool-042d2598-591g   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-c633   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-q603   Ready     <none>    7m        v1.7.11-gke.1

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI

Estimated Reading Time: 2 minutes

Docker for Mac 18.01.0 CE is available for the general public. It holds the experimental Kubernetes release running on Linux Kernel 4.9.75, Docker Compose 1.18.0 and Docker Machine 0.13.0. It is available only under the Edge Release. Please note that this feature is still NOT available under the Stable Release branch. This release brought major fixes around the insecure registry, VPNKit port and DNS timeout issues, and many more, which you can refer to under the Release Notes section.

[Updated – 29/01/2018 – Docker Inc. introduced the Kubernetes context selector UI in the recent Docker for Mac 18.02 RC1 release. If you have Minikube already running on the same system, you can switch the context between Minikube and Docker for Mac flawlessly. Refer to this for more information]

 

 

In my previous blog, I talked about how to build a Kubernetes cluster in 3 minutes using the kubectl tool which comes by default with this release. But what if you are a die-hard fan of the Docker Swarm CLI like me? Here is the good news – you can now use the Swarm CLI to bring up a Kubernetes cluster. In this post, I will show you how the Swarm CLI can be used to bring up a Kubernetes cluster in just 2 minutes.

 

Pre-requisite:

  • Docker for Mac 18.01.0 CE Edge Release
  • Enable Kubernetes under Preference > Kubernetes Tab
  • Select Checkbox under Show System Container

A Quick 2-minutes ASCIINEMA video:

Here is a 2-minute video which shows how to get started from zero to an NGINX web server setup. It starts with 0 pods, 0 external services and 0 deployments in Kubernetes terminology. In this video, we use the familiar docker stack CLI to bring up a K8s cluster and then clean it up in seconds.

Liked the video? You can refer to this link for detailed instructions and further examples.

As I dig deeper into Kubernetes architecture, the below links might be useful for anyone who wants to learn Kubernetes concepts in detail.

Getting Started with Kubernetes Concepts & Architecture

Building Kubernetes Dashboard on Docker for Mac in 1 min

Demystifying Kubernetes Namespace

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

 

 

3 Minutes to Single Node Kubernetes cluster on Docker for Mac Platform

Estimated Reading Time: 8 minutes

Docker For Mac 17.12 GA Release is now available for the general public. Experimental Kubernetes support is available only in the Edge Release. You can now run a single-node Kubernetes cluster from the “Kubernetes” pane in Docker For Mac Preferences and use kubectl commands as well as docker commands. This means that there is no need to install kubectl or related Kubernetes CLI tools. It is a ready-to-use platform which gives developers a simple and fast way to build and test Kubernetes apps locally with the latest and greatest Docker.

 

Say Bye to Minikube

Before the Docker for Mac 17.12 release, for anyone who wanted to get started with a single-node Kubernetes cluster, Minikube was an ideal tool. Minikube is a great local development environment and a way to learn the most common commands that help you bring up a single-node K8s cluster. To use Minikube, one needed a hypervisor and a container solution as well as the Kubernetes command-line tool called kubectl. All these tools had to be manually installed on your Linux/macOS system.

But with the arrival of Kubernetes-powered Docker for Mac 17.12, you no longer need these 3rd party tools or a hypervisor to be installed or configured. Just update your Docker for Mac to the 17.12 release and you have a ready-to-use single-node Kubernetes cluster already up and running.

The Kubernetes server runs within a Docker container on your Mac, and is only for local testing. When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.

A Little about Kubernetes in 2018..

Kubernetes is not a mere orchestration system; in fact, it eliminates the need for orchestration. If you look at the technical definition of orchestration in Wikipedia, it is all about the execution of a defined workflow: first do X, then Y, then Z. In contrast, Kubernetes is comprised of a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn’t matter how you get from X to Z, and there is no need for any centralised control. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.

Kubernetes provides a rich set of features for container grouping, container orchestration, health checking, service discovery, load balancing, horizontal autoscaling, secrets & configuration management, storage orchestration, resource usage monitoring, a CLI, a dashboard and much more. A few of the important points one should know about K8s are –

  • Kubernetes operates at the application level rather than at the hardware level
  • Kubernetes is not monolithic, and these default solutions are optional and pluggable.
  • Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system
  • Kubernetes operates on a declarative model; object specifications provided in so-called manifest files declare how you want the cluster to look.
  • Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
  • Allows users to choose their logging, monitoring, and alerting systems. (It provides some integrations as proof of concept.)

Docker for Mac is a great platform for developers…

Yes, you read it right. For anyone who wants to configure a Docker dev environment and build, test and debug containerized apps, Docker for Mac is a great platform to get started with. In this blog post, I will show how to build a simple web application server running on a single-node K8s cluster.

 

Let’s begin with a clean Docker for Mac 17.12 system. I am running macOS High Sierra version 10.13.1. Follow the below link if you are setting up Kubernetes on Docker for Mac 17.12 for the first time.

A First Look at Kubernetes Integrated Docker For Mac Platform

 

Setting up Web Application Cluster on K8s in 3 minutes

This section assumes that you are well versed with Kubernetes architecture and concepts. We will start with a clean macOS system, hence there is no pod, no deployment and just the default Kubernetes service running on your machine.

Ajeets-MacBook-Air:example1 ajeetraina$ kubectl get pods
No resources found.
Ajeets-MacBook-Air:example1 ajeetraina$ kubectl get deployment
No resources found.
Ajeets-MacBook-Air:example1 ajeetraina$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d
Ajeets-MacBook-Air:example1 ajeetraina$ 
Ajeets-MacBook-Air:example1 ajeetraina$

Steps:

  • Creating Pod using Pod Definition
  • Exposing Pod to external world
  • Cleaning up

Creating Pod using Pod Definition

The simplest pod definition describes the deployment of a single container. A pod definition is a declaration of a desired state. For example, a simple web server pod might be defined as such:

apiVersion: v1
kind: Pod
metadata:
  name: collabweb
spec:
  containers:
  - name: webnix
    image: ajeetraina/webdemo
    ports:
      - containerPort: 8080

It’s time to create our first Pod.. 

To create a pod containing a web server, run the below command:

 

$kubectl create -f webdemo.yml

Once the pod is created, you can list it using the below command:

Ajeets-MacBook-Air:example1 ajeetraina$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
collabweb   1/1       Running   0          48s

Showcasing details of a specific Pod can be achieved with the following command:

 

Ajeets-MacBook-Air:example1 ajeetraina$ kubectl describe pod collabweb 
Name:         collabweb
Namespace:    default
Node:         docker-for-desktop/192.168.65.3
Start Time:   Mon, 15 Jan 2018 12:09:02 +0530
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.1.0.42
Containers:
  webnix:
    Container ID:   docker://5e429a30c4648f2564ccc145c8a0fc5d7160f24a9358d4ee979ac3f4254b711f
    Image:          ajeetraina/webdemo
    Image ID:       docker-pullable://ajeetraina/webdemo@sha256:5fddb01a372b02ec2d49465a920eda0f864b9b71ac75032fcbeeba028764bcd8
    Port:           8080/TCP
    State:          Running
      Started:      Mon, 15 Jan 2018 12:09:12 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4v6r8 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-4v6r8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4v6r8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  Scheduled              1m    default-scheduler            Successfully assigned collabweb to docker-for-desktop
  Normal  SuccessfulMountVolume  1m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-4v6r8"
  Normal  Pulling                1m    kubelet, docker-for-desktop  pulling image "ajeetraina/webdemo"
  Normal  Pulled                 1m    kubelet, docker-for-desktop  Successfully pulled image "ajeetraina/webdemo"
  Normal  Created                1m    kubelet, docker-for-desktop  Created container
  Normal  Started                1m    kubelet, docker-for-desktop  Started container

Listing the Deployment:

Ajeets-MacBook-Air:example1 ajeetraina$ kubectl get deployment -o wide
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINERS   IMAGES               SELECTOR
webdemo   1         1         1            1           7h        webdemo      ajeetraina/webdemo   run=webdemo

Creating a Deployment

kubectl run webdemo --image=ajeetraina/webdemo --port=8080 

Verifying the Deployment

kubectl get deployment webdemo -o wide

 

Exposing the pods

Exposing your pods so that they can be accessed from the browser:

kubectl expose deployment webdemo --port=8080 --type=NodePort

Once the above command runs successfully, you should be able to list out the services –

Ajeets-MacBook-Air:example1 ajeetraina$ kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          2d
webdemo      NodePort    10.99.125.85   <none>        8080:30990/TCP   5m

Now we should be able to access the Webdemo Test Page using the below command:


Ajeets-MacBook-Air:example1 ajeetraina$ curl localhost:30990
<h1>Web Test engine</h1>
<ul>
  <li>
    <code>Content-Type</code>
    <ul>
      <li>
        <a href='/type/text'>text</a>
        &mdash;
        <code>Welcome to Collabnix</code>
      </li>
      <li>
        <a href='/type/html'>html</a>
        &mdash;
        <code>&lt;h1&gt;Docker, Kubernetes &amp; Cloud!&lt;/h1&gt;</code>
      </li>
      <li>
        <a href='/type/json'>json</a>
        &mdash;
        <code>{&quot;message&quot;:&quot;Hello JSON World!&quot;}</code>
      </li>
    </ul>
  </li>
  <li>
    HTTP Status Codes
    <ul>
      <li>
        <a href='/code/400'>400 &mdash; Bad Request</a>
      </li>
      <li>
        <a href='/code/401'>401 &mdash; Unauthorized</a>
      </li>
      <li>
        <a href='/code/403'>403 &mdash; Forbidden</a>
      </li>
      <li>
        <a href='/code/404'>404 &mdash; Not Found</a>
      </li>
      <li>
        <a href='/code/405'>405 &mdash; Method Not Allowed</a>
      </li>
      <li>
        <a href='/code/406'>406 &mdash; Not Acceptable</a>
      </li>
      <li>
        <a href='/code/418'>418 &mdash; I'm a teapot (RFC 2324)</a>
      </li>
    </ul>
  </li>
</ul>
<h1>README</h1>
<h2>Web Demo Docker Container</h2>

Done. You have created your first single-node web application cluster running on the Kubernetes-powered Docker for Mac platform.

Interestingly, one can run Docker specific commands too to list out the running containers:

Ajeets-MacBook-Air:~ ajeetraina$ docker ps | head -n 2
CONTAINER ID        IMAGE                                                    COMMAND                  CREATED             STATUS              PORTS                    NAMES
eb6312309518        ajeetraina/webdemo                                       "ruby webtest.rb -p …"   9 hours ago         Up 9 hours                                   k8s_webdemo_webdemo-85f56bc5d5-np9qh_default_eae29f0d-f9bf-11e7-994c-025000000001_0
Ajeets-MacBook-Air:~ ajeetraina$ docker exec -it eb631 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
   link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
   link/tunnel6 :: brd ::
5: eth0@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
   link/ether 0a:58:0a:01:00:2b brd ff:ff:ff:ff:ff:ff
   inet 10.1.0.43/16 scope global eth0
      valid_lft forever preferred_lft forever
Ajeets-MacBook-Air:~ ajeetraina$ 

Cleaning Up

To delete the webdemo containers, delete the deployment:

 

kubectl delete deployment webdemo

Did you find 3-minutes still time-consuming? Do visit the below post to see how docker stack deploy can save your time.

When Kubernetes Meet Docker Swarm for the First time under Docker for Mac 17.12 Release

 

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

When Kubernetes Meet Docker Swarm for the First time under Docker for Mac 17.12 Release

Estimated Reading Time: 7 minutes

Docker For Mac 17.12 GA is the first release which includes both orchestrators – Docker Swarm and Kubernetes – under the same Docker platform. As of 1/7/2018, experimental Kubernetes has been released under the Edge Release (still not available under the D4M Stable Release). Experimental Kubernetes is still not available for the Docker for Windows and Linux platforms. It is slated to be available for Docker for Windows next month (mid-February) and then for Linux by March or April.

Now you might ask why Docker Inc. is making this announcement. What is the fun of having 2 orchestrators under the same roof? To answer this, let me go a little back into the past and see how the Docker platform looked:

                                                                                                                             ~ Source – Docker Inc.

 

Docker platform is like a stack with various layers. The first base layer is called containerd. Containerd is an industry-standard core container runtime with an emphasis on simplicity, robustness and portability. Based on the Docker Engine’s core container runtime, it is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc. Containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users. It basically includes a daemon exposing gRPC API over a local UNIX socket. The API is a low-level one designed for higher layers to wrap and extend. It also includes a barebone CLI (ctr) designed specifically for development and debugging purpose. It uses runC to run containers according to the OCI specification. The code can be found on GitHub, and here are the contribution guidelines. Let us accept the fact that over the last few years, there has been lots of iteration around this layer but now Docker Inc. has finalised it to a robust, popular and widely accepted container runtime.

On top of containerd, there is an orchestration layer rightly called Docker Swarm. Docker Swarm ties together all of your individual machines which run the container runtime. It allows you to deploy an application not onto a single machine at a time but into a whole system, thereby making your application distributed.

To take advantage of these layers, as a developer you need tools and an environment which can build and package your application so it takes advantage of your environment; hence Docker Inc. provides Community Edition products like Docker for Mac, Docker for Windows etc. If you are considering moving your application to production, Docker Enterprise Edition is the right choice.

If the stack looks really impressive, why again the change in architecture?

The reason is – Not everybody uses Swarm.

~ Source – Docker Inc.

Before Swarm & Kubernetes Integration – If you are a developer and you are using Docker, the workflow looks something like what is shown below. A developer typically uses Docker for Mac or Docker for Windows. Using the familiar docker build and docker-compose build tooling, you build your environment and ensure that it gets deployed across a single-node cluster, or use docker stack deploy to deploy it across multiple cluster nodes.

~ Source – Docker Inc.

 

If your production is on Swarm, then you can test it locally on Swarm as it is already built into the Docker platform. But if your production environment runs on Kubernetes, then surely there is a lot of work to be done, like translating files, compose definitions etc. using 3rd party open source tools and negotiating with their offerings. Though it is possible today, it is still not as smooth as the Swarm Mode CLI.

With the newer Docker platform, you can seamlessly use both Swarm and Kubernetes flawlessly. Interestingly, you use the same familiar tools like docker stack ls, docker stack deploy, docker ps and docker stack ps to display Swarm and Kubernetes containers. Isn’t it cool? You don’t need to learn new tools to play around with a Kubernetes cluster.

~ Source – Docker Inc.

 

The new Docker platform includes both Kubernetes and Docker Swarm side by side and at the same level, as shown below. Please note that it is real Kubernetes sitting next to Docker Swarm and NOT A FORK OR WRAPPER.

                                                                                                 ~ Source – Docker Inc.

Still not convinced why this announcement?

 

                                                                                                  ~ Source – Docker Inc.

How does the Swarm CLI build a Kubernetes cluster side-by-side?

Docker analyses the compose file input format and converts it to pods, along with creating replica sets as per the instruction set. With the newer Docker for Mac 17.12 release, a new stack command has been added as a first-class citizen to the Kubernetes CLI.

 

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get stacks -o wide
NAME      AGE
webapp    1h

 

 

Important Points –

 

  • Future Release of Docker Platform will include both orchestration options available – Kubernetes and Swarm
  • Swarm CLI will be used for Cluster Management while for orchestration you have a choice of Kubernetes & Swarm
  • The full Kubernetes API is exposed in the stack, hence support for the overall Kubernetes ecosystem is possible.
  • Docker Stack Deploy will be able to target either Swarm or Kubernetes.
  • Kubernetes is recommended for the production environment
  • Running both Swarm & Kubernetes is not recommended for the production environment.
  • AND by now, you must be convinced – “SWARM MODE CLI is NOT GOING ANYWHERE”

Let us test drive the latest Docker for Mac 17.12 beta release and see how Swarm CLI can be used to bring up both Swarm and Kubernetes cluster seamlessly.

  • Ensure that you have the Docker for Mac 17.12 Edge Release running on your Mac system. If you still don’t see the 17.12-kube_beta client version, I suggest you go through my last blog post.

A First Look at Kubernetes Integrated Docker For Mac Platform

 

 

Please note that Kubernetes/kubectl comes by default with the Docker for Mac 17.12 Beta release. YOU DON’T NEED TO INSTALL KUBERNETES. A single-node cluster is already set up for you by default.

As we have Kubernetes and Swarm orchestration already present, let us head over to build NGINX services as a piece of demonstration on this single-node cluster.

Writing a Docker Compose File for NGINX containers

Let us write a Docker compose file for the nginx image and deploy 3 containers of that image. This is how my docker-compose.yml looks:
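(The compose file itself is embedded as an image in the original post; a minimal sketch consistent with the service output shown later – 3 replicas of nginx, with ports 82 and 444 published – would be:)

version: "3.3"
services:
  nginx:
    image: nginx:latest
    ports:
      - "82:80"
      - "444:443"
    deploy:
      replicas: 3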

 

Deploying Application Stack using docker stack deploy 

Ajeets-Air:mynginx ajeetraina$ DOCKER_ORCHESTRATOR=kubernetes docker stack deploy --compose-file docker-compose.yml webapp
Stack webapp was created
Waiting for the stack to be stable and running…
-- Service nginx has one container running
Stack webapp is stable and running

 

Verifying the NGINX replica sets through the below command:
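(The verification appears as a screenshot in the original post; the standard kubectl commands to list the replica set and its pods would be:)

kubectl get replicasets
kubectl get pods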

 

As shown above, there are 3 replicas of the same NGINX image running as containers.

Verify the cluster using Kubectl CLI displaying the stack information:

Ajeets-MacBook-Air:mynginx ajeetraina$ kubectl get stack -o wide
NAME      AGE
webapp    8h

As you see, kubectl and stack deploy displays the same cluster information.

Verifying the cluster using kubectl CLI displaying YAML file:

You can verify that Docker analyses the docker-compose.yml input file format and converts it to pods, along with creating replica sets as per the instruction set, which can be verified using the YAML output format below.
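(The YAML output is shown as a screenshot in the original post; it can be retrieved with a standard command such as the one below, where the deployment is named after the nginx service defined in the stack:)

kubectl get deployment nginx -o yaml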

 

 

We can use the same old stack deploy CLI to verify the cluster information

 

Managing Docker Stack

Ajeets-MacBook-Air:mynginx ajeetraina$ docker stack services webapp
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
20e31598-e4c        nginx               replicated          3/3                 nginx               *:82->80/tcp,*:444->443/tcp

 

It’s time to verify if the NGINX webpage comes up well:

 

Hence, we saw that the NGINX service is running on both the Kubernetes and Swarm clusters side by side.

Cleaning up

Run the below Swarm CLI related commands to clean up the NGINX service as shown below:

docker stack ls
docker stack rm webapp
kubectl get pods

Output:

Want to see this in action?

https://asciinema.org/a/8lBZqBI3PWenBj6mSPzUd6i9Y

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

A First Look at Kubernetes Integrated Docker For Mac Platform

Estimated Reading Time: 7 minutes

Docker support for Kubernetes is now in private beta. As a Docker Captain, I was able to be a part of the first group to get my hands on the update and can show you what to expect. Kubernetes is ONLY available to those users who are part of the private beta for Docker for Mac 17.12. To access beta builds, you must be signed in within Docker for Mac using your Docker ID. Please remember that this won’t be available to those users who missed registering for the Docker Beta Program.

[Updated: 1/7/2018] – Docker For Mac 17.12 GA is available and is the first release which includes both orchestrators – Docker Swarm and Kubernetes – under the same Docker platform. As of 1/7/2018, experimental Kubernetes has been released under the Edge Release (still not available under the D4M Stable Release). Experimental Kubernetes is still not available for the Docker for Windows and Linux platforms. It is slated to be available for Docker for Windows next month (mid-February) and then for Linux by March or April.

 

 

 

 

What’s new in Docker 17.12 CE Final Release?

The fresh new Docker 17.12 Community Edition Final Release includes a standalone Kubernetes server and client, plus Docker CLI integration. The Kubernetes server runs locally within your Docker instance and is a single-node cluster. It is specifically meant for development and testing purposes only.

Docker 17.12 CE includes the newer Docker Compose 1.18.0-rc2 release along with the below new features & further improvements like:

New Feature:

  •  VM disk size can be changed in settings. (See docker/for-mac#1037).

Bug fixes and minor changes

  • Avoid VM reboot when changing host proxy settings.
  • Don’t break HTTP traffic between containers by forwarding them via the external proxy (docker/for-mac#981)
  • Filesharing settings are now stored in settings.json
  • Daemon restart button has been moved to settings / Reset Tab
  • Display various component versions in About box
  • Better VM state handling & error messages in case of VM crashes

Important Information:

  • The beta features are being released in a controlled manner: not all users who signed up for the beta will be able to access the features right away. You must be signed in within Docker for Mac using your Docker ID to access the beta builds.

  • The Kubernetes features are only accessible on macOS for now; Windows will follow at a later date.

  • Because this feature is still in beta, it can only be accessed using the latest Docker for Mac release, more precisely on the Edge channel. 

How to get this Beta Release?

You need to install the Docker 17.12 Edge Release (it is NOT available under the Stable Release yet). Once you install the latest Beta release, all you need to do is log in with your Docker ID: select the whale menu -> Sign in / Create Docker ID from the menu bar.

 

 

To enable Kubernetes support and install a standalone instance of Kubernetes running as a Docker container, select Enable Kubernetes and click the Apply and restart button.

 

Once Kubernetes support is enabled, you are able to deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server is not going to affect other workloads.

This new beta includes Docker CLI integration for Kubernetes too along with a standalone Kubernetes server & client and this can be verified as shown below. 

 

 

By default, Kubernetes containers are hidden from commands like docker service ls, because managing them manually is not supported. To make them visible, select Show system containers (advanced) and click Apply and restart

 

This means that now you can use the same old favourite docker ps command for displaying Kubernetes internal containers. Isn’t it cool?

 

How to get Kubectl working?

kubectl comes out of the box by default with this beta release, hence you don’t need to install it. kubectl is a command line interface for running commands against Kubernetes clusters. It has a lot of functional similarity with the docker CLI, like docker run, docker attach, docker logs and so on. kubectl controls the Kubernetes cluster manager.

The Docker for Mac Kubernetes integration provides the Kubernetes CLI command at /usr/local/bin/kubectl. By default, kubectl might not work as expected, as we still need to choose the correct context.

 

Follow the below steps to get started with kubectl:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config get-contexts

Let us pick up the docker-for-desktop context:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Now the kubectl should work fine and display a single node K8s cluster as shown below:

The above output shows us that we were able to connect to our Kubernetes cluster and display the status of our master node. In case you are new to Kubernetes terminology – a Kubernetes Node is a physical or virtual machine used to host application containers.  

Verifying the kubectl version:

Ajeets-MacBook-Air:~ ajeetraina$ sudo kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying the Kubernetes Containers

Displaying the Kubernetes Cluster Information

Ajeets-MacBook-Air:~ ajeetraina$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'

 

Test Drive WordPress Application on a single Node Kubernetes Cluster

With our Kubernetes cluster ready, let us now start deploying application containers. I have picked up my all time favourite WordPress application.

Let us test and deploy WordPress application on top of this single node Kubernetes cluster. We will follow the below steps:

  1. Creating a Persistent Volume
  2. Creating a Secret
  3. Deploying MySQL
  4. Deploying WordPress

Creating a Persistent Volume

We will first create two Persistent Volumes from the local-volumes.yaml file shown below:
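(The original file is embedded as a gist in the blog post; the sketch below uses illustrative names, sizes and hostPath locations.)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-2
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-2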

Copy the code into a file called local-volumes.yaml and then run the below command:

kubectl create -f local-volumes.yaml

You can verify the details below:

Creating a Secret

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create secret generic mysql-pass --from-literal=password=collab123
secret "mysql-pass" created

Verifying the secrets:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-fwflq   kubernetes.io/service-account-token   3         1h
mysql-pass            Opaque                                1         15s
Ajeets-MacBook-Air:~ ajeetraina$

Deploying MySQL

The below YAML describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password from the Secret.
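The manifest itself is embedded as a gist in the original post; the sketch below follows the upstream Kubernetes WordPress tutorial (the apps/v1beta2 API version matches the Kubernetes 1.8/1.9 era of this post, and the mysql:5.6 image tag is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim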

Create a file called “mysql-deployment.yaml” and copy the above content. Once you save the file, run the below command to bring up MySQL container:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create -f mysql-deployment.yaml
service "wordpress-mysql" created
persistentvolumeclaim "mysql-pv-claim" created

Verifying the MySQL Pod:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get pods
NAME                               READY     STATUS              RESTARTS   AGE
wordpress-mysql-7b4ffb6fb4-gfk8n   0/1       ContainerCreating   0          34s

 

 

Deploying WordPress

The below YAML describes a single-instance WordPress Deployment and Service. It uses a PVC for persistent storage and a Secret for the password, as shown in the content. Did you notice the type: NodePort entry? This setting exposes WordPress to traffic from outside of the cluster.
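As with the MySQL manifest, the original file is embedded as a gist; a comparable sketch (again following the upstream WordPress tutorial, with an assumed wordpress:4.9-apache image tag) would be:

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  type: NodePort
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.9-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim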

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create -f wordpress-deployment.yaml
service "wordpress" created
persistentvolumeclaim "wp-pv-claim" created
deployment "wordpress" created

Let us verify the pods using kubectl get pods command:

 

 

Verify that the Services are up and running by running the following command:

 

 

You can fetch overall status of the pods and running containers using the below command:

 

By now, you should be able to visit http://localhost and see WordPress page as shown below:

 

 

While you access the WordPress page, you can see logs under the WordPress container as shown below:

 

Hence, this is how your WordPress page can be customized flawlessly.

 

In my next blog post, I will deep dive into how Docker Swarm and Kubernetes gonna work together under the same cluster environment.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

How to Build Kubernetes Cluster using CRI-containerd & Moby

Estimated Reading Time: 5 minutes

Let’s talk about CRI Vs CRI-Containerd…

Container Runtime Interface (a.k.a. CRI) is a standard way to integrate a container runtime with Kubernetes. It is a new plugin interface which enables kubelet to use a wide variety of container runtimes, without the need to recompile. Prior to the existence of CRI, container runtimes (e.g., docker, rkt) were integrated with kubelet through an internal, high-level interface in kubelet. Formerly known as OCID, CRI-O is strictly focused on OCI-compliant runtimes and container images.

Last month, the new CRI-O version 1.2 got announced. This is a minor release of the v1.0.x cycle supporting Kubernetes 1.7. With CRI, Kubernetes can be container-runtime-agnostic. This provides flexibility to the providers of container runtimes, who don’t need to implement features that Kubernetes already provides. CRI-O allows you to run containers directly from Kubernetes – without any unnecessary code or tooling.

For those readers who compare the Docker runtime engine vs. CRI-O, here is an important note –

CRI-O is not really a competitor to the docker project – in fact it shares the same OCI runC container runtime used by the docker engine, the same image format, and allows for the use of docker build and related tooling. Through this new runtime, it is expected to bring developers more flexibility by adding other image builders and tooling in the future. Please remember that CRI is not an interface for a full-fledged, all-inclusive container runtime.

How does CRI-O workflow look like ?

When Kubernetes needs to run a container, it talks to CRI-O and the CRI-O daemon works with container runtime  to start the container. When Kubernetes needs to stop the container, CRI-O handles that. Everything  just works behind the scenes to manage Linux containers.

 

 

Kubelet is a node agent which has a gRPC client that talks to a gRPC server, rightly called a shim. The shim then talks to the container runtime. Today the default implementation is the Docker shim. The Docker shim then talks to the Docker daemon using the classical APIs. This works really well.

CRI consists of a protocol buffers and gRPC API, and libraries, with additional specifications and tools under active development.

Introducing CRI-containerd

CRI-containerd is a containerd-based implementation of CRI. This project started in April 2017. In order to have Kubernetes consume containerd for its container runtime, the containerd team implemented the CRI interface. CRI is responsible for the distribution and lifecycle of pods and containers running on a cluster. The scope of containerd 1.0 aligns with the requirements of CRI. In case you want to deep-dive into it, don’t miss out on this link.

Below is how CRI-containerd architecture look like:

                                                                                                             

In my last blog post, I talked about how to set up a multi-node Kubernetes cluster using LinuxKit. That setup used Docker Engine to build minimal & immutable Kubernetes OS images with LinuxKit. In this blog post, we will see how to build them using CRI-containerd.

Infrastructure Setup:

  • OS – Ubuntu 17.04 
  • System – ESXi VM
  • Memory – 8 GB
  • SSH key generated using ssh-keygen -t rsa and placed under the $HOME/.ssh/ folder

Caution – Please note that this is still experimental. It is under active development, so don't expect it to work as a full-fledged K8s cluster.

Copy the below script to your Linux system and execute it –
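At its core, the script changes into the LinuxKit kubernetes project and invokes the image build with the CRI-containerd runtime selected. A minimal sketch (assuming the project's standard make targets) looks like this:

cd linuxkit/projects/kubernetes
# KUBE_RUNTIME selects the container runtime baked into the images;
# leaving it out falls back to Docker Engine
make build-vm-images KUBE_RUNTIME=cri-containerd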

 

 

Did you notice the parameter KUBE_RUNTIME=cri-containerd?

This parameter specifies that we want to build the minimal and immutable K8s ISO images using CRI-containerd. If you don't specify it, Docker Engine is used to build the ISO image instead.

The above script will take some time to finish.

It's always a good idea to see which CRI-containerd-specific files are present.

Let us look into the CRI-containerd directory –

Looking at the content of cri-containerd.yml, it defines a service called cri-containerd.

The below list of files gets created, with output files like Kube-master-efi.iso placed right under the Kubernetes directory –

 

 

 

By the end of this script, you should be able to see the Kubernetes LinuxKit OS booting up –

 

 

CRI-containerd lets the user containers in the same sandbox share the network namespace, hence you will see the message "This system is namespaced".

You can follow the full screen logs to watch how it boots up –

 

Next, let us try to see what container services are running:
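From the LinuxKit console, containerd's ctr client can list the running service tasks. This is a sketch; the exact namespace wiring depends on the LinuxKit image:

# Both the kubelet and cri-containerd tasks should report a RUNNING status
ctr tasks ls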

 

You will notice that the cri-containerd service is up and running.

Let us enter the kubelet task container and initialize the master node with the kubeadm-init.sh script which comes by default –

(ns: getty) linuxkit-ee342f3aebd6:~# ctr tasks exec -t --exec-id 654 kubelet sh
/ # ls

Execute the below script –

/ # kubeadm-init.sh
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [linuxkit-ee342f3aebd6 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
…..

Now you should be able to join the other worker nodes (I have discussed the further steps under this link).
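For reference, the join command on each worker follows the pattern below; the token and address are placeholders for the values printed by kubeadm-init.sh on the master:

# Run on every worker node; recent kubeadm releases also print a
# --discovery-token-ca-cert-hash flag which should be appended as-is
kubeadm join --token <token> <master-ip>:6443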

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

Getting Started with Multi-Node Kubernetes Cluster using LinuxKit

Estimated Reading Time: 5 minutes

Here's BIG news for the entire container community – "Kubernetes support is coming to the Docker Platform". What does this mean? It means that developers and operators can now build apps with Docker and seamlessly test and deploy them using both Docker Swarm and Kubernetes. The announcement was made on the first day of DockerCon EU 2017 in Copenhagen, where Solomon Hykes, founder of Docker, invited Kubernetes co-founder Tim Hockin on stage for the first time. Tim welcomed the Docker family to the vibrant Kubernetes world.

In case you missed the news, here are the important takeaways from the Docker-Kubernetes announcement –

  • Kubernetes support will be added to Docker Enterprise Edition for the first time.
  • Kubernetes support will be added optionally to Docker Community Edition for Mac and Windows.
  • Both orchestrators, Kubernetes and Docker Swarm, will be available under the Docker Platform.
  • Kubernetes integration will be made available under the Moby project too.

Docker Inc. firmly believes that bringing Kubernetes to Docker will simplify and advance the management of Kubernetes for developers, enterprise IT and hackers, delivering the advanced capabilities of Docker to a broader set of applications.

 

 

In case you want to be first to test drive, click here to sign up and be notified when the beta is ready.

Please be aware that the beta program is expected to be ready at the end of 2017.

Can't wait for the beta program? Here's a quick guide on how to get a multi-node Kubernetes cluster up and running using LinuxKit.

In this blog post, we will see how to build a 5-node Kubernetes cluster using LinuxKit. Here's the bonus – we will also deploy a WordPress application on top of the Kubernetes cluster. I will be building it on my macOS Sierra 10.12.6 system with the latest Docker for Mac 17.09.0 Community Edition up and running.

Pre-requisite:

  1. Ensure that the latest Docker release is up and running.

  2. Clone the LinuxKit repository:

sudo git clone https://github.com/ajeetraina/linuxkit
cd linuxkit/projects/kubernetes/

3. Execute the below script to build the Kubernetes OS images using LinuxKit & Moby:

sudo chmod +x script4k8s.sh
sudo ./script4k8s.sh

All I have done is put the manual steps into a script, as shown below:
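Roughly, the script wraps the usual LinuxKit build workflow. The sketch below is an assumption of the kind of steps it bundles, not the verbatim script:

#!/bin/sh
set -e
make                                   # build the moby and linuxkit binaries
cp -rf bin/moby bin/linuxkit /usr/local/bin/
cd projects/kubernetes
make build-vm-images                   # produce the kube-master / kube-node OS images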

 

 

This should build the Kubernetes OS images.

Run the below command to find the IP address of the kubelet container:

ip addr show dev eth0

 

You can check the list of tasks running as shown below:

Once the kubelet is ready, it's time to log into it directly from a new terminal:

sudo ./ssh_into_kubelet.sh 192.168.65.3

You have now SSHed into the kubelet container, aptly named "LinuxKit Kubernetes Project".

We are going to mark it as “Master Node”.

Initializing the Kubernetes Master Node

Now it's time to manually initialize the master node with the kubeadm command as shown below:

sudo kubeadm-init.sh

Wait a few minutes for the kubeadm command to complete. Once done, note the kubeadm join command it prints, as shown below:

 

Your Kubernetes master node is initialized successfully, as shown below:

 

 

It's time to run the below commands to join the worker nodes to the cluster:

pwd
/Users/ajeetraina/linuxkit/projects/kubernetes
sudo ./boot.sh 1 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 2 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 3 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 4 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443

 

This brings up a 5-node Kubernetes cluster, as shown below:
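Assuming kubectl inside the master's kubelet container was configured by kubeadm-init.sh, a quick sanity check is:

kubectl get nodes        # the master plus four workers should eventually report Ready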

 

Deploying WordPress Application on Kubernetes Cluster:

Let us try to deploy a WordPress application on top of this Kubernetes Cluster.

Head over to the below directory:

cd linuxkit/projects/kubernetes/wordpress

 

Creating a Persistent Volume

kubectl create -f local-volumes.yaml

The content of local-volumes.yaml looks like this:


This creates 2 persistent volumes:

Verifying the volumes:
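A simple way to do that:

kubectl get pv           # both persistent volumes should be listed as Available (or Bound once claimed)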

 

Creating a Secret for MySQL Password

kubectl create secret generic mysql-pass --from-literal=password=mysql123

 
Verify the secrets using the below command:
kubectl get secrets

 

Deploying MySQL:

kubectl create -f mysql-deployment.yaml

 

Verify that the pod is running with the following command:

kubectl get pods

 

Deploying WordPress:

kubectl create -f wordpress-deployment.yaml

By now, you should be able to access the WordPress UI from your web browser.
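If the UI is not reachable directly, the service details and a port-forward are a quick fallback. Note that the wordpress service and label names below follow the standard Kubernetes WordPress example and may differ in your manifests:

kubectl get svc wordpress                             # note the cluster IP / NodePort
kubectl get pods -l app=wordpress
kubectl port-forward <wordpress-pod-name> 8080:80     # then browse http://localhost:8080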

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

When Moby Met Kubernetes for the First Time

Estimated Reading Time: 6 minutes

Moby has turned out to be an open playground for collaborators. It has become a popular collaborative project for the container ecosystem to assemble container-based systems. A tremendous amount of effort has gone into containerizing applications, but what about the platform which runs those containers? Shouldn't that be containerized too? Moby is the answer. With a library of 80+ components for all vital aspects of a container system: OS, container runtime, orchestration, infrastructure management, networking, storage, security, build, image distribution, etc., Moby can help you package your own components as containers. The Moby Project enables customers to plug and play their favorite technology components to create their own custom platform. Interestingly, all Moby components are containers, so creating new components is as easy as building a new OCI-compatible container.

While the Moby project provides you with a command-line tool called "moby" to assemble components, LinuxKit is a valuable toolkit for building secure, portable and lean operating systems for containers. It provides a container-based approach to building a customized Linux subsystem for each type of container. It is based on containerd and has its own Linux kernel, system daemons and system services.


I attended DockerCon 2017 in Austin, TX last month, and one of the coolest weekend projects showcased by the Docker team was running Kubernetes on Mac using Moby and LinuxKit. In case you're completely new to Kubernetes: it is an open-source system for automating the deployment, scaling and management of containerized applications. It was originally designed by Google and donated to the Cloud Native Computing Foundation. It provides a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts" and supports a range of container tools, including Docker.


 

 


 

The main benefits of LinuxKit for Kubernetes include reliable deployment, a lower security footprint, and easy customization when building your own base image. In this blog post, I am going to demonstrate how you can easily create minimal and immutable Kubernetes OS images with LinuxKit.

Pre-requisite:  

  1. Install the latest Edge release of Docker for Mac and Engine through this link.
  2. Please note that if you are using the Stable release of Docker for Mac, you won't be able to set up a multi-node Kubernetes cluster, as the Stable release lacks the multi-host functionality of VPNKit. Refer to this known issue. Support for multi-host networking was introduced in the latest Edge release.

 


  • Ensure that the Docker for Mac Edge release is shown once you have installed it properly.

 


 

Clone the LinuxKit Repository as shown:

git clone https://github.com/linuxkit/linuxkit

 

Build the Moby and LinuxKit tools first using the below commands:

 

cd linuxkit
make
cp -rf bin/moby /usr/local/bin/
cp -rf bin/linuxkit /usr/local/bin/

 

Change directory to kubernetes project:

 

cd linuxkit/projects/kubernetes

 

You will find the below list of files and directories:
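The listing below is partial and reconstructed only from the files used in the rest of this walkthrough; your checkout may contain more:

Makefile            boot-master.sh      boot-node.sh
kube-master.yml     ssh_into_kubelet.sh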


 

Let us first look at the kube-master.yml file. Everything under LinuxKit is just a YAML file. The file starts with a section defining the kernel configuration; the init section lists the images used for the init system, which are unpacked directly into the root filesystem; and the onboot section lists containers that are run to completion sequentially, using runc, before anything else is started. Under the services section, there is a kubelet service defined, which uses the errordeveloper/mobykube:master image to build the Kubernetes images.

Edit kube-master.yml and add your public SSH key to the files section. You can generate the SSH key using the ssh-keygen command, as shown below.
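If you do not have a key pair yet, the following generates one and prints the public half to paste into the YAML (paths are the ssh-keygen defaults):

ssh-keygen -t rsa                  # accept the default ~/.ssh/id_rsa location
cat ~/.ssh/id_rsa.pub              # copy this line into the files section of kube-master.yml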


 

Once you have added your public SSH key, go ahead and build the OS images using the below command:

 

sudo make build-vm-images

 

The above command provides you with the below output:

 


 

A few of the important output files include:

kube-node-kernel  kube-node-initrd.img  kube-node-cmdline

 

Under the same directory, you will find a file called "boot-master.sh" which will help us set up the master node.

 


 

Boot the Kubernetes master OS image using HyperKit on macOS:

 

./boot-master.sh

This will display the following output:


Just wait a few seconds and you will see the LinuxKit OS coming up as shown:


 

It’s easy to retrieve the IP address of the master node:
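For example, from the LinuxKit console:

ip addr show dev eth0              # the inet line carries the master's address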

 


 

Verify the kubelet process:
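A quick check from the same console (assuming the BusyBox userland that LinuxKit ships):

ps | grep kubelet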


Now it's time to execute the script to manually initialize the master with kubeadm:

/ # runc exec kubelet kubeadm-init.sh

 


 

Copy/save the below command and keep it handy. We are going to need it soon.

kubeadm join --token a5365b.45e88229a1548bf2 192.168.65.2:6443

 

Your Kubernetes master is now up and ready.

You can verify the cluster nodes:


That was easy to set up, wasn't it? Let us now add 3 more nodes directly from the macOS terminal. Open up 3 separate new terminals, one per node, and run the below commands:

 

 ajeetraina$ cd linuxkit/projects/kubernetes/
 ajeetraina$ sudo ./boot-node.sh 1 --token a5365b.45e88229a1548bf2 192.168.65.2:6443
 ajeetraina$ sudo ./boot-node.sh 2 --token a5365b.45e88229a1548bf2 192.168.65.2:6443
 ajeetraina$ sudo ./boot-node.sh 3 --token a5365b.45e88229a1548bf2 192.168.65.2:6443

Open up the master node terminal and verify that all 3 nodes get added:

 

/ # kubectl get nodes
NAME                STATUS    AGE       VERSION
moby-025000000003   Ready     18m       v1.6.1
moby-025000000004   Ready     13m       v1.6.1
moby-025000000004   Ready     15m       v1.6.1
moby-025000000004   Ready     14m       v1.6.1

 


Moby makes it simple to get a Kubernetes cluster up and running. In this demonstration, a bridge network was created inside VPNKit, and hosts are added to it as they use the same VPNKit socket.

Thanks to Justin Cormack, LinuxKit maintainer, for the valuable insight regarding the multi-host networking functionality.


 

Did you find this blog helpful? Are you planning to explore Moby for Kubernetes? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Track The Moby Project here.