Top 5 Most Exciting Dockercon EU 2018 Announcements

Estimated Reading Time: 10 minutes

Last week I attended Dockercon EU 2018, which took place at the Centre de Convencions Internacional de Barcelona (CCIB) in Barcelona, Spain. With over 3,000 attendees from around the globe, 52 breakout sessions, 11 Community Theatres, 12 workshops, more than 100 total sessions, exciting Hallway Tracks and Hands-on Labs/Trainings, paid trainings, a women's networking event, DockerPals and more, Dockercon brought developers, sysadmins, product managers and industry evangelists together to share their wealth of experience around container technology. This time I was lucky enough to get the chance to emcee the Docker for Developers track for the first time. Not only that, I also ran a Hallway Track for the OpenUSM project and the DockerLabs community contribution effort. Around 20 to 30 participants showed interest in learning more about this system management, monitoring and log analytics tooling.

This Dockercon we also had the Docker Captains Summit for the first time, with an entire day dedicated to Captains. On December 3 (10:00 AM to 3:00 PM) we got the chance to interact with Docker staff and raise all our questions around Docker's future roadmap. It was great to meet the young Captains who joined us this year and, during the introductory rounds, to hear what they have been contributing to.

This Dockercon brought a couple of exciting announcements. Three of the new features were targeted at the Docker Community Edition, while two were for Docker Enterprise customers. Here is a rundown of what I think are the five most exciting announcements made last week –

#1. Announcement of Cloud Native Application Bundles (CNAB)

Microsoft and Docker captured a great deal of attention with the announcement of CNAB – Cloud Native Application Bundles.

What is CNAB? 

Cloud Native Application Bundles (CNAB) is a standard packaging format for multi-component distributed applications. It allows packages to target different runtimes and architectures, and it empowers application distributors to package applications for deployment on a wide variety of cloud platforms, cloud providers, and cloud services. It also provides the capabilities necessary for delivering multi-container applications in disconnected environments.

Is it a platform-specific tool?

CNAB is not a platform-specific tool. While it uses containers for encapsulating installation logic, it remains un-opinionated about what cloud environment it runs in. CNAB developers can bundle applications targeting environments spanning IaaS (like OpenStack or Azure), container orchestrators (like Kubernetes or Nomad), container runtimes (like local Docker or ACI), and cloud platform services (like object storage or Database as a Service). CNAB can also be used for packaging other distributed applications, such as IoT or edge computing. In a nutshell, CNAB is a package format specification that describes a technology for bundling, installing, and managing distributed applications that are, by design, cloud agnostic.

Why do we need CNAB?

The current distributed computing landscape involves a combination of executable units and supporting API-based services. Executable units include Virtual Machines (VMs), Containers (e.g. Docker and OCI) and Functions-as-a-Service (FaaS), as well as higher-level PaaS services. Along with these executable units, many managed cloud services (from load balancers to databases) are provisioned and interconnected via REST (and similar network-accessible) APIs. The overall goal of CNAB is to provide a packaging format that can enable application providers and developers with a way of installing a multi-component application into a distributed computing environment, supporting all of the above types.


Is it open source? Tell me more about the CNAB format.

It is an open source, cloud-agnostic specification for packaging and running distributed applications. It is a nascent specification that offers a way to repackage distributed computing apps.

The CNAB format is a packaging format for a broad range of distributed applications. It specifies a pairing of a bundle definition (bundle.json) to define the app, and an invocation image to install the app.

The bundle definition is a single file that contains the following information:

  • Information about the bundle, such as name, bundle version, description, and keywords
  • Information about locating and running the invocation image (the installer program)
  • A list of user-overridable parameters that this package recognizes
  • The list of executable images that this bundle will install
  • A list of credential paths or environment variables that this bundle requires to execute
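
To make this concrete, here is a minimal, illustrative bundle.json sketch based on the fields listed above. The image names, parameter and credential names are hypothetical, and the exact schema may differ from the current CNAB specification:

{
  "name": "helloworld",
  "version": "0.1.0",
  "description": "A hypothetical CNAB bundle",
  "keywords": ["hello", "demo"],
  "invocationImages": [
    { "imageType": "docker", "image": "example/helloworld-installer:0.1.0" }
  ],
  "images": {
    "web": { "imageType": "docker", "image": "example/helloworld-web:0.1.0" }
  },
  "parameters": {
    "port": { "type": "int", "defaultValue": 8080 }
  },
  "credentials": {
    "kubeconfig": { "path": "/root/.kube/config" }
  }
}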

What is Docker's future plan for CNAB?

This project was incubated by Microsoft and Docker about a year ago. The first implementation of the spec is an experimental utility called Docker App, which Docker officially rolled out at this Dockercon and which is expected to be integrated with Docker Enterprise in the near future. Microsoft and Docker plan to donate CNAB to an open source foundation, which is expected to happen early next year.

If you can't wait, head over to the Docker App CNAB examples recently posted by Gareth Rushgrove of Docker, Inc., which are accessible via https://github.com/garethr/docker-app-cnab-examples

This repository shows some basic examples of using docker-app, in particular some of the CNAB integration details. Check it out –

#2. Support for using Docker Compose on Kubernetes

On the 2nd day of Dockercon, Docker, Inc. open sourced the Compose on Kubernetes project. Docker Enterprise Edition already had this capability, starting with Compose file version 3.3, where one can use the same docker-compose.yml file for Swarm deployments as well as specify Kubernetes workloads whenever a stack is deployed.

What benefit does this bring to Community Developers?

By making it open source, Docker, Inc. has opened up a much simpler way of deploying Kubernetes applications. Docker Swarm gained popularity because of its simplified approach to application deployment using a docker-compose.yml file. Now community developers can use the same YAML file to deploy their K8s applications.

Imagine you are using Docker Desktop on your MacBook. Docker Desktop provides the capability of running both Swarm and Kubernetes. You have your context set to a GKE cluster running on Google Cloud Platform. You just deployed your app using docker-compose.yml on your local MacBook, and now you want to deploy it the same way, but this time on your GKE cluster. Just use the docker stack deploy command to deploy it to the GKE cluster. Interesting, isn't it?
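
As a rough sketch of that workflow (assuming the Compose on Kubernetes controller is already installed in the target cluster and your current kubeconfig context points at it; the stack name and file name are illustrative):

# deploy the same Compose file to Kubernetes instead of Swarm
docker stack deploy --orchestrator=kubernetes -c docker-compose.yml hellostack

# the controller turns the Stack resource into native Kubernetes objects
kubectl get stacks
kubectl get deployments,services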

What does the Compose on Kubernetes architecture look like?

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

Compose on Kubernetes architecture

If you're interested in learning more, I suggest you visit this link.

How can I test it now?

First, we need to install the Compose on Kubernetes controller into your Kubernetes cluster (which could be GKE or AKS). You can download the latest binary (as of 12/13/2018) via https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.16 .

This controller uses the standard Kubernetes extension points to introduce the `Stack` resource to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don't already have one available, remember that Docker Desktop comes with Kubernetes and the Compose controller built in, and enabling it is as simple as ticking a box in the settings.
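
Once the controller is installed, a quick way to check that the Stack API is available is to look for the compose API group. The group name below assumes the controller registers itself as compose.docker.com, and the exact versions may vary:

kubectl api-versions | grep compose
# expected output (versions may vary):
# compose.docker.com/v1beta1
# compose.docker.com/v1beta2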

Check out the latest doc which shows how to make it work with AKS here.

#3. Introducing Docker Desktop Enterprise

The third big announcement was the introduction of Docker Desktop Enterprise. With this, Docker, Inc. made a new addition to its desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, now with a set of features available for the enterprise.

Desktop Comparison Table

How will Docker Desktop Enterprise be different from Docker Desktop Community Edition?

Good question. Docker Desktop has Docker Engine and Kubernetes built in, and with the addition of swappable version packs, you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Not only that, with Docker Desktop Enterprise you get access to the Application Designer, a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards.

source ~ Docker, Inc


For those who are interested in Docker Desktop Enterprise – please note that it is expected to be available for preview in January, with general availability slated for the first half of 2019.

#4. From Zero to Docker in Seconds with “docker assemble” CLI

This time, the Docker team announced a very interesting docker subcommand, aptly named "assemble". Ann Rahma and Gareth Rushgrove from Docker, Inc. announced assemble, a new command that generates optimized images from non-dockerized apps. It gets you from source to an optimized Docker image in seconds.

Here are a few interesting facts about the docker assemble utility:

  • Docker assemble can build an image without a Dockerfile; it is all about auto-detecting the code framework.
  • It generates Docker images (and a lot more) from your code with a single command and zero effort, which means no Dockerfile is needed for your app as long as you have a config file (such as a .pom file).
  • It can analyze your application, its dependencies, and caches, and give you an optimized Docker image without having to author your own Dockerfile.
  • It is built on top of BuildKit; it auto-detects the framework, versions and so on from a config file (such as a .pom file), automatically adds dependencies to the image labels, optimizes the image size and pushes the image.
  • Docker assemble can also figure out what ports need to be published and what healthchecks are relevant.
  • docker assemble builds the app without configuration files and without a Dockerfile, needing only a Git repository to deploy from.

Is it an open source project?

It is an enterprise feature for now and is not in the community version. It is available for a couple of languages and frameworks (such as Java, as demonstrated on the Dockercon stage).

How is it different from buildpacks?

Reading through its features, docker assemble might look very similar to buildpacks, as it overlaps with some of what they do. But the big benefit of assemble is that it produces more than just an image (it also handles ports, healthchecks, volume mounts, etc.), and it is integrated into the enterprise toolchain. Think of docker assemble as a sort of enterprise-grade buildpack to help with digitalization.

Keep an eye on my next blog post for more detail around the fancy docker assemble command.

#5. Docker-app & CNAB together for the first time

On the 2nd day of Dockercon, Docker confirmed that it is the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially, CNAB support will be released as part of the docker-app experimental tool for building, packaging and managing cloud-native applications. With this, Docker now lets you package CNAB bundles as Docker images, so you can distribute and share them through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the coming months.

Can I test the preview binaries of docker-app that come with CNAB support?

Yes, you can find preview binaries of docker-app with CNAB support here. The latest release of Docker App is one such tool that implements the current CNAB spec. It can be used both to build CNAB bundles for Compose (which can then be used with any other CNAB client) and to install, upgrade and uninstall any other CNAB bundle.

If you can't wait, head over to this recently added example of how to deploy a Helm chart.

You can visit https://github.com/docker/app/releases/tag/cnab-dockercon-preview to access the preview build.

I hope you found this blog helpful. In my next blog series, I will deep-dive into each of these announcements in terms of implementation and enablement.

Kubernetes Hands-on Lab #4 – Deploy Prometheus Stack using Helm on Play with Kubernetes Platform

Estimated Reading Time: 8 minutes 

Let’s talk about Kubernetes Deployment Challenges…

As monolithic systems become too large to deal with, many enterprises are drawn to breaking them down into a microservices architecture. Whenever we move from monolithic to microservices, the application consists of multiple components (services) talking to each other. Each component has its own resources and can be scaled individually. With Kubernetes, this can become very complex given all the objects you need to handle, such as ConfigMaps, Services, Pods and Persistent Volumes, in addition to the number of releases you need to manage. The following challenges can come up:

1. Manage, edit and update multiple K8s configurations
2. Deploy multiple K8s configurations as a SINGLE application
3. Share and reuse K8s configurations and applications
4. Parameterize and support multiple environments
5. Manage application releases: rollout, rollback, diff, history
6. Define the deployment lifecycle (control operations to be run in different phases)
7. Validate release state after deployment
 
These can be managed with Kubernetes Helm, which offers a simple way to package everything into a single application and advertise what you can configure.
 

Helm is a deployment management tool (and NOT JUST a package manager) for Kubernetes. It does the heavy lifting of repeatable deployments, management of dependencies (reuse and share), management of multiple configurations, and update, rollback and test of application deployments (releases).
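
For orientation, here is a hedged sketch of what a minimal Helm chart looks like on disk (the chart and release names are illustrative); Helm packages such a directory and installs it into the cluster as a release that can later be upgraded or rolled back:

mychart/
  Chart.yaml          # chart name, version and description
  values.yaml         # default, user-overridable configuration values
  templates/
    deployment.yaml   # Kubernetes manifests templated with {{ .Values.* }}
    service.yaml

helm install ./mychart --name myrelease   # install the chart as a release (Helm 2 syntax)
helm upgrade myrelease ./mychart          # roll out a new version
helm rollback myrelease 1                 # roll back to revision 1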

Under this blog post, we will test drive Helm on top of Play with Kubernetes Platform. Let’s get started.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.

Click on the Login button to authenticate with Docker Hub or GitHub ID.

 

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on "Add New Instance" on the left to build your first Kubernetes cluster node. It is automatically named "node1". Each instance has Docker Community Edition (CE) and kubeadm pre-installed. This node will serve as the master node for our cluster.

Bootstrapping the Master Node

 

You can bootstrap the Kubernetes cluster by initializing the master (node1) node with the below script. Copy the script content into a bootstrap.sh file and make it executable using the "chmod +x bootstrap.sh" command.
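
The script itself is not reproduced here; as a rough, hedged sketch, a Play with Kubernetes master bootstrap typically boils down to something like the following. The pod network CIDR and the kube-router add-on are assumptions, so prefer the exact script shown in your PWK session if it differs:

#!/bin/sh
# initialize the control plane, advertising the node's own IP
kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16

# point kubectl at the newly created cluster
export KUBECONFIG=/etc/kubernetes/admin.conf

# install a pod network add-on so that nodes become Ready
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml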

When you execute this script, as part of the initialization kubeadm writes the several configuration files needed, sets up RBAC and deploys the Kubernetes control plane components (such as kube-apiserver, kube-dns, kube-proxy and etcd). The control plane components are deployed as Docker containers.

Copy the kubeadm join command printed at the end of the output and save it for the next step. This command will be used to join other nodes to your cluster.

Adding Worker Nodes

Click on "Add New Instance" to add a new worker node, then run the kubeadm join command you saved earlier on it, for example:
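
The address, token and hash below are placeholders; use the exact command printed by kubeadm init on node1:

kubeadm join 192.168.0.8:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>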

Checking the Cluster Status

[node1 ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
node1     Ready      master    18m       v1.11.3
node2     Ready      <none>    4m        v1.11.3
node3     Ready      <none>    39s       v1.11.3
node4     NotReady   <none>    22s       v1.11.3
node5     NotReady   <none>    4s        v1.11.3
[node1 ~]$
[node1 ]$ kubectl get po
No resources found.
[node1 ]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1h
[node1]$

 

Checking Node Capacity


 

[node1 ~]$ kubectl get nodes -o json |
>       jq ".items[] | {name:.metadata.name} + .status.capacity"

{
  "name": "node1",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node2",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node3",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node4",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node5",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}

Installing OpenSSL

[node1 ~]$ yum install -y openssl

Installing Helm

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
[node1 ~]$ sh get_helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
get_helm.sh: line 177: which: command not found
Run 'helm init' to configure helm.
[node1 ~]$ helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming

Installing Prometheus 

Let us try to install the Prometheus stack on top of our 5-node K8s cluster using Helm.

First, you can search for the application stack using the helm search <packagename> option.

[node1 ~]$ helm search prometheus
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
stable/prometheus                       7.3.4           2.4.3           Prometheus is a monitoring system and time series database.
stable/prometheus-adapter               v0.2.0          v0.2.1          A Helm chart for k8s prometheus adapter
stable/prometheus-blackbox-exporter     0.1.3           0.12.0          Prometheus Blackbox Exporter
stable/prometheus-cloudwatch-exporter   0.2.1           0.5.0           A Helm chart for prometheus cloudwatch-exporter
stable/prometheus-couchdb-exporter      0.1.0           1.0             A Helm chart to export the metrics from couchdb in Promet...
stable/prometheus-mysql-exporter        0.2.1           v0.11.0         A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/prometheus-node-exporter         0.5.0           0.16.0          A Helm chart for prometheus node-exporter
stable/prometheus-operator              0.1.7           0.24.0          Provides easy monitoring definitions for Kubernetes servi...
stable/prometheus-postgres-exporter     0.5.0           0.4.6           A Helm chart for prometheus postgres-exporter
stable/prometheus-pushgateway           0.1.3           0.6.0           A Helm chart for prometheus pushgateway
stable/prometheus-rabbitmq-exporter     0.1.4           v0.28.0         Rabbitmq metrics exporter for prometheus
stable/prometheus-redis-exporter        0.3.2           0.21.1          Prometheus exporter for Redis metrics
stable/prometheus-to-sd                 0.1.1           0.2.2           Scrape metrics stored in prometheus format and push them ...
stable/elasticsearch-exporter           0.4.0           1.0.2           Elasticsearch stats exporter for Prometheus
stable/karma                            1.1.2           v0.14           A Helm chart for Karma - an UI for Prometheus Alertmanager
stable/stackdriver-exporter             0.0.4           0.5.1           Stackdriver exporter for Prometheus
stable/weave-cloud                      0.3.0           1.1.0           Weave Cloud is a add-on to Kubernetes which provides Cont...
stable/kube-state-metrics               0.9.0           1.4.0           Install kube-state-metrics to generate and expose cluster...
stable/mariadb                          5.2.2           10.1.36         Fast, reliable, scalable, and easy to use open-source rel...
[node1 ~]$

Update the Repo

[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Installing Prometheus

$ helm install stable/prometheus

Error: namespaces “default” is forbidden: User “system:serviceaccount:kube-system:default” cannot get namespaces in the namespace “default”

How to fix?

To fix this issue, follow the steps below:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade

Listing Helm Releases

[node1 ~]$ helm list
NAME            REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
excited-elk     1               Sun Oct 28 10:00:02 2018        DEPLOYED        prometheus-7.3.4        2.4.3           default
[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[node1 ~]$ helm install stable/prometheus
NAME:   excited-elk
LAST DEPLOYED: Sun Oct 28 10:00:02 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/DaemonSet
NAME                                  AGE
excited-elk-prometheus-node-exporter  1s

==> v1/Pod(related)

NAME                                                        READY  STATUS             RESTARTS  AGE
excited-elk-prometheus-node-exporter-7bjqc                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-gbcd7                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tk56q                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tkk9b                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz        0/2    Pending            0         1s
excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj  0/1    ContainerCreating  0         1s
excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69         0/1    ContainerCreating  0         1s
excited-elk-prometheus-server-5958586794-b97xn              0/2    Pending            0         1s

==> v1/ConfigMap

NAME                                 AGE
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1/ServiceAccount
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1beta1/ClusterRole
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1beta1/Deployment
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1/PersistentVolumeClaim
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1beta1/ClusterRoleBinding
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1/Service
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
excited-elk-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[node1 ~]$ kubectl get all
NAME                                                             READY     STATUS    RESTARTS   AGE
pod/excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz         0/2       Pending   0          3m
pod/excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-7bjqc                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-gbcd7                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tk56q                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tkk9b                   1/1       Running   0          3m
pod/excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69          1/1       Running   0          3m
pod/excited-elk-prometheus-server-5958586794-b97xn               0/2       Pending   0          3m

NAME                                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/excited-elk-prometheus-alertmanager         ClusterIP   10.106.159.46   <none>        80/TCP     3m
service/excited-elk-prometheus-kube-state-metrics   ClusterIP   None            <none>        80/TCP     3m
service/excited-elk-prometheus-node-exporter        ClusterIP   None            <none>        9100/TCP   3m
service/excited-elk-prometheus-pushgateway          ClusterIP   10.106.88.15    <none>        9091/TCP   3m
service/excited-elk-prometheus-server               ClusterIP   10.107.15.64    <none>        80/TCP     3m
service/kubernetes                                  ClusterIP   10.96.0.1       <none>        443/TCP    37m

NAME                                                  DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/excited-elk-prometheus-node-exporter   4         4         4         4            4           <none>          3m

NAME                                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/excited-elk-prometheus-alertmanager         1         1         1            0           3m
deployment.apps/excited-elk-prometheus-kube-state-metrics   1         1         1            1           3m
deployment.apps/excited-elk-prometheus-pushgateway          1         1         1            1           3m
deployment.apps/excited-elk-prometheus-server               1         1         1            0           3m

NAME                                                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/excited-elk-prometheus-alertmanager-68f4f57c97         1         1         0         3m
replicaset.apps/excited-elk-prometheus-kube-state-metrics-858d44dfdc   1         1         1         3m
replicaset.apps/excited-elk-prometheus-pushgateway-58bfd54d6d          1         1         1         3m
replicaset.apps/excited-elk-prometheus-server-5958586794               1         1         0         3m
[node1 ~]$

Wait for a few minutes, then you can access the Prometheus UI at http://<external-ip>:9090. In the upcoming blog series, I will bring more interesting stuff around Helm on the PWD playground.
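
The chart's notes above expose Prometheus via kubectl port-forward; on Play with Kubernetes it is often easier to reach it through a NodePort instead. A hedged sketch (the service name matches the release shown above, and the assigned node port will vary):

# switch the Prometheus server service to NodePort
kubectl patch svc excited-elk-prometheus-server -p '{"spec": {"type": "NodePort"}}'

# find the assigned port, then browse http://<node-ip>:<nodeport>
kubectl get svc excited-elk-prometheus-server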

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Kubernetes Hands-on Lab #1 – Setting up 5-Node K8s Cluster

Building Helm Chart for Kubernetes Cluster running on Docker Enterprise 2.0 using Docker-app 0.6.0

Estimated Reading Time: 8 minutes 

Docker-app allows you to share your applications on Docker Hub directly. The tool not only makes a Compose file shareable but also provides a simplified approach to sharing a multi-service application (not just a Docker image) directly on Docker Hub.

 

Docker-app 0.6.0 was released two weeks back. A few of the notable features in this release include:

  • Support for external files (extra configuration files): when pushing and pulling, all files present in the application folder are included.
  • Render can now produce output in multiple formats (YAML or JSON).
  • split and merge now work properly when specifying an image as input and no output.
  • Command line accepts a trailing slash in the application path.

But one of the most important fixes that arrived with this release was related to Helm charts. This release fixed multiple issues in Helm chart generation.

How I built Elastic Stack for Docker Swarm using Docker Application Packages(docker-app)

Under this blog post, I will showcase how docker-app can help you build a Helm chart for your 3-node Kubernetes cluster running on Docker Enterprise 2.0.

Tested Infrastructure

  • Platform: Google Cloud Platform
  • OS Instance: Ubuntu 18.04
  • Machine Type: n1-standard-4 (4 vCPUs, 15 GB memory)
  • No. of Nodes: 3

I assume that you have 3 Ubuntu 18.04 instances having the above minimal configurations. Make sure all the hosts you want to manage with Docker EE have a minimum of:

  • Docker Enterprise Edition 17.06.2-ee-8 or higher (the value of n in the -ee-<n> suffix must be 8 or higher)
  • Linux kernel version 3.10 or higher
  • 4.00 GB of RAM
  • 3.00 GB of available disk space

 


Cloning the Repository

Login to the first Ubuntu 18.04 OS and run the below command to clone the repository.

git clone https://github.com/ajeetraina/docker101
cd docker101/docker-ee/ubuntu

 

Installing Docker EE

The best way to try Docker Enterprise Edition for yourself is to get the 30-day trial available at the Docker Store. Once you get your trial license, you can install Docker EE on your Linux servers.

As soon as you get the one-month trial version of Docker EE, you will be provided with a URL. Copy the section of the URL starting from sub:

https://storebits.docker.com/ee/ubuntu/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Exporting URL

export eeid=sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Installing Docker EE

I have built a script for you to get Docker EE 2.0 up and running flawlessly. Just one command and Docker EE is all set.

sh bootstrap.sh provision_dockeree

Setting up UCP

Run the below command to launch the container which will set up UCP for you.

sudo sh bootstrap.sh provision_ucp
openusm@master01:~/test/docker101/docker-ee/ubuntu$ sudo sh bootstrap.sh provision_ucp
Unable to find image 'docker/ucp:3.0.5' locally
3.0.5: Pulling from docker/ucp
ff3a5c916c92: Pull complete 
a52011fa0ead: Pull complete 
87e35eb74a08: Pull complete 
Digest: sha256:c8a609209183561de7779e5d32abc5fd9125944c67e8daf262dcbb9f2b1e44ff
Status: Downloaded newer image for docker/ucp:3.0.5
INFO[0000] Your engine version 17.06.2-ee-16, build 9ef4f0a (4.15.0-1021-gcp) is compatible with UCP 3.0.5 (f588f8a) 
Admin Username: collabnix
Admin Password: 
Confirm Admin Password: 
INFO[0043] Pulling required images... (this may take a while) 
INFO[0043] Pulling docker/ucp-auth:3.0.5                
INFO[0049] Pulling docker/ucp-hyperkube:3.0.5           
INFO[0064] Pulling docker/ucp-etcd:3.0.5                
INFO[0070] Pulling docker/ucp-interlock-proxy:3.0.5     
INFO[0086] Pulling docker/ucp-agent:3.0.5               
INFO[0092] Pulling docker/ucp-kube-compose:3.0.5        
INFO[0097] Pulling docker/ucp-dsinfo:3.0.5              
INFO[0104] Pulling docker/ucp-cfssl:3.0.5               
INFO[0107] Pulling docker/ucp-kube-dns-sidecar:3.0.5    
INFO[0112] Pulling docker/ucp-interlock:3.0.5           
INFO[0115] Pulling docker/ucp-kube-dns:3.0.5            
INFO[0120] Pulling docker/ucp-controller:3.0.5          
INFO[0128] Pulling docker/ucp-pause:3.0.5               
INFO[0132] Pulling docker/ucp-calico-kube-controllers:3.0.5 
INFO[0136] Pulling docker/ucp-auth-store:3.0.5          
INFO[0142] Pulling docker/ucp-calico-cni:3.0.5          
INFO[0149] Pulling docker/ucp-calico-node:3.0.5         
INFO[0158] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.0.5 
INFO[0163] Pulling docker/ucp-compose:3.0.5             
INFO[0167] Pulling docker/ucp-swarm:3.0.5               
INFO[0173] Pulling docker/ucp-metrics:3.0.5             
INFO[0179] Pulling docker/ucp-interlock-extension:3.0.5 
WARN[0183] None of the hostnames we'll be using in the UCP certificates [master01 127.0.0.1 172.17.0.1 10.140.0.2] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases 

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2 
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2       
INFO[0009] Installing UCP with host address 10.140.0.2 - If this is incorrect, please specify an alternative address with the '--host-address' flag 
INFO[0009] Deploying UCP Service...                     
INFO[0068] Installation completed on master01 (node slsvy00m1khejbo5itmupk034) 
INFO[0068] UCP Instance ID: omz7lso0zpeyzk17gxubvz72r   
INFO[0068] UCP Server SSL: SHA-256 Fingerprint=24:9B:51:4E:E2:F1:CD:1B:DE:E0:86:0F:DC:E7:29:B5:1E:0E:6B:0C:BF:24:CC:27:85:91:35:A1:6A:39:37:C6 
INFO[0068] Login to UCP at https://10.140.0.2:443       
INFO[0068] Username: collabnix                          
INFO[0068] Password: (your admin password) 

Logging in to Docker EE

By now, you should be able to log in to the Docker EE window using a browser. Upload the license and you should see the UCP console.

Installing Kubectl

Undoubtedly, kubectl has been the favourite command of K8s users. It works great, but it is painful because you use it to manually run a command for each resource in your Kubernetes application. This is prone to error, because we might forget to deploy one resource, or introduce a typo when writing our kubectl commands. As we add more parts to our application, the probability of these problems occurring increases.

Still, here's a bonus: execute the below script if you really want to use kubectl on the Docker EE platform.

sudo sh bootstrap.sh install_kubectl

Verify Kubectl Version

@master01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.11", GitCommit:"1df6a8381669a6c753f79cb31ca2e3d57ee7c8a3", GitTreeState:"clean", BuildDate:"2018-04-05T17:24:
03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2
018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying Docker Version

Here you go. Docker EE 2.0 is all set up with Swarm and Kubernetes running side by side.

 docker version
Client: Docker Enterprise Edition (EE) 2.0
 Version:      17.06.2-ee-16
 API version:  1.30
 Go version:   go1.8.7
 Git commit:   9ef4f0a
 Built:        Thu Jul 26 16:41:28 2018
 OS/Arch:      linux/amd64
Server: Docker Enterprise Edition (EE) 2.0
 Engine:
  Version:      17.06.2-ee-16
  API version:  1.30 (minimum version 1.12)
  Go version:   go1.8.7
  Git commit:   9ef4f0a
  Built:        Thu Jul 26 16:40:18 2018
  OS/Arch:      linux/amd64
  Experimental: false
 Universal Control Plane:
  Version:       3.0.5
  ApiVersion:                   1.30
  Arch:                         amd64
  BuildTime:                    Thu Aug 30 17:47:03 UTC 2018
  GitCommit:                    f588f8a
  GoVersion:                    go1.9.4
  MinApiVersion:                1.20
  Os:                           linux
 Kubernetes:
  Version:      1.8+
  buildDate:                   2018-04-26T16:51:21Z
  compiler:                    gc
  gitCommit:                   8d637aedf46b9c21dde723e29c645b9f27106fa5
  gitTreeState:                clean
  gitVersion:                  v1.8.11-docker-8d637ae
  goVersion:                   go1.8.3
  major:                       1
  minor:                       8+
  platform:                    linux/amd64
 Calico:
  Version:          v3.0.8
  cni:                             v2.0.6
  kube-controllers:                v2.0.5
  node:                            v3.0.8

Verifying the Kubernetes Nodes

@master01:~/test/docker101/docker-ee/ubuntu$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
master01   Ready     master    20m       v1.8.11-docker-8d637ae

Adding Worker Nodes

To add worker nodes, go to the Add Nodes section under the Docker Enterprise UI and click "Add a Node". It will display a command which needs to be executed on the worker nodes. This is all it takes to build a multi-node Docker EE Swarm and Kubernetes cluster.

m@master01:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
av668en5dinpin5jpi6ro0yfs     worker01            Ready               Active              
k4grcnyl6vbf0z17bh67cz9l5     worker02            Ready               Active              
slsvy00m1khejbo5itmupk034 *   master01            Ready               Active              Leader

@master01:~$ kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
master01   Ready      master    1h        v1.8.11-docker-8d637ae
worker01   NotReady   <none>    28s       v1.8.11-docker-8d637ae
worker02   Ready      <none>    3m        v1.8.11-docker-8d637ae
openusm@master01:~$ 

Downloading Client Bundle

In order to manage services using the Docker CLI, one needs to install the client bundle, and Docker, Inc. provides an easy way to install it. I put it under the script "install-client-bundle". All you need is to supply the correct username and password plus the UCP URL to make it work.
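
If you prefer doing it by hand, a hedged sketch of the manual flow looks like this (the bundle file name depends on your UCP user; download the zip from your UCP profile page):

# unzip the downloaded client bundle and load the environment it provides
unzip ucp-bundle-collabnix.zip -d ucp-bundle
cd ucp-bundle
eval "$(<env.sh)"

# docker and kubectl now talk to UCP
docker node ls
kubectl get nodes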

Cloning the docker-app Repository

@master01:~/$git clone https://github.com/ajeetraina/app
@master01:~/$cd app
@master01:~/app$ sudo sh install-dockerapp

Verifying Docker-app version

openusm@master01:~/app$ docker-app version
Version:      v0.6.0
Git commit:   9f9c6680
Built:        Thu Oct  4 13:30:33 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

 

Create WordPress Helm Package

Change directory to app/examples/wordpress and run the below command to create the Helm chart.

openusm@master01:~/app/examples/wordpress$ docker-app helm --stack-version=v1beta1

 

docker-app helm wordpress will output a Helm package in the ./wordpress.helm folder. The --compose-file (or -c), --set (or -e) and --settings-files (or -s) flags apply the same way they do for the render subcommand.
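
For example, a hypothetical invocation that overrides a single value and pulls the rest from a settings file (the settings file name and the key are illustrative and not part of the repository):

docker-app helm wordpress --stack-version=v1beta1 -s prod-settings.yml -e wordpress.port=8080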

Deploy WordPress Application on Kubernetes Cluster

We are now all set to build WordPress application for Kubernetes Cluster using docker-app deploy command.

openusm@master01:~/app/examples/wordpress$ docker-app deploy -o kubernetes
top-level network "overlay" is ignored
service "mysql": network "overlay" is ignored
service "wordpress": network "overlay" is ignored
service "wordpress": depends_on are ignored
Waiting for the stack to be stable and running...
wordpress: Ready                [pod status: 1/1 ready, 0/1 pending, 0/1 failed]


If you're interested in a fully conformant Kubernetes environment that is ready for the enterprise, head over to https://trial.docker.com/

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

If you are looking to contribute, join me on the Docker Community Slack channel.

Installing Helm to deploy Kubernetes Applications on Docker Enterprise 2.0 Made Easy

Estimated Reading Time: 11 minutes

 

 

Let’s talk about RBAC under Docker EE 2.0…

Kubernetes RBAC (Role-Based Access Control) is a fundamental part of Kubernetes security best practices, alongside rolling out TLS certificates / PKI authentication for connecting to the Kubernetes API server and between its components. Kubernetes RBAC is essentially an authorization and access control specification where you define the actions (GET, UPDATE, DELETE, etc.) that Kubernetes subjects (i.e. human users, software, kubelets) are allowed to perform over Kubernetes entities (i.e. pods, secrets, nodes).
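
As a plain-Kubernetes illustration of that model (this is generic RBAC, not the Docker EE grant workflow described later; the user name is hypothetical), a Role plus RoleBinding granting read access to pods looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]          # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane               # hypothetical subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io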


Early this year, Docker EE 2.0 introduced for the first time an enhanced RBAC solution that provides flexible and granular access controls across multiple teams and users. Docker EE leverages the Kubernetes webhook authentication model. This feature enables the validation of all requests by an outside source. Docker EE uses the control plane's RBAC controller, eNZi. Each Kubernetes request, whether issued via the CLI or the GUI, is validated against Docker EE's authN/authZ database and then rejected or accepted as appropriate.

 

With Docker EE 2.0, UCP now includes an upstream distribution of Kubernetes. From a security point of view this is the best of both worlds. Out of the box, Docker EE 2.0 provides user authentication and RBAC on top of Kubernetes. To ensure the Kubernetes orchestrator follows all security best practices, UCP uses TLS for the Kubernetes API port. Combined with UCP's auth model, this allows the same client bundle to talk to either the Swarm or Kubernetes API.

Early this year, I wrote a blog post which deep-dives into the Docker EE 2.0 architecture. Check it out if you are new to it and want to get into the nitty-gritty of the Docker Enterprise product.

Under the Hood: Demystifying Docker Enterprise Edition 2.0 Architecture

Under this blog post, I will show you an insanely easy way to deploy the Helm package manager on a 3-node Kubernetes cluster running Docker Enterprise 2.0.

Tested Infrastructure

  • Platform: Google Cloud Platform
  • OS Instance: Ubuntu 18.04
  • Machine Type: n1-standard-4 (4 vCPUs, 15 GB memory)
  • No. of Nodes: 3

I assume that you have 3 Ubuntu 18.04 instances having the above minimal configurations. Make sure all the hosts you want to manage with Docker EE have a minimum of:

  • Docker Enterprise Edition 17.06.2-ee-8 or higher (the value of n in the -ee-<n> suffix must be 8 or higher)
  • Linux kernel version 3.10 or higher
  • 4.00 GB of RAM
  • 3.00 GB of available disk space

 


Cloning the Repository

Login to the first Ubuntu 18.04 OS and run the below command to clone the repository.

git clone https://github.com/ajeetraina/docker101
cd docker101/docker-ee/ubuntu

 

Installing Docker EE

The best way to try Docker Enterprise Edition for yourself is to get the 30-day trial available at the Docker Store. Once you get your trial license, you can install Docker EE on your Linux servers.

As soon as you get the one-month trial version of Docker EE, you will be provided with a URL. Copy the section of the URL starting from sub:

https://storebits.docker.com/ee/ubuntu/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Exporting URL

export eeid=sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Installing Docker EE

I have built a script for you to get Docker EE 2.0 up and running flawlessly. Just one command and Docker EE is all set.

sh bootstrap.sh provision_dockeree

Setting up UCP

Run the below command to launch the container which will set up UCP for you.

sudo sh bootstrap.sh provision_ucp
openusm@master01:~/test/docker101/docker-ee/ubuntu$ sudo sh bootstrap.sh provision_ucp
Unable to find image 'docker/ucp:3.0.5' locally
3.0.5: Pulling from docker/ucp
ff3a5c916c92: Pull complete 
a52011fa0ead: Pull complete 
87e35eb74a08: Pull complete 
Digest: sha256:c8a609209183561de7779e5d32abc5fd9125944c67e8daf262dcbb9f2b1e44ff
Status: Downloaded newer image for docker/ucp:3.0.5
INFO[0000] Your engine version 17.06.2-ee-16, build 9ef4f0a (4.15.0-1021-gcp) is compatible with UCP 3.0.5 (f588f8a) 
Admin Username: collabnix
Admin Password: 
Confirm Admin Password: 
INFO[0043] Pulling required images... (this may take a while) 
INFO[0043] Pulling docker/ucp-auth:3.0.5                
INFO[0049] Pulling docker/ucp-hyperkube:3.0.5           
INFO[0064] Pulling docker/ucp-etcd:3.0.5                
INFO[0070] Pulling docker/ucp-interlock-proxy:3.0.5     
INFO[0086] Pulling docker/ucp-agent:3.0.5               
INFO[0092] Pulling docker/ucp-kube-compose:3.0.5        
INFO[0097] Pulling docker/ucp-dsinfo:3.0.5              
INFO[0104] Pulling docker/ucp-cfssl:3.0.5               
INFO[0107] Pulling docker/ucp-kube-dns-sidecar:3.0.5    
INFO[0112] Pulling docker/ucp-interlock:3.0.5           
INFO[0115] Pulling docker/ucp-kube-dns:3.0.5            
INFO[0120] Pulling docker/ucp-controller:3.0.5          
INFO[0128] Pulling docker/ucp-pause:3.0.5               
INFO[0132] Pulling docker/ucp-calico-kube-controllers:3.0.5 
INFO[0136] Pulling docker/ucp-auth-store:3.0.5          
INFO[0142] Pulling docker/ucp-calico-cni:3.0.5          
INFO[0149] Pulling docker/ucp-calico-node:3.0.5         
INFO[0158] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.0.5 
INFO[0163] Pulling docker/ucp-compose:3.0.5             
INFO[0167] Pulling docker/ucp-swarm:3.0.5               
INFO[0173] Pulling docker/ucp-metrics:3.0.5             
INFO[0179] Pulling docker/ucp-interlock-extension:3.0.5 
WARN[0183] None of the hostnames we'll be using in the UCP certificates [master01 127.0.0.1 172.17.0.1 10.140.0.2] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases 

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2 
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2       
INFO[0009] Installing UCP with host address 10.140.0.2 - If this is incorrect, please specify an alternative address with the '--host-address' flag 
INFO[0009] Deploying UCP Service...                     
INFO[0068] Installation completed on master01 (node slsvy00m1khejbo5itmupk034) 
INFO[0068] UCP Instance ID: omz7lso0zpeyzk17gxubvz72r   
INFO[0068] UCP Server SSL: SHA-256 Fingerprint=24:9B:51:4E:E2:F1:CD:1B:DE:E0:86:0F:DC:E7:29:B5:1E:0E:6B:0C:BF:24:CC:27:85:91:35:A1:6A:39:37:C6 
INFO[0068] Login to UCP at https://10.140.0.2:443       
INFO[0068] Username: collabnix                          
INFO[0068] Password: (your admin password) 

Logging in to Docker EE

By now, you should be able to log in to the Docker EE window using a browser. Upload the license and you should see the UCP console.

Installing Kubectl

Undoubtedly, kubectl has been the favourite command of K8s users. It works great, but it is painful because you use it to manually run a command for each resource in your Kubernetes application. This is prone to error, because we might forget to deploy one resource, or introduce a typo when writing our kubectl commands. As we add more parts to our application, the probability of these problems occurring increases.

Still, here's a bonus: execute the below script if you really want to use kubectl on the Docker EE platform.

sudo sh bootstrap.sh install_kubectl

Verify Kubectl Version

@master01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.11", GitCommit:"1df6a8381669a6c753f79cb31ca2e3d57ee7c8a3", GitTreeState:"clean", BuildDate:"2018-04-05T17:24:
03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2
018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying Docker Version

Here you go. Docker EE 2.0 is all set up with Swarm and Kubernetes running side by side.

 docker version
Client: Docker Enterprise Edition (EE) 2.0
 Version:      17.06.2-ee-16
 API version:  1.30
 Go version:   go1.8.7
 Git commit:   9ef4f0a
 Built:        Thu Jul 26 16:41:28 2018
 OS/Arch:      linux/amd64
Server: Docker Enterprise Edition (EE) 2.0
 Engine:
  Version:      17.06.2-ee-16
  API version:  1.30 (minimum version 1.12)
  Go version:   go1.8.7
  Git commit:   9ef4f0a
  Built:        Thu Jul 26 16:40:18 2018
  OS/Arch:      linux/amd64
  Experimental: false
 Universal Control Plane:
  Version:       3.0.5
  ApiVersion:                   1.30
  Arch:                         amd64
  BuildTime:                    Thu Aug 30 17:47:03 UTC 2018
  GitCommit:                    f588f8a
  GoVersion:                    go1.9.4
  MinApiVersion:                1.20
  Os:                           linux
 Kubernetes:
  Version:      1.8+
  buildDate:                   2018-04-26T16:51:21Z
  compiler:                    gc
  gitCommit:                   8d637aedf46b9c21dde723e29c645b9f27106fa5
  gitTreeState:                clean
  gitVersion:                  v1.8.11-docker-8d637ae
  goVersion:                   go1.8.3
  major:                       1
  minor:                       8+
  platform:                    linux/amd64
 Calico:
  Version:          v3.0.8
  cni:                             v2.0.6
  kube-controllers:                v2.0.5
  node:                            v3.0.8

Verifying the Kubernetes Nodes

@master01:~/test/docker101/docker-ee/ubuntu$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
master01   Ready     master    20m       v1.8.11-docker-8d637ae

Adding Worker Nodes

To add worker nodes, go to the Add Nodes section under the Docker Enterprise UI and click "Add a Node". It will display a command which needs to be executed on the worker nodes. This is all it takes to build a multi-node Docker EE Swarm and Kubernetes cluster.

m@master01:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
av668en5dinpin5jpi6ro0yfs     worker01            Ready               Active              
k4grcnyl6vbf0z17bh67cz9l5     worker02            Ready               Active              
slsvy00m1khejbo5itmupk034 *   master01            Ready               Active              Leader

@master01:~$ kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
master01   Ready      master    1h        v1.8.11-docker-8d637ae
worker01   NotReady   <none>    28s       v1.8.11-docker-8d637ae
worker02   Ready      <none>    3m        v1.8.11-docker-8d637ae
openusm@master01:~$ 

Downloading Client Bundle

In order to manage services using the Docker CLI, one needs to install the client bundle, and Docker, Inc. provides an easy way to install it. I put it under the script "install-client-bundle". All you need is to supply the correct username and password plus the UCP URL to make it work.

 

Installing Helm

openusm@master01:~$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7230  100  7230    0     0  17173      0 --:--:-- --:--:-- --:--:-- 17132
openusm@master01:~$ chmod u+x install-helm.sh
openusm@master01:~$ ./install-helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
openusm@master01:~$ helm init
Creating /home/openusm/.helm 
Creating /home/openusm/.helm/repository 
Creating /home/openusm/.helm/repository/cache 
Creating /home/openusm/.helm/repository/local 
Creating /home/openusm/.helm/plugins 
Creating /home/openusm/.helm/starters 
Creating /home/openusm/.helm/cache/archive 
Creating /home/openusm/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/openusm/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
openusm@master01:~$

Verifying Helm Version

helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Installing MySQL using Helm

helm install --name mysql stable/mysql
Error: release mysql failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default": access denied

You will encounter the above error message and it is expected. This is caused by tiller not having the correct service account / permissions to install applications into the cluster.

Creating Service Account for Tiller

To create a service account for tiller and apply the correct grants, follow these steps:

Create the tiller service account:

openusm@master01:~$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount "tiller" created
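
On a vanilla Kubernetes cluster (outside Docker EE), the equivalent grant would typically be a ClusterRoleBinding created from the CLI, for example:

kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

On Docker EE, however, access is governed by UCP grants, so we create the grant through the UCP UI as described below.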

 

In the UCP UI, navigate to the User Management > Grants menu and click the Create button. In the Subject area, choose Service Account, select the kube-system namespace and the tiller service account, then hit Next:

 

In the Role area, select Full Control. Then hit Next. In the Resource area, select Namespaces, flip the switch to enable Apply grant to all existing and new namespaces, then hit Create:


Verify the final grant. It should look similar to what is shown below:

 

At this stage, if you have tiller installed in the cluster already, you will need to patch the tiller deployment to use the tiller service account just created:

$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

So before we can use Helm with a Kubernetes cluster, we need to install Tiller on it. It's as easy as running the below command:

openusm@master01:~$ helm init --service-account tiller
$HELM_HOME has been configured at /home/openusm/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

The above command installs Tiller (the Helm server-side component) onto your Kubernetes cluster and sets up local configuration in $HELM_HOME (default ~/.helm/). As with the rest of the Helm commands, 'helm init' discovers Kubernetes clusters by reading $KUBECONFIG (default '~/.kube/config') and using the default context.

We are good to go ahead and install a chart via helm install:

openusm@master01:~$ helm install --name mysql stable/mysql
NAME:   mysql
LAST DEPLOYED: Thu Oct 11 13:37:52 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
mysql-694696f6d5-w4j5n  0/1    Pending  0         1s
==> v1/Secret
NAME   AGE
mysql  1s
==> v1/ConfigMap
mysql-test  1s
==> v1/PersistentVolumeClaim
mysql  1s
==> v1/Service
mysql  1s
==> v1beta1/Deployment
mysql  1s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local
To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql -p
To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql 3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
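Before moving on, you may want to confirm the release status and know how to clean it up. A quick sketch using standard Helm 2 commands:

helm ls                      # list releases; 'mysql' should show as DEPLOYED
helm status mysql            # re-print the release status and NOTES at any time
helm delete --purge mysql    # remove the release (and its history) when you are done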
    

In my next blog post, I will showcase how to build IstioMesh on Docker Enterprise 2.0. Stay tuned !

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, head over to https://trial.docker.com/

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

If you are looking to contribute, join me on the Docker Community Slack channel.

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Estimated Reading Time: 10 minutes 

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go and is a fully grown platform that provides APIs to integrate into any logging, telemetry or policy system. The project adds only a very small overhead to your system. It is hosted on GitHub under this link. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Istio is composed of these components:

  • Envoy – Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Mixer – Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.
  • Pilot – A component responsible for configuring the proxies at runtime.
  • Citadel – A centralized component responsible for certificate issuance and rotation.
  • Node Agent – A per-node component responsible for certificate issuance and rotation.
  • Galley– Central component for validating, ingesting, aggregating, transforming and distributing config within Istio.

What benefits does Istio bring?

Figure: The sidecar intercepts all the network traffic

  • Istio lets you connect, secure, control, and observe services.
  • It helps to reduce the complexity of service deployments and eases the strain on your development teams.
  • It provides developers and DevOps fine-grained visibility and control over traffic without requiring any changes to application code.
  • It provides CIOs with the necessary tools needed to help enforce security and compliance requirements across the enterprise.
  • It provides behavioral insights & operational control over the service mesh as a whole.
  • Istio makes it easy to create a network of deployed services with automatic Load Balancing for HTTP, gRPC, Web Socket & TCP Traffic.
  • It provides fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • It enables a pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Istio provides automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • It provides secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

Istio currently supports Kubernetes. In this blog post, I will showcase how one can bring up Istio on the Play with Kubernetes platform.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.
Click on the Login button to authenticate with Docker Hub or GitHub ID.

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to create your first Kubernetes cluster node. It is automatically named “node1”. Each instance comes with Docker Community Edition (CE) and kubeadm pre-installed. This node will act as the master node for our cluster.

Bootstrapping the Master Node

You can bootstrap the Kubernetes cluster by initializing the master node (node1) with the below script. Copy the script content into a file named bootstrap.sh and make it executable using the “chmod +x bootstrap.sh” command.
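The exact script is not reproduced here, so below is a minimal sketch of what such a bootstrap.sh typically contains on Play with Kubernetes; the arguments follow the standard kubeadm flow and may differ from the original script:

#!/bin/bash
# Initialize the control plane, advertising this node's own IP
kubeadm init --apiserver-advertise-address $(hostname -i)
# Point kubectl at the new cluster
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# Install a pod network add-on (Weave Net in this sketch)
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"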

When you execute this script, kubeadm writes the configuration files it needs, sets up RBAC and deploys the Kubernetes control plane components (such as kube-apiserver, kube-dns, kube-proxy and etcd) as part of the initialization. Control plane components are deployed as Docker containers.

Copy the above kubeadm join token command and save it for the next step. This command will be used to join other nodes to your cluster.

Adding Worker Nodes

Click on “Add New Instance” to add a new worker node, then run the kubeadm join command you copied earlier on it.

Checking the Cluster Status
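The original screenshots are not reproduced here; from the master node (node1) you can check the cluster state with something like the commands below:

kubectl get nodes
kubectl get componentstatuses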

 

 

Verifying the running Pods
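Again in place of the screenshot, a quick way to verify the control-plane pods that kubeadm deployed:

kubectl get pods --all-namespaces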


Installing Istio 1.0.0

Istio is deployed in a separate Kubernetes namespace, istio-system. We will verify that later. For now, copy the content below into a file called install_istio.sh, make it executable, and run it to install Istio and the related tools.

#!/bin/bash
# Download the latest Istio release and move into its directory
curl -L https://git.io/getLatestIstio | sh -
cd istio-1.0.0
export PATH=$PWD/bin:$PATH
# Install Istio's CRDs, then the full demo profile (with add-ons)
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo.yaml
# Verify the Istio services and pods
kubectl get svc -n istio-system
kubectl get pods -n istio-system

You should see the screen flooded with output similar to the one below.

 

 

As shown above, the demo profile enables Prometheus, ServiceGraph, Jaeger, Grafana and Zipkin by default.

Please note – while executing this script, you might run into the below error message –

unable to recognize “install/kubernetes/istio-demo.yaml”: no matches for admissionregistration.k8s.io/, Kind=MutatingWebhookConfiguration

The error message is expected.

As soon as the script finishes, you should see a long list of ports displayed at the top center of the page.

Verifying the Services


Exposing the Services

To expose the Prometheus, Grafana & Servicegraph services, you will need to delete the existing ClusterIP services and re-create them as NodePort services, so that you can reach them via the ports displayed at the top of the instance page (as shown below).
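The commands themselves are not shown in the original screenshot, so here is a rough sketch of that procedure. The deployment names and container ports are taken from the istio-demo manifests; Kubernetes assigns the actual NodePort values, which may differ from the ones mentioned below:

# Replace the ClusterIP services with NodePort services
kubectl -n istio-system delete svc grafana prometheus servicegraph
kubectl -n istio-system expose deployment grafana --port=3000 --type=NodePort --name=grafana
kubectl -n istio-system expose deployment prometheus --port=9090 --type=NodePort --name=prometheus
kubectl -n istio-system expose deployment servicegraph --port=8088 --type=NodePort --name=servicegraph
# Note the assigned NodePorts
kubectl -n istio-system get svc grafana prometheus servicegraph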

In my case, the Grafana page was reachable by clicking on port “30004” and the Prometheus page by clicking on port “30003”.

You can check Prometheus metrics by selecting the necessary option as shown below:

Under Grafana Page, you can add “Data Source” for Prometheus and ensure that the dashboard is up and running:

Congratulations! You have installed Istio on a Kubernetes cluster. The following services have been installed on the K8s playground:

  • Istio Controllers and related RBAC rules
  • Istio Custom Resource Definitions
  • Prometheus and Grafana for Monitoring
  • Jaeger for Distributed Tracing
  • Istio Sidecar Injector (we’ll take a look at this in the next section)

Installing Istioctl

Istioctl is the command-line configuration utility for Istio. It lets you create, list, modify and delete configuration resources in the Istio system.
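No separate download is needed on this setup – istioctl ships in the bin/ directory of the Istio release we extracted earlier. A quick sketch to put it on the PATH and verify it:

# istioctl lives under the extracted release directory
cd istio-1.0.0
export PATH=$PWD/bin:$PATH
istioctl version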

Deploying the Sample BookInfo Application

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation – BookInfo. This is a simple mock bookstore application made up of four services that provide a web product page, book details, reviews (with several versions of the review service), and ratings – all managed using Istio.

Deploying BookInfo Services

 

[node1 istio-1.0.0]$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service "details" created
deployment "details-v1" created
service "ratings" created
deployment "ratings-v1" created
service "reviews" created
deployment "reviews-v1" created
deployment "reviews-v2" created
deployment "reviews-v3" created
service "productpage" created
deployment "productpage-v1" created

 

 

[node1 istio-1.0.0]$ istioctl create -f samples/bookinfo/networking/bookinfo-gateway.yaml
Created config gateway/default/bookinfo-gateway at revision 13436
Created config virtual-service/default/bookinfo at revision 13438

 

 

Verifying BookInfo Application

[node1 istio-1.0.0]$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.29.111     <none>        9080/TCP   1m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    1h
productpage   ClusterIP   10.106.144.171   <none>        9080/TCP   1m
ratings       ClusterIP   10.111.164.221   <none>        9080/TCP   1m
reviews       ClusterIP   10.99.195.21     <none>        9080/TCP   1m

[node1 istio-1.0.0]$ curl 10.106.144.171:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">

<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">

</head>
<body>

<p><h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3></p>

<table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><td>details</td></tr><tr><th>name</th><td>http://details:9080</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><th>name</th><th>children</th></tr><tr><td>details</td><td>http://details:9080</td><td></td></tr><tr><td>reviews</td><td>http://reviews:9080</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><th>name</th><th>children</th></tr><tr><td>ratings</td><td>http://ratings:9080</td><td></td></tr></table></td></tr></table></td></tr></table>

<p><h4>Click on one of the links below to auto generate a request to the backend as a real user or a tester
</h4></p>
<p><a href="/productpage?u=normal">Normal user</a></p>
<p><a href="/productpage?u=test">Test user</a></p>

<!-- Latest compiled and minified JavaScript -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>

<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>

</body>
</html>

 

Accessing it via Web URL

[node1 istio-1.0.0]$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.29.111     <none>        9080/TCP   2m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    1h
productpage   ClusterIP   10.106.144.171   <none>        9080/TCP   2m
ratings       ClusterIP   10.111.164.221   <none>        9080/TCP   2m
reviews       ClusterIP   10.99.195.21     <none>        9080/TCP   2m
[node1 istio-1.0.0]$ kubectl delete svc productpage
service "productpage" deleted
[node1 istio-1.0.0]$ kubectl create service nodeport productpage --tcp=9080 --node-port=30010
service "productpage" created

 

You should now be able to access the BookInfo sample application as shown below:

In my next blog post, I will showcase how to bring up Kubernetes Dashboard under Play with Kubernetes Platform.

 

Kubernetes Hands-on Lab #2 – Running Our First Nginx Cluster

Estimated Reading Time: 2 minutes 

 

Nginx (pronounced “engine-x”) is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The nginx project started with a strong focus on high concurrency, high performance and low memory usage. It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows.

In my last blog post, I showcased how to build a 5-node Kubernetes cluster. In this blog post, we will see how to run our first Nginx application on that cluster.

Verifying 5-Node K8s Cluster

[node1 ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    1h        v1.10.2
node2     Ready     <none>    1h        v1.10.2
node3     Ready     <none>    1h        v1.10.2
node4     Ready     <none>    1h        v1.10.2
node5     Ready     <none>    14m       v1.10.2
[node1 ~]$

Running Nginx with 4 Replicas

kubectl run nginx --image=nginx:latest --replicas=4

Verifying K8s Pods Up and Running

[node1 ~]$ kubectl get po
NAME                     READY     STATUS    RESTARTS   AGE
nginx-5db977d67c-6sdfd   1/1       Running   0          2m
nginx-5db977d67c-jfq9h   1/1       Running   0          2m
nginx-5db977d67c-vs925   1/1       Running   0          2m
nginx-5db977d67c-z5r45   1/1       Running   0          2m
[node1 ~]$

Watch the pods

kubectl get pods -w

Expose the Nginx deployment on port 80:


kubectl expose deploy/nginx --port 80
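Optionally, you can confirm that the service is backed by the four Nginx pods by inspecting it and its endpoints (a quick check, not part of the original walkthrough):

kubectl get svc nginx
kubectl get endpoints nginx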

Testing the Nginx Service


IP=$(kubectl get svc nginx -o go-template --template '{{ .spec.clusterIP }}')

Send a few requests:

[node1 ~]$ curl $IP:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[node1 ~]$

In my next blog post, I will showcase how to build Istio Application on Play with Kubernetes Platform.

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Kubernetes Hands-on Lab #1 – Setting up 5-Node K8s Cluster

Estimated Reading Time: 4 minutes 

Are you new to Kubernetes? Want to build your career in Kubernetes? Then welcome! You are at the right place. This blog post series brings you tutorials that help you get hands-on experience with Kubernetes. Here you will find a mix of labs and tutorials that will help you, whether you are a beginner, SysAdmin, IT Pro or Developer. Yes, you read that right – it is a $0 learning platform. You don’t need any infrastructure: most of the tutorials run on the Play with K8s platform, a free browser-based learning environment. Kubernetes tools like kubeadm, kompose & kubectl are already installed for you. All you need to do is get started.

Kubernetes (often abbreviated to K8S), is a container orchestration platform for applications that run on containers. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

Kubernetes can speed up the development process by making easy, automated deployments, updates (rolling-update) and by managing our apps and services with almost zero downtime. It also provides self-healing. Kubernetes can detect and restart services when a process crashes inside the container. Any developer can package up applications and deploy them on Kubernetes with basic Docker knowledge.

At a minimum, Kubernetes can schedule and run application containers on clusters of physical or virtual machines. However, Kubernetes also allows developers to ‘cut the cord’ to physical and virtual machines, moving from a host-centric infrastructure to a container-centric infrastructure, which provides the full advantages and benefits inherent to containers. Kubernetes provides the infrastructure to build a truly container-centric development environment. K8s provides a rich set of features for container grouping, container orchestration, health checking, service discovery, load balancing, horizontal autoscaling, secrets & configuration management, storage orchestration, resource usage monitoring, CLI, and dashboard.

This first blog of the series is targeted at setting up a 5-node Kubernetes cluster. To get started with Kubernetes, follow the steps below:

Click on “Start” button to get access to PWK instances as shown below:

Click on Add Instances to setup first k8s node.

Cloning the Repository

git clone https://github.com/ajeetraina/kubernetes101/
cd kubernetes101/install

Bootstrapping the First Node Cluster

sh bootstrap.sh

Adding New K8s Cluster Node

Click on “Add New Instance” to add another k8s cluster node.

Wait for a minute or so until the bootstrap on the first node completes.

Copy the command starting with kubeadm join ..... We will need to run it on the worker node.

Setting up Worker Node

Click on “Add New Instance” and paste the last kubeadm command on this fresh new worker node.

[node2 ~]$ kubeadm join --token 4f924f.14eb7618a20d2ece 192.168.0.8:6443 --discovery-token-ca-cert-hash  sha256:a5c25aa4573e06a0c11b11df23c8f85c95bae36cbb07d5e7879d9341a3ec67b3

You will see the below output:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.8:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.8:6443"
[discovery] Requesting info from "https://192.168.0.8:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.8:6443"
[discovery] Successfully established connection with API Server "192.168.0.8:6443"
[bootstrap] Detected server version: v1.8.15
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
[node2 ~]$

Verifying Kubernetes Cluster

Run the below command on master node

[node1 ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    15m       v1.10.2
node2     Ready     <none>    1m        v1.10.2
[node1 ~]$

Adding the Remaining Worker Nodes

Repeat the same steps – click “Add New Instance” and run the saved kubeadm join command on the new node – for node3, node4 and node5. Once all of them have joined, verify the cluster from the master node:

[node1 ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    58m       v1.10.2
node2     Ready     <none>    57m       v1.10.2
node3     Ready     <none>    57m       v1.10.2
node4     Ready     <none>    57m       v1.10.2
node5     Ready     <none>    54s       v1.10.2
[node1 ]$ kubectl get po
No resources found.
[node1 ]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1h
[node1 ]$

In the next blog series, I will showcase how to build a simple Nginx application on top of 5-Node Kubernetes cluster.

Kubernetes Hands-on Lab #2 – Running Our First Nginx Cluster

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Test Drive Your First Istio Deployment using Play with Kubernetes Platform

Estimated Reading Time: 3 minutes

If you’re a developer and have been spending a lot of time developing apps recently, you already understand a whole new set of challenges related to microservice architecture. There has been a shift from bloated monolithic apps to small, focused microservices to speed up implementation and improve resiliency, but the fact is that developers now have to worry about the challenges of integrating services in distributed systems: accountability for service discovery, load balancing, registration, fault tolerance, monitoring, routing, compliance and security.

Let us understand the challenges which microservices bring to developers and operators in detail. Consider a simple 1st-generation service mesh scenario. As shown below, Service (A) talks to Service (B). Instead of talking directly, the request gets routed through Nginx. Nginx looks up the route in Consul (a service discovery tool) and automatically retries on HTTP 502s.

 

But as the number of microservices grows, the challenges listed below arise for both developers and the operations team –

  • How do you enable this growing number of microservices to talk to each other?
  • How do you load-balance across them?
  • How do you provide role-based routing between them?
  • How do you control outgoing traffic from these microservices and test canary deployments?
  • How do you manage the complexity around all these moving pieces?
  • How can an operator implement fine-grained control of traffic behavior with rich routing rules?
  • How should one implement traffic encryption, service-to-service authentication and strong identity assertions?

 

In a nutshell, although you could put service discovery and retry logic into the application or into networking middleware, the fact is that service discovery is tricky to get right.

Enter Istio’s Service Mesh

“Service Mesh” is one of the hottest buzzwords of 2018. As its name suggests, it is a configurable infrastructure layer for a microservices app. It describes the network of microservices that make up such applications and the interactions between them. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit breaker pattern, and other capabilities.

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go and is actually a platform, including APIs that let it integrate into any logging, telemetry or policy system. The project adds only a very small overhead to your system and is hosted on GitHub. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Read the full story at Knowledgehut.

Kubernetes Application Deployment Made Easy using Helm on Docker for Mac 18.05.0 CE

Estimated Reading Time: 10 minutes 

The Docker for Mac 18.05.0 CE release went GA last month. With this release, you can now select your orchestrator directly from the UI in the “Kubernetes” pane, which allows “docker stack” commands to deploy to Swarm clusters even if Kubernetes is enabled in Docker for Mac. This is the first time such a feature has shipped in any Desktop edition. To try it out, ensure that you are using the Edge release of Docker for Mac 18.05.0 CE. Once you update your Docker for Mac, you can find the new feature by opening the Preferences pane UI and then selecting Kubernetes, as shown below:

 

 

 

Whenever you select your choice of orchestrator, it updates the ~/.docker/config.json file behind the scenes, as shown below:
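For reference, the relevant part of ~/.docker/config.json looks roughly like the snippet below when Kubernetes is selected. The key name is from memory and may vary between releases:

cat ~/.docker/config.json
{
  ...
  "stackOrchestrator": "kubernetes"
}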

 

Docker for Mac is used every day by hundreds of thousands of developers to build, test and debug containerized apps in their local and dev environments. Developers building both docker-compose and Swarm-based apps, and apps destined for deployment on Kubernetes, now get a simple-to-use development system that takes optimal advantage of their laptop or workstation. All container tasks – build, run and push – run on the same Docker instance with a shared set of images, volumes and containers. With the current release, it is far simpler to install, so you can have Docker containers running on your Mac in just a few minutes.

Check out the curated list of blogs around Docker for Mac

 

Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes

Docker for Mac provides the docker stack command to deploy your application to both Swarm and Kubernetes. This becomes very useful for Docker Swarm users, as they can use the same Swarm CLI to deploy to Kubernetes. But here is an extra bonus – Docker for Mac now works flawlessly with the Helm package manager.

Why Yet another Package Manager?

Let’s accept the fact that Kubernetes can become very complex with all the objects you need to handle ― such as ConfigMaps, services, pods, Persistent Volumes ― in addition to the number of releases you need to manage. These can be managed with Kubernetes Helm, which offers a simple way to package everything into one simple application and advertises what you can configure.

In case you are completely new – Helm is an open source project that enables developers to create packages of containerized apps to make installation much simpler. Helm is the package manager for Kubernetes and it’s the best way to find, share, and deploy software to k8s. The project was initially created by Deis and has since been donated to the Cloud Native Computing Foundation (CNCF).

Users can install Helm with one click or configure it to suit their organization’s needs. For example, if you want to package and release version 1.0, making only certain parts configurable, this can be done with Helm. Then with version 2.0, additional parts can be made configurable.
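For example, values exposed by a chart can be overridden at install time. A small sketch follows; the parameter name below is illustrative, taken from the stable/wordpress chart's documented values:

# Override individual values on the command line...
helm install stable/wordpress --name mywp --set wordpressUsername=admin
# ...or keep them in a file and pass it in
helm install stable/wordpress --name mywp -f custom-values.yaml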

Up until now, it was a sub-project of Kubernetes, the popular container orchestration tool, but as of today it is a stand-alone project.

 

Helm is built on three important concepts:

  • Charts – a bundle of information necessary to create an instance of a Kubernetes application
  • Config – contains configuration information that can be merged into a packaged chart to create a releasable object
  • Release – a running instance of a chart, combined with a specific config

Architecture of Helm:

Architecturally it’s built on two major components:

 

Helm Client, a command line tool with the following responsibilities

  • Interacting with the Tiller server
  • Sending charts to be installed
  • Upgrading or uninstalling of existing releases
  • Managing repositories

Tiller Server, an in-cluster server with the following responsibilities:

  • Interacting with the Helm client
  • Interfacing with the Kubernetes API server
  • Combining a chart and configuration to build a release
  • Installing charts and tracking the release
  • Upgrading and uninstalling charts

Both the Helm client and Tiller are written in Go and use gRPC to interact with each other. Tiller (the server part, running inside Kubernetes) provides a gRPC server to connect with the client and uses the k8s client library to communicate with Kubernetes. It does not require its own database, as the information is stored within Kubernetes as ConfigMaps.
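You can see this for yourself – Helm 2 stores each release revision as a ConfigMap in Tiller's namespace, labelled with OWNER=TILLER. A quick sketch, assuming the default kube-system namespace:

# List the ConfigMaps Tiller uses to track releases
kubectl get configmaps --namespace kube-system -l "OWNER=TILLER"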

 

Installing Helm

Pre-requisites:

  • Docker for Mac 18.05.0 CE – Edge Release
  • Enable Kubernetes under Preference Pane UI

To install Helm, you just need a one-liner command on macOS:

[Captains-Bay]🚩 >  brew install kubernetes-helm
[Captains-Bay]🚩 >  helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

	$ helm init

This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.

Common actions from this point include:

- helm search:    search for charts
- helm fetch:     download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment:
  $HELM_HOME          set an alternative location for Helm files. By default, these are stored in ~/.helm
  $HELM_HOST          set an alternative Tiller host. The format is host:port
  $HELM_NO_PLUGINS    disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
  $TILLER_NAMESPACE   set an alternative Tiller namespace (default "kube-system")
  $KUBECONFIG         set an alternative Kubernetes configuration file (default "~/.kube/config")

Usage:
  helm [command]

Available Commands:
  completion  Generate autocompletions script for the specified shell (bash or zsh)
  create      create a new chart with the given name
  delete      given a release name, delete the release from Kubernetes
  dependency  manage a chart's dependencies
  fetch       download a chart from a repository and (optionally) unpack it in local directory
  get         download a named release
  history     fetch release history
  home        displays the location of HELM_HOME
  init        initialize Helm on both client and server
  inspect     inspect a chart
  install     install a chart archive
  lint        examines a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      add, list, or remove Helm plugins
  repo        add, list, remove, update, and index chart repositories
  reset       uninstalls Tiller from a cluster
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  serve       start a local http web server
  status      displays the status of the named release
  template    locally render templates
  test        test a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client/server version information

Flags:
      --debug                           enable verbose output
  -h, --help                            help for helm
      --home string                     location of your Helm config. Overrides $HELM_HOME (default "/Users/ajeetraina/.helm")
      --host string                     address of Tiller. Overrides $HELM_HOST
      --kube-context string             name of the kubeconfig context to use
      --tiller-connection-timeout int   the duration (in seconds) Helm will wait to establish a connection to tiller (default 300)
      --tiller-namespace string         namespace of Tiller (default "kube-system")

Use "helm [command] --help" for more information about a command.

Verify the Helm version. If the server and client versions don’t match, you need to upgrade Tiller in order to deploy applications seamlessly (as shown below):

[Captains-Bay]🚩 >  helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
[Captains-Bay]🚩 >  helm init --upgrade
$HELM_HOME has been configured at /Users/ajeetraina/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
[Captains-Bay]🚩 >  helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Installing WordPress Application using Helm

Say you want to install the WordPress application using Helm. First you need to update the repository, and then you can search for the application using the helm search command as shown below:

[Captains-Bay]🚩 > helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

[Captains-Bay]🚩 >  helm search wordpress
NAME            	CHART VERSION	APP VERSION	DESCRIPTION
stable/wordpress	1.0.2        	4.9.4      	Web publishing platform for building blogs and ...

Just one command and your WordPress application is up and running:

[Captains-Bay]🚩 >  helm install stable/wordpress --name mywp
NAME:   mywp
LAST DEPLOYED: Sat Jun  2 07:19:25 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mywp-mariadb    1        1        1           0          0s
mywp-wordpress  1        1        1           0          0s

==> v1/Pod(related)
NAME                             READY  STATUS             RESTARTS  AGE
mywp-mariadb-b689ddf74-mlprh     0/1    Init:0/1           0         0s
mywp-wordpress-774555bd4b-hcdc2  0/1    ContainerCreating  0         0s

==> v1/Secret
NAME            TYPE    DATA  AGE
mywp-mariadb    Opaque  2     1s
mywp-wordpress  Opaque  2     1s

==> v1/ConfigMap
NAME                DATA  AGE
mywp-mariadb        1     1s
mywp-mariadb-tests  1     1s

==> v1/PersistentVolumeClaim
NAME            STATUS   VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mywp-mariadb    Bound    pvc-2e1f1122-6607-11e8-8d79-025000000001  8Gi       RWO           hostpath      1s
mywp-wordpress  Pending  hostpath                                  1s

==> v1/Service
NAME            TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
mywp-mariadb    ClusterIP     10.98.65.236    <none>       3306/TCP                    0s
mywp-wordpress  LoadBalancer  10.109.204.199  localhost    80:31016/TCP,443:30638/TCP  0s


NOTES:
1. Get the WordPress URL:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace default -w mywp-wordpress'

  export SERVICE_IP=$(kubectl get svc --namespace default mywp-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP/admin

2. Login with the following credentials to see your blog

  echo Username: user
  echo Password: $(kubectl get secret --namespace default mywp-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

[Captains-Bay]🚩 >

Checking the Status of WordPress Application

[Captains-Bay]🚩 >  kubectl get svc --namespace default -w mywp-wordpress
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
mywp-wordpress   LoadBalancer   10.109.204.199   localhost     80:31016/TCP,443:30638/TCP   47s

That’s it. Just browse to WordPress using your localhost IP:
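Since the mywp-wordpress service is published as a LoadBalancer on localhost (port 80, per the output above), something like the below should open the blog from your Mac:

open http://localhost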

 

Cleaning up WordPress

[Captains-Bay] > helm delete mywp
release "mywp" deleted

Installing Prometheus Stack using Helm

[Captains-Bay]🚩 >  helm install stable/prometheus
NAME:   hasty-ladybug
LAST DEPLOYED: Sun Jun  3 09:00:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                                   STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
hasty-ladybug-prometheus-alertmanager  Bound   pvc-7732a80b-66de-11e8-b7d4-025000000001  2Gi       RWO           hostpath      2s
hasty-ladybug-prometheus-server        Bound   pvc-773541f3-66de-11e8-b7d4-025000000001  8Gi       RWO           hostpath      2s

==> v1/ServiceAccount
NAME                                         SECRETS  AGE
hasty-ladybug-prometheus-alertmanager        1        2s
hasty-ladybug-prometheus-kube-state-metrics  1        2s
hasty-ladybug-prometheus-node-exporter       1        2s
hasty-ladybug-prometheus-pushgateway         1        2s
hasty-ladybug-prometheus-server              1        2s

==> v1beta1/ClusterRoleBinding
NAME                                         AGE
hasty-ladybug-prometheus-kube-state-metrics  2s
hasty-ladybug-prometheus-server              2s

==> v1beta1/DaemonSet
NAME                                    DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
hasty-ladybug-prometheus-node-exporter  1        1        0      1           0          <none>         2s

==> v1/Pod(related)
NAME                                                          READY  STATUS             RESTARTS  AGE
hasty-ladybug-prometheus-node-exporter-9ggqj                  0/1    ContainerCreating  0         2s
hasty-ladybug-prometheus-alertmanager-5c67b8b874-4xxtj        0/2    ContainerCreating  0         2s
hasty-ladybug-prometheus-kube-state-metrics-5cbcd4d86c-788p4  0/1    ContainerCreating  0         2s
hasty-ladybug-prometheus-pushgateway-c45b7fd6f-2wwzm          0/1    Pending            0         2s
hasty-ladybug-prometheus-server-799d6c7c75-jps8k              0/2    Init:0/1           0         2s

==> v1/ConfigMap
NAME                                   DATA  AGE
hasty-ladybug-prometheus-alertmanager  1     2s
hasty-ladybug-prometheus-server        3     2s

==> v1beta1/ClusterRole
NAME                                         AGE
hasty-ladybug-prometheus-kube-state-metrics  2s
hasty-ladybug-prometheus-server              2s

==> v1/Service
NAME                                         TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
hasty-ladybug-prometheus-alertmanager        ClusterIP  10.96.193.91   <none>       80/TCP    2s
hasty-ladybug-prometheus-kube-state-metrics  ClusterIP  None           <none>       80/TCP    2s
hasty-ladybug-prometheus-node-exporter       ClusterIP  None           <none>       9100/TCP  2s
hasty-ladybug-prometheus-pushgateway         ClusterIP  10.97.92.108   <none>       9091/TCP  2s
hasty-ladybug-prometheus-server              ClusterIP  10.96.118.138  <none>       80/TCP    2s

==> v1beta1/Deployment
NAME                                         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
hasty-ladybug-prometheus-alertmanager        1        1        1           0          2s
hasty-ladybug-prometheus-kube-state-metrics  1        1        1           0          2s
hasty-ladybug-prometheus-pushgateway         1        1        1           0          2s
hasty-ladybug-prometheus-server              1        1        1           0          2s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[Captains-Bay]🚩 >  helm ls
NAME         	REVISION	UPDATED                 	STATUS  	CHART           	NAMESPACE
hasty-ladybug	1       	Sun Jun  3 09:00:30 2018	DEPLOYED	prometheus-6.7.0	default
mywp         	1       	Sat Jun  2 07:19:25 2018	DEPLOYED	wordpress-1.0.2 	default

Accessing Prometheus UI under Web Browser

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090

Now you can access Prometheus:

open http://localhost:9090

 

Cleaning Up

[Captains-Bay]🚩 >  helm delete hasty-ladybug
release "hasty-ladybug" deleted

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Under the Hood: Demystifying Docker For Mac CE Edition

Estimated Reading Time: 6 minutes 

Docker is a full development platform for creating containerized apps, and Docker for Mac is the most efficient way to start and run Docker on your MacBook. It runs on a LinuxKit VM and NOT on VirtualBox or VMware Fusion. It embeds a hypervisor (based on xhyve), a Linux distribution which runs on LinuxKit and filesystem & network sharing that is much more Mac native. It is a Mac native application, that you install in /Applications. At installation time, it creates symlinks in /usr/local/bin for docker & docker-compose and others, to the commands in the application bundle, in /Applications/Docker.app/Contents/Resources/bin.

One of the most amazing features of Docker for Mac is that you simply drag & drop the Mac application into /Applications to get the Docker CLI, and it just works flawlessly. The way filesystem sharing maps OSX volumes seamlessly into Linux containers and remaps macOS UIDs into Linux is one of its most anticipated features.

A Few Notable Features of Docker for Mac:

  • Docker for Mac runs in a LinuxKit VM.
  • Docker for Mac uses HyperKit instead of Virtual Box. Hyperkit is a lightweight macOS virtualization solution built on top of Hypervisor.framework in macOS 10.10 Yosemite and higher.
  • Docker for Mac does not use docker-machine to provision its VM. The Docker Engine API is exposed on a socket available to the Mac host at /var/run/docker.sock. This is the default location Docker and Docker Compose clients use to connect to the Docker daemon, so you can use docker and docker-compose CLI commands on your Mac.
  • When you install Docker for Mac, machines created with Docker Machine are not affected.
  • There is no docker0 bridge on macOS. Because of the way networking is implemented in Docker for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
  • Docker for Mac has now Multi-Architectural support. It provides binfmt_misc multi architecture support, so you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.

Top 5 Exclusive Features of Docker For Mac That You Can’t Afford to Ignore

Under this blog, I will deep dive into Docker for Mac architecture and show how to access service containers running on top of LinuxKit VM.

At the base of architecture, we have hypervisor called Hyperkit which is derived from xhyve. The xhyve hypervisor is a port of bhyve to OS X. It is built on top of Hypervisor.framework in OS X 10.10 Yosemite and higher, runs entirely in userspace, and has no other dependencies. HyperKit is basically a toolkit for embedding hypervisor capabilities in your application. It includes a complete hypervisor optimized for lightweight virtual machines and container deployment. It is designed to be interfaced with higher-level components such as the VPNKit and DataKit.

Sitting right next to HyperKit is the filesystem sharing solution. osxfs is a new shared file system solution, exclusive to Docker for Mac. It provides a close-to-native user experience for bind mounting macOS file system trees into Docker containers. To this end, osxfs features a number of unique capabilities as well as differences from a classical Linux file system. On macOS Sierra and lower, the default file system is HFS+; on macOS High Sierra, it is APFS. With the recent release, NFS volume sharing has been enabled for both Swarm & Kubernetes.

There is one more important component sitting next to HyperKit, aptly called VPNKit. VPNKit, a part of HyperKit, attempts to work nicely with VPN software by intercepting the VM traffic at the Ethernet level, parsing and understanding protocols like NTP, DNS, UDP and TCP, and doing the “right thing” with respect to the host’s VPN configuration. VPNKit operates by reconstructing Ethernet traffic from the VM and translating it into the relevant socket API calls on OSX. This allows the host application to generate traffic without requiring low-level Ethernet bridging support.

On top of these open source components, we have the LinuxKit VM, which runs containerd and the service containers, including the Docker Engine. The LinuxKit VM is built from a YAML file. The docker-for-mac.yml contains an example use of the open source components of Docker for Mac. The example has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit. It requires HyperKit, VPNKit and a Docker client on the host to run.

Sitting next to the Docker CE service container, we have the kubelet binary running inside the LinuxKit VM. If you are new to K8s, kubelet is an agent that runs on each node in the cluster and makes sure that containers are running in a pod. It basically takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes. On top of kubelet, we have the Kubernetes services running. We can run either a Swarm cluster or a Kubernetes cluster, and we can use the same Compose YAML file to bring up both clusters side by side.

Peeping into LinuxKit VM

Curious about the VM and what Docker for Mac CE Edition actually looks like under the hood?

Below is the list of commands you can use to get into the LinuxKit VM and see the Kubernetes services up and running. Here you go.

How to enter into LinuxKit VM?

Open the macOS terminal and run the below command to enter the LinuxKit VM:

$screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty

Listing out the service containers:

Earlier, ctr tasks ls used to list the service containers running inside the LinuxKit VM, but the recent release introduced the concept of namespaces, so you now need to run the below command to list the service containers:

$ ctr -n services.linuxkit tasks ls
TASK                    PID     STATUS
acpid                   854     RUNNING
diagnose                898     RUNNING
docker-ce               936     RUNNING
host-timesync-daemon    984     RUNNING
ntpd                    1025    RUNNING
trim-after-delete       1106    RUNNING
vpnkit-forwarder        1157    RUNNING
vsudd                   1198    RUNNING

How to display containerd version?

Under Docker for Mac 18.05 RC1, containerd version 1.0.1 is available as shown below:

linuxkit-025000000001:~# ctr version
Client:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55

Server:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55
linuxkit-025000000001:~#

How do I enter the docker-ce service container using containerd?

ctr -n services.linuxkit tasks exec -t --exec-id 936 docker-ce sh
/ # docker version
Client:
 Version:      18.05.0-ce-rc1
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   33f00ce
 Built:        Thu Apr 26 00:58:14 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce-rc1
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   33f00ce
  Built:        Thu Apr 26 01:06:49 2018
  OS/Arch:      linux/amd64
  Experimental: true
/ #

How to verify Kubernetes Single Node Cluster?

/ # kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-23T09:38:59Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
/ # kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    26d       v1.9.6
/ #

 

Interested to read further? Check out my curated list of blog posts –

Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes

 

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.