5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0

Estimated Reading Time: 6 minutes

Docker 18.03.0 CE is now available on the Docker for Mac platform. Docker for Mac 18.03.0 CE ships with Docker Compose 1.20.1, Kubernetes v1.9.2, Docker Machine 0.14.0 & Notary 0.6.0. A few of the promising features included in this release are listed below:

  • Configurable VM swap size under settings
  • Linux Kernel 4.9.87
  • Support for NFS volume sharing under Kubernetes
  • Reverted the default disk format to qcow2 for users running macOS 10.13 (High Sierra)
  • DNS name `host.docker.internal` used for host resolution from containers
  • Improved Kubernetes load-balanced services (no longer marked as `Pending`)
  • Fixed hostPath mounts in Kubernetes
  • Fixed support for AUFS
  • Fixed synchronisation between CLI `docker login` and GUI login
  • Updated Compose on Kubernetes to v0.3.0. Existing Kubernetes stacks will be removed during migration and need to be re-deployed on the cluster… and many more

In my last blog, I talked about context switching and showcased how one can switch the context from docker-for-desktop to Minikube under the Docker for Mac platform. A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster. In the ~/.kube/config file, you can see the list of contexts specified, as shown below:

clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURERENDQWZTZ0F3SUJBZ0lSQUpwcmVPY..V0gKZ0hVaVl6dGR…
    server: https://35.201.215.156
  name: gke_spheric-temple-187614_asia-east1-a_k8s-lab1
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOOd2..LQo=
    server: https://localhost:6443
  name: kubernetes
- cluster:
    certificate-authority: /Users/ajeetraina/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
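
For reference, you can list these contexts and check which one is currently active using standard kubectl commands:

kubectl config get-contexts     # lists all contexts; the current one is starred
kubectl config current-context  # prints the active context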

In this blog, I will showcase how you can bootstrap a Kubernetes cluster on the GKE platform using the context switching functionality under Docker for Mac.

Pre-requisite:

  • Install/Upgrade Docker for Mac 18.03 CE Edition
  • Install google-cloud-sdk
  • Enable Google Cloud Engine API
  • Authenticate Your Google Cloud using gcloud auth

Installing Docker for Mac 18.03 CE Edition

Installing Google Cloud SDK on your macOS

  • Make sure that Python 2.7 is installed on your system:

Ajeets-MacBook-Air:~ ajeetraina$ python -V
Python 2.7.10
  • Download the package below based on your system:

Platform               Package                                        Size     SHA256 Checksum
macOS 64-bit (x86_64)  google-cloud-sdk-195.0.0-darwin-x86_64.tar.gz  15.0 MB  56d72895dfc6c4208ca6599292aff629e357ad517e6979203a68a3a8ca5f6cc8
macOS 32-bit (x86)     google-cloud-sdk-195.0.0-darwin-x86.tar.gz     15.0 MB  e389ec98b65a0dbfc3f2c2637b9e3a375913b39d50e668fecb07cd04474fc080
  • Extract the archive to any location on your file system, then run the install script:
./google-cloud-sdk/install.sh
  • Restart your terminal for the changes to take effect.

Initializing the SDK

gcloud init

In your browser, log in to your Google user account when prompted and click Allow to grant permission to access Google Cloud Platform resources.

Enabling Kubernetes Engine API

You need to enable the Kubernetes Engine API to bootstrap a K8s cluster on Google Cloud Platform. To do so, open up this link.
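
Alternatively, if your Cloud SDK version supports the gcloud services command, you can enable the API from the CLI; a hedged equivalent of clicking through the console:

gcloud services enable container.googleapis.com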

Authenticate Your Google Cloud

Next, you need to authenticate your Google Cloud account using gcloud auth:

gcloud auth login

Done. We are all set to bootstrap K8s cluster…

Creating GKE Cluster Node

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw
WARNING: The behavior of --scopes will change in a future gcloud release: service-control and service-management scopes will no longer be added to what is specified in --scopes. To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster k8s-lab1...done.
Created [https://container.googleapis.com/v1/projects/spheric-temple-187614/zones/asia-east1-a/clusters/k8s-lab1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-east1-a/k8s-lab1?project=spheric-temple-187614
kubeconfig entry generated for k8s-lab1.
NAME      LOCATION      MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
k8s-lab1  asia-east1-a  1.7.11-gke.1    35.201.215.156  n1-standard-2  1.7.11-gke.1  3          RUNNING

Viewing it on Docker for Mac UI

Click on the whale icon on the top right of Docker for Mac, and by now you should see the new context appear.
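
Equivalently, you can switch to the new GKE context from the command line; the context name matches the kubeconfig entry generated by gcloud:

kubectl config use-context gke_spheric-temple-187614_asia-east1-a_k8s-lab1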

 

Listing the Nodes

 

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get nodes
NAME                                      STATUS    ROLES     AGE       VERSION
gke-k8s-lab1-default-pool-042d2598-591g   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-c633   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-q603   Ready     <none>    7m        v1.7.11-gke.1

Viewing it directly under GCP Platform

 

 

Connecting to Your GKE Cluster

There are 2 ways to do this:

Method-1: Click on the “Connection” button to see how to connect to k8s-lab1.

 

Method-2:

You can connect to your cluster via command-line or using a dashboard.

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters get-credentials k8s-lab1 --zone asia-east1-a --project captain-199803
Fetching cluster endpoint and auth data.
kubeconfig entry generated for k8s-lab1. 


Listing the Nodes under Google Cloud Platform

 

Deploy Nginx on GKE Cluster

Let us see how to deploy Nginx on a remote GKE cluster using Docker for Mac. This requires two commands: run and expose.

Step 1: Deploy nginx

$ kubectl run nginx --image=nginx --replicas=3

deployment "nginx" created

This will create a deployment that spins up 3 pods; each pod runs the nginx container.

Step 2: Verify that the pods are running.

You can see the status of deployment by running:

kubectl get pods -owide
NAME                    READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-7c87f569d-glczj   1/1       Running   0          8s        10.12.2.6   gke-k8s-lab1-default-pool-b2aaa29b-w904
nginx-7c87f569d-pll76   1/1       Running   0          8s        10.12.0.8   gke-k8s-lab1-default-pool-b2aaa29b-2gzh
nginx-7c87f569d-sf8z9   1/1       Running   0          8s        10.12.1.8   gke-k8s-lab1-default-pool-b2aaa29b-qpc7

You can see that each nginx pod is now running on a different node (virtual machine).

Once all pods have the Running status, you can then expose the nginx cluster as an external service.

Step 3: Expose the nginx cluster as an external service.

$ kubectl expose deployment nginx --port=80 --target-port=80 \
--type=LoadBalancer

service "nginx" exposed

This command will create a network load balancer to load balance traffic to the three nginx instances.

Step 4: Find the network load balancer address:

kubectl get service nginx
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx     LoadBalancer   10.15.247.8   <pending>     80:30253/TCP   12s

It may take several minutes before the value of EXTERNAL-IP appears. If you don’t see it the first time with the above command, retry every minute or so until the value of EXTERNAL-IP is displayed.

You can then visit http://EXTERNAL-IP/ to see the server being served through network load balancing.

GKE provides an amazing platform to view workloads & load balancers, as shown below.

GKE also provides a UI for displaying the load balancer.

In my upcoming blog post, I will showcase how context switching can help you switch your project between Dev, QA & Production environments flawlessly.

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

If you are looking to contribute, join me on the Docker Community Slack channel.

LinuxKit 101: Getting Started with LinuxKit for Google Cloud Platform

Estimated Reading Time: 7 minutes

 

“…LinuxKit? A New Beast?

     What problem does it solve for us?..”

 


In case you missed out on DockerCon 2017 and have no idea what LinuxKit is all about, you have arrived at the right place. For the next 30 minutes of your time, I will be talking about an open source container toolkit which Docker Inc. recently made public, and I will help you get started with it in a very easy and precise way.

What is LinuxKit?

LinuxKit is in the same family as Docker’s other open-source container toolkits such as InfraKit and VPNkit. It is essentially a container-native toolkit that allows organizations to build their own containerized operating systems that are secure, lean, modular and portable. In other words, it is more of a developer kit than an end-user product. The project is completely open source and is hosted on GitHub under an Apache 2.0 licence.

What problem does it solve?

Last year Docker Inc. started shipping Docker for Mac, Docker for Windows, Docker for Azure & Docker for GCP, which brought a Docker-native experience to these various platforms. One of the common problems the community faced was the non-standard Linux OS running on all those platforms. Cloud platforms especially do not ship with a standard Linux, which raised lots of concerns around portability, security and incompatibility. This led Docker Inc. to bundle Linux into the Docker platform so it runs uniformly in all of these places.

Talking about portability, Docker Inc. has always focused on products which run anywhere. Hence, they worked with partners like HP, Intel, ARM and Microsoft to ensure that the LinuxKit toolkit runs flawlessly on the desktop, on servers, in the cloud, on ARM and x86, in virtual environments and on bare metal. LinuxKit was built as optimized tooling for portability which can accommodate a new architecture or a new system in a very easy way.

What does LinuxKit hold?

LinuxKit includes the tooling to build custom Linux subsystems that include only the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. The toolkit works with Docker’s containerd. All components can be substituted with ones that match specific needs. You can optimize LinuxKit images for specific hardware platforms and host operating systems with just the drivers and other dependencies you need, and nothing more, rather than using a full-fat generic base. The toolkit basically tries to help you create your own slimline containerized operating system as painlessly as possible. The size of a LinuxKit image is in the tens of MBs (around 35-50 MB).

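The screenshot of the YAML file from the original post is not reproduced here; the sketch below shows the general shape of such a file. The image tags and hashes are placeholders rather than the exact contents of gcpwithdocker.yml, and newer LinuxKit releases moved the outputs section to the moby build -output flag, as the update later in this post explains:

kernel:
  image: "linuxkit/kernel:4.9.x"
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<hash>
  - linuxkit/runc:<hash>
  - linuxkit/containerd:<hash>
onboot:
  - name: dhcpcd
    image: "linuxkit/dhcpcd:<hash>"
services:
  - name: getty
    image: "linuxkit/getty:<hash>"
  - name: docker
    image: "docker:17.04.0-ce-dind"
outputs:
  - format: gcp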

Such a YAML file specifies a kernel and a base init system, plus a set of containers that are built into the generated image and started at boot time. It also specifies what formats to output (shown in the last lines), such as bootable ISOs and images for various platforms. Interestingly, system services are sandboxed in containers, with only the privileges they need. The configuration is designed for the container use case. The whole system is built to be used as immutable infrastructure, so it can be built and tested in your CI pipeline, deployed, and new versions redeployed when you wish to upgrade. To know more about the YAML specification, check this out.

What tools does LinuxKit use?

There are two basic tools which LinuxKit uses – linuxkit & moby.


In short, the moby tool converts the YAML specification into one or more bootable images.

Let us get started with LinuxKit to understand how it builds customized ISO images that run uniformly across various platforms. For this blog post, I have chosen Google Cloud Platform. We will build a LinuxKit-based customized ISO image locally on my MacBook Air and push it to Google Cloud Platform to run as a VM instance. I will be using my forked linuxkit repository, which I have built around so that it also runs Docker containers (for example, the Portainer container) inside the VM instance.

Steps:

  1. Install LinuxKit & Moby tool on macOS
  2. Building a LinuxKit ISO Image with Moby 
  3. Create a bucket under Google Cloud Platform
  4. Upload the LinuxKit ISO image to a GCP bucket using LinuxKit tool
  5. Initiate the GCP instance from the LinuxKit ISO image placed under GCP bucket
  6. Verifying Docker running inside LinuxKitOS 
  7. Running Portainer as Docker container

 

Pre-requisite:

– Install the Google Cloud SDK on your macOS system through this link. You will need to verify your Google account using the below command:

$ gcloud auth login

– Ensure that build tools like make are working properly

– Ensure that Go packages are installed on macOS

Steps:

  1. Clone the repository:

 

sudo git clone https://github.com/ajeetraina/linuxkit


2.  Change directory to linuxkit and run make, which builds “moby” and “linuxkit” for us:

cd linuxkit && sudo make

 

3.  Verify that these tools are built and placed under bin/:

cd bin/
ls
moby         linuxkit

4.  Copy these tools into your system PATH:

 
sudo cp bin/* /usr/local/bin/

5. Use the moby tool to build the customized ISO image:

 

cd examples/
sudo moby build gcpwithdocker.yml

 


 

[Update: 6/21/2017 – With the latest release of LinuxKit, the outputs section is no longer allowed inside the YAML file. This means that whenever you use the moby build command, you must specify -output gcp to build an image in a format that GCP will understand. For example:

moby build -output gcp example/gcpwithdocker.yml

This will create a local gcpwithdocker.img.tar.gz compressed image file.]

 

6.  Create a GCP storage bucket “mygcp” under your Google Cloud Platform account.


7. Run the linuxkit push command to push the image to GCP:

 

sudo linuxkit push gcp -project synthetic-diode-161714 -bucket mygcp gcpwithdocker.img.tar.gz

 


[Note: “synthetic-diode-161714” is my GCP project name and “mygcp” is the bucket name which I created in the earlier step. Please adjust these as per your environment.]

Please note that you might need to enable the Google Cloud API using this link in case you encounter an “unable to connect GCP” error.

8.  You can now run the image you created, and it should show up as a VM instance on Google Cloud Platform; a sketch of the command follows.

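Assuming the same project and image name as the push step (exact flag names may vary between LinuxKit releases):

sudo linuxkit run gcp -project synthetic-diode-161714 gcpwithdocker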

This will bring up a LinuxKit OS.


You can also verify that a VM instance has come up on the GCP platform.


9. You can use the runc command to list all the services which were defined under the gcpwithdocker.yml file.

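A sketch from the LinuxKit VM’s console (the exact service names depend on what gcpwithdocker.yml defines, but the docker service used in the next step should be among them):

runc list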

10. As shown above, one of the services I am interested in is called “docker”. You can use the below command to enter the docker service:

 

runc exec -t docker sh

Wow! It is running the latest Docker 17.04.0-ce version.
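
You can confirm this yourself from inside the service’s shell:

docker version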

11.  Let us try to run the Portainer application and check if it works well; see the sketch below.

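The screenshot is not reproduced here; a minimal sketch of running Portainer from inside the docker service (portainer/portainer was the image name at the time; the port mapping and socket mount are the usual invocation, not taken verbatim from the post):

docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer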

You can verify the IP address by running ifconfig; in my case the instance’s external IP is 35.187.162.100.


Now this is what I call “the coolest stuff on earth”. LinuxKit allows you to build your own secure, modular, portable, lean-and-mean containerized OS, and that too in just minutes. I am currently exploring LinuxKit in terms of a bare metal OS and will share my findings in my next blog post.

Did you find this blog helpful? Are you planning to explore LinuxKit? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking to contribute, join me on the Docker Community Slack channel.

Introducing new RexRay 0.8 with Docker 17.03 Managed Plugin System for Persistent Storage on Cloud Platforms

Estimated Reading Time: 5 minutes

The DellEMC Rex-Ray 0.8 final release was announced last week. Graduated to a top-level project within the {code} community, the RexRay 0.8 release is considered one of the largest releases to date. The new release introduces support for a long list of new storage platforms like S3FS, EBS, EFS, GCEPD & ScaleIO, shown below:


Public cloud storage is one of the fastest growing sectors in storage, with leaders like Amazon AWS, Google Cloud Storage and Microsoft Azure. With the release of RexRay 0.8, the {code} community took the right approach in targeting community-contributed drivers, starting with the Amazon EFS driver and then quickly adding Digital Ocean, FittedCloud, Google Cloud Engine (GCEPD) & Microsoft Azure Unmanaged Disk drivers.

Introducing New Docker 17.03 Volume Plugin System

With the Docker 17.03 release, a new managed plugin system was introduced. This is quite different from the old Docker plugin system. Plugins are now distributed as Docker images and can be hosted on Docker Hub or on a private registry. A volume plugin enables Docker volumes to persist across multiple Docker hosts.


In case you are very new to Docker plugins, they basically extend Docker’s functionality. A plugin is a process running on the same or a different host as the docker daemon, which registers itself by placing a file on the docker host in one of the plugin directories: .sock files (UNIX domain sockets placed under /run/docker/plugins), .spec files (text files containing a URL, such as unix:///other.sock or tcp://localhost:8080, placed under /etc/docker/plugins or /usr/lib/docker/plugins), or .json files (text files containing a full JSON specification for the plugin, placed under /etc/docker/plugins). You can refer to this in case you want to develop your own Docker volume plugin.

Running RexRay inside Docker container

Yes, you read it correctly! With the introduction of the Docker 17.03 managed plugin system, you can now run RexRay inside a Docker container flawlessly. The Rex-Ray volume plugin is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

You can list the available Docker volume plugins for various storage platforms using docker search rexray.


Let us test-drive the RexRay volume plugin on a Swarm Mode cluster for the first time. I have a 4-node Swarm Mode cluster running on Google Cloud Platform.


Verify that all the cluster nodes are running the latest 17.03.0-ce (Community Edition).

Installing the RexRay volume plugin is just a one-liner command.

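The screenshot is not reproduced here; based on the swarm-exec step later in this post, the one-liner is along these lines:

$ docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray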

 

You can inspect the Rex-Ray volume plugin using the docker plugin inspect command.

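For reference, the command takes the plugin name:

$ docker plugin inspect rexray/gcepd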

It’s time to create a volume using the docker volume create utility:

$ sudo docker volume create --driver rexray/gcepd --name storage1 --opt=size=32


You can verify that it is visible under the GCE console window.


Let us try running a few applications which use the RexRay volume plugin, as shown:

$ docker run -dit --name mydb -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --volume-driver=rexray/gcepd -v dbdata:/var/lib/mysql mysql:5.7

Verify that the MySQL service is up and running using the docker logs <container-id> command.


By now, we should be able to see the new volume called “dbdata” created and listed under the docker volume ls command.


It should get displayed under the GCE console too.


Using Rex-Ray Volume Plugin under Docker 17.03 Swarm Mode

This is the most interesting section of this blog post. The RexRay volume plugin has worked great for us so far, especially for a single Docker host running a number of services. But what if I want a RexRay volume to persist across multiple Docker hosts (a Swarm Mode cluster)? Yes, there is one possible way to achieve this – using swarm-exec. It executes a docker command across the swarm cluster. Credits to Madhu Venugopal @ Docker Team for assisting me in testing this tool.


Please remember that this is an UNOFFICIAL way of achieving a volume plugin implementation across a swarm cluster. I found this tool really cool and hope that it gets integrated within the Docker official repository.

First, we need to clone this repository:

$ git clone https://github.com/mavenugo/swarm-exec

Run the below commands to push the plugin across the swarm cluster:

$ cd swarm-exec

$ ./swarm-exec.sh docker plugin install --grant-all-permissions rexray/gcepd GCEPD_TAG=rexray


Let’s quickly verify the plugin on the master node using docker plugin ls.


Then verify it the same way on the worker nodes.


Running docker volume inspect <volname> should show the rexray volume driver for this particular volume.

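For reference, with dbdata being the volume created earlier:

$ docker volume inspect dbdata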

Creating a MySQL service which uses a Rex-Ray volume under a Swarm Mode cluster; a sketch follows.

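The screenshot is not reproduced here; a sketch of such a service using the standard --mount flag with a volume driver (the service name and MySQL credentials reuse the earlier standalone example; this is an illustration, not the exact command from the post):

$ docker service create --name mydb --replicas 1 \
    --mount type=volume,source=dbdata,target=/var/lib/mysql,volume-driver=rexray/gcepd \
    -e MYSQL_ROOT_PASSWORD=wordpress \
    mysql:5.7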

Verify that the service is up and running using docker service ls.


To conclude, the new version of RexRay looks promising and brings support for various cloud storage platforms. It continues to be a leading open source storage orchestration engine, and now, with the inclusion of the Docker 17.03 managed plugin architecture, it will definitely reduce the pain of implementing persistent storage solutions.