
A First Look at Compose on Kubernetes for Minikube


Say Bye to Kompose!

Let's begin with a problem statement: the Kubernetes API is huge. More than 50 first-class objects in the latest release, from Pods and Deployments to ValidatingWebhookConfiguration and ResourceQuota, can overwhelm anyone. For a developer, this translates into verbose and complicated cluster configuration. Hence there is a need for a simplified approach (like the Swarm CLI/API) to deploy and manage applications running on a Kubernetes cluster.

On the second day of DockerCon, Docker Inc. open sourced the Compose on Kubernetes project. This tool is all about simplifying Kubernetes. If you are not aware, Docker Enterprise Edition already had this capability, starting with Compose file version 3.3: the same docker-compose.yml file can be used for Swarm deployments, and the same stack can be deployed as Kubernetes workloads.

Let me explain what this actually means. Imagine you are using Docker Desktop on your MacBook. Docker Desktop can run a single-node Swarm as well as a Kubernetes cluster for your development environment, and it lets you switch context from the local cluster to a remote Swarm or Kubernetes cluster running on a cloud platform like GKE or AKS. Once you have developed code on your local single-node cluster, which could be Minikube or Docker Desktop for Mac, you might want to test it on a remote cloud platform. All it requires is a click to switch the context from the local cluster to GKE or AKS. You can then use the same Swarm CLI commands to deploy the application to the cloud platform using the same Docker Compose file. Isn't that cool?
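
As a rough sketch of that workflow on the CLI (assuming Docker 18.09+ with Docker Desktop, where the stack commands accept an --orchestrator flag and honour the DOCKER_STACK_ORCHESTRATOR variable; the file and stack names below are placeholders), the same Compose file can be pointed at either orchestrator:

# Deploy the stack to Swarm...
docker stack deploy --orchestrator=swarm -c docker-compose.yml myapp

# ...or deploy the very same file as Kubernetes workloads
docker stack deploy --orchestrator=kubernetes -c docker-compose.yml myapp

# The orchestrator can also be set once for the whole shell session
export DOCKER_STACK_ORCHESTRATOR=kubernetes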

Before we jump directly to the implementation phase, let us spend some time understanding how the mapping from Swarm to Kubernetes actually happens. Fundamentally, a 1:1 mapping of Swarm onto Kubernetes is not straightforward. Since a stack is essentially just a list of Swarm services, the mapping is done on a per-service basis. As per the official documentation, two classes of Kubernetes objects are required to map a Swarm service: something to deploy and scale the containers, and something to handle intra- and extra-stack networking.

In Kubernetes one does not manipulate individual containers but rather a set of containers called a Pod. Pods can be deployed and scaled using different controllers, depending on the desired behaviour. If a service is declared to be global, Compose on Kubernetes uses a DaemonSet to deploy its pods (note that such services cannot use a persistent volume). If a service uses a volume, a StatefulSet is used instead.

As for intra-stack networking, Kubernetes does not have the notion of a network the way Swarm does; instead, all pods that exist in a namespace can reach each other. For DNS name resolution between pods to work, a HeadlessService is required. I would recommend this link for further details on the mapping from Swarm to Kubernetes.
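
As a minimal sketch of those rules (the service and image names here are placeholders, not taken from the original post), a Compose file with one global service and one volume-backed service would be mapped to a DaemonSet and a StatefulSet respectively, with DNS resolution between them provided by headless services:

version: "3.3"
services:
  agent:
    image: myorg/agent:latest            # placeholder image
    deploy:
      mode: global                       # global service -> DaemonSet (no persistent volume allowed)
  db:
    image: postgres:10                   # placeholder image
    volumes:
      - db-data:/var/lib/postgresql/data # service with a volume -> StatefulSet
volumes:
  db-data: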

Architecture

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

Compose on Kubernetes architecture

The REST API is provided by a custom API server exposed to Kubernetes clients using API server aggregation.

The client communicates with the server using a declarative REST API. It creates a stack by either POSTing a serialized stack struct (v1beta2) or a Compose file (v1beta1) to the API server. This API server stores the stack in an etcd key-value store.

Server Side Architecture

There are two server-side components in Compose on Kubernetes:

  • the Compose API server, and
  • the Compose controller.

The Compose API server extends the Kubernetes API by adding routes for creating and manipulating stacks. It is responsible for storing the stacks in an etcd key-value store. It also contains the logic to convert v1beta1 representations to v1beta2, so that the Compose controller only needs to work with one representation of a stack.

The Compose controller is responsible for converting a stack struct (v1beta2 schema) into Kubernetes objects and then reconciling the current cluster state with the desired one. It does this by interacting with the Kubernetes API — it is a Kubernetes client that watches for interesting events and manipulates lower level Kubernetes objects.
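
Because stacks become first-class API objects, you can inspect them with ordinary Kubernetes tooling once the components above are installed. For example (myapp4 is the stack we deploy at the end of this post):

kubectl get stacks
kubectl describe stack myapp4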

In this blog post, I will show how to implement Compose on Kubernetes on Minikube.

Tested Infrastructure

  • Docker Edition: Docker Desktop Community v2.0.1.0
  • System: macOS High Sierra v10.13.6
  • Docker Engine: v18.09.1
  • Docker Compose: v1.23.2
  • Kubernetes: v1.13.0

Pre-requisites:

  • Install Docker Desktop Community Edition v2.0.1.0 directly from this link
  • Enable Kubernetes in Docker Desktop Preferences (Kubernetes tab)

Verifying Docker Desktop version

[Captains-Bay]? >  docker version
Client: Docker Engine - Community
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        4c52b90
 Built:             Wed Jan  9 19:33:12 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:49 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 Kubernetes:
  Version:          v1.12.4
  StackAPI:         v1beta2
[Captains-Bay]? >


Installing Minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
  && chmod +x minikube
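
You will most likely also want to move the binary somewhere on your PATH (assuming /usr/local/bin is on it):

sudo mv minikube /usr/local/bin/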

Verifying Minikube Version

minikube version
minikube version: v0.32.0

Checking Minikube Status

minikube status
host: Stopped
kubelet:
apiserver:

Starting Minikube

[Captains-Bay]? >  minikube start
Starting local Kubernetes v1.12.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Machine exists, restarting cluster components...
Verifying kubelet health ...
Verifying apiserver health ....Kubectl is now configured to use the cluster.
Loading cached images from config file.


Everything looks great. Please enjoy minikube!

By now, the minikube context should also show up under the Kubernetes section of the Docker Desktop menu, where you can switch to it with a click.
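
If you prefer the command line to the Docker Desktop menu, the same switch can be done with kubectl (minikube start creates and selects a context named minikube):

kubectl config get-contexts
kubectl config use-context minikube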

Checking the Minikube Status

[Captains-Bay]? >  minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
[Captains-Bay]? >

Listing out Minikube Cluster Nodes

 kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    12h       v1.12.4

Creating “compose” namespace

kubectl create namespace compose
namespace "compose" created

Creating the tiller service account

kubectl -n kube-system create serviceaccount tiller
serviceaccount "tiller" created

Granting Access to your Cluster

kubectl -n kube-system create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding "tiller" created

Initializing Helm and installing Tiller

[Captains-Bay]? >  helm init --service-account tiller
$HELM_HOME has been configured at /Users/ajeetraina/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Verifying Helm Version

helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Deploying etcd operator

[Captains-Bay]? >  helm install --name etcd-operator stable/etcd-operator --namespace compose
NAME:   etcd-operator
LAST DEPLOYED: Fri Jan 11 10:08:06 2019
NAMESPACE: compose
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                                               SECRETS  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        1s
etcd-operator-etcd-operator-etcd-operator          1        1s
etcd-operator-etcd-operator-etcd-restore-operator  1        1s

==> v1beta1/ClusterRole
NAME                                       AGE
etcd-operator-etcd-operator-etcd-operator  1s

==> v1beta1/ClusterRoleBinding
NAME                                               AGE
etcd-operator-etcd-operator-etcd-backup-operator   1s
etcd-operator-etcd-operator-etcd-operator          1s
etcd-operator-etcd-operator-etcd-restore-operator  1s

==> v1/Service
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
etcd-restore-operator  ClusterIP  10.104.102.245  <none>       19999/TCP  1s

==> v1beta1/Deployment
NAME                                               DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        1        1           0          1s
etcd-operator-etcd-operator-etcd-operator          1        1        1           0          1s
etcd-operator-etcd-operator-etcd-restore-operator  1        1        1           0          1s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
etcd-operator-etcd-operator-etcd-backup-operator-7978f8bc4r97s7  0/1    ContainerCreating  0         1s
etcd-operator-etcd-operator-etcd-operator-6c57fff9d5-kdd7d       0/1    ContainerCreating  0         1s
etcd-operator-etcd-operator-etcd-restore-operator-6d787599vg4rb  0/1    ContainerCreating  0         1s


NOTES:
1. etcd-operator deployed.
  If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
  Check the etcd-operator logs
    export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace compose --output name)
    kubectl logs $POD --namespace=compose
[Captains-Bay]? >


Creating an etcd cluster

$ cat compose-etcd.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "compose-etcd"
  namespace: "compose"
spec:
  size: 3
  version: "3.2.13"

kubectl apply -f compose-etcd.yaml
etcdcluster "compose-etcd" created


This should bring up a three-node etcd cluster in the compose namespace.
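
Before moving on, it is worth checking that the three etcd pods are up (plain kubectl; output omitted here as it depends on timing):

kubectl get pods -n compose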

Download the Compose Installer

wget https://github.com/docker/compose-on-kubernetes/releases/download/v0.4.18/installer-darwin
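
The downloaded binary is not executable by default, so mark it as such before running it:

chmod +x installer-darwin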

Deploying Compose on Kubernetes

./installer-darwin -namespace=compose -etcd-servers=http://compose-etcd-client:2379 -tag=v0.4.18
INFO[0000] Checking installation state
INFO[0000] Install image with tag "v0.4.18" in namespace "compose"
INFO[0000] Api server: image: "docker/kube-compose-api-server:v0.4.18", pullPolicy: "Always"
INFO[0000] Controller: image: "docker/kube-compose-controller:v0.4.18", pullPolicy: "Always"

Verifying that the Compose Stack API has been registered

[Captains-Bay]? >  kubectl api-versions | grep compose
compose.docker.com/v1beta1
compose.docker.com/v1beta2

Listing out services of Minikube

[Captains-Bay]? >  minikube service list
|-------------|-------------------------------------|-----------------------------|
|  NAMESPACE  |                NAME                 |             URL             |
|-------------|-------------------------------------|-----------------------------|
| compose     | compose-api                         | No node port                |
| compose     | compose-etcd-client                 | No node port                |
| compose     | etcd-restore-operator               | No node port                |
| default     | db1                                 | No node port                |
| default     | example-etcd-cluster-client-service | http://192.168.99.100:32379 |
| default     | kubernetes                          | No node port                |
| default     | web1                                | No node port                |
| default     | web1-published                      | http://192.168.99.100:32511 |
| kube-system | kube-dns                            | No node port                |
| kube-system | kubernetes-dashboard                | No node port                |
| kube-system | tiller-deploy                       | No node port                |
|-------------|-------------------------------------|-----------------------------|
[Captains-Bay]? >

Verifying StackAPI

[Captains-Bay]? >  docker version
Client: Docker Engine - Community
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        4c52b90
 Built:             Wed Jan  9 19:33:12 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:49 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 Kubernetes:
  Version:          v1.12.4
  StackAPI:         v1beta2
[Captains-Bay]? >

Deploying Web Application Stack directly using Docker Compose
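
The contents of docker-compose2.yml are not shown here. A minimal sketch along these lines (the service names and replica counts match the output below; the images and published port are assumptions) would exercise the same workflow:

version: "3.3"
services:
  web1:
    image: nginx:latest   # assumed image
    deploy:
      replicas: 2
    ports:
      - "80:80"           # publishing a port is what creates the web1-published Service
  db1:
    image: redis:latest   # assumed image
    deploy:
      replicas: 2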

[Captains-Bay]? >  docker stack deploy -c docker-compose2.yml myapp4
Waiting for the stack to be stable and running...
db1: Ready		[pod status: 1/2 ready, 1/2 pending, 0/2 failed]
web1: Ready		[pod status: 2/2 ready, 0/2 pending, 0/2 failed]

Stack myapp4 is stable and running

[Captains-Bay]? >  docker stack ls
NAME                SERVICES            ORCHESTRATOR        NAMESPACE
myapp4              2                   Kubernetes          default
[Captains-Bay]? >  kubectl get po
NAME                    READY     STATUS    RESTARTS   AGE
db1-55959c855d-jwh69    1/1       Running   0          57s
db1-55959c855d-kbcm4    1/1       Running   0          57s
web1-58cc9c58c7-sgsld   1/1       Running   0          57s
web1-58cc9c58c7-tvlhc   1/1       Running   0          57s

Hence, we have successfully deployed a web application stack onto a single-node Kubernetes cluster running in Minikube, using a Docker Compose file.

In my next blog post, I will showcase Compose on Kubernetes on a GKE cluster. If you're in a hurry, drop me a comment or reach out to me on Twitter @ajeetsraina. I would love to assist you.

Have Queries? Join https://launchpass.com/collabnix

Ajeet Raina Ajeet Singh Raina is a former Docker Captain, Community Leader and Arm Ambassador. He is a founder of Collabnix blogging site and has authored more than 570+ blogs on Docker, Kubernetes and Cloud-Native Technology. He runs a community Slack of 8900+ members and discord server close to 2200+ members. You can follow him on Twitter(@ajeetsraina).