
Kubernetes Application Deployment Made Easy using Helm on Docker for Mac 18.05.0 CE


Docker for Mac 18.05.0 CE went GA last month. With this release, you can now select your orchestrator directly from the UI in the “Kubernetes” pane, which allows “docker stack” commands to deploy to Swarm clusters even if Kubernetes is enabled in Docker for Mac. This is the first time the feature has appeared in any Desktop edition. To try it out, ensure that you are running the Edge release of Docker for Mac 18.05.0 CE. Once you update Docker for Mac, you can find the new setting by opening the Preferences pane and selecting the Kubernetes tab.


Whenever you change your choice of orchestrator, Docker for Mac updates the ~/.docker/config.json file behind the scenes, as shown below:
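For instance, after selecting Kubernetes, the file should contain something like the following (other keys such as auth entries may also be present; recording the choice in a stackOrchestrator field is an assumption based on how Docker for Mac stores this setting):

cat ~/.docker/config.json
{
  "stackOrchestrator" : "kubernetes",
  ...
}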


Docker for Mac is used every day by hundreds of thousands of developers to build, test and debug containerized apps in their local development environment. Developers building docker-compose and Swarm-based apps, as well as apps destined for deployment on Kubernetes, now get a simple-to-use development system that takes optimal advantage of their laptop or workstation. All container tasks – build, run and push – run on the same Docker instance with a shared set of images, volumes and containers. With the current release it is much simpler to install, so you can have Docker containers running on your Mac in just a few minutes.

Check out this curated list of blogs around Docker for Mac:


  • Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
  • Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
  • 5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
  • Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
  • 2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
  • Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
  • Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes

Docker for Mac provides the docker stack command to deploy your application to both Swarm and Kubernetes. This is very useful for Docker Swarm users, as they can use the same Swarm CLI to bring up workloads on Kubernetes (see the quick example below). But here is an extra bonus – Docker for Mac now works flawlessly with the Helm package manager.
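Here is what that looks like in practice. The Compose file and stack name below are placeholders; with Kubernetes selected as the orchestrator, the familiar Swarm-style command creates Kubernetes resources instead of Swarm services:

# Deploy a Compose v3 file as a stack onto the local Kubernetes cluster
docker stack deploy -c docker-compose.yml mystack

# Verify where the stack landed
docker stack ls
kubectl get pods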

Why Yet Another Package Manager?

Let’s accept the fact that Kubernetes can become very complex with all the objects you need to handle ― such as ConfigMaps, Services, Pods and Persistent Volumes ― in addition to the number of releases you need to manage. These can be managed with Kubernetes Helm, which offers a simple way to package everything into one application and advertise what you can configure.

In case you are completely new to it – Helm is an open source project that enables developers to create packages of containerized apps to make installation much simpler. Helm is the package manager for Kubernetes and it’s the best way to find, share, and deploy software to Kubernetes. The project was initially created by Deis and has since been donated to the Cloud Native Computing Foundation (CNCF).

Users can install Helm with one click or configure it to suit their organization’s needs. For example, if you want to package and release version 1.0, making only certain parts configurable, this can be done with Helm. Then with version 2.0, additional parts can be made configurable.
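As a rough sketch of what that configurability looks like in practice (the value names below come from the chart’s values.yaml and are assumptions for illustration, not something verified in this post):

# Override an individual chart value at install time
helm install stable/wordpress --name mywp --set wordpressBlogName="My Blog"

# Or keep all overrides in a file and pass it in
helm install stable/wordpress --name mywp -f custom-values.yaml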

Until recently, Helm was a sub-project of Kubernetes, the popular container orchestration tool, but it is now a stand-alone project.


Helm is built on three important concepts:

  • Charts – a bundle of information necessary to create an instance of a Kubernetes application
  • Config – contains configuration information that can be merged into a packaged chart to create a releasable object
  • Release – a running instance of a chart, combined with a specific config
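If you want to see what a chart actually contains, helm create scaffolds a minimal one locally; the chart name below is just an example, and the layout is roughly:

helm create mychart

mychart/
  Chart.yaml      # chart name, version and description
  values.yaml     # default, overridable configuration values
  charts/         # charts this chart depends on
  templates/      # Kubernetes manifest templates rendered with the values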

Architecture of Helm:

Architecturally, Helm is built on two major components:


Helm Client – a command-line tool with the following responsibilities:

  • Interacting with the Tiller server
  • Sending charts to be installed
  • Upgrading or uninstalling existing releases
  • Managing repositories

Tiller Server – an in-cluster server with the following responsibilities:

  • Interacting with the Helm client
  • Interfacing with the Kubernetes API server
  • Combining a chart and configuration to build a release
  • Installing charts and tracking releases
  • Upgrading and uninstalling charts

Both the Helm client and Tiller are written in Go and use gRPC to interact with each other. Tiller (the server part running inside Kubernetes) exposes a gRPC endpoint for the client and uses the Kubernetes client library to communicate with the API server. It does not require its own database, because release information is stored inside Kubernetes as ConfigMaps.
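Once Tiller is running, you can see this storage model for yourself: each release revision is kept as a ConfigMap in the kube-system namespace. The label selector below is the one Helm 2 applies by default, so treat it as an assumption if your setup differs:

kubectl get configmaps --namespace kube-system -l "OWNER=TILLER"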


Installing Helm

Prerequisites:

  • Docker for Mac 18.05.0 CE – Edge Release
  • Kubernetes enabled under the Preferences pane

To install Helm, you just need a single command on macOS:

[Captains-Bay]? >  brew install kubernetes-helm
[Captains-Bay]? >  helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

	$ helm init

This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.

Common actions from this point include:

- helm search:    search for charts
- helm fetch:     download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment:
  $HELM_HOME          set an alternative location for Helm files. By default, these are stored in ~/.helm
  $HELM_HOST          set an alternative Tiller host. The format is host:port
  $HELM_NO_PLUGINS    disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
  $TILLER_NAMESPACE   set an alternative Tiller namespace (default "kube-system")
  $KUBECONFIG         set an alternative Kubernetes configuration file (default "~/.kube/config")

Usage:
  helm [command]

Available Commands:
  completion  Generate autocompletions script for the specified shell (bash or zsh)
  create      create a new chart with the given name
  delete      given a release name, delete the release from Kubernetes
  dependency  manage a chart's dependencies
  fetch       download a chart from a repository and (optionally) unpack it in local directory
  get         download a named release
  history     fetch release history
  home        displays the location of HELM_HOME
  init        initialize Helm on both client and server
  inspect     inspect a chart
  install     install a chart archive
  lint        examines a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      add, list, or remove Helm plugins
  repo        add, list, remove, update, and index chart repositories
  reset       uninstalls Tiller from a cluster
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  serve       start a local http web server
  status      displays the status of the named release
  template    locally render templates
  test        test a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client/server version information

Flags:
      --debug                           enable verbose output
  -h, --help                            help for helm
      --home string                     location of your Helm config. Overrides $HELM_HOME (default "/Users/ajeetraina/.helm")
      --host string                     address of Tiller. Overrides $HELM_HOST
      --kube-context string             name of the kubeconfig context to use
      --tiller-connection-timeout int   the duration (in seconds) Helm will wait to establish a connection to tiller (default 300)
      --tiller-namespace string         namespace of Tiller (default "kube-system")

Use "helm [command] --help" for more information about a command.
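If Tiller is not running in your cluster yet, initialize it first; helm init installs Tiller into the kube-system namespace of whichever cluster your current kubectl context points to (here, the local Kubernetes cluster provided by Docker for Mac):

helm init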

Verify the Helm version. If the server and client versions don’t match, you need to upgrade Tiller so that applications deploy seamlessly (as shown below):

[Captains-Bay]? >  helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
[Captains-Bay]? >  helm init --upgrade
$HELM_HOME has been configured at /Users/ajeetraina/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
[Captains-Bay]? >  helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
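With the client and server now in sync, you can also confirm the Tiller deployment itself in the kube-system namespace (tiller-deploy is the default name Helm 2 gives it, so adjust if your installation differs):

kubectl get deployment tiller-deploy --namespace kube-system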

Installing WordPress Application using Helm

Say you want to install a WordPress application using Helm. First update the chart repositories, and then search for the application using the helm search command as shown below:

[Captains-Bay]? > helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

[Captains-Bay]? >  helm search wordpress
NAME            	CHART VERSION	APP VERSION	DESCRIPTION
stable/wordpress	1.0.2        	4.9.4      	Web publishing platform for building blogs and ...
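Optionally, before installing, you can inspect which values the chart exposes for configuration:

helm inspect values stable/wordpress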

Just a single command and your WordPress application is up and running:

[Captains-Bay]? >  helm install stable/wordpress --name mywp
NAME:   mywp
LAST DEPLOYED: Sat Jun  2 07:19:25 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mywp-mariadb    1        1        1           0          0s
mywp-wordpress  1        1        1           0          0s

==> v1/Pod(related)
NAME                             READY  STATUS             RESTARTS  AGE
mywp-mariadb-b689ddf74-mlprh     0/1    Init:0/1           0         0s
mywp-wordpress-774555bd4b-hcdc2  0/1    ContainerCreating  0         0s

==> v1/Secret
NAME            TYPE    DATA  AGE
mywp-mariadb    Opaque  2     1s
mywp-wordpress  Opaque  2     1s

==> v1/ConfigMap
NAME                DATA  AGE
mywp-mariadb        1     1s
mywp-mariadb-tests  1     1s

==> v1/PersistentVolumeClaim
NAME            STATUS   VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mywp-mariadb    Bound    pvc-2e1f1122-6607-11e8-8d79-025000000001  8Gi       RWO           hostpath      1s
mywp-wordpress  Pending  hostpath                                  1s

==> v1/Service
NAME            TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
mywp-mariadb    ClusterIP     10.98.65.236    <none>       3306/TCP                    0s
mywp-wordpress  LoadBalancer  10.109.204.199  localhost    80:31016/TCP,443:30638/TCP  0s


NOTES:
1. Get the WordPress URL:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace default -w mywp-wordpress'

  export SERVICE_IP=$(kubectl get svc --namespace default mywp-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP/admin

2. Login with the following credentials to see your blog

  echo Username: user
  echo Password: $(kubectl get secret --namespace default mywp-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

[Captains-Bay]? >

Checking the Status of WordPress Application

[Captains-Bay]? >  kubectl get svc --namespace default -w mywp-wordpress
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
mywp-wordpress   LoadBalancer   10.109.204.199   localhost     80:31016/TCP,443:30638/TCP   47s

That’s it. Just browse to WordPress using your localhost address:
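On Docker for Mac the LoadBalancer service is published on localhost (as the EXTERNAL-IP column above shows), so the site should be reachable with:

open http://localhost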


Cleaning up WordPress

[Captains-Bay] > helm delete mywp
release "mywp" deleted
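Note that helm delete keeps the release history around so it can be rolled back later; to remove everything, including the history, add the --purge flag:

helm delete --purge mywp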

Installing Prometheus Stack using Helm

[Captains-Bay]? >  helm install stable/prometheus
NAME:   hasty-ladybug
LAST DEPLOYED: Sun Jun  3 09:00:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                                   STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
hasty-ladybug-prometheus-alertmanager  Bound   pvc-7732a80b-66de-11e8-b7d4-025000000001  2Gi       RWO           hostpath      2s
hasty-ladybug-prometheus-server        Bound   pvc-773541f3-66de-11e8-b7d4-025000000001  8Gi       RWO           hostpath      2s

==> v1/ServiceAccount
NAME                                         SECRETS  AGE
hasty-ladybug-prometheus-alertmanager        1        2s
hasty-ladybug-prometheus-kube-state-metrics  1        2s
hasty-ladybug-prometheus-node-exporter       1        2s
hasty-ladybug-prometheus-pushgateway         1        2s
hasty-ladybug-prometheus-server              1        2s

==> v1beta1/ClusterRoleBinding
NAME                                         AGE
hasty-ladybug-prometheus-kube-state-metrics  2s
hasty-ladybug-prometheus-server              2s

==> v1beta1/DaemonSet
NAME                                    DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
hasty-ladybug-prometheus-node-exporter  1        1        0      1           0          <none>         2s

==> v1/Pod(related)
NAME                                                          READY  STATUS             RESTARTS  AGE
hasty-ladybug-prometheus-node-exporter-9ggqj                  0/1    ContainerCreating  0         2s
hasty-ladybug-prometheus-alertmanager-5c67b8b874-4xxtj        0/2    ContainerCreating  0         2s
hasty-ladybug-prometheus-kube-state-metrics-5cbcd4d86c-788p4  0/1    ContainerCreating  0         2s
hasty-ladybug-prometheus-pushgateway-c45b7fd6f-2wwzm          0/1    Pending            0         2s
hasty-ladybug-prometheus-server-799d6c7c75-jps8k              0/2    Init:0/1           0         2s

==> v1/ConfigMap
NAME                                   DATA  AGE
hasty-ladybug-prometheus-alertmanager  1     2s
hasty-ladybug-prometheus-server        3     2s

==> v1beta1/ClusterRole
NAME                                         AGE
hasty-ladybug-prometheus-kube-state-metrics  2s
hasty-ladybug-prometheus-server              2s

==> v1/Service
NAME                                         TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
hasty-ladybug-prometheus-alertmanager        ClusterIP  10.96.193.91   <none>       80/TCP    2s
hasty-ladybug-prometheus-kube-state-metrics  ClusterIP  None           <none>       80/TCP    2s
hasty-ladybug-prometheus-node-exporter       ClusterIP  None           <none>       9100/TCP  2s
hasty-ladybug-prometheus-pushgateway         ClusterIP  10.97.92.108   <none>       9091/TCP  2s
hasty-ladybug-prometheus-server              ClusterIP  10.96.118.138  <none>       80/TCP    2s

==> v1beta1/Deployment
NAME                                         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
hasty-ladybug-prometheus-alertmanager        1        1        1           0          2s
hasty-ladybug-prometheus-kube-state-metrics  1        1        1           0          2s
hasty-ladybug-prometheus-pushgateway         1        1        1           0          2s
hasty-ladybug-prometheus-server              1        1        1           0          2s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[Captains-Bay]? >  helm ls
NAME         	REVISION	UPDATED                 	STATUS  	CHART           	NAMESPACE
hasty-ladybug	1       	Sun Jun  3 09:00:30 2018	DEPLOYED	prometheus-6.7.0	default
mywp         	1       	Sat Jun  2 07:19:25 2018	DEPLOYED	wordpress-1.0.2 	default

Accessing the Prometheus UI in a Web Browser

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090

Now you can access Prometheus:

open http://localhost:9090


Cleaning Up

[Captains-Bay]? >  helm delete hasty-ladybug
release "hasty-ladybug" deleted

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter at @ajeetsraina.

If you are looking for contribution or discussion, join me on the Docker Community Slack channel.

Have Queries? Join https://launchpass.com/collabnix

Ajeet Singh Raina is a former Docker Captain, Community Leader and Distinguished Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 700 blogs on Docker, Kubernetes and Cloud-Native technology. He runs a community Slack with 9800+ members and a Discord server with close to 2600 members. You can follow him on Twitter (@ajeetsraina).