Top 5 Features of Docker Engine v18.09.1 That You Shouldn’t Miss Out On

Estimated Reading Time: 11 minutes

Docker Engine v18.09.1 went GA last month, available to both Community and Enterprise users. Docker Enterprise is a superset of all the features in Docker Community Edition, and it incorporates defect fixes for environments where new features cannot be adopted as quickly for consistency and compatibility reasons.

New in 18.09 is an aligned release model for Docker Engine – Community and Docker Engine – Enterprise. The new versioning scheme is YY.MM.x where x is an incrementing patch version. They will ship concurrently with the same x patch version based on the same code base.

[Updated – 2/15/2019]: Security fixes for Docker Engine – Enterprise and Docker Engine – Community

  • Update runc to address a critical vulnerability that allows specially-crafted containers to gain administrative privileges on the host (CVE-2019-5736); see the quick check below
  • Ubuntu 14.04 customers using a 3.13 kernel will need to upgrade to a supported Ubuntu 4.x kernel
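After upgrading, one quick way to confirm the runc build in use is to ask the daemon which runc commit it reports, and compare it against the commit listed in the release notes:

$ docker info --format '{{.RuncCommit.ID}}'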

Docker Engine v18.09.1 comes with dozens of new features, improvements and bug fixes. Let us go through the top features you should be aware of while upgrading to the Docker 18.09.1 release:

  • Docker Engine v18.09.1 comes with containerd v1.2.2
  • BuildKit 0.3.3 is Available & is out of experimental mode
  • Support for Compose on Kubernetes
  • Exposing Product Info under docker info command
  • Process isolation on Windows 10 for the first time
  • Support for remote connections using SSH
  • Support for SSH agent socket forwarder
  • Support for “registry-mirrors” and “insecure-registries” when using BuildKit
  • Support for build-time secrets using a --secret flag when using BuildKit
  • Support for docker build --pull … when using BuildKit

Docker Engine v18.09.1 comes with containerd v1.2.2

The Docker Engine v18.09.2 release, for both CE and EE, ships with containerd v1.2.2. In Docker versions prior to 18.09, containerd was managed by the Docker Engine daemon. Starting with Docker Engine 18.09, containerd is managed by systemd as a separate service.

Let us install the latest Docker 18.09.2 on a fresh Ubuntu 18.10 VM and verify the containerd version.

sudo curl -sSL https://get.docker.com/ | sh
# Executing docker install script, commit: 26dda3d
+ sudo -E sh -c apt-get update -qq >/dev/null
+ sudo -E sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sudo -E sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sudo -E sh -c echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu cosmic edge" > /etc/apt/sources.list.d/docker.list
+ sudo -E sh -c apt-get update -qq >/dev/null
+ sudo -E sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
+ sudo -E sh -c docker version
Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        6247962
 Built:             Sun Feb 10 04:13:46 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 03:42:13 2019
  OS/Arch:          linux/amd64
  Experimental:     false
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker robertsingh181

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.
@collabnix:~$ sudo usermod -aG docker robertsingh181

Verifying Docker Version

$ sudo docker version
Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        6247962
 Built:             Sun Feb 10 04:13:46 2019
 OS/Arch:           linux/amd64
 Experimental:      false
Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 03:42:13 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Please Note: The client and container runtime are now in separate packages from the daemon in Docker Engine 18.09. Users should install and update all three packages at the same time to get the latest patch releases.

For example, on Ubuntu: sudo apt install docker-ce docker-ce-cli containerd.io.

$ sudo dpkg --list | grep container
ii  containerd.io                  1.2.2-3                             amd64        An open and reliable container runtime
ii  docker-ce                      5:18.09.2~3-0~ubuntu-cosmic         amd64        Docker: the open-source application container engine
ii  docker-ce-cli                  5:18.09.2~3-0~ubuntu-cosmic         amd64        Docker CLI: the open-source application container engine

Verifying Containerd Version

$ sudo containerd --help
NAME:
 containerd -
                    __        _                     __
  _________  ____  / /_____ _(_)___  ___  _________/ /
 / ___/ __ \/ __ \/ __/ __ `/ / __ \/ _ \/ ___/ __  /
/ /__/ /_/ / / / / /_/ /_/ / / / / /  __/ /  / /_/ /
\___/\____/_/ /_/\__/\__,_/_/_/ /_/\___/_/   \__,_/
high performance container runtime
USAGE:
 containerd [global options] command [command options] [arguments...]
VERSION:
 1.2.2
COMMANDS:
   config    information on the containerd config
   publish   binary to publish events to containerd
   oci-hook  provides a base for OCI runtime hooks to allow arguments to be injected.
   help, h   Shows a list of commands or help for one command
GLOBAL OPTIONS:
 --config value, -c value     path to the configuration file (default: "/etc/containerd/config.toml")
 --log-level value, -l value  set the logging level [trace, debug, info, warn, error, fatal, panic]
 --address value, -a value    address for containerd's GRPC server
 --root value                 containerd root directory
 --state value                containerd state directory
 --help, -h                   show help
 --version, -v                print the version
$ sudo ctr version
Client:
  Version:  1.2.2
  Revision: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
Server:
  Version:  1.2.2
  Revision: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce

Since containerd is now managed by systemd, any customization of the docker.service systemd configuration that changes mount settings (for example, MountFlags=slave) breaks the interaction between the Docker Engine daemon and containerd, and you will not be able to start containers. Run the following command to get the current value of the MountFlags property for docker.service:

sudo systemctl show --property=MountFlags docker.service
MountFlags=

If this command prints a non-empty value for MountFlags, update your configuration and restart the Docker service, as sketched below.
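One way to locate and clear the setting (a sketch; unit file paths vary by distro):

# locate the unit file or drop-in that sets MountFlags for docker.service
grep -r "MountFlags" /etc/systemd/system /lib/systemd/system
# remove or comment out that line, then reload systemd and restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# confirm the property is now empty
sudo systemctl show --property=MountFlags docker.service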

BuildKit 0.3.3 is Available & is out of experimental mode

BuildKit is a new project under the Moby umbrella for building and packaging software using containers. It is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner. Docker 18.09.0 was the first release with BuildKit support, and with this latest release you can now run BuildKit without experimental mode enabled. BuildKit can also be configured via an option in daemon.json, as shown below.
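For reference, the documented daemon.json toggle looks like this (restart the daemon afterwards for it to take effect):

$ cat /etc/docker/daemon.json
{
  "features": { "buildkit": true }
}
$ sudo systemctl restart docker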

docker build is Docker’s integrated tool for building images from a Dockerfile, and it requires the Docker daemon to be running. It is similar to the docker run command, but some features are intentionally removed for security reasons, such as volumes (docker run -v, docker run --mount) and privileged mode (docker run --privileged). BuildKit is meant to become the next-generation backend implementation for the docker build command and the github.com/docker/docker/builder package. This doesn’t mean any changes to the Dockerfile format, as BuildKit draws a boundary between build backends and frontends; a Dockerfile is just one of the frontend implementations. When invoked from the Docker CLI, BuildKit exposes the client’s context directory as a source and uses Docker containers as workers. The snapshots are backed by Docker’s layer store (containerd snapshot drivers), and end results from the builder are exported as Docker images.

What problems does BuildKit solve for us?

If you look at a Dockerfile, the classic builder reads every command from start to end, one by one. Modifying a single line always invalidates the caches of all subsequent lines, because the N-th line is assumed to always depend on the (N-1)-th line. For example:

FROM debian
EXPOSE 80
RUN apt update && apt install git

As shown above, modifying the 2nd line (EXPOSE 80) always invalidates the apt cache due to this false dependency. A user needs to arrange the instructions carefully for efficient caching, which makes caching inefficient and fragile.
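With the classic builder, the usual workaround is to reorder the instructions by hand so that the expensive step is cached independently of the frequently edited one:

FROM debian
RUN apt update && apt install git
EXPOSE 80

BuildKit removes the need for this manual reordering, since its graph solver computes the real dependencies between steps.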

Not only this; inaccessibility of private assets is another major problem with the traditional Dockerfile. There is no safe way to access private assets (e.g. Git repos, S3) from build containers, and copying credentials in with COPY can leak them accidentally. BuildKit solves these problems by using a DAG-style low-level language called LLB.
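As a taste of the fix, here is a minimal sketch of the --secret flag mentioned earlier (the file mysecret.txt and the secret id are made up for illustration; in 18.09 the Dockerfile needs the experimental syntax directive for --mount to work):

# syntax=docker/dockerfile:experimental
FROM alpine
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

Build it with BuildKit enabled:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt .

The secret is mounted only for the duration of that single RUN instruction and never ends up in an image layer.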

The main areas BuildKit improves on are performance, storage management, and extensibility. On the performance side, a significant update is a new fully concurrent build graph solver. It can run build steps in parallel when possible and optimize out commands that don’t have an impact on the final result. Access to local source files has also been optimized: by tracking only the updates made to these files between repeated build invocations, there is no need to wait for local files to be read or uploaded before the work can begin.

Let us compare docker build vs. BuildKit and see how much faster BuildKit builds the Docker image compared to the traditional approach.

$ git clone https://github.com/ajeetraina/hellowhale
Cloning into 'hellowhale'...
remote: Enumerating objects: 28, done.
remote: Total 28 (delta 0), reused 0 (delta 0), pack-reused 28
Unpacking objects: 100% (28/28), done.
~$ cd hellowhale/
~$ ls
Dockerfile  README.md  html  wrapper.sh
:~/hellowhale$ time docker build -t ajeetraina/hellowhale .
Sending build context to Docker daemon  153.1kB
Step 1/4 : FROM nginx:latest
latest: Pulling from library/nginx
6ae821421a7d: Pull complete 
da4474e5966c: Pull complete 
eb2aec2b9c9f: Pull complete 
Digest: sha256:dd2d0ac3fff2f007d99e033b64854be0941e19a2ad51f174d9240dda20d9f534
Status: Downloaded newer image for nginx:latest
 ---> f09fe80eb0e7
Step 2/4 : COPY wrapper.sh /
 ---> 10d671c6cf08
Step 3/4 : COPY html /usr/share/nginx/html
 ---> 3e8a09f56168
Step 4/4 : CMD ["./wrapper.sh"]
 ---> Running in b1f24992f9e5
Removing intermediate container b1f24992f9e5
 ---> 9dae85ca0867
Successfully built 9dae85ca0867
Successfully tagged ajeetraina/hellowhale:latest
real    0m6.359s
user    0m0.035s
sys     0m0.022s

Let’s build it with BuildKit enabled (setting DOCKER_BUILDKIT=1 switches the client over to the BuildKit backend):

$ time DOCKER_BUILDKIT=1 docker build -t ajeetraina/hellowhale .
[+] Building 1.7s (9/9) FINISHED                                                                                                                                                 
 => [internal] load build definition from Dockerfile                                                                                                                        0.1s
 => => transferring dockerfile: 135B                                                                                                                                        0.0s
 => [internal] load .dockerignore                                                                                                                                           0.0s
 => => transferring context: 2B                                                                                                                                             0.0s
 => [internal] load metadata for docker.io/library/nginx:latest                                                                                                             0.0s
 => [internal] helper image for file operations                                                                                                                             0.4s
 => => resolve docker.io/docker/dockerfile-copy:v0.1.9@sha256:e8f159d3f00786604b93c675ee2783f8dc194bb565e61ca5788f6a6e9d304061                                              0.7s
 => => sha256:e8f159d3f00786604b93c675ee2783f8dc194bb565e61ca5788f6a6e9d304061 2.03kB / 2.03kB                                                                              0.0s
 => => sha256:a546a4352bcaa6512f885d24fef3d9819e70551b98535ed1995e4b567ac6d05b 736B / 736B                                                                                  0.0s
 => => sha256:494e63343c3f0d392e7af8d718979262baec9496a23e97ad110d62b9c90d6182 766B / 766B                                                                                  0.0s
 => => sha256:df3b4bed1f63b36992540a09e0d10bd3f9d0b082d50810313841d745d7cce368 898.21kB / 898.21kB                                                                          0.2s
 => => sha256:f7b6696c3fee7264ec4486cebe146a6a98aa8d1e46747843107ff473aada8d56 861.00kB / 861.00kB                                                                          0.2s
 => => extracting sha256:df3b4bed1f63b36992540a09e0d10bd3f9d0b082d50810313841d745d7cce368                                                                                   0.1s
 => => extracting sha256:f7b6696c3fee7264ec4486cebe146a6a98aa8d1e46747843107ff473aada8d56                                                                                   0.1s
 => [1/3] FROM docker.io/library/nginx:latest                                                                                                                               0.0s
 => => resolve docker.io/library/nginx:latest                                                                                                                               0.0s
 => [internal] load build context                                                                                                                                           0.0s
 => => transferring context: 34.39kB                                                                                                                                        0.0s
 => [2/3] COPY wrapper.sh /                                                                                                                                                 0.2s
 => [3/3] COPY html /usr/share/nginx/html                                                                                                                                   0.2s
 => exporting to image                                                                                                                                                      0.1s
 => => exporting layers                                                                                                                                                     0.0s
 => => writing image sha256:db60ac4c90d7412b8c9f9382711f0d97a9ad9d4a33c05200aa36dc4c935c8cb3                                                                                0.0s
 => => naming to docker.io/ajeetraina/hellowhale                                                                                                                            0.0s
real    0m1.732s
user    0m0.042s
sys     0m0.019s
~/hellowhale$

BuildKit took just 1.732s, compared to 6.359s for the traditional docker build (note that the nginx base image was already in the local cache for the second run, which accounts for part of the difference).

There are dozens more enhancements around BuildKit, which I shall be discussing in detail in my upcoming blog posts.

Support for Compose on Kubernetes

Compose on Kubernetes allows you to deploy Docker Compose files onto a Kubernetes cluster. Compose on Kubernetes comes installed on Docker Desktop and Docker Enterprise. On Docker Desktop you will need to activate Kubernetes in the settings to use Compose on Kubernetes.

Check out my last two blog posts, which talk about how Compose on Kubernetes works on Play with Kubernetes and Minikube.
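Once Kubernetes is active, deploying a Compose file is a single CLI call; here is a quick sketch, assuming an existing docker-compose.yml in the current directory:

docker stack deploy --orchestrator=kubernetes -c docker-compose.yml mystack
docker stack ls --orchestrator=kubernetes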

Docker v18.09.1 now exposes Product Info under docker info command

Under Docker v18.09.1, it is now possible to verify whether the engine is an Enterprise or Community product via docker info.
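For instance, on a Community engine (output trimmed to the relevant line):

$ docker info | grep -i 'product license'
Product License: Community Engine

On Docker Engine – Enterprise, this field reflects the enterprise license instead.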

Process isolation on Windows 10 for the first time

Process-isolation containers were already possible on Windows Server, but for the first time they are now also available on the regular Windows 10 of your laptop. Windows 10–1809 (“October 2018 Update”) + Docker Engine 18.09.1 + the Windows 1809 base images from Docker Hub are the first combination that allows you to run “real” Windows containers on Windows 10, without the need for Hyper-V virtualization.

You need to install Docker Desktop Edge version 2.0.1.0 or newer, and the Docker Engine should be at version 18.09.1 or higher. You must select a Windows base image from Docker Hub that matches the kernel of your host’s Windows version. For Windows 10–1809, that’s the :1809 version/tag of nanoserver, servercore and windows (or any higher-level image that builds upon one of these).

Let us run a test container in process-isolation mode by adding the parameter --isolation=process:

docker run --isolation=process mcr.microsoft.com/windows/nanoserver:1809 cmd.exe /c ping 127.0.0.1

In case you’re on an older Windows 10 build and try to run the above command, it won’t work as expected; you need to be running at least the 1809 build for smooth operation.
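To double-check which isolation mode a running container actually got, you can inspect it (mycontainer is a placeholder name):

docker inspect --format "{{.HostConfig.Isolation}}" mycontainer

This prints process or hyperv accordingly.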

Support for remote connections using SSH

Docker Engine v18.09 offers the possibility for a Docker client to communicate with a remote daemon via SSH. The Docker client usually communicates with the daemon either locally, via the unix socket /var/run/docker.sock, or over a network via a TCP socket. With Docker 18.09.1 you can now SSH to a remote Docker host and execute docker CLI commands flawlessly.

This new connection method between client and engine allows for simple, shared SSH configuration, which is far more common and more easily administered than the prior custom CA/certs solution. Via the Docker CLI, it is easily done by setting the environment variable DOCKER_HOST=ssh://hostname, or directly on the docker command using the -H parameter, as in:

docker -H ssh://hostname info
$ docker -H ssh://ajeetraina@10.94.26.28 run -ti ubuntu echo "hello"
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
6cf436f81810: Pull complete 
987088a85b96: Pull complete 
b4624b3efe06: Pull complete 
d42beb8ded59: Pull complete 
Digest: sha256:7a47ccc3bbe8a451b500d2b53104868b46d60ee8f5

Please note that you might need to configure SSH key-based login and run an SSH agent so you only need to enter a passphrase once. This matters especially on cloud platforms, where instances typically don’t allow password-based SSH from one instance to another, so key-based login with an agent is required for this command to work.
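A minimal sketch of that setup, reusing the remote host from the example above:

# generate a key pair and copy the public key to the remote host
ssh-keygen -t rsa -b 4096
ssh-copy-id ajeetraina@10.94.26.28

# run an agent and add the key so the passphrase is entered only once
eval "$(ssh-agent)"
ssh-add ~/.ssh/id_rsa

# point the Docker CLI at the remote engine
export DOCKER_HOST=ssh://ajeetraina@10.94.26.28
docker info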

Below are further features which I am planning to discuss in my upcoming blog posts:

  • Added --chown flag support for ADD and COPY commands on Windows moby/moby#35521
  • Added docker engine subcommand to manage the lifecycle of a Docker Engine running as a privileged container on top of containerd, and to allow upgrades to Docker Engine Enterprise docker/cli#1260

Hope you found this blog informative. In case of any query, feel free to drop me a message or join me at DockerLabs.

Building Data Persistent & Datacenter Asset Reporting Capability with Racktables running inside Docker container

Estimated Reading Time: 5 minutes

Let’s talk about Docker inside the datacenter..

If you are a datacenter administrator still scouring through a spreadsheet of “unallocated” IP addresses, tracking the asset and service tags of your individual hardware systems, and maintaining complex documentation of the racks, devices, links and network resources under your control, you definitely need a robust asset management tool. Racktables is one of the most popular and lightweight tools you can rely upon.

Racktables is a smart and robust solution for datacenter and server room asset management. It helps document hardware assets, network addresses, space in racks, networks configuration and much much more!

With RackTables you can:

  • Have a list of all devices you’ve got
  • Have a list of all racks and enclosures
  • Mount the devices into the racks
  • Maintain physical ports of the devices and links between them
  • Manage IP addresses, assign them to the devices and group them into networks
  • Document your NAT rules
  • Integrate Nagios, Cacti, Munin, Zabbix etc. as plugins directly into the UI
  • Describe your load balancing policy and store load balancing configuration
  • Attach files to various objects in the system
  • Create users, assign permissions and allow or deny any actions they can do
  • Label everything and even everyone with a flexible tagging system

Shown below is a screenshot of the Racktables elements, which comprise Rackspace, Objects, IPv4 & IPv6 space, Virtual Resources, Logs, Configuration settings, IP SLB, 802.1Q, and Patches & Cables.

[Screenshot: Racktables UI elements]

Shown below is the reporting plugin which we are going to integrate into Racktables, all using Docker containers. In case you’re new to this, plugins provide the ability to add functionality to RackTables and, in some cases, override existing behavior. See the racktables-contribs repository for user-submitted plugins.

[Screenshot: Racktables reporting plugin]

Racktables is a great tool based on the LAMP stack. It can be cumbersome to install and manage, as you can expect a great deal of package dependencies across the various Linux distros.

If you visit https://www.freelists.org/list/racktables-users, you will notice the numerous issues first-time lab admins face in building this platform with the desired plugins and managing it. To simplify this, I started looking at how to build a data-persistent Racktables setup with plugin integration and reporting capabilities.

Here’s a 2-minute guide to setting up Racktables along with its most useful plugins inside Docker containers:

Tested Infrastructure

| Platform        | Number of Instances | Reading Time |
| Ubuntu 18.04 VM | 1                   | 5 min        |

Pre-requisite

  • Install Ubuntu 18.04 either as a VM or on a bare-metal system
  • Install Docker & Docker Compose

Install Docker

curl -sSL https://get.docker.com/ | sh

Install Docker Compose

curl -L https://github.com/docker/compose/releases/download/1.24.0-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Clone this Repository

git clone https://github.com/collabnix/racktables-docker
cd racktables-docker


[Optional] Configuring DNS for your Docker containers (in case you’re behind a firewall)

Edit the daemon.json file and restart the Docker daemon using the “systemctl restart docker” command:

 cat /etc/docker/daemon.json
{
"dns": ["8.8.8.8"]
}

Bring up Racktables Services using Docker Compose

All you need is a Dockerfile and a Docker Compose file, which together bring up the microservices that make up the Racktables tool. In the Dockerfile you can add plugins of your choice; in this example I have added the Reporting plugin so as to generate reports such as who owns a specific HW part, how many components of a specific type are present, and so on.

The content of the Dockerfile looks like this:

# vim: set ft=dockerfile:
FROM alpine:3.6
# Author with no obligation to maintain
MAINTAINER Ajeet Singh Raina <ajeetraina@gmail.com>

ENV DBHOST="mariadb" \
    DBNAME="racktables" \
    DBUSER="racktables" \
    DBPASS=""

COPY entrypoint.sh /entrypoint.sh
RUN apk update
RUN apk --no-cache add \
    ca-certificates \
    curl \
    php5-bcmath \
    php5-curl \
    php5-fpm \
    php5-gd \
    php5-json \
    php5-ldap \
    php5-pcntl \
    php5-pdo_mysql \
    php5-snmp \
    && chmod +x /entrypoint.sh \
    && curl -sSLo /racktables.tar.gz 'https://github.com/RackTables/racktables/archive/RackTables-0.21.1.tar.gz' \
    && mkdir /opt \
    && tar -xz -C /opt -f /racktables.tar.gz \
    && mv /opt/racktables-RackTables-0.21.1 /opt/racktables \
    && rm -f /racktables.tar.gz \
    && sed -i \
    -e 's|^listen =.*$|listen = 9000|' \
    -e 's|^;daemonize =.*$|daemonize = no|' \
    /etc/php5/php-fpm.conf

# Adding Plugins for Racktable Reports

RUN apk add git \
   && git clone https://github.com/collabnix/racktables-contribs \
   && cd racktables-contribs/extensions \
   && cp -r plugins/* /opt/racktables/plugins/

VOLUME /opt/racktables/wwwroot
EXPOSE 9000
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/usr/bin/php-fpm5"]

The content of the Docker Compose file looks like this:

mariadb:
  image: mariadb
  environment:
  - MYSQL_DATABASE=racktables
  - MYSQL_USER=racktables
  - MYSQL_PASSWORD=password123
  - MYSQL_RANDOM_ROOT_PASSWORD=password123
  volumes:
  - ./db_data:/var/lib/mysql

racktables:
  build: .
  links:
  - mariadb
  environment:
  - DBHOST=mariadb
  - DBNAME=racktables
  - DBUSER=racktables
  - DBPASS=password123

nginx:
  image: nginx:stable-alpine
  links:
  - racktables
  volumes_from:
  - racktables
  volumes:
  - ./nginx.conf:/etc/nginx/nginx.conf
  ports:
  - 80:80

Build and bring up the services:

cd racktables-docker
docker-compose build
docker-compose up -d

Accessing Racktables UI

Start by browsing to http://localhost/?module=installer&step=5

Please don’t forget to add /?module=installer&step=5 at the end of the URL. Click on “Next” and enter your new password, then log in as “admin” with that new password. That’s it: you now have Racktables installed, complete with reporting and data-persistence capabilities.

If you browse to Main Page > Reports section you will see “Custom”, “Server”, “Switches”, “Virtual Machines” getting added automatically as shown below:

[Screenshot: Racktables Reports section]

Did we talk about data persistence?

Containers are ephemeral in nature; your container might go down at any time, leaving your application inaccessible. No worries: as data persistence is already implemented, you need not worry about losing your data.

The snippet below from the Docker Compose file takes care of this functionality:

mariadb:
  image: mariadb
  environment:
  - MYSQL_DATABASE=racktables
  - MYSQL_USER=racktables
  - MYSQL_PASSWORD=password123
  - MYSQL_RANDOM_ROOT_PASSWORD=password123
  volumes:
  - ./db_data:/var/lib/mysql

Hope you found this blog informative. If you face any issue, feel free to raise it at https://github.com/collabnix/racktables-docker.

In my future blog post, I will talk about additional monitoring plugins which can be integrated into Racktables using Docker containers.

References:

  • https://www.freelists.org/list/racktables-users
  • https://github.com/RackTables/racktables-contribs

Test Drive Compose on Kubernetes on Play with Kubernetes(PWK) Playground in 5 Minutes

Estimated Reading Time: 8 minutes

On the 2nd day of DockerCon, Docker Inc. open sourced the Compose on Kubernetes project, which provides a simple way to define cloud-native applications with a higher-level abstraction, the Docker Compose file. Docker Compose, a tool for defining and running multi-container Docker applications, is already used by millions of Docker users. Docker Enterprise Edition already had this capability starting with Compose file version 3.3, where one can use the same docker-compose.yml file for Swarm deployments as well as specify Kubernetes workloads whenever a stack is deployed.

Two weeks back, I noticed a community ask for running Compose on Kubernetes on the Play with Kubernetes playground. Out of interest, I started looking at how to simplify the solution so that anyone can set it up in no time. I forked the repository and began to build a simple script and Makefile to get it up and running on PWK.

ICYMI.. Check out my recent blog post ~ “Compose on Kubernetes for Minikube”.

Under this blog post, we will see how Compose on Kubernetes can be enabled on top of the Play with Kubernetes platform with just two scripts. Let’s get started.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.

Click on the Login button to authenticate with Docker Hub or GitHub ID.

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes Cluster node. It automatically names it as “node1”. Each instance has Docker Community Edition (CE) and Kubeadm already pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

Clone the Repository and run this script on your 1st instance

git clone https://github.com/collabnix/compose-on-kubernetes
cd compose-on-kubernetes/scripts/pwk/
sh bootstrap-pwk.sh

When you execute this script, as part of initialization kubeadm writes the several configuration files needed, sets up RBAC and deploys the Kubernetes control-plane components (like kube-apiserver, kube-dns, kube-proxy, etcd, etc.). The control-plane components are deployed as Docker containers.

Copy the kubeadm join command printed at the end of the output and save it for the next step; it will be used to join the other nodes to your cluster. Its general shape is shown below.
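The join command follows this general shape (the IP, token and hash are placeholders here; use the exact command kubeadm printed for you):

kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>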

Adding Worker Nodes

Click on “Add New Instance” again to add a worker node, and run the saved kubeadm join command on it.

Checking the Cluster Status

[node1 ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
node1     Ready      master    18m       v1.11.3
node2     Ready      <none>    4m        v1.11.3
node3     Ready      <none>    39s       v1.11.3
node4     NotReady   <none>    22s       v1.11.3
node5     NotReady   <none>    4s        v1.11.3

[node1 ]$ kubectl get po
No resources found.
[node1 ]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1h

Next, execute the below script, which sets up the Compose namespace, the etcd cluster and the Compose controller all in a single shot:


chmod +x prepare-pwk.sh
sh prepare-pwk.sh
[node1 pwk]$ sh prepare-pwk.sh
Creating Compose Namespace...
namespace/compose created
Installing Helm...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 21.6M  100 21.6M    0     0  25.3M      0 --:--:-- --:--:-- --:--:-- 25.4M
Preparing Helm
linux-amd64/
linux-amd64/tiller
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
Creating tiller under kube-system namespace...
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
 Tiller is still coming up...Please Wait
NAME                             READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-699fx         1/1       Running   0          5m
coredns-78fcdf6894-trslx         1/1       Running   0          5m
etcd-node1                       1/1       Running   0          4m
kube-apiserver-node1             1/1       Running   0          4m
kube-controller-manager-node1    1/1       Running   0          4m
kube-proxy-5gskp                 1/1       Running   0          4m
kube-proxy-5hbkb                 1/1       Running   0          5m
kube-proxy-lcsnz                 1/1       Running   0          4m
kube-scheduler-node1             1/1       Running   0          4m
tiller-deploy-85744d9bfb-bjw2f   0/1       Running   0          15s
weave-net-9vt2s                  2/2       Running   1          4m
weave-net-k87d7                  2/2       Running   0          5m
weave-net-nmmt5                  2/2       Running   0          4m
NAME:   etcd-operator
LAST DEPLOYED: Mon Jan 21 14:27:50 2019
NAMESPACE: compose
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                                               SECRETS  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        4s
etcd-operator-etcd-operator-etcd-operator          1        4s
etcd-operator-etcd-operator-etcd-restore-operator  1        4s

==> v1beta1/ClusterRole
NAME                                       AGE
etcd-operator-etcd-operator-etcd-operator  4s

==> v1beta1/ClusterRoleBinding
NAME                                               AGE
etcd-operator-etcd-operator-etcd-backup-operator   4s
etcd-operator-etcd-operator-etcd-operator          3s
etcd-operator-etcd-operator-etcd-restore-operator  3s

==> v1/Service
NAME                   TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)    AGE
etcd-restore-operator  ClusterIP  10.108.89.92  <none>       19999/TCP  3s

==> v1beta2/Deployment
NAME                                               DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        1        1           0          3s
etcd-operator-etcd-operator-etcd-operator          1        1        1           0          3s
etcd-operator-etcd-operator-etcd-restore-operator  1        1        1           0          3s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
etcd-operator-etcd-operator-etcd-backup-operator-56fd448cd897mk  0/1    ContainerCreating  0      2s
etcd-operator-etcd-operator-etcd-operator-c5b8b8f74-pttr2        0/1    ContainerCreating  0      2s
etcd-operator-etcd-operator-etcd-restore-operator-58587cdc9g4br  0/1    ContainerCreating  0      2s


NOTES:
1. etcd-operator deployed.
  If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
  Check the etcd-operator logs
    export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace compose --output name)
    kubectl logs $POD --namespace=compose
Loaded plugins: fastestmirror, ovl
base                                                                    | 3.6 kB  00:00:00
docker-ce-stable                                                        | 3.5 kB  00:00:00
extras                                                                  | 3.4 kB  00:00:00
kubernetes/signature                                                    |  454 B  00:00:00
kubernetes/signature                                                    | 1.4 kB  00:00:10 !!!
updates                                                                 | 3.4 kB  00:00:00
(1/7): base/7/x86_64/group_gz                                           | 166 kB  00:00:00
(2/7): extras/7/x86_64/primary_db                                       | 156 kB  00:00:00
(3/7): base/7/x86_64/primary_db                                         | 6.0 MB  00:00:00
(4/7): updates/7/x86_64/primary_db                                      | 1.3 MB  00:00:00
(5/7): docker-ce-stable/x86_64/primary_db                               |  20 kB  00:00:00
(6/7): docker-ce-stable/x86_64/updateinfo                               |   55 B  00:00:01
(7/7): kubernetes/primary                                               |  42 kB  00:00:01
Determining fastest mirrors
 * base: mirror.nl.datapacket.com
 * extras: mirror.nl.datapacket.com
 * updates: mirror.denit.net
kubernetes                                                                             305/305
Resolving Dependencies
--> Running transaction check
---> Package wget.x86_64 0:1.14-18.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================
 Package            Arch                 Version                      Repository          Size
===============================================================================================
Installing:
 wget               x86_64               1.14-18.el7                  base               547 k

Transaction Summary
===============================================================================================
Install  1 Package

Total download size: 547 k
Installed size: 2.0 M
Downloading packages:
wget-1.14-18.el7.x86_64.rpm                                             | 547 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : wget-1.14-18.el7.x86_64                                                     1/1
install-info: No such file or directory for /usr/share/info/wget.info.gz
  Verifying  : wget-1.14-18.el7.x86_64                                                     1/1

Installed:
  wget.x86_64 0:1.14-18.el7

Complete!
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
etcdcluster.etcd.database.coreos.com/compose-etcd created
NAME                             READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-699fx         1/1       Running   0          6m
coredns-78fcdf6894-trslx         1/1       Running   0          6m
etcd-node1                       1/1       Running   0          5m
kube-apiserver-node1             1/1       Running   0          6m
kube-controller-manager-node1    1/1       Running   0          5m
kube-proxy-5gskp                 1/1       Running   0          6m
kube-proxy-5hbkb                 1/1       Running   0          6m
kube-proxy-lcsnz                 1/1       Running   0          5m
kube-scheduler-node1             1/1       Running   0          5m
tiller-deploy-85744d9bfb-bjw2f   1/1       Running   0          1m
weave-net-9vt2s                  2/2       Running   1          6m
weave-net-k87d7                  2/2       Running   0          6m
weave-net-nmmt5                  2/2       Running   0          5m
--2019-01-21 14:28:49--  https://github.com/docker/compose-on-kubernetes/releases/download/v0.4.18/installer-linux
Resolving github.com (github.com)... 140.82.118.3, 140.82.118.4
Connecting to github.com (github.com)|140.82.118.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/158560458/e9a86500-15b2-11e9-8620-1eec5bf160e3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190121T142850Z&X-Amz-Expires=300&X-Amz-Signature=bd4020beb0f68210e2a3cfa8ca8166dddcf1d1e4868737eb9ad83363cd39c660&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dinstaller-linux&response-content-type=application%2Foctet-stream [following]
--2019-01-21 14:28:50--  https://github-production-release-asset-2e65be.s3.amazonaws.com/158560458/e9a86500-15b2-11e9-8620-1eec5bf160e3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190121T142850Z&X-Amz-Expires=300&X-Amz-Signature=bd4020beb0f68210e2a3cfa8ca8166dddcf1d1e4868737eb9ad83363cd39c660&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dinstaller-linux&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.161.163
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.161.163|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28376064 (27M) [application/octet-stream]
Saving to: 'installer-linux'

100%[=====================================================>] 28,376,064  15.9MB/s   in 1.7s

2019-01-21 14:28:52 (15.9 MB/s) - 'installer-linux' saved [28376064/28376064]

INFO[0000] Checking installation state
INFO[0000] Install image with tag "v0.4.18" in namespace "compose"
INFO[0000] Api server: image: "docker/kube-compose-api-server:v0.4.18", pullPolicy: "Always"
INFO[0001] Controller: image: "docker/kube-compose-controller:v0.4.18", pullPolicy: "Always"
failed to find a Stack API version
error: the server doesn't have a resource type "stacks"

If the first run ends with this error, the Stack API server is most likely still coming up; re-running the script picks up where it left off and completes the installation:

[node1 pwk]$ sh prepare-pwk.sh
Creating Compose Namespace...
Installing Helm...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 21.6M  100 21.6M    0     0  14.9M      0  0:00:01  0:00:01 --:--:-- 14.9M
Preparing Helm
linux-amd64/
linux-amd64/tiller
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
Creating tiller under kube-system namespace...
clusterrolebindings.rbac.authorization.k8s.io "tiller-cluster-rule" already exists
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
 Tiller is still coming up...Please Wait
NAME                             READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-699fx         1/1       Running   0          7m
coredns-78fcdf6894-trslx         1/1       Running   0          7m
etcd-node1                       1/1       Running   0          6m
kube-apiserver-node1             1/1       Running   0          6m
kube-controller-manager-node1    1/1       Running   0          6m
kube-proxy-5gskp                 1/1       Running   0          6m
kube-proxy-5hbkb                 1/1       Running   0          7m
kube-proxy-lcsnz                 1/1       Running   0          6m
kube-scheduler-node1             1/1       Running   0          6m
tiller-deploy-85744d9bfb-bjw2f   1/1       Running   0          2m
weave-net-9vt2s                  2/2       Running   1          6m
weave-net-k87d7                  2/2       Running   0          7m
weave-net-nmmt5                  2/2       Running   0          6m
NAME            REVISION        UPDATED                         STATUS          CHART        APP VERSION     NAMESPACE
etcd-operator   1               Mon Jan 21 14:27:50 2019        DEPLOYED        etcd-operator-0.8.3    0.9.3           compose
Run: helm ls --all etcd-operator; to check the status of the release
Or run: helm del --purge etcd-operator; to delete it
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: mirror.nl.datapacket.com
 * extras: mirror.nl.datapacket.com
 * updates: mirror.denit.net
Package wget-1.14-18.el7.x86_64 already installed and latest version
Nothing to do
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
etcdcluster.etcd.database.coreos.com/compose-etcd unchanged
NAME                             READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-699fx         1/1       Running   0          7m
coredns-78fcdf6894-trslx         1/1       Running   0          7m
etcd-node1                       1/1       Running   0          6m
kube-apiserver-node1             1/1       Running   0          7m
kube-controller-manager-node1    1/1       Running   0          7m
kube-proxy-5gskp                 1/1       Running   0          7m
kube-proxy-5hbkb                 1/1       Running   0          7m
kube-proxy-lcsnz                 1/1       Running   0          6m
kube-scheduler-node1             1/1       Running   0          6m
tiller-deploy-85744d9bfb-bjw2f   1/1       Running   0          2m
weave-net-9vt2s                  2/2       Running   1          7m
weave-net-k87d7                  2/2       Running   0          7m
weave-net-nmmt5                  2/2       Running   0          6m
--2019-01-21 14:30:05--  https://github.com/docker/compose-on-kubernetes/releases/download/v0.4.18/installer-linux
Resolving github.com (github.com)... 140.82.118.3, 140.82.118.4
Connecting to github.com (github.com)|140.82.118.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/158560458/e9a86500-15b2-11e9-8620-1eec5bf160e3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190121T143006Z&X-Amz-Expires=300&X-Amz-Signature=53d5f390f91b968a53219512c18b696e1a085cbbd59cdb953ca95bea1aca4d60&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dinstaller-linux&response-content-type=application%2Foctet-stream [following]
--2019-01-21 14:30:06--  https://github-production-release-asset-2e65be.s3.amazonaws.com/158560458/e9a86500-15b2-11e9-8620-1eec5bf160e3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190121%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190121T143006Z&X-Amz-Expires=300&X-Amz-Signature=53d5f390f91b968a53219512c18b696e1a085cbbd59cdb953ca95bea1aca4d60&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dinstaller-linux&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.233.59
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.233.59|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28376064 (27M) [application/octet-stream]
Saving to: 'installer-linux.1'

100%[=====================================================>] 28,376,064  16.3MB/s   in 1.7s

2019-01-21 14:30:08 (16.3 MB/s) - 'installer-linux.1' saved [28376064/28376064]

INFO[0000] Checking installation state
INFO[0001] Compose version v0.4.18 is already installed in namespace "compose" with the same settings
compose.docker.com/v1beta1
compose.docker.com/v1beta2
Waiting for the stack to be stable and running...
db1: Pending            [pod status: 0/2 ready, 2/2 pending, 0/2 failed]
web1: Pending           [pod status: 0/3 ready, 3/3 pending, 0/3 failed]
db1: Ready              [pod status: 1/2 ready, 1/2 pending, 0/2 failed]
web1: Ready             [pod status: 1/3 ready, 2/3 pending, 0/3 failed]

Stack hellostack is stable and running

NAME         SERVICES   PORTS        STATUS                            CREATED AT
hellostack   2          web1: 8082   Progressing (Stack is starting)   2019-01-21T14:30:10Z

Verifying the Stack

[node1 pwk]$ kubectl get stack
NAME         SERVICES   PORTS        STATUS                         CREATED AT
hellostack   2          web1: 8082   Available (Stack is started)   2019-01-21T14:30:10Z

References:

  • https://github.com/docker/compose-on-kubernetes/issues/35
  • https://github.com/collabnix/compose-on-kubernetes/tree/master/scripts/pwk

A First Look at Compose on Kubernetes for Minikube

Estimated Reading Time: 8 minutes

Say Bye to Kompose!

Let’s begin with a problem statement: the Kubernetes API is quite HUGE. More than 50 first-class objects in the latest release, from Pods and Deployments to ValidatingWebhookConfiguration and ResourceQuota, can make anyone’s head spin. If you are a developer, this can lead to a great deal of verbosity in configuring the cluster. Hence, there is a need for a simplified approach (like the Swarm CLI/API) to deploy and manage applications running on a Kubernetes cluster.

On the 2nd day of DockerCon, Docker Inc. open sourced the Compose on Kubernetes project. This tool is all about simplifying Kubernetes. If you are not aware, Docker Enterprise Edition already had this capability starting with Compose file version 3.3, where one can use the same docker-compose.yml file for Swarm deployments as well as specify Kubernetes workloads whenever a stack is deployed.

Let me explain what this actually means. Imagine you are using Docker Desktop running on your MacBook. Docker Desktop provides the capability of running a single-node Swarm as well as a Kubernetes cluster for your development environment, and you have a choice of context switching from the local cluster to a remote Swarm or K8s cluster running on a cloud platform like GKE/AKS. Once you have developed code on your local single-node cluster, which could be Minikube or Docker Desktop for Mac, you might want to test it on a remote cloud platform. All it requires is a “click” to switch the context from the local cluster to GKE or AKS; then you can use the same Swarm CLI commands to deploy the application to the cloud platform using the same Docker Compose file. Isn’t that cool?

Before we jump directly to the implementation phase, let us spend some time understanding how the mapping from Swarm to Kubernetes actually happens. Fundamentally, a 1:1 mapping of Swarm onto K8s is not straightforward. As a stack is essentially just a list of Swarm services, the mapping is done on a per-service basis. As per the official docs, there are fundamentally two classes of Kubernetes objects required to map a Swarm service: something to deploy and scale the containers, and something to handle intra- and extra-stack networking. In Kubernetes one does not manipulate individual containers but rather a set of containers called a pod, and pods can be deployed and scaled using different controllers depending on the desired behaviour. If a service is declared to be global, Compose on Kubernetes uses a DaemonSet to deploy pods (note that such services cannot use a persistent volume); if a service uses a volume, a StatefulSet is used; otherwise, a Deployment is used. As for intra-stack networking, Kubernetes does not have the notion of a network the way Swarm does; instead, all pods that exist in a namespace can network with each other, and for DNS name resolution between pods to work, a HeadlessService is required. I would recommend this link to understand the mapping from Swarm to Kubernetes further; an illustrative stack is sketched below.
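As an illustration, in the hypothetical stack below the global agent service would map to a DaemonSet, while the web service (no volume, not global) would map to an ordinary Deployment, with a HeadlessService providing DNS between the pods:

version: "3.3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  agent:
    image: busybox
    command: ["sleep", "86400"]
    deploy:
      mode: global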

Architecture

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

Compose on Kubernetes architecture

The REST API is provided by a custom API server exposed to Kubernetes clients using API server aggregation.

The client communicates with the server using a declarative REST API. It creates a stack by either POSTing a serialized stack struct (v1beta2) or a Compose file (v1beta1) to the API server. This API server stores the stack in an etcd key-value store.

Server Side Architecture

There are two server-side components in Compose on Kubernetes:

  • the Compose API server, and
  • the Compose controller.

The Compose API server extends the Kubernetes API by adding routes for creating and manipulating stacks. It is responsible for storing the stacks in an etcd key-value store. It also contains the logic to convert v1beta1 representations to v1beta2, so that the Compose controller only needs to work with one representation of a stack.

The Compose controller is responsible for converting a stack struct (v1beta2 schema) into Kubernetes objects and then reconciling the current cluster state with the desired one. It does this by interacting with the Kubernetes API — it is a Kubernetes client that watches for interesting events and manipulates lower level Kubernetes objects.

Under this blog post, I will show how to implement Compose on Kubernetes for Minikube.

Tested Infrastructure

  • Docker Edition: Docker Desktop Community v2.0.1.0
  • System: macOS High Sierra v10.13.6
  • Docker Engine: v18.09.1
  • Docker Compose : v1.23.2
  • Kubernetes: v1.13.0

Pre-requisites:

  • Install Docker Desktop Community Edition v2.0.1.0 directly from this link
  • Enable Kubernetes in the Docker Desktop settings, as shown below

[Screenshot: Docker Desktop settings with Kubernetes enabled]

Verifying Docker Desktop version

[Captains-Bay]? >  docker version
Client: Docker Engine - Community
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        4c52b90
 Built:             Wed Jan  9 19:33:12 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:49 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 Kubernetes:
  Version:          v1.12.4
  StackAPI:         v1beta2
[Captains-Bay]? >


Installing Minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
  && chmod +x minikube

Verifying Minikube Version

minikube version
minikube version: v0.32.0

Checking Minikube Status

minikube status
host: Stopped
kubelet:
apiserver:

Starting Minikube

[Captains-Bay]? >  minikube start
Starting local Kubernetes v1.12.4 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Stopping extra container runtimes...
Machine exists, restarting cluster components...
Verifying kubelet health ...
Verifying apiserver health ....Kubectl is now configured to use the cluster.
Loading cached images from config file.


Everything looks great. Please enjoy minikube!

By now, you should be able to see the context switch happening under the Kubernetes section of the Docker Desktop UI menu.

Checking the Minikube Status

? >  minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
[Captains-Bay]? >

Listing out Minikube Cluster Nodes

 kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    12h       v1.12.4

Creating “compose” namespace

kubectl create namespace compose
namespace "compose" created

Creating the tiller service account

kubectl -n kube-system create serviceaccount tiller
serviceaccount "tiller" created

Granting Access to your Cluster

kubectl -n kube-system create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
clusterrolebinding "tiller" created

Initializing the Helm component

? >  helm init --service-account tiller
$HELM_HOME has been configured at /Users/ajeetraina/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Verifying Helm Version

helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Deploying etcd operator

? >  helm install --name etcd-operator stable/etcd-operator --namespace compose
NAME:   etcd-operator
LAST DEPLOYED: Fri Jan 11 10:08:06 2019
NAMESPACE: compose
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                                               SECRETS  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        1s
etcd-operator-etcd-operator-etcd-operator          1        1s
etcd-operator-etcd-operator-etcd-restore-operator  1        1s

==> v1beta1/ClusterRole
NAME                                       AGE
etcd-operator-etcd-operator-etcd-operator  1s

==> v1beta1/ClusterRoleBinding
NAME                                               AGE
etcd-operator-etcd-operator-etcd-backup-operator   1s
etcd-operator-etcd-operator-etcd-operator          1s
etcd-operator-etcd-operator-etcd-restore-operator  1s

==> v1/Service
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)    AGE
etcd-restore-operator  ClusterIP  10.104.102.245  <none>       19999/TCP  1s

==> v1beta1/Deployment
NAME                                               DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
etcd-operator-etcd-operator-etcd-backup-operator   1        1        1           0          1s
etcd-operator-etcd-operator-etcd-operator          1        1        1           0          1s
etcd-operator-etcd-operator-etcd-restore-operator  1        1        1           0          1s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
etcd-operator-etcd-operator-etcd-backup-operator-7978f8bc4r97s7  0/1    ContainerCreating  0         1s
etcd-operator-etcd-operator-etcd-operator-6c57fff9d5-kdd7d       0/1    ContainerCreating  0         1s
etcd-operator-etcd-operator-etcd-restore-operator-6d787599vg4rb  0/1    ContainerCreating  0         1s


NOTES:
1. etcd-operator deployed.
  If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml
  Check the etcd-operator logs
    export POD=$(kubectl get pods -l app=etcd-operator-etcd-operator-etcd-operator --namespace compose --output name)
    kubectl logs $POD --namespace=compose
? >


Creating an etcd cluster

$ cat compose-etcd.yaml
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "compose-etcd"
  namespace: "compose"
spec:
  size: 3
  version: "3.2.13"
kubectl apply -f compose-etcd.yaml
etcdcluster "compose-etcd" created


This should bring up an etcd cluster in the compose namespace.
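
To verify that the etcd pods are coming up before proceeding, you can list the pods in that namespace:

kubectl get pods --namespace compose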

Download the Compose Installer

wget https://github.com/docker/compose-on-kubernetes/releases/download/v0.4.18/installer-darwin
chmod +x installer-darwin

Deploying Compose on Kubernetes

./installer-darwin -namespace=compose -etcd-servers=http://compose-etcd-client:2379 -tag=v0.4.18
INFO[0000] Checking installation state
INFO[0000] Install image with tag "v0.4.18" in namespace "compose"
INFO[0000] Api server: image: "docker/kube-compose-api-server:v0.4.18", pullPolicy: "Always"
INFO[0000] Controller: image: "docker/kube-compose-controller:v0.4.18", pullPolicy: "Always"

Ensuring that the Compose Stack controller gets enabled

[Captains-Bay]? >  kubectl api-versions| grep compose
compose.docker.com/v1beta1
compose.docker.com/v1beta2

Listing out services of Minikube

[Captains-Bay]? >  minikube service list
|-------------|-------------------------------------|-----------------------------|
|  NAMESPACE  |                NAME                 |             URL             |
|-------------|-------------------------------------|-----------------------------|
| compose     | compose-api                         | No node port                |
| compose     | compose-etcd-client                 | No node port                |
| compose     | etcd-restore-operator               | No node port                |
| default     | db1                                 | No node port                |
| default     | example-etcd-cluster-client-service | http://192.168.99.100:32379 |
| default     | kubernetes                          | No node port                |
| default     | web1                                | No node port                |
| default     | web1-published                      | http://192.168.99.100:32511 |
| kube-system | kube-dns                            | No node port                |
| kube-system | kubernetes-dashboard                | No node port                |
| kube-system | tiller-deploy                       | No node port                |
|-------------|-------------------------------------|-----------------------------|
[Captains-Bay]? >

Verifying StackAPI

[Captains-Bay]? >  docker version
Client: Docker Engine - Community
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.10.6
 Git commit:        4c52b90
 Built:             Wed Jan  9 19:33:12 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       4c52b90
  Built:            Wed Jan  9 19:41:49 2019
  OS/Arch:          linux/amd64
  Experimental:     true
 Kubernetes:
  Version:          v1.12.4
  StackAPI:         v1beta2
[Captains-Bay]? >

Deploying Web Application Stack directly using Docker Compose
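
The docker-compose2.yml file itself is not reproduced in this post. A minimal two-service Compose file consistent with the db1/web1 output below might look like this sketch (the images and published port are assumptions for illustration):

version: "3.3"
services:
  db1:
    image: redis
    deploy:
      replicas: 2
  web1:
    image: nginx
    ports:
      - "8080:80"
    deploy:
      replicas: 2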

[Captains-Bay]? >  docker stack deploy -c docker-compose2.yml myapp4
Waiting for the stack to be stable and running...
db1: Ready		[pod status: 1/2 ready, 1/2 pending, 0/2 failed]
web1: Ready		[pod status: 2/2 ready, 0/2 pending, 0/2 failed]

Stack myapp4 is stable and running

[Captains-Bay]? >  docker stack ls
NAME                SERVICES            ORCHESTRATOR        NAMESPACE
myapp4              2                   Kubernetes          default
[Captains-Bay]? >  kubectl get po
NAME                    READY     STATUS    RESTARTS   AGE
db1-55959c855d-jwh69    1/1       Running   0          57s
db1-55959c855d-kbcm4    1/1       Running   0          57s
web1-58cc9c58c7-sgsld   1/1       Running   0          57s
web1-58cc9c58c7-tvlhc   1/1       Running   0          57s

Hence, we successfully deployed a web application stack onto a single-node Kubernetes cluster running in Minikube using a Docker Compose file.

In my next blog post, I will showcase Compose on Kubernetes for a GKE cluster. If you're in a hurry, drop me a comment or reach out to me via Twitter @ajeetsraina. I would love to assist you.

Running Cron Jobs container on 5-Node Docker Swarm Mode Cluster

Estimated Reading Time: 3 minutes

A Docker Swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). When you create a service, you define its optimal state (number of replicas, network and storage resources available to it, ports the service exposes to the outside world, and more). Docker works to maintain that desired state. For instance, if a worker node becomes unavailable, Docker schedules that node’s tasks on other nodes. A task is a running container which is part of a swarm service and managed by a swarm manager, as opposed to a standalone container.

Let us talk a bit more about Services…

A Swarm service is a first-class citizen and is the definition of the tasks to execute on the manager or worker nodes. It is the central structure of the swarm system and the primary root of user interaction with the swarm. When you create a service, you specify which container image to use and which commands to execute inside running containers. Swarm mode allows users to specify a group of homogeneous containers which are meant to be kept running with the docker service CLI: an ever-running process. This abstraction, while undoubtedly powerful, may not be the right fit for containers which are intended to eventually terminate or only run periodically. Hence, one might need to run some containers for a specific period of time and terminate them accordingly.
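
For instance, a typical long-running service is created like this (a generic illustration, not taken from this tutorial):

docker service create --name web --replicas 3 --publish 80:80 nginx

Swarm will then work to keep three replicas of this container running indefinitely, which is exactly the behaviour that does not suit one-off or periodic jobs.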

Let us consider a few examples:

  • You are a System Administrator who wishes to allow users to submit long-running compiler jobs on a Swarm cluster
  • A website which needs to process all user uploaded images into thumbnails of various sizes
  • An operator who wishes to periodically run docker rmi $(docker images --filter dangling=true -q) on each machine

Today Docker Swarm doesn't come with this feature by default, but there are various workarounds to make it work. Under this tutorial, we will show you how to run one-off cron jobs on a 5-Node Swarm Mode Cluster.

Tested Infrastructure

Platform           Number of Instances   Reading Time
Play with Docker   5                     5 min

Pre-requisite

  • Create an account with DockerHub
  • Open PWD Platform on your browser
  • Click on the spanner (wrench) icon on the left side of the screen to bring up the 5-Node Swarm Mode Cluster template

Verifying 5-Node Swarm Mode Cluster

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
y2ewcgx27qs4qmny9840zj92p *   manager1            Ready               Active              Leader              18.06.1-ce
qog23yabu33mpucu9st4ibvp5     manager2            Ready               Active              Reachable           18.06.1-ce
tq0ed0p2gk5n46ak4i1yek9yc     manager3            Ready               Active              Reachable           18.06.1-ce
tmbcma9d3zm8jcx965ucqu2mf     worker1             Ready               Active                                  18.06.1-ce
dhht9gr8lhbeilrbz195ffhrn     worker2             Ready               Active                                  18.06.1-ce

Cloning the Repository

git clone https://github.com/crazy-max/swarm-cronjob
cd swarm-cronjob/.res/example

Bringing up Swarm Cronjob

docker stack deploy -c swarm_cronjob.yml swarm_cronjob

Listing Docker Services

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                           PORTS
qsmd3x69jds1        myswarm_app         replicated          1/1                 crazymax/swarm-cronjob:latest

Visualizing the Swarm Cluster running Cronjob container

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/intermediate/swarm/visualizer/
docker-compose up -d

Example #1: Running Date container every 5 minutes


Edit the date.yml file to change the cron schedule from * to */5 so that the job runs every 5 minutes, as shown:

$ cd .res/example/
$ cat date.yml
version: "3.2"
services:
  test:
    image: busybox
    command: date
    deploy:
      labels:
        - "swarm.cronjob.enable=true"
        - "swarm.cronjob.schedule=*/5 * * * *"
        - "swarm.cronjob.skip-running=false"
      replicas: 0
      restart_policy:
        condition: none


Bringing up App Stack

docker stack deploy -c date.yml date


Top 5 Most Exciting Dockercon EU 2018 Announcements

Estimated Reading Time: 9 minutes

Last week I attended Dockercon 2018 EU, which took place at the Centre de Convencions Internacional de Barcelona (CCIB) in Barcelona, Spain. With over 3000+ attendees from around the globe, 52 breakout sessions, 11 Community Theatres, 12 workshops, over 100+ total sessions, exciting Hallway Tracks & Hands-on Labs/Trainings, paid trainings, a women's networking event, DockerPals and so on, Dockercon allowed developers, sysadmins, Product Managers & industry evangelists to come together and share their wealth of experience around container technology. This time I was lucky enough to get the chance to emcee the Docker for Developers Track for the first time. Not only this, I conducted a Hallway Track for the OpenUSM project & the DockerLabs community contribution effort. Around 20-30 participants showed interest in learning more about this system management, monitoring & log analytics tool.

This Dockercon we had the Docker Captains Summit for the first time, where an entire day was dedicated to Captains. On Dec 3 (10:00 AM till 3:00 PM), we got a chance to interact with Docker staff and put forward all our queries around the Docker future roadmap. It was amazing to meet all the young Captains who joined us this year, as well as to get familiar with what they have been contributing to during the initial introductory rounds.

This Dockercon, there were a couple of exciting announcements. Three of the new features were targeted at Docker Community Edition, while two were for Docker Enterprise customers. Here's a rundown of what I think are the 5 most exciting announcements made last week –

#1. Announcement of Cloud Native Application Bundles (CNAB)

Microsoft and Docker have captured a great deal of attention with the announcement of CNAB – Cloud Native Application Bundles.

What is CNAB? 

Cloud Native Application Bundles (CNAB) are a standard packaging format for multi-component distributed applications. It allows packages to target different runtimes and architectures. It empowers application distributors to package applications for deployment on a wide variety of cloud platforms, cloud providers, and cloud services. It also provides the capabilities necessary for delivering multi-container applications in disconnected environments.

Is it a platform-specific tool?

CNAB is not a platform-specific tool. While it uses containers for encapsulating installation logic, it remains un-opinionated about what cloud environment it runs in. CNAB developers can bundle applications targeting environments spanning IaaS (like OpenStack or Azure), container orchestrators (like Kubernetes or Nomad), container runtimes (like local Docker or ACI), and cloud platform services (like object storage or Database as a Service). CNAB can also be used for packaging other distributed applications, such as IoT or edge computing. In a nutshell, CNAB is a package-format specification that describes a technology for bundling, installing, and managing distributed applications that are, by design, cloud agnostic.

Why do we need CNAB?

The current distributed computing landscape involves a combination of executable units and supporting API-based services. Executable units include Virtual Machines (VMs), Containers (e.g. Docker and OCI) and Functions-as-a-Service (FaaS), as well as higher-level PaaS services. Along with these executable units, many managed cloud services (from load balancers to databases) are provisioned and interconnected via REST (and similar network-accessible) APIs. The overall goal of CNAB is to provide a packaging format that can enable application providers and developers with a way of installing a multi-component application into a distributed computing environment, supporting all of the above types.


Is it open source? Tell me more about CNAB format?

It is an open source, cloud-agnostic specification for packaging and running distributed applications. It is a nascent specification that offers a way to repackage distributed computing apps.

The CNAB format is a packaging format for a broad range of distributed applications. It specifies a pairing of a bundle definition (bundle.json) to define the app, and an invocation image to install the app.

The bundle definition is a single file that contains the following information:

  • Information about the bundle, such as name, bundle version, description, and keywords
  • Information about locating and running the invocation image (the installer program)
  • A list of user-overridable parameters that this package recognizes
  • The list of executable images that this bundle will install
  • A list of credential paths or environment variables that this bundle requires to execute
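
To make the format concrete, here is a minimal, purely hypothetical bundle.json sketch that mirrors the fields listed above (all names and image references are made up):

{
  "name": "helloworld",
  "version": "0.1.0",
  "description": "A sample CNAB bundle",
  "keywords": ["hello", "cnab"],
  "invocationImages": [
    { "imageType": "docker", "image": "example/helloworld-cnab:0.1.0" }
  ],
  "images": [],
  "parameters": {},
  "credentials": {}
}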

What’s Docker’s future plan for CNAB?

This project was incubated by Microsoft and Docker a year back. The first implementation of the spec is an experimental utility called Docker App, which Docker officially rolled out this Dockercon and which is expected to be integrated with Docker Enterprise in the near future. Microsoft and Docker plan to publicly donate CNAB to an open source foundation, which is expected to happen early next year.

If you have no patience, head over to the Docker App CNAB examples recently posted by Gareth Rushgrove, Docker employee, which are accessible via https://github.com/garethr/docker-app-cnab-examples

The examples in this repository show basic usage of docker-app, in particular some of the CNAB integration details. Check it out –

#2. Support for using Docker Compose on Kubernetes

On the 2nd day of Dockercon, Docker Inc. open-sourced the Compose on Kubernetes project. Docker Enterprise Edition already had this capability enabled starting with Compose file version 3.3, where one can use the same docker-compose.yml file for Swarm deployments as well as for Kubernetes workloads whenever a stack is deployed.

What benefit does this bring to Community Developers?

By making it open source, Docker, Inc. has really paved the way for infinite possibilities around a simplified way of deploying Kubernetes applications. Docker Swarm gained popularity because of its simplified approach to application deployment using a docker-compose.yml file. Now community developers can use the same YAML file to deploy their K8s applications.

Imagine you are using Docker Desktop on your MacBook. Docker Desktop provides the capability of running both Swarm & Kubernetes. You have the context set to a GKE cluster running on Google Cloud Platform. You just deployed your app using docker-compose.yml on your local MacBook. Now you want to deploy it in the same way, but this time on your GKE cluster. Just use the docker stack deploy command to deploy it to the GKE cluster. Interesting, isn't it?
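
Assuming a Docker 18.09-era CLI, the orchestrator can be selected per stack command. Something along these lines should work, with the stack name and Compose file as placeholders:

DOCKER_STACK_ORCHESTRATOR=kubernetes docker stack deploy -c docker-compose.yml mystack

The same selection can also be made with the --orchestrator=kubernetes flag on the docker stack command.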

How does Compose on Kubernetes architecture look like?

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

Compose on Kubernetes architecture

If you’re interested in learning further, I would suggest you visit this link.

How can I test it now?

First, we need to install the Compose on Kubernetes controller into your Kubernetes cluster (which could be GKE/AKS). You can download the latest binary (as of 12/13/2018) via https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.16.

This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don’t already have one available then remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.
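
Once the controller and API server pods are up, you can confirm that the Stack API has been registered, exactly as we verified earlier on Minikube:

kubectl api-versions | grep compose
compose.docker.com/v1beta1
compose.docker.com/v1beta2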

Check out the latest doc which shows how to make it work with AKS here.

#3. Introducing Docker Desktop Enterprise

The 3rd big announcement was the introduction of Docker Desktop Enterprise. With this, Docker Inc. made a new addition to their desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, with a set of features now available for the enterprise.

Desktop Comparison Table

How will Docker Desktop Enterprise be different from Docker Desktop Community Edition?

Good question. Docker Desktop has Docker Engine and Kubernetes built-in and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Not only this, with Docker Desktop Enterprise you get access to the Application Designer, a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards.



For those who are interested in Docker Desktop Enterprise – please note that it is expected to be available for preview in January, with General Availability slated for 1H 2019.

#4. From Zero to Docker in Seconds with “docker assemble” CLI

This time, the Docker team announced a very interesting docker subcommand, rightly named "assemble", to the public. Ann Rahma and Gareth Rushgrove from Docker, Inc. announced assemble, a new command that generates optimized images from non-dockerized apps. It will get you from source to an optimized Docker image in seconds.

Here are a few interesting facts around the docker assemble utility:

  • Docker Assemble can build an image without a Dockerfile; it is all about auto-detecting the code framework.
  • It generates Docker images (and a lot more) from your code with a single command and zero effort, which means no Dockerfile is needed for your app as long as you have a config file (a .pom file, for instance).
  • It can analyze your applications, dependencies, and caches, and give you a sweet Docker image without having to author your own Dockerfiles.
  • It is built on top of BuildKit and will auto-detect the framework, versions etc. from a config file (.pom file), automatically add dependencies to the image label, optimize the image size and push.
  • Docker Assemble can also figure out what ports need to be published and what healthchecks are relevant.
  • docker assemble builds the app without configuration files and without a Dockerfile: just a Git repository to deploy.

Is it an open source project?

It’s an enterprise feature for now (not in the community version). It is available for a couple of languages and frameworks (like Java, as demonstrated on the Dockercon stage).

How is it different from buildpack?

Reading through its features, Docker Assemble might look very similar to buildpacks, as it overlaps with some of what they do. But the huge benefit of assemble is that it produces more than just an image (also ports, healthchecks, volume mounts, etc.), and it is integrated into the enterprise toolchain. docker assemble is sort of an enterprise-grade buildpack to help with digitalization.

Keep an eye on my next blog post to get more detail around the fancy docker assemble command.

#5. Docker-app & CNAB together for the first time

On the 2nd day of Dockercon, Docker confirmed that they are the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially, CNAB support will be released as part of the docker-app experimental tool for building, packaging and managing cloud-native applications. With this, Docker now lets you package CNAB bundles as Docker images, so you can distribute and share them through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.

Can I test the preview binaries of docker-app which comes with CNAB support?

Yes, you can find some preview binaries of docker-app with CNAB support here. The latest release of Docker App is one such tool that implements the current CNAB spec. It can be used both to build CNAB bundles for Compose (which can then be used with any other CNAB client) and to install, upgrade and uninstall any other CNAB bundle.

In case you have no patience, head over to this recently added example of how to deploy a Helm chart.

You can visit https://github.com/docker/app/releases/tag/cnab-dockercon-preview for accessing preview build.

I hope you found this blog helpful. In my next blog series, I will deep dive into each of these announcements in terms of implementation and enablement.

Switching Docker 18.09 Community Edition to Enterprise Engine with no downtime

Estimated Reading Time: 3 minutes 

Under the newer Docker Engine 18.09 release, a new feature called CE-EE Node Activate has been introduced. It allows a user to perform an in-place, seamless activation of the Enterprise engine feature set on an existing Community Edition (CE) node through the Docker command line. CE-EE Node Activate applies a license and switches the Docker engine to the Enterprise engine binary.

Pre-requisite:

  • Docker Community Edition (CE) version must be 18.09 or higher.
  • All of the Docker packages must be installed: docker-cli, docker-server, and containerd.
  • Node-level engine activation between CE and EE is only supported within the same version of Docker Engine

Tested Infrastructure

Platform                Number of Instances   Reading Time
Google Cloud Platform   1                     5 min

Pre-requisite

  • Create an account with Google Cloud Engine (Free Tier)
  • Pick up Ubuntu 18.10 as OS instance

Installing Docker Community Edition 18.09

Verifying Ubuntu 18.10 release

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.10 (Cosmic Cuttlefish)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.10"
VERSION_ID="18.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=cosmic
UBUNTU_CODENAME=cosmic

Installing Docker 18.09 Release

sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic test"
sudo apt install docker-ce
~$ sudo docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:49:01 2018
 OS/Arch:           linux/amd64
 Experimental:      false
Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:16:44 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Running Nginx Docker container

$ sudo docker run -d -p 80:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
a5a6f2f73cd8: Pull complete 
67da5fbcb7a0: Pull complete 
e82455fa5628: Pull complete 
Digest: sha256:31b8e90a349d1fce7621f5a5a08e4fc519b634f7d3feb09d53fac9b12aa4d991
Status: Downloaded newer image for nginx:latest
ba4a5822d7c991c04418b2fbbcadb86057eef4d98ba3f930bff569ac8058468e

$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
ba4a5822d7c9        nginx               "nginx -g 'daemon of…"   5 seconds ago       Up 3 seconds        0.0.0.0:80->80/tcp   peaceful_swanson

Verifying Nginx Docker container Up and Running

~$ sudo curl localhost:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Connect your system to DockerHub Account

$ sudo docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: ajeetraina
Password: 
WARNING! Your password will be stored unencrypted in /home/joginderkour1950/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
 

Downloading Your Docker Enterprise License

  • Go to https://store.docker.com/my-content site.
  • Login with your Docker ID.
  • Under your profile page, click on “My Content”
  • Click on “Setup” to get your Docker Enterprise License
  • Download your Docker Enterprise License in your system
  • Copy the content of .lic file
  • Create a file called mylicense.lic on your Ubuntu system and save it in some location.

Activate the EE license. You must use sudo even if your user is part of the docker group.

$ sudo docker engine activate --license mylicense.lic
License: Quantity: 10 Nodes     Expiration date: 2018-12-10     License is currently active
18.09.0: resolved 
267a9a121ee1: done 
4365cd59d876: done [==================================================>]  1.161kB/1.161kB
7ec4ee35c404: done [==================================================>]   4.55MB/4.55MB
3c60d2c9ddf3: done [==================================================>]  25.71MB/25.71MB
55fa4079a8ab: done [==================================================>]  1.122MB/1.122MB
c5a93cbd4679: done [==================================================>]  333.9kB/333.9kB
e661b0f8ba29: done [==================================================>]   4.82kB/4.82kB
Successfully activated engine.
Restart docker with 'systemctl restart docker' to complete the activation.

Restarting the Docker service

$ sudo systemctl restart docker

Verifying Docker Enterprise Version

$ sudo docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:49:01 2018
 OS/Arch:           linux/amd64
 Experimental:      false
Server: Docker Engine - Enterprise
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       33a45cd
  Built:            Wed Nov  7 00:17:07 2018
  OS/Arch:          linux/amd64
  Experimental:     false

Verifying if Nginx container is still running

$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
ba4a5822d7c9        nginx               "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes        0.0.0.0:80->80/tcp   peaceful_swanson

 

6 Things You Should Know before Dockercon 2018 EU

Estimated Reading Time: 6 minutes

 
At Dockercon 2018 this year, you can expect 2,500+ participants, 52 breakout sessions, 11 Community Theatres, 12 workshops, over 100+ total sessions, exciting Hallway Tracks & Hands-on Labs/Trainings, paid trainings, a women's networking event, a DockerPals Meetup, a chance to meet Docker community experts – Docker Captains & Community Leaders – and the Ecosystem Expo… and only 3 days to accomplish it all. It's easy to get overwhelmed, so you need to arrive with the right information to walk out triumphant.
 

 

 
Coming Dec 3-5 2018, I will be attending my 3rd Dockercon conference, which is slated to happen in the beautiful city of Barcelona, home to the largest football stadium in all of Europe! Based on my past experience, I am here to share the inside scoop on where to go when, what to watch out for, must-see sessions, who to meet, and much more.
 

#1. Unfold & Unlock with Dockercon AB – “Dockercon 2018 Agenda Builder”

Once you get your Dockercon ID ready via the Registration & Info Desk, just turn it around to unfold and unlock the Dockercon agenda for the next 3 days. Very simple, isn't it? The Dockercon Agenda Builder is right in your hand.

 
 
 
Wait, you want to access it beforehand? Sure. You can find out when and where everything is happening at Dockercon with this simple Agenda Builder.
 
If you’re a CI-CD enthusiast like me, you should use the filters under Agenda Builder to choose CI-CD keywords. You should then be able to easily find out which sessions (breakout, community theatre, general or workshops) are scheduled across all 3 days.
 
 
I personally find Session Tracks very useful. This time in Barcelona, I will try my best to attend most of the tracks listed below –
 
– Using Docker for Developers
– Using Docker for IT Infrastructure & Ops
– Customer Stories
– Black Belt
– Docker Technology
– Ecosystem
– Community Theatre
 
 
 

#2. Don’t Miss Out on HTP – Dockercon 2018 Hallway Track Platform

Trust me, Dockercon is full of lifetime opportunities. If you're looking for a place which offers you a great way to network and get to know your fellow Docker enthusiasts, the answer is Hallway Tracks.

Hallway Track is an easy way to connect and learn from each other through conversations about things you care about. Share your knowledge and book 1-on-1 group conversations with other participants on the Hallway Track Platform. 

Recently I submitted a track around the self-initiated DockerLabs community program. You can join me directly via https://hallwaytrack.dockercon.com/topics/29045/

 

Did you know that you can even enroll yourself for a Hallway Track right away, before Dockercon? If interested, head over to https://hallwaytrack.dockercon.com/

#3. Get Your Hands Dirty with Dockercon 2018 Hands-on Labs (HOL)

With a Dockercon conference pass comes access to Docker's Hands-on Labs. These self-paced tutorials allow you to learn at your own pace anytime during the conference, featuring a wide range of topics for developers and sysadmins working with Windows and Linux environments. Docker staff will be available to answer questions and help you along.

What I love about Docker HOL is that you don't need pre-registration; just stop by during the available hours on Monday through Wednesday. All you need to do is carry your laptop for the lab sessions.

Further Information: https://europe-2018.dockercon.com/hands-on-labs/

 

#4. Deepen Your Container Knowledge with DW – “Dockercon 2018 Workshops”

The pre-conference Docker workshops are an amazing opportunity for you to become better acquainted with Docker and take a deep dive into the Docker platform's features, services and uses before the start of the conference. These two-hour workshops will provide technical deep dives, practical guidance and hands-on tutorials. Reserving a space requires just a simple step: RSVP with your Agenda Builder. Please note that this is included under the Full Conference Pass.

Below is the list of workshops you might be interested in attending on Monday, Tuesday & Wednesday (Dec 3 – Dec 5 2018):

 

252727 – Workshop: Migrating .NET Applications to Docker Containers

252740 – Workshop: Docker Application Package

252734 – Workshop: Container Networking for Swarm and Kubernetes in Docker Enterprise

252737 – Workshop: Container Storage Concepts and How to Use Them

252728 – Workshop: Security Best Practices for Kubernetes

252733 – Workshop: Building a Secure, Automated Software Supply Chain

252720 – Workshop: Using Istio

262804 – Workshop: Container 101 – Getting Up and Running with Docker Containers

252731 – Workshop: Container Monitoring and Logging

252738 – Workshop: Migrating Java Applications to Docker Containers

261088 – Workshop: Container Troubleshooting with Sysdig

262792 – Workshop: Swarm Orchestration – Features and Workflows

Further Information: https://europe-2018.dockercon.com/hands-on-labs/

 

#5. Meet Your Favourite Captains & Community Leaders via Docker Pals

DockerPals is an excellent opportunity to meet Docker Captains and Community Leaders who are open to engaging with container enthusiasts of all skill levels, specialities and backgrounds. By participating in Docker Pals you will be introduced to other conference attendees and connected with a DockerCon veteran, giving you a built-in network before you arrive in Barcelona.

If you’re new to Dockercon, you can sign up as a Docker Pal. Docker Pals are matched with 4-5 other conference attendees and one Guide who knows their way around DockerCon. Pals might be newer to DockerCon, or solo attendees who want a group of new friends. Guides help Pals figure out which sessions and activities to attend and are a familiar face at the after-conference events. Both Guides and Pals benefit from making new connections in the Docker Community. You can sign up for Docker Guide under this link.

 

#6. Don’t Miss Out on the Black Belt Sessions

Are you a code-and-demo-heavy contributor? If yes, then you are going to love these sessions.

Attendees of this track will learn from technical deep dives that haven’t been presented anywhere else by members of the Docker team and from the Docker community. These sessions are code and demo-heavy and light on the slides. One way to achieve a deep understanding of complex distributed systems is to isolate the various components of that system, as well as those that interact with it, and examine all of them relentlessly. This is what is discussed under the Black Belt track! It features deeply technical talks covering not only container technology but also related projects.

 

Looking for tips on attending the conference?

Earlier this year, I presented a talk around "5 Tips for Making the Most out of an International Conference" which you might find useful. Do let me know your feedback, if any.

If you still have queries around Dockercon, I would suggest you join us on the Docker Slack. Search for the #dc_pals Slack channel to get connected to the DockerPals program.

To join Docker Slack, you are requested to go through this link first. 

 

How to become a DevOps Engineer?

Estimated Reading Time: 2 minutes 

Who is a DevOps engineer?
       
DevOps engineers are a group of influential individuals who encapsulate a depth of knowledge and years of hands-on experience around a wide variety of open source technologies and tools. They come with core attributes which involve an ability to code and script, data management skills, as well as a strong focus on business outcomes. They are rightly called "Special Forces" who hold core attributes around collaboration, open communication and reaching across functional borders.

A DevOps engineer always shows interest and comfort working with frequent, incremental code testing and deployment. With a strong grasp of automation tools, these individuals are expected to move the business quicker and forward, at the same time giving it a stronger technology advantage. In a nutshell, a DevOps engineer must have a solid interest in scripting and coding, skill in taking care of deployment automation and framework automation, and the capacity to deal with version control systems.

 

Qualities of a DevOps Engineer

  Collated below are the characteristics/attributes of the DevOps Engineer.

  • Experience with a wide range of open source tools and techniques
  • A broad knowledge of Sysadmin and Ops roles
  • Expertise in software coding, testing, and deployment
  • Experience with DevOps automation tools like Ansible, Puppet, and Chef
  • Experience in Continuous Integration, Delivery & Deployment
  • Industry-wide experience in the implementation of DevOps solutions for team collaboration
  • A firm knowledge of various computer programming languages
  • Good awareness of the Agile methodology of project management
  • A forward-thinker with an ability to connect the technical and business goals

Demand for people with DevOps skills is growing rapidly because businesses get great results from DevOps. Organizations using DevOps practices are overwhelmingly high-functioning: they deploy code up to 30 times more frequently than their competitors, and 50 percent fewer of their deployments fail.

What exactly does a DevOps Engineer do?

DevOps is not a way to get developers doing operational tasks so that you can get rid of the operations team, and vice versa. Rather, it is a way of working that encourages the Development and Operations teams to work together in a highly collaborative way towards the same goal. In a nutshell, DevOps integrates developers and the operations team to improve collaboration and productivity.

Duties of Developing and Operations team

The main goal of DevOps is not only to increase the product’s quality to a greater extent but also to increase the collaboration of Dev and Ops team as well so that the workflow within the organization becomes smoother & efficient at the same time.

Interested to read more? Read the complete story at Knowledgehut.

 
 

Kubernetes Hands-on Lab #4 – Deploy Prometheus Stack using Helm on Play with Kubernetes Platform

Estimated Reading Time: 8 minutes 

Let’s talk about Kubernetes Deployment Challenges…

As monolithic systems become too large to deal with, many enterprises are drawn to breaking them down into the microservices architecture. Whenever we move from a monolithic to a microservices architecture, the application consists of multiple components in terms of services talking to each other. Each component has its own resources and can be scaled individually. If we talk about Kubernetes, it can become very complex with all the objects you need to handle ― such as ConfigMaps, services, pods, Persistent Volumes ― in addition to the number of releases you need to manage. The below challenges might occur:

 

 
 
1. Manage, Edit and Update multiple k8s configuration
2. Deploy Multiple K8s configuration as a SINGLE application
3. Share and reuse K8s configurations and applications
4. Parameterize and support multiple environments
5. Manage application releases: rollout, rollback, diff, history
6. Define deployment lifecycle (control operations to be run in different phases)
7. Validate release state after deployment
 
These can be managed with Kubernetes Helm, which offers a simple way to package everything into one simple application and advertises what you can configure.
 

Helm is a deployment management tool (and NOT JUST a PACKAGE MANAGER) for Kubernetes. It does the heavy lifting of repeatable deployments, management of dependencies (reuse and share), management of multiple configurations, and the update, rollback and testing of application deployments (releases).
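
To get a feel for that release management, here is what a typical Helm v2 lifecycle looks like (the release and chart names are just examples):

helm install stable/prometheus --name my-monitoring    # deploy a release
helm upgrade my-monitoring stable/prometheus           # roll out a new configuration or chart version
helm history my-monitoring                             # inspect the revision history
helm rollback my-monitoring 1                          # roll back to revision 1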

Under this blog post, we will test drive Helm on top of Play with Kubernetes Platform. Let’s get started.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.

Click on the Login button to authenticate with Docker Hub or GitHub ID.

 

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes Cluster node. It automatically names it as “node1”. Each instance has Docker Community Edition (CE) and Kubeadm already pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

 

You can bootstrap the Kubernetes cluster by initializing the master (node1) node with the below script. Copy the script content into a bootstrap.sh file and make it executable using the "chmod +x bootstrap.sh" command.
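
The script itself is embedded in the original post. For reference, a typical Play with Kubernetes bootstrap script looks roughly like the sketch below; the exact kubeadm flags and the Weave Net add-on are assumptions based on the standard PWK template:

#!/bin/sh
# Initialize the master, advertising this node's own IP address
kubeadm init --apiserver-advertise-address $(hostname -i)
# Point kubectl at the admin kubeconfig written by kubeadm
export KUBECONFIG=/etc/kubernetes/admin.conf
# Install a pod network add-on so that nodes can reach the Ready state (Weave Net assumed)
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"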

When you execute this script, as part of initialization, kubeadm writes several needed configuration files, sets up RBAC and deploys the Kubernetes control-plane components (like kube-apiserver, kube-dns, kube-proxy, etcd, etc.). Control-plane components are deployed as Docker containers.
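
You can see these control-plane components for yourself once the master is initialized:

kubectl get pods -n kube-system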

Copy the kubeadm join token command from the script output and save it for the next step. This command will be used to join other nodes to your cluster.

Adding Worker Nodes

Click on "Add New Instance" to add a new worker node, then run the saved kubeadm join command on it.

Checking the Cluster Status

[node1 ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
node1     Ready      master    18m       v1.11.3
node2     Ready      <none>    4m        v1.11.3
node3     Ready      <none>    39s       v1.11.3
node4     NotReady   <none>    22s       v1.11.3
node5     NotReady   <none>    4s        v1.11.3
[node1 ~]$
[node1 ]$ kubectl get po
No resources found.
[node1 ]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1h
[node1]$

 

Verifying Node Capacity


 

[node1 ~]$ kubectl get nodes -o json |
>       jq ".items[] | {name:.metadata.name} + .status.capacity"

{
  "name": "node1",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node2",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node3",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node4",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node5",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}

Installing OpenSSL

[node1 ~]$ yum install -y openssl

Installing Helm

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
[node1 ~]$ sh get_helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
get_helm.sh: line 177: which: command not found
Run 'helm init' to configure helm.
[node1 ~]$ helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming

Installing Prometheus 

Let us try to install the Prometheus stack on top of our 5-node K8s cluster using Helm.

First, one can search for an application stack using the helm search <packagename> option.

[node1 ~]$ helm search prometheus
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
stable/prometheus                       7.3.4           2.4.3           Prometheus is a monitoring system and time series database.
stable/prometheus-adapter               v0.2.0          v0.2.1          A Helm chart for k8s prometheus adapter
stable/prometheus-blackbox-exporter     0.1.3           0.12.0          Prometheus Blackbox Exporter
stable/prometheus-cloudwatch-exporter   0.2.1           0.5.0           A Helm chart for prometheus cloudwatch-exporter
stable/prometheus-couchdb-exporter      0.1.0           1.0             A Helm chart to export the metrics from couchdb in Promet...
stable/prometheus-mysql-exporter        0.2.1           v0.11.0         A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/prometheus-node-exporter         0.5.0           0.16.0          A Helm chart for prometheus node-exporter
stable/prometheus-operator              0.1.7           0.24.0          Provides easy monitoring definitions for Kubernetes servi...
stable/prometheus-postgres-exporter     0.5.0           0.4.6           A Helm chart for prometheus postgres-exporter
stable/prometheus-pushgateway           0.1.3           0.6.0           A Helm chart for prometheus pushgateway
stable/prometheus-rabbitmq-exporter     0.1.4           v0.28.0         Rabbitmq metrics exporter for prometheus
stable/prometheus-redis-exporter        0.3.2           0.21.1          Prometheus exporter for Redis metrics
stable/prometheus-to-sd                 0.1.1           0.2.2           Scrape metrics stored in prometheus format and push them ...
stable/elasticsearch-exporter           0.4.0           1.0.2           Elasticsearch stats exporter for Prometheus
stable/karma                            1.1.2           v0.14           A Helm chart for Karma - an UI for Prometheus Alertmanager
stable/stackdriver-exporter             0.0.4           0.5.1           Stackdriver exporter for Prometheus
stable/weave-cloud                      0.3.0           1.1.0           Weave Cloud is a add-on to Kubernetes which provides Cont...
stable/kube-state-metrics               0.9.0           1.4.0           Install kube-state-metrics to generate and expose cluster...
stable/mariadb                          5.2.2           10.1.36         Fast, reliable, scalable, and easy to use open-source rel...
[node1 ~]$

Updating the Repo

[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Installing Prometheus

$ helm install stable/prometheus

Error: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

How to fix?

To fix this issue, you need to follow the steps below:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade

Listing Helm

[node1 ~]$ helm list
NAME            REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
excited-elk     1               Sun Oct 28 10:00:02 2018        DEPLOYED        prometheus-7.3.4        2.4.3           default
[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[node1 ~]$ helm install stable/prometheus
NAME:   excited-elk
LAST DEPLOYED: Sun Oct 28 10:00:02 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/DaemonSet
NAME                                  AGE
excited-elk-prometheus-node-exporter  1s

==> v1/Pod(related)

NAME                                                        READY  STATUS             RESTARTS  AGE
excited-elk-prometheus-node-exporter-7bjqc                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-gbcd7                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tk56q                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tkk9b                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz        0/2    Pending            0         1s
excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj  0/1    ContainerCreating  0         1s
excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69         0/1    ContainerCreating  0         1s
excited-elk-prometheus-server-5958586794-b97xn              0/2    Pending            0         1s

==> v1/ConfigMap

NAME                                 AGE
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1/ServiceAccount
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1beta1/ClusterRole
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1beta1/Deployment
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1/PersistentVolumeClaim
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1beta1/ClusterRoleBinding
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1/Service
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
excited-elk-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[node1 ~]$ kubectl get all
NAME                                                             READY     STATUS    RESTARTS   AGE
pod/excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz         0/2       Pending   0          3m
pod/excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-7bjqc                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-gbcd7                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tk56q                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tkk9b                   1/1       Running   0          3m
pod/excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69          1/1       Running   0          3m
pod/excited-elk-prometheus-server-5958586794-b97xn               0/2       Pending   0          3m

NAME                                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/excited-elk-prometheus-alertmanager         ClusterIP   10.106.159.46   <none>        80/TCP     3m
service/excited-elk-prometheus-kube-state-metrics   ClusterIP   None            <none>        80/TCP     3m
service/excited-elk-prometheus-node-exporter        ClusterIP   None            <none>        9100/TCP   3m
service/excited-elk-prometheus-pushgateway          ClusterIP   10.106.88.15    <none>        9091/TCP   3m
service/excited-elk-prometheus-server               ClusterIP   10.107.15.64    <none>        80/TCP     3m
service/kubernetes                                  ClusterIP   10.96.0.1       <none>        443/TCP    37m

NAME                                                  DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/excited-elk-prometheus-node-exporter   4         4         4         4            4           <none>          3m

NAME                                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/excited-elk-prometheus-alertmanager         1         1         1            0           3m
deployment.apps/excited-elk-prometheus-kube-state-metrics   1         1         1            1           3m
deployment.apps/excited-elk-prometheus-pushgateway          1         1         1            1           3m
deployment.apps/excited-elk-prometheus-server               1         1         1            0           3m

NAME                                                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/excited-elk-prometheus-alertmanager-68f4f57c97         1         1         0         3m
replicaset.apps/excited-elk-prometheus-kube-state-metrics-858d44dfdc   1         1         1         3m
replicaset.apps/excited-elk-prometheus-pushgateway-58bfd54d6d          1         1         1         3m
replicaset.apps/excited-elk-prometheus-server-5958586794               1         1         0         3m
[node1 ~]$

Wait for a few minutes, and then you can access the Prometheus UI at http://<external-ip>:9090. In the upcoming blog series, I will bring more interesting stuff around Helm on the PWD playground.

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Kubernetes Hands-on Lab #1 – Setting up 5-Node K8s Cluster