How to Get the Most Out of DockerCon 2018 (June 12 – 15, San Francisco)

Estimated Reading Time: 8 minutes

 

In a recent survey of 191 Fortune 1000 executives, 69% said that the wealth of networking opportunities a conference presents is a driving force behind their decision to attend. In other words, a conference is NOT only about attending sessions; it is also about –

  • Gaining visibility
  • Networking
  • Building strong relationships
  • Connecting with speakers
  • Connecting with your customers
  • Meeting like-minded people

Hence, attending a conference might be one of the best things you can do for your career. It is the time when you learn about new industry trends, gain new skills and make all kinds of new connections.

DockerCon 2018 is just a month away. It is happening this year at the Moscone Center, one of the largest convention & exhibition complexes in San Francisco, California, with the main conference running June 13-15, 2018. The event is expected to welcome 6,000+ developers, sysadmins, architects, VPs of Apps & other IT leaders getting hands-on with the latest innovations in the container ecosystem. This is where all general DockerCon activities such as keynotes, breakouts, networking, meals, etc. will take place.

 

Use code “CaptainAjeet” to get 10% off your DockerCon 2018 registration.

New to DockerCon?

DockerCon is an event where the Docker community comes to learn, belong, and collaborate. Attendees are a mix of beginner, intermediate and advanced users who are all looking to level up their skills and go home inspired. With 2 full days of training, more than 100 sessions, 100+ sponsors, free workshops & hands-on labs, and the wealth of experience brought by each attendee, DockerCon is the place to be if you're looking to learn Docker in 2018.

I assume you have already convinced your boss to send you to DockerCon. Awesome! If you are still working on it, here is a bonus – the Docker team has prepared this booklet for you.

Let's start with booking your conference. The cost of many conferences increases substantially in the run-up to the event, so make sure you book as soon as possible to take advantage of early-bird registration rates. Booking in advance keeps your travel and accommodation costs lower too, so whether you're funding yourself or spending from a grant, it's always good to save a little cash along the way (which you can hopefully put towards your next conference!). In case you missed the early-bird window, here is the latest price list for the various training sessions you might be interested in investing in –

Please Note: Your full conference pass includes talks, workshops, keynotes, breakfast/lunch, unlimited coffee & access to the party. The training & certification packages are additional purchases & are not included in the full conference pass. In case you're new, please note that as a DockerCon attendee, you'll have the opportunity to sit for the ‘Docker Certified Associate' exam and earn the designation, with a digital certificate and verification to prove it! You should be able to schedule your exam during the below timeframe:

 

In this blog post, I have put together a list of suggestions on how to make the most of this amazing event.

#1 – Let's begin with the housekeeping…

Based on my past experience, I suggest keeping your phone and laptop chargers with you. You're going to spend a huge part of your day on your devices – don't get caught with dead batteries. If you are travelling from India, be aware that your usual laptop charger plug might not fit; you may need to carry a US-compliant plug adapter. During my first visit to the Docker Distributed Systems Summit in Berlin, Germany, I had a hard time finding the right plug adapter for my laptop charger.

Secondly, ensure that you pack enough business cards. Keep some on hand and a stash in your luggage – you never know how many people you're going to meet. Thirdly, if you are presenting or conducting a workshop at the conference, make sure you bring the materials you need for demos. By no means should you spend the conference pitching to people who don't want to be pitched. However, if one of those pre-set prospect meetings turns into a real sales opportunity, it'll be more efficient – and impressive – if you can provide a walkthrough on the spot.


#2 – Prioritize Your Time

At any conference, many activities will compete for your time. How can you maximize your time while you're there? Here are some things to consider before you arrive. Are there many tracks going on at once? Will you have access to video recordings after the conference? Will there be a lounge or hack area outside the sessions?

The one session you should not miss is the keynote. Every conference has one; it sets the tone for the rest of the event with high-profile speakers and major announcements. It is also the single session common to all attendees, so you can use it to break the ice while networking. Below is a snapshot of the DockerCon schedule you can go through to plan and prioritize your time.


Conferences are broken down into “tracks” – different sets of simultaneous panels, talks or workshops that can be organized using a variety of criteria. Some tracks are based on subject matter (e.g. a Technology track, Business track, Legal track, etc.); others are based on format (e.g. talks, panels, workshops, etc.). Here is a list of sessions worth checking out –

  • Kubernetes extensibility by Tim Hockin and Eric Tune (Google)
  • Accelerating Development Velocity of Production ML Systems with Docker by Kinnary Jangla (Pinterest)
  • Digital Transformation with Docker, Cloud and DevOps: How JCPenney Handles Black Friday and 100K Deployments Per Year by Sanjoy Mukherjee, (JCPenney)
  • Don’t have a Meltdown! Practical Steps for Defending your Apps by Liz Rice (Aqua) and Justin Cormack (Docker)
  • Creating Effective Docker Images by Abby Fuller (AWS)
  • App Transformation with Docker: 5 Patterns for Success by Elton Stoneman (Docker)

#3 – Break Out of Your Comfort Zone

A conference is a time to meet new people, but it's also a time to build on the relationships you already have. You might be interacting with your counterparts in other parts of the world, so this is the time to strengthen those bonds. If you know people you want to reconnect with, reach out a few weeks before the conference to set up a time to meet for coffee or a meal while you're at the event.


DockerCon is all about learning new things and connecting with the right people. One such innovative platform, introduced during DockerCon 2017, is the Hallway Track. The Docker Hallway Track helps you find like-minded people to meet one-on-one and share knowledge in a structured way, so you get tangible results from networking. It consists of one-on-one or group conversations based on topics of interest that you schedule with other attendees during DockerCon. Its recommendation algorithm curates an individualized selection of Hallway Track topics for each participant, based on their behavior and interests. Don't miss this chance to meet and share knowledge with community members and practitioners at the conference.


DockerCon is a great place to meet Docker Captains. Docker Captains are technology experts and leaders in their communities who are passionate about sharing their Docker knowledge with others. Don't miss the chance to say “Hi” to them and learn about their contributions to the community.

#4 – Don't Miss Out on the Evenings

Conferences don't stop after the final session of the day. There will be many opportunities to network, along with dinners and other events. Be sure to attend as many of these as possible and use them to meet new people. They are perfect networking opportunities and can help you relax after a long conference day.

#5 – Don’t Forget to Follow-Up

The insights you gained at the conference are likely to be useful for your team, so make sure to set aside time to pass on what you learned. Whether it's leading an in-person session or writing an email or post to document the most valuable information, proactively sharing information will help your colleagues do better work while establishing you as a leader on your team. There's no better place than a conference to take stock of the state of your industry and your profession. Make the most of your time, and have fun!


Under the Hood: Demystifying Docker For Mac CE Edition

Estimated Reading Time: 6 minutes

 

Docker is a full development platform for creating containerized apps, and Docker for Mac is the most efficient way to start and run Docker on your MacBook. It runs in a LinuxKit VM, NOT on VirtualBox or VMware Fusion. It embeds a hypervisor (based on xhyve), a Linux distribution built with LinuxKit, and filesystem & network sharing that is much more Mac-native. It is a native Mac application that you install in /Applications. At installation time, it creates symlinks in /usr/local/bin for docker, docker-compose and others, pointing to the commands in the application bundle under /Applications/Docker.app/Contents/Resources/bin.

One of the most amazing things about Docker for Mac is that you simply drag & drop the Mac application into /Applications, run the Docker CLI, and it just works flawlessly. The way the filesystem sharing maps macOS volumes seamlessly into Linux containers, remapping macOS UIDs into Linux ones, is one of its most appreciated features.

A Few Notable Features of Docker for Mac:

  • Docker for Mac runs in a LinuxKit VM.
  • Docker for Mac uses HyperKit instead of VirtualBox. HyperKit is a lightweight macOS virtualization solution built on top of the Hypervisor.framework in macOS 10.10 Yosemite and higher.
  • Docker for Mac does not use docker-machine to provision its VM. The Docker Engine API is exposed on a socket available to the Mac host at /var/run/docker.sock. This is the default location Docker and Docker Compose clients use to connect to the Docker daemon, so you can use the docker and docker-compose CLI commands on your Mac (see the example after this list).
  • When you install Docker for Mac, machines created with Docker Machine are not affected.
  • There is no docker0 bridge on macOS. Because of the way networking is implemented in Docker for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
  • Docker for Mac now has multi-architecture support. It provides binfmt_misc multi-architecture support, so you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.
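Since the Engine API is exposed on /var/run/docker.sock (third bullet above), a quick way to see it in action is to query the socket directly. A minimal sketch, assuming curl 7.40+ (which added --unix-socket support) is available on your Mac:

# Query the Docker Engine API over the Unix socket.
# The /version endpoint returns the engine version details as JSON.
curl --unix-socket /var/run/docker.sock http://localhost/version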

Top 5 Exclusive Features of Docker For Mac That You Can’t Afford to Ignore

In this blog post, I will deep dive into the Docker for Mac architecture and show how to access the service containers running on top of the LinuxKit VM.

At the base of the architecture, we have a hypervisor called HyperKit, which is derived from xhyve. The xhyve hypervisor is a port of bhyve to OS X. It is built on top of Hypervisor.framework in OS X 10.10 Yosemite and higher, runs entirely in userspace, and has no other dependencies. HyperKit is basically a toolkit for embedding hypervisor capabilities in your application. It includes a complete hypervisor optimized for lightweight virtual machines and container deployment, and it is designed to be interfaced with higher-level components such as VPNKit and DataKit.

Sitting next to HyperKit is the filesystem sharing solution. osxfs is a new shared file system solution, exclusive to Docker for Mac. It provides a close-to-native user experience for bind-mounting macOS file system trees into Docker containers. To this end, osxfs features a number of unique capabilities as well as differences from a classical Linux file system. On macOS Sierra and lower, the default file system is HFS+; on macOS High Sierra, it is APFS. With the recent release, NFS volume sharing has been enabled for both Swarm & Kubernetes.
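To see osxfs in action, here is a minimal sketch that bind-mounts a macOS directory into a Linux container ($HOME/mydata is just an example path; any folder covered by the File Sharing preferences will do):

# Create a folder on the Mac and list it from inside a Linux container;
# osxfs maps the macOS directory tree into the container transparently.
mkdir -p $HOME/mydata
docker run --rm -v $HOME/mydata:/data alpine ls -l /data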

There is one more important component sitting next to HyperKit, aptly called VPNKit. VPNKit, a part of HyperKit, attempts to work nicely with VPN software by intercepting the VM traffic at the Ethernet level, parsing and understanding protocols like NTP, DNS, UDP and TCP, and doing the “right thing” with respect to the host's VPN configuration. VPNKit operates by reconstructing Ethernet traffic from the VM and translating it into the relevant socket API calls on macOS. This allows the host application to generate traffic without requiring low-level Ethernet bridging support.

On top of these open source components, we have the LinuxKit VM, which runs containerd and the service containers, including the Docker Engine that runs your containers. The LinuxKit VM is built from a YAML file: docker-for-mac.yml contains an example use of the open source components of Docker for Mac. The example has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit, and it requires HyperKit, VPNKit and a Docker client on the host to run.

Sitting next to the Docker CE service container, we have the kubelet binaries running inside the LinuxKit VM. If you are new to K8s, the kubelet is an agent that runs on each node in the cluster and makes sure that containers are running in a pod. It takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes. On top of the kubelet, we have the Kubernetes services running. We can run a Swarm cluster or a Kubernetes cluster side by side, and we can use the same Compose YAML file to bring up both.

Peeping into LinuxKit VM

Curious about the VM and what Docker for Mac CE Edition actually looks like under the hood?

Below is the list of commands you can leverage to get into the LinuxKit VM and see the Kubernetes services up and running. Here you go…

How to enter the LinuxKit VM?

Open the macOS terminal and run the command below to enter the LinuxKit VM (to detach from the screen session later without stopping the VM, press Ctrl-A followed by d):

$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty

Listing out the service containers:

Earlier, `ctr tasks ls` used to list the service containers running inside the LinuxKit VM, but a recent release introduced the namespace concept, so you now need to run the command below to list the service containers:

$ ctr -n services.linuxkit tasks ls
TASK                    PID     STATUS
acpid                   854     RUNNING
diagnose                898     RUNNING
docker-ce               936     RUNNING
host-timesync-daemon    984     RUNNING
ntpd                    1025    RUNNING
trim-after-delete       1106    RUNNING
vpnkit-forwarder        1157    RUNNING
vsudd                   1198    RUNNING

How to display the containerd version?

Under Docker for Mac 18.05 RC1, containerd version 1.0.1 is available, as shown below:

linuxkit-025000000001:~# ctr version
Client:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55

Server:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55
linuxkit-025000000001:~#

How do I enter the docker-ce service container using containerd?

Note that the --exec-id value below just needs to be a unique identifier for the exec session; here the task PID (936) from the earlier listing is reused.

ctr -n services.linuxkit tasks exec -t --exec-id 936 docker-ce sh
/ # docker version
Client:
 Version:      18.05.0-ce-rc1
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   33f00ce
 Built:        Thu Apr 26 00:58:14 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce-rc1
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   33f00ce
  Built:        Thu Apr 26 01:06:49 2018
  OS/Arch:      linux/amd64
  Experimental: true
/ #

How to verify the Kubernetes Single-Node Cluster?

/ # kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-23T09:38:59Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
/ # kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    26d       v1.9.6
/ #

 

Interested to read further? Check out my curated list of blog posts –

Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes

 

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter @ajeetsraina.

If you are looking to contribute or discuss further, join me on the Docker Community Slack channel.


Top 5 Exclusive Features of Docker For Mac That You Can’t Afford to Ignore

Estimated Reading Time: 7 minutes

 

The Docker for Mac 18.04.0 CE Edge release went GA early last month. This was the first time Kubernetes 1.9.6 & Docker Compose 1.21.0 were introduced under any Docker Desktop edition. ICYMI – the Docker for Mac VM is entirely built with LinuxKit, and this was the first release to enable the RBD and CephFS kernel modules in the LinuxKit VM. In case you're new to RBD, the Linux kernel RBD (rados block device) driver allows striping a Linux block device over multiple distributed object store data objects; usually the libceph module takes care of that. This release also brought a number of fixes around upgrades from Docker for Mac 17.12, synchronisation between CLI `docker login` & GUI login, support for AUFS, and much more.

In this blog post, I will talk about the top 5 exclusive and very useful features of Docker for Mac that you can't afford to miss.

#1: Docker for Mac supports Docker Swarm, Swarm Mode & Kubernetes

Starting with the Docker for Mac 17.12 CE Edge release, Docker Inc. introduced a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.

One of the most anticipated features introduced with this release was the Kubernetes server running within a Docker container on your local system. When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.

You can use Docker for Mac to test single-node features of swarm mode introduced with Docker Engine 17.12, including initializing a swarm with a single node, creating services, and scaling services. Docker “Moby” on HyperKit serves as the single swarm node. You can also use Docker Machine, which comes with Docker for Mac, to create and experiment with a multi-node swarm.

While testing Kubernetes, you may want to deploy some workloads in swarm mode. You can use the DOCKER_ORCHESTRATOR variable to override the default orchestrator for a given terminal session or a single Docker command. This variable can be unset (the default, in which case Kubernetes is the orchestrator) or set to swarm or kubernetes.
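For example, here is a minimal sketch of overriding the orchestrator per command or per session (the stack name mystack and the compose file name are placeholders):

# Deploy one stack to Kubernetes for this command only
DOCKER_ORCHESTRATOR=kubernetes docker stack deploy -c docker-compose.yml mystack

# Or pin the orchestrator to Swarm for the rest of the terminal session
export DOCKER_ORCHESTRATOR=swarm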

 

Check out my blog post:

2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI

 

#2:  You can use the same Docker Compose file to build a Swarm & a Kubernetes cluster

Yes, you read that correctly. Starting with Docker for Mac 17.12, Docker introduced a new object type called “Stack” under the compose.docker.com API group. This object, which you can create with kubectl or, more easily, with docker stack deploy, contains the compose file. Behind the scenes, a controller watches for stacks and creates/updates the corresponding Kubernetes objects (deployments, services, etc.). The job of the controller is to reconcile the stacks (stored in the api-server or CRD) with the K8s native objects.

docker stack deploy manages the deployment to K8s. It converts docker-compose files to K8s manifests (much like kompose does) before deployment. Let me showcase an example of how one can use the same YAML file to build a Swarm Mode as well as a K8s cluster.

Clone the Repository

$ git clone https://github.com/ajeetraina/docker101

Change to the right location

$ cd docker101/play-with-kubernetes/examples/stack-deploy-on-mac/

Example 1 – Demonstrating a Simple Web Application

Building the Web Application Stack

$ docker stack deploy -c docker-stack1.yml myapp1

Verifying the Stack

$ docker stack ls

Verifying using Kubectl

$ kubectl get pods

Verifying if the web application is accessible

$ curl localhost:8083

Cleaning up the Stack

$ docker stack rm myapp1

Example 2 – Demonstrating a ReplicaSet

Building the Web Application Stack

$ docker stack deploy -c docker-stack2.yml myapp2

Verifying the Stack

$ docker stack ls

Verifying using Kubectl

$ kubectl get pods
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get stacks
NAME      AGE
myapp2    22m
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
db1-d977d5f48-l6v9d     1/1       Running   0          22m
db1-d977d5f48-mpd25     1/1       Running   0          22m
web1-6886bb478f-s7mvz   1/1       Running   0          22m
web1-6886bb478f-wh824   1/1       Running   0          22m
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get stacks myapp2 -o yaml
apiVersion: compose.docker.com/v1beta2
kind: Stack
metadata:
  creationTimestamp: 2018-01-28T02:55:28Z
  name: myapp2
  namespace: default
  resourceVersion: "3186"
  selfLink: /apis/compose.docker.com/v1beta2/namespaces/default/stacks/myapp2
  uid: b25bf776-03d6-11e8-8d4c-025000000001
spec:
  stack:
    Configs: {}
    Networks: {}
    Secrets: {}
    Services:
    ..
      WorkingDir: ""
    Volumes: {}
status:
  message: Stack is started
  phase: Available

Verifying if the web application is accessible

$ curl localhost:8083

Cleaning up the Stack

$ docker stack rm myapp2

An Interesting Read:

5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0

#3:  Docker for Mac provides Multi-Architecture Support

Docker for Mac provides binfmt_misc multi-architecture support. This means that you can now run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.
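A quick way to verify this is to run an ARM image on your x86 Mac. A minimal sketch (arm32v7/alpine is the official ARM variant of the Alpine image):

# binfmt_misc plus QEMU inside the LinuxKit VM emulate the foreign architecture
docker run --rm arm32v7/alpine uname -m
# should print: armv7l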

#4:  Support for NFS Volume sharing under Swarm as well as Kubernetes

With the Docker for Mac 18.03 release, NFS volume sharing support for both Swarm & Kubernetes was introduced. To try this feature, follow the steps below:

Pre-Requisite:

  • Install Docker for Mac 18.03 or a later version
  • Enable Kubernetes under the Preferences pane UI

Cloning the Repository

git clone https://github.com/ajeetraina/docker101/
cd docker101/for-mac/nfs

Execute the below script on your macOS system

sh env_vars.sh
sh setup_native_nfs_docker_osx.sh

 +-----------------------------+
 | Setup native NFS for Docker |
 +-----------------------------+

WARNING: This script will shut down running containers.

-n Do you wish to proceed? [y]:
y

== Stopping running docker containers...
== Resetting folder permissions...
Password:
== Setting up nfs...
== Restarting nfsd...
The nfsd service does not appear to be running.
Starting the nfsd service
== Restarting docker...

SUCCESS! Now go run your containers 🐳

Bringing up Your Application

docker stack deploy -c docker-compose.yml myapp2
 docker stack ls
NAME                SERVICES
myapp2                1
[Captains-Bay]🚩 >  kubectl get po
NAME      READY     STATUS    RESTARTS   AGE
web-0     1/1       Running   0          3m
[Captains-Bay]🚩 >  kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP     1d
web          ClusterIP   None         <none>        55555/TCP   3m
[Captains-Bay]🚩 >  kubectl describe po web-0
Name:           web-0
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Wed, 11 Apr 2018 23:00:18 +0530
Labels:         com.docker.service.id=up2u-web
                com.docker.service.name=web
                com.docker.stack.namespace=up2u
                controller-revision-hash=web-7dbbf8689d
                statefulset.kubernetes.io/pod-name=web-0
Annotations:    <none>
Status:         Running
IP:             10.1.0.34
Controlled By:  StatefulSet/web
Containers:
  web:
    Container ID:  docker://ec9ad2a3192bdeb0cc5028453310f40fd0ac3595021b070465c4e7725f626d63
    Image:         alpine:3.6
    Image ID:      docker-pullable://alpine@sha256:3d44fa76c2c83ed9296e4508b436ff583397cac0f4bad85c2b4ecc193ddb5106
    Port:          <none>
    Args:
      ping
      127.0.0.1
    State:          Running
      Started:      Wed, 11 Apr 2018 23:00:19 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      #{CONTAINER_DIR} from nfsmount (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n8trf (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  nfsmount:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfsmount-web-0
    ReadOnly:   false
  default-token-n8trf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n8trf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  Scheduled              5m    default-scheduler            Successfully assigned web-0 to docker-for-desktop
  Normal  SuccessfulMountVolume  5m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pvc-bbdc7903-3dad-11e8-a612-025000000001"
  Normal  SuccessfulMountVolume  5m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-n8trf"
  Normal  Pulled                 5m    kubelet, docker-for-desktop  Container image "alpine:3.6" already present on machine
  Normal  Created                5m    kubelet, docker-for-desktop  Created container
  Normal  Started                5m    kubelet, docker-for-desktop  Started container

 

 

#5:  Docker for Mac supports context switching from docker-for-desktop to Cloud instances in a matter of a click

Starting with the Docker for Mac 18.02 RC release, a context switching feature was introduced which helps developers and operators switch from docker-for-desktop to any Cloud environment in just a matter of a “toggle”.

 

I published a detailed blog post early this year which demonstrates this feature with crystal-clear examples. Check it out.

Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0

Other attractive features of Docker for Mac 18.04 include –

  • Docker for Mac VM is entirely built with LinuxKit

How to enter the LinuxKit VM?

$ screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
/ # cat /etc/issue

Welcome to LinuxKit

                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
          {                       /  ===-
           \______ O           __/
             \    \         __/
              \____\_______/
/ # cat /etc/os-release
PRETTY_NAME="Docker for Mac"
/ #
linuxkit-025000000001:~# cat /etc/os-release
PRETTY_NAME="Docker for Mac"
linuxkit-025000000001:~# runc list
ID                 PID         STATUS      BUNDLE                                CREATED                          OWNER
000-metadata       0           stopped     /containers/onboot/000-metadata       2018-05-05T06:27:44.345735031Z   root
001-sysfs          0           stopped     /containers/onboot/001-sysfs          2018-05-05T06:27:44.768313965Z   root
002-binfmt         0           stopped     /containers/onboot/002-binfmt         2018-05-05T06:27:45.630283593Z   root
003-format         0           stopped     /containers/onboot/003-format         2018-05-05T06:27:46.341011253Z   root
004-extend         0           stopped     /containers/onboot/004-extend         2018-05-05T06:27:47.08889973Z    root
005-mount          0           stopped     /containers/onboot/005-mount          2018-05-05T06:27:55.334088074Z   root
006-swap           0           stopped     /containers/onboot/006-swap           2018-05-05T06:27:56.486815308Z   root
007-ip             0           stopped     /containers/onboot/007-ip             2018-05-05T06:28:03.894591249Z   root
008-move-logs      0           stopped     /containers/onboot/008-move-logs      2018-05-05T06:28:05.980232896Z   root
009-sysctl         0           stopped     /containers/onboot/009-sysctl         2018-05-05T06:28:06.15775421Z    root
010-mount-vpnkit   0           stopped     /containers/onboot/010-mount-vpnkit   2018-05-05T06:28:06.356833391Z   root
011-bridge         0           stopped     /containers/onboot/011-bridge         2018-05-05T06:28:06.551619273Z   root
linuxkit-025000000001:~# ctr tasks ls
  • Docker for Mac uses raw format VM disks for systems running APFS on SSD on High Sierra by default
  • The DNS name docker.for.mac.host.internal should now be used instead of docker.for.mac.localhost (still valid) for host resolution from containers, as shown in the quick check below.
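A quick sanity check for the host DNS name mentioned in the last bullet, assuming any recent Docker for Mac release:

# Reach the macOS host from inside a container via the special DNS name
docker run --rm alpine ping -c 2 docker.for.mac.host.internal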

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter @ajeetsraina.

If you are looking to contribute or discuss further, join me on the Docker Community Slack channel.

 


Demystifying Serverless & OpenFaas

Estimated Reading Time: 9 minutes

Is a function the new container? Why so much buzz around serverless computing platforms like OpenFaas?

 

One of the biggest tech trends of 2018 is “shipping code and moving quickly”. It has been a year for developers not to think about infrastructure, not to worry about scaling, load balancing and so on. What developers need is to ship code, run code and move quickly, without any infrastructure concerns.

If you look back 20 years, the first wave of computing started with x86 servers inside the datacenter. That first wave democratized computing by making it more and more accessible, and that's when the real IT revolution started: small and medium businesses (SMBs) could afford to run servers in their own environments. The second wave of computing started with virtualization, and the credit goes to VMware, who introduced a very affordable hypervisor and made it possible to run multiple VMs on one server. The third & more recent wave is containerization, which brought a new mechanism for packaging and deploying applications, paving the way for emerging patterns like microservices.

The most recent & very exciting wave of computing is serverless computing, a paradigm where we never have to deal with infrastructure. Though the name “serverless” is used, it is something of a misnomer, as serverless computing still requires servers. The name is used because the server management and capacity planning decisions are completely hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all.

The whole idea of serverless is that even though we still have servers, we don't really need to care about them anymore. Serverless means that you are not going to have a server that you manage. You don't have anything running; your code just lies around there. When somebody calls that particular code, it gets triggered and executed. You write functions, and those functions are executed at the instant something comes in and triggers them.

What is Function-as-a-Service(FaaS)?

Under serverless, we don't take an entire application and deploy it the way you do under PaaS; instead we take tiny code snippets, written as stateless functions, and deploy them into this environment. So instead of packaging a large application, you break the application down into one function at a time. You are expected to write one function, test it, and then upload it to the serverless computing platform. After you have uploaded a number of independent, isolated, autonomous functions, you decide how you are going to connect them all to compose an application. Sounds like microservices? Well, that's what it is. It could rightly be called “nano-services”, which is even more stateless than microservices.

Is a Function a Special Type of Container?

Functions are small bits of code that do one thing well & are easy to understand and maintain. These functions are deployed as a single unit of code into your FaaS platform, and the platform then deals with provisioning your infrastructure, scaling, reliability, billing and security, in any language you care about.

A function can rightly be called a special type of container with properties like –

  • Short Running
  • Ephemeral
  • Stateless
  • Invoked
  • Single Purpose
  • Self-Contained

Introducing OpenFaas

OpenFaas is a framework for building serverless applications. Functions in OpenFaas are actually Docker containers. It is a mature open source serverless project that can run any CLI-driven binary program embedded in a Docker container, and it has gained large-scale adoption within the community. OpenFaaS is a framework for building serverless functions with Docker and Kubernetes which has first-class support for metrics.

OpenFaaS is an independent project created by Alex Ellis which is now being built and shaped by a growing community of contributors.

What does OpenFaas provide?

OpenFaaS provides :

  • Easy install (1 minute!) – ease of use through a UI portal and one-click install
  • Multiple language support – Write functions in any language for Linux or Windows and package in Docker/OCI image format
  • Transparent autoscaling – Auto-scales as demand increases
  • No infrastructure headaches/concerns
  • Support for Docker Swarm and Kubernetes
  • Console UI to enable deployment, invocation of functions
  • Integrated with Prometheus Alerts
  • Async support
  • Marketplace repository of serverless functions
  • RESTful API
  • CLI tools to build & deploy functions to the cluster.


OpenFaas Stack – Under the Hood

The OpenFaas stack consists of the following components:

  • API Gateway
  • Function Watchdog
  • Prometheus

The API gateway and the Prometheus instance run as services, while the function watchdog runs inside each function container. The picture below depicts the API gateway and Prometheus stack under the docker-compose.yml leveraged by the OpenFaas deploy_stack.sh script.


Let us dive into each components separately:

API Gateway:

The API Gateway is where you define all of your functions. It is a RESTful microservice that provides an external route into your functions and collects Cloud Native metrics through Prometheus. The API Gateway scales functions according to demand by altering the service replica count via the Docker Swarm or Kubernetes API. A UI is baked in, allowing you to invoke functions in your browser and create new ones as needed.
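Because the gateway is a RESTful microservice, you can also talk to it with plain HTTP. A minimal sketch, assuming the gateway is published on port 8080 without authentication (as in the demo later in this post):

# List the deployed functions - the same data the UI and faas-cli show
curl http://localhost:8080/system/functions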

 

                      Figure:2 – Swarm overlay connecting API gateway to Function containers

 

Function Watchdog (fwatchdog)

The Function Watchdog is embedded in every function container and is what allows every container to become serverless. It is a tiny Golang web server that provides an unmanaged and generic interface between the outside world and your function. Its job is to marshal an HTTP request accepted on the API Gateway and to invoke your chosen application.

Each function container consists of the function watchdog, fwatchdog, plus your function program written in any language (Python/Go/C#). The Function Watchdog is the entrypoint, allowing HTTP requests to be forwarded to the target process via STDIN. The response is sent back to the caller by writing to STDOUT from your application.

It is important to note that the Dockerfile describing a function container must have the fprocess environment variable pointing to the function's program name and arguments.

Reference: https://github.com/openfaas/faas/tree/master/watchdog

Prometheus:

Prometheus is an open-source systems monitoring and alerting toolkit used in this stack. It underpins the complete stack, collecting statistics to build dashboards and to interpolate certain metrics. Whenever certain functions see high traffic, the stack will scale them out for you automatically using the Docker Swarm or K8s API.

Orchestration Engines(Swarm & Kubernetes) & Docker Platform

All the above components run on top of the Docker Swarm or Kubernetes orchestration engine. The container runtime can be any modern version of Docker or containerd.

Want to test drive OpenFaas?

In this demo, I will show in detail how to get started with OpenFaas. We will leverage the PWD (play-with-docker) platform, the fastest way to set up a 5-node Swarm cluster. In case you are completely new to play-with-docker, follow the step-by-step instructions below –

  • Open https://labs.play-with-docker.com/
  • Click on login & start
  • Click on the wrench (tool) icon near the settings on the left side of the PWD interface

  • Choose 3 managers and 2 workers and allow it to bring up the 5-node cluster

Setting up Visualizer tool:

git clone https://github.com/ajeetraina/openfaas
cd openfaas/visualizer/
docker-compose up -d
$ docker ps
CONTAINER ID        IMAGE                             COMMAND             CREATED             STATUS              PORTS                    NAMES
05b89b6b8aa9        dockersamples/visualizer:stable   "npm start"         56 seconds ago      Up 55 seconds       0.0.0.0:8085->8080/tcp   visualizer_visualizer_1

This will set up the visualizer tool on port 8085:

As of now, there is no service up and running on the Swarm cluster, hence it will show up blank.

 

Cloning the OpenFaas Repository

git clone https://github.com/openfaas/faas
cd faas
./deploy_stack.sh

This single script will bring up the complete OpenFaas stack.

Looks cool?

Click on port 8080, which appears at the top of the PWD screen; it will redirect you to the OpenFaas UI page shown below:

 

As directed by the UI page, let us head over to the CLI and get faas-cli installed:

curl -sSL https://cli.openfaas.com | sudo sh
$ faas-cli list
Function                        Invocations     Replicas
func_wordcount                  0               1
func_hubstats                   0               1
func_base64                     0               1
func_echoit                     0               1
func_markdown                   0               1
tcpdump                         8               1
func_nodeinfo                   1               1
[manager1] (local) root@192.168.0.151 ~
$

OpenFaas comes with a few functions already baked in by default, as shown below:

All you need to do is choose a function (like func_nodeinfo) and click “Invoke” on the right-hand side; it will display the node information. The same can be done from the CLI, as sketched below.
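The CLI equivalent of clicking “Invoke”, a minimal sketch assuming the gateway is on localhost:8080:

# Invoke a function from the CLI (the request body is read from stdin)
echo "" | faas-cli invoke func_nodeinfo

# Or hit the function route on the gateway directly
curl http://localhost:8080/function/func_nodeinfo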

Building Your First Serverless Function

Before we end this blog, let us try building a serverless function called “retweet-bot”. This function retweets all tweets containing your search term. Let's make this happen –

Clone this repository

In case you missed it earlier, clone the repository –

git clone https://github.com/ajeetraina/openfaas

Writing Retweet Function

mkdir -p ~/retweet && \
  cd ~/retweet

Scaffold the “retweet” Python function using the CLI:

root@ubuntu18:~/retweet# faas-cli new --lang python retweet
Folder: retweet created.
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|


Function created in folder: retweet
Stack file written: retweet.yml
root@ubuntu18:~/retweet#

This creates 3 major files: 2 of them under the retweet directory and 1 YAML stack file, as shown:

retweet# tree
.
├── retweet.py
└── requirements.txt

0 directories, 2 files

Replace handler.py & requirements.txt with the corresponding files from the cloned repository, and add the config file from the repository in the same location.

Displaying the contents of retweet.yml

root@ubuntu18:~/retweet# cat retweet.yml
provider:
  name: faas
  gateway: http://127.0.0.1:8080

functions:
  retweet:
    lang: python
    handler: ./retweet
    image: retweet

Building the Function

cd ..
faas-cli build -f ./retweet.yml

Verifying the Image


docker images | grep retweet
ajeetraina/retweet          latest              027557a5185d        About a minute ago   83MB
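One hedged note: if your gateway runs on a remote cluster (such as the PWD swarm above) rather than on the machine where you built the image, push the image to a registry your nodes can pull from before deploying; for a purely local setup this step can be skipped.

faas-cli push -f ./retweet.yml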

Deploying the Retweet Function

faas-cli deploy -f ./retweet.yml
Deploying: retweet.

Deployed. 200 OK.
URL: http://127.0.0.1:8080/function/retweet

Now open up localhost:8080/ui and watch out for the brand new retweet function. Click on Invoke, and there you will find the retweet bot active, displaying your hashtags.


Hurray! We have built our first retweet serverless function in just 2 minutes.

If you want to learn more about OpenFaas, head over to https://docs.openfaas.com.


Introducing OpenUSM – Simplifying Server Management & Insight Log Analytics using Docker containers

Estimated Reading Time: 8 minutes

It's 2018! Let Containers Manage Your Datacenter…

Containers are changing the dynamics of the modern data center. It is a growing technology that is drawing widespread attention across enterprise IT. One of the primary reasons for the rise & adoption of containers is that they allow developers to move faster. Compared to VMs, which take minutes to stand up, containers take milliseconds or even microseconds. As organizations prioritize shipping new products and features faster to keep up in the software-eaten world, developers are favoring technology that allows them to scale applications and deploy resources much faster than what traditional VMs on public and private clouds can support.

Docker containers bring a variety of cost and performance benefits. They bring the ability to run multiple applications on the same server or OS without a hypervisor, which eliminates the drag of the hypervisor on system resources, so your workloads have a lighter footprint – the container footprint is effectively zero, because a container is simply a boundary of permissions and resources within Linux. Containers fire up and decommission very rapidly compared to virtual machines – a perfect fit for the ephemeral nature of today's short-lived workloads, which are often tied to real-world events.

Docker is an open platform which makes applications and workloads more portable and distributed in an effective and standardized way. Combining Docker containers and microservices with software-defined infrastructure makes the datacenter more agile and enables quick resource reallocation. This architecture therefore works well to improve datacenter operations.

Most DellEMC Server Management software offerings, as well as the entire Software Defined Infrastructure, are built upon a standard RESTful architecture called Redfish. Redfish is a next-generation systems management standard using a data model representation inside a hypermedia RESTful interface. The data model is defined in terms of a standard, machine-readable schema, with the payload of the messages expressed in JSON and the protocol using OData v4. Since it is a hypermedia API, Redfish is capable of representing a variety of implementations through a consistent interface. It has mechanisms for discovering and managing data center resources, handling events, and managing long-lived tasks. It is easy to implement, easy to consume, and offers scalability advantages over older technologies. In short, Redfish is a RESTful interface over HTTPS, in JSON format, based on OData v4, usable by clients, scripts and browser-based GUIs.

 

What is OpenUSM?


OpenUSM stands for Open Universal Systems Manager. It is basically a multi-tool product, like a Swiss army knife: a suite of open source tools & scripts which purely uses containers & related tooling to perform server management tasks, monitoring & insight log analytics. It is a 100% container-based solution which heavily uses Docker to build microservices for system management tasks like BIOS token changes and firmware updates. It is a completely out-of-band system management solution, purely based on the Redfish API interface. It is platform-agnostic (it can be run from a laptop, server or cloud) and works on any Linux or Windows platform with Docker Engine running on top of it.

OpenUSM is a project created 3 months back by Ajeet Singh Raina & Avinash Bendigeri of DellEMC.

 

Value Proposition:

OpenUSM follows an easy deployment model. It uses developer tools like Docker Compose & the CLI to bring up microservices which ensure that system management tasks can be achieved flawlessly. It uses modern tools & technologies and integrates well with near-real-time search analytics tools like the ELK stack. It enables sensor log analytics & visualization for operations teams using the popular Grafana tool. It can scale both vertically & horizontally. As it is completely open source, you are free to build and customize it based on your needs, and it offers plug-and-play components and functionality.

Technology Overview of OpenUSM:

OpenUSM is an integrated container solution which brings together 3 basic functionalities – system management, monitoring and insight log analytics. The system management stack sits in the middle of the architecture and uses Python, Redfish, Django, Docker & Docker Compose to enable system management tasks. On the left-hand side of the architecture, we have the monitoring stack, which uses Prometheus, Grafana, Alertmanager and Pushgateway to retrieve GPU and sensor metrics. On the right-hand side, we have the open source versions of Elasticsearch, Logstash & Kibana for sensor, Lifecycle Controller & SEL logs.

Understanding OpenUSM System Management WorkFlow

OpenUSM uses a “Container-Per-Server (CPS)” model: for each server management task, there are scripts which, when executed, build and run a Docker container against each server platform. It purely uses the Redfish API to communicate directly with Dell iDRAC, collects iDRAC/LC logs, and pushes them to a syslog server. Logstash collects the logs from the syslog server and pushes them to Elasticsearch, where they finally get visualized through Kibana. OpenUSM uses the Prometheus stack for monitoring system components, with GPU/CPU monitoring via nvidia-docker & node exporter.

 

In this blog post, I am going to demonstrate how OpenUSM simplifies overall system management tasks with the help of Docker containers & Redfish. For this demonstration, I will leverage an Ubuntu 17.10 VM running on my ESXi 6.0 system. The code should work on any Linux or Windows platform too.

The source code for this project is public and available at https://github.com/openusm/openusm

 

Cloning the Repository

git clone https://github.com/openusm/openusm

Bootstrapping Docker

If you already have Docker installed on your system, you can skip to the next step. If not, run the command below to install Docker & Docker Compose on your system.

sh bootstrap.sh install_docker

Depending on your network connectivity, this step will take 1-2 minutes to complete.

Bootstrapping ELK

OpenUSM is a 100% containerized solution, hence we will run the ELK stack inside Docker containers. To keep it simple, we designed a docker-compose file which can get you started in a matter of seconds.

Execute the same bootstrap file to bring up ELK stack as shown below:

sh bootstrap.sh provision_elk

Just wait 30-40 seconds for the ELK stack to come up.

Verifying the ELK services

Run the command below to check whether the ELK services are up and running:

sh bootstrap.sh logs
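As an additional sanity check, you can query Elasticsearch directly. This assumes the compose file publishes Elasticsearch on its default port 9200; adjust if your setup differs:

# List the indices Logstash has created so far
curl "http://localhost:9200/_cat/indices?v"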

Pushing DellEMC iDRAC Logs to ELK Stack

In this section, I am going to demonstrate how OpenUSM makes it easy to push logs from DellEMC system management tasks to a centralized ELK stack and visualize them via Kibana. Let us pick a simple “BIOS token change” task and apply it to a multitude of DellEMC servers.

To keep it simple, we designed a script named “bios-token.py”, placed at the root of the OpenUSM Git repository. Let us first look at the various parameters which can be supplied to the bios-token.py script –

python bios-token.py --help
usage: bios-token.py [-h] [--verbose] [-i IDRAC] [-n NFS] [-s SHARE]
                     [-c CONFIG] [-f IPS]

Welcome to Universal Systems Manager Bios Token Change

optional arguments:
  -h, --help            show this help message and exit
  --verbose             Turn on verbose logging
  -i IDRAC, --idrac IDRAC
                        iDRAC IP of the Host
  -n NFS, --nfs NFS     NFS server IP address
  -s SHARE, --share SHARE
                        NFS Share folder
  -c CONFIG, --config CONFIG
                        XML File to be imported
  -f IPS, --ips IPS     IP files to be updated

As shown above, the script can target a single Dell server as well as a multitude of DellEMC servers supplied via a plain text file. We are currently looking at an autodiscovery feature to automate this further. One needs to provide the NFS server IP, share name and BIOS token configuration file as arguments to execute it successfully. Once this script is invoked, it creates one Docker container per DellEMC server, collects iDRAC logs from each server and pushes them to a syslog server which runs inside a Docker container. Logstash collects the logs from syslog and dumps them into Elasticsearch, to be visualized in the Kibana UI.

python bios-token.py -f ips.txt -s /var/nfsshare -c biosconfig.xml -n <NFS server-IP>

where,

/var/nfsshare => NFS share

ips.txt => list of DellEMC iDRAC IP

biosconfig.xml => XML defining the BIOS token entries
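For illustration, here is how a hypothetical ips.txt could be created – one iDRAC IP per line (the addresses below are placeholders):

# Create a sample IP list file; replace with your real iDRAC IPs
cat > ips.txt <<'EOF'
192.168.10.11
192.168.10.12
192.168.10.13
EOF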

By now, you should be able to see the iDRAC logs visualized in the Kibana UI. A fair amount of customization can be done in Kibana to display the logs on a per-server basis.

 

Insight Log Analytics (LC, SEL & Sensor Logs)

Unpacking OpenUSM's secrets further, this marks an interesting use case and a robust capability around insight log analytics. DellEMC servers generate a variety of logs, such as system event logs (SEL), RAID controller logs & Lifecycle Controller (LC) logs. When a system event occurs on a managed system, it is recorded in the System Event Log (SEL). The SEL page displays a system health indicator, a time stamp, and a description for each event logged. The same SEL entry is also available in the Lifecycle Controller (LC) log.

Consider a use case where a datacenter administrator wants to collect LC logs for the last year: it definitely requires a robust and modern toolchain to collect such a huge amount of data and perform analysis on top of it.

We recently designed a script which enables such log analytics using Docker, Redfish & ELK. You can find the “lclogexporter.py” script at the root of the GitHub repository:

python lclogexporter.py -f ips.txt -ei <ip-address of ELK> -eu elastic -ep changeme

 

This script requires the Elasticsearch IP address, credentials & a list of iDRAC IPs to get all iDRAC LC logs pushed to the ELK stack. Whenever you execute this script, it first builds a Docker image called “openusm-analytics” and then runs the container, which automatically pushes all LC, SEL and sensor logs to the ELK stack.

Below is the Kibana UI visualizing a pie chart for Dell Lifecycle Controller logs collected over the last one-year timeframe.

 

Insight Log Metrics (Sensor Logs) using Grafana

 

Did you find OpenUSM interesting? OpenUSM is just a 3-month-old project and we are looking for contributors across the globe. If you think the project looks cool, come & join us to make it more robust. We welcome your participation…


Under the Hood: Demystifying Docker Enterprise Edition 2.0 Architecture

Estimated Reading Time: 8 minutes

The Only Kubernetes Solution for Multi-Linux, Multi-OS and Multi-Cloud Deployments is right here…

The Docker Enterprise Edition (EE) 2.0 final GA release is available. One of the most promising features announced with this release is Kubernetes integration as an optional orchestration solution, running side-by-side with Docker Swarm. Beyond that, this release includes Swarm Layer 7 routing improvements, registry image mirroring, Kubernetes integration with Docker Trusted Registry & Kubernetes integration with Docker EE access controls. With this new release, organizations will be able to deploy applications with either Swarm or fully-conformant Kubernetes while maintaining a consistent developer-to-IT workflow.

Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested and certified platform for apps running on enterprise Linux or Windows operating systems and cloud providers. It is tightly integrated with the underlying infrastructure to provide a native, easy-to-install experience and an optimized Docker environment.

Docker EE 2.0 GA consists of 3 major components which together enable a full software supply chain, from image creation, to secure image storage, to secure image deployment.

  • Universal Control Plane 3.0.0 (application and cluster management) – Deploys applications from images, by managing orchestrators, like Kubernetes and Swarm. UCP is designed for high availability (HA). You can join multiple UCP manager nodes to the cluster, and if one manager node fails, another takes its place automatically without impact to the cluster.
  • Docker Trusted Registry 2.5.0 – The production-grade image storage solution from Docker.
  • EE Engine 17.06.2 – The commercially supported Docker engine for creating images and running them in Docker containers.

Tell me more about Kubernetes Support..

Kubernetes in Docker EE fully supports all Docker EE features, including role-based access control, LDAP/AD integration, scanning, signing enforcement, and security policies.

Kubernetes features on Docker EE include:

  • Kubernetes orchestration full feature set
  • CNCF Certified Kubernetes conformance
  • Kubernetes app deployment by using web UI or CLI
  • Compose stack deployment for Swarm and Kubernetes apps
  • Role-based access control for Kubernetes workloads
  • Pod-based autoscaling, to increase and decrease pod count based on CPU usage
  • Blue-Green deployments, for load balancing to different app versions
  • Ingress Controllers with Kubernetes L7 routing
  • Interoperability between Swarm and Kubernetes workloads for networking and storage.

But wait… what type of workload should I run on K8s, and what on Swarm?

A question raised multiple times at the last couple of Docker meetups was: it is good that we now have K8s and Swarm as multiple orchestrators under a single Enterprise Engine, but what type of workload should I run on Kubernetes, and what on Docker Swarm?

 

As shown above, even though there is a big overlap in terms of microservices, Docker Inc. recommends specific types of workloads for both Swarm and Kubernetes.

Kubernetes works really well for large-scale workloads. It is designed to address some of the difficulties inherent in managing large-scale containerized environments: blue-green deployments on a cloud platform, highly resilient infrastructure with zero-downtime deployment capabilities, automatic rollback, scaling, and self-healing of containers (auto-placement, auto-restart, auto-replication, and scaling of containers on the basis of CPU usage). Hence, it can be used in a variety of scenarios, from simple ones like running WordPress instances on Kubernetes, to scaling Jenkins machines, to secure deployments with thousands of nodes.

Docker Swarm, on the other hand, provides most of the functionality which Kubernetes provides today, with notable features like easy & fast installation and configuration, quick container deployment and scaling even in very large clusters, high availability through container replication and service redundancy, automated internal load balancing, process scheduling, automatically configured TLS authentication and container networking, service discovery, routing mesh, etc. Docker Swarm is best suited for the MTA program – modernizing your traditional legacy application (for example, a .NET app running on Windows Server 2016) by containerizing it on Docker Enterprise Edition. By doing this, you reduce the total resource requirements to run your application, which increases operational efficiency and allows you to consolidate your infrastructure.
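
To make the Swarm side concrete, here is a minimal sketch of service replication and scaling (the service name and image are placeholders):

$ docker service create --name web --replicas 3 --publish 80:80 nginx:alpine
$ docker service scale web=6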

Under this blog post, I will deep dive into the Docker EE architecture, covering its individual components, how they work together and how the platform manages containers.

Let us begin with its architectural diagram –

 

At the base of the architecture, we have the Docker Enterprise Engine. All the nodes in the cluster run Docker Enterprise Edition as the base engine. Currently, the stable release is 17.06 EE. It is a lightweight and powerful containerization technology combined with a workflow for building and containerizing your applications.

Sitting on top of the Docker Engine is what we call the “Enterprise Edition Stack”, where all of these components run as containers (shown below in the picture), except Swarm Mode. Swarm Mode is integrated into the Docker Engine and you can turn it on/off based on your choice. Swarm Mode is enabled when Universal Control Plane (UCP) is up & running.

 

UCP depicting ucp-etcd, ucp-hyperkube & ucp-agent containers

In case you’re new to UCP, Docker Universal Control Plane is a solution designed to deploy, manage and scale Dockerized applications and infrastructure. As a Docker-native solution, full support for the Docker API ensures a seamless transition of your app from development to test to production – no code changes required – along with support for the Docker tool-set and compatibility with the Docker ecosystem.

 

 

From infrastructure clustering and container scheduling with the embedded Docker Swarm Mode and Kubernetes, to multi-container application definition with Docker Compose and image management with Trusted Registry, Universal Control Plane simplifies the process of managing infrastructure, deploying and managing applications, and monitoring their activity, with a mix of graphical dashboards and a Docker command-line driven user interface.

There are 3 orchestrators sitting on top of the Docker Enterprise Engine – Docker Swarm Mode, Classic Swarm & Kubernetes. Both Classic Swarm and Kubernetes share the same underlying etcd instance, which is highly available depending on the setup you have in your cluster.

Docker EE provides access to the full API sets of three popular orchestrators:

  • Kubernetes: Full YAML object support
  • SwarmKit: Service-centric, Compose file version 3
  • “Classic” Swarm: Container-centric, Compose file version 2

Docker EE proxies the underlying API of each orchestrator, giving you access to all of the capabilities of each orchestrator, along with the benefits of Docker EE, like role-based access control and Docker Content Trust.

To manage the lifecycle of the orchestrators, we have two components – the agent and the reconciler – which manage both the promotion and demotion flows as well as certificate renewal and rotation, and deal with patching and upgrading.

The agent is the core component of UCP and is a globally-scheduled service. When you install UCP on a node, or join a node to a swarm that’s being managed by UCP, the agent service starts running on that node. Once this service is running, it deploys containers with the other UCP components, and it ensures they keep running. The UCP components that are deployed on a node depend on whether the node is a manager or a worker. In a nutshell, it monitors the node and ensures the right UCP services are running.

Another important component is the reconciler. When the UCP agent detects that the node is not running the right UCP components, it starts the reconciler container to converge the node to its desired state. It is expected for the UCP reconciler container to remain in an exited state when the node is healthy.
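
To see these components on a node, you can simply filter the container list – on a healthy node the reconciler will show up as Exited while the agent keeps running:

$ docker ps -a --filter name=ucp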

In order to integrate with K8s and support authentication, the Docker team built an OpenID Connect provider. In case you’re completely new to OIDC, OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol. It allows clients to verify the identity of the end-user based on the authentication performed by an authorization server, as well as to obtain basic profile information about the end-user in an interoperable and REST-like manner.

By design, Swarm Mode has a central certificate authority that manages all the certificates and the cryptographic node identities of the cluster. All components communicate over mutual TLS, using certificates issued by this certificate authority. Docker EE uses an unmodified version of Kubernetes.

Sitting on top of the stack, there is a GUI which uses the Swarm APIs and Kubernetes APIs. Right next to the GUI, we have Docker Trusted Registry, which is an enterprise-ready on-premises service for storing, distributing and securing images. It gives enterprises the ability to ensure secure collaboration between developers and sysadmins to build, ship and run applications. Finally, we have Docker & Kubernetes CLI capability integrated. Please note that this uses an unmodified version of the Kubernetes API.

Try Free Hosted Trial

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, sign up for a free hosted trial at https://trial.docker.com/

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina


Docker’s Birthday Celebration in Bangalore – The Fifth Kind

Estimated Reading Time: 5 minutes

“As you grow up, make sure you have more networking opportunities than chances, and a more collaborative approach than just acquaintance. Reason ~ Sharing is Caring.”

Docker’s Birthday Celebration is not just about cakes, food and party. It’s actually a global tradition that is near and dear to our heart because it gives each one of us an opportunity to express our gratitude to our community of contributors, customers, partners and users. The goal of this global celebration is to welcome those new to Docker and bring them up to speed through hands-on labs with the help of more advanced users who act as mentors on-site.

 

This year, celebrations all over the world took place March 19-25, 2018. Yesterday we celebrated Docker’s 5th Birthday in Bangalore at the Altimetrik Office, Electronic City. This time I was lucky enough to get a chance to lead this event.

I started planning for this event in the first week of March. I started coordinating with the Community Marketing Co-ordinator at Docker Inc. and ordered Docker Mentor T-shirts, birthday stickers & Docker crafted balloons. It took just one week to get them shipped to the Bangalore address. Quite quick, isn’t it? A special thanks to Lisa McNikol for helping me with all of this. Next, Neependra Khare, Founder of CloudYuga & Docker Bangalore Meetup organizer, initiated a thread looking for mentors for the event. Around 10+ community members turned up to mentor the participants for the lab sessions. I started coordinating with them for the lab preparation. A BIG SHOUT OUT to each and every mentor (names listed towards the end), as I found them quite interested and very quick in completing the testing of the lab sessions. A few of the issues reported were fixed and handled smoothly.

Next, it was time to work with the organizers at Altimetrik India. A special thanks to Sonali Baluni, Altimetrik employee, for multiple interactions over phone and email while planning this event. 30+ balloons, a 5kg Docker birthday cake, goodies for organizers and participants, and smooth WiFi connectivity & backups were planned for the day.

Over 200 participants signed up while 50+ turned up for this event. This was expected, as we had just completed a joint Meetup with 8 vibrant groups the previous weekend at the Walmart Office. I reached the venue at around 8:30 AM and was stoked to find a wonderful ambience in place. I worked with the organizers to arrange Docker T-shirts and balloons for the participants, and distributed Mentor T-shirts so that they could get ready for the celebration.

I started the Birthday celebration with an attention grabber –

 

I talked about the agenda for the day, which included more networking, mentorship & lab practice, as shown –

Next, I asked each of the Docker Mentors to introduce themselves to the group. Mentors are a group of experts whose role is to help the participants with the lab sessions.

 

Most of the participants followed the instructions on their own and, wherever needed, mentors helped the participants. There were around 5 lab sessions categorised under Docker 101, MTA with Java and .NET applications, Kubernetes and Hybrid OS applications. You can find them under https://github.com/dockersamples/docker-fifth-birthday

 

At 11:30 sharp, we did the cake cutting, and we gave the youngest member in the group the chance to do the honours.

This time I focused on networking and spent a considerable amount of time interacting with the participants. I met new community members and tried to understand how they are using Docker. A few of the interesting questions from the community were –

  1. What is Serverless?
  2. How to get started with OpenFaas?
  3. How to become Docker Mentor?
  4. Why we switched to Kubernetes? Issues around Docker EE UCP.
  5. How to containerise a 20-year-old Monolithic Legacy Application inside my organization?
  6. What is RexRay & its architecture?
  7. When can we expect K8s under Docker CE on Linux Platform?
  8. A Query around AWX – An Alternative to RedHat’s Ansible Tower?
  9. When shall I use Kubernetes & when to use Swarm?

I am planning to come up with blog posts around most of these queries in the near future.

Thank You –

  • Mentors (Suraj Narwade, Yogesh Kad, Mamta Jha, Shanthi Kumar, Jibin George, Shankar, Kumara K, Vishan Ghule, Y Ramesh Naik, Savitha Ashture, Saurav Kumar, Swapnasagar Pradhan) – For effective mentorship
  • Mano Marks, Elton & Marcos Nills – For the robust PWD Platform for Lab workshop
  • Sonali Baluni & Team, Organizer ( Altimetrik India)
  • Lisa McNikol – Community Marketing Co-ordinator, Docker Inc.
  • Jenny Burcio – Sr. Program Manager, Developer Relations and Advocacy, Docker Inc.
  • All Participants

What I liked most about this Meetup is the energy and curiosity that the participants have. It was a blast discussing with, answering and questioning all the keen minds that make up this community.

DockerCon 2018 is happening this June 12-15. I plan to attend this event. I would be excited to meet more members of the Docker community over at DockerCon in June, and I hope to bring home interesting knowledge and tips & tricks for the larger Docker Bangalore Meetup to gain from.

Thanks to all you there who participated to make this event a blast !


5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0

Estimated Reading Time: 7 minutes

Docker 18.03.0 CE Release is now available on the Docker for Mac platform. Docker for Mac 18.03.0 CE now ships with Docker Compose version 1.20.1, Kubernetes v1.9.2, Docker Machine 0.14.0 & Notary 0.6.0. A few of the promising features included in this release are listed below –

  • Changing VM Swap size under settings
  • Linux Kernel 4.9.87
  • Support of NFS Volume sharing under Kubernetes.
  • Revert the default disk format to qcow2 for users running macOS 10.13 (High Sierra).
  • DNS name `host.docker.internal` used for host resolution from containers.
  • Improvement over Kubernetes Load balanced services (No longer marked as `Pending`)
  • Fixed hostPath mounts in Kubernetes.
  • Fix support for AUFS.
  • Fix synchronisation between CLI `docker login` and GUI login.
  • Updated Compose on Kubernetes to v0.3.0. Existing Kubernetes stacks will be removed during migration and need to be re-deployed on the cluster… and many more

In my last blog, I talked about context switching and showcased how one can switch the context from docker-for-desktop to Minikube under the Docker for Mac platform. A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster. Under the .kube/config file, you can see the list of contexts specified, as shown below –

clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURERENDQWZTZ0F3SUJBZ0lSQUpwcmVPY..V0gKZ0hVaVl6dGR…
    server: https://35.201.215.156
  name: gke_spheric-temple-187614_asia-east1-a_k8s-lab1
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOOd2..LQo=
    server: https://localhost:6443
  name: kubernetes
- cluster:
    certificate-authority: /Users/ajeetraina/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
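
Listing and switching contexts from the CLI is equally simple; for example, to jump to the GKE cluster shown above:

$ kubectl config get-contexts
$ kubectl config use-context gke_spheric-temple-187614_asia-east1-a_k8s-lab1
Switched to context "gke_spheric-temple-187614_asia-east1-a_k8s-lab1".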

Under this blog, I will showcase how you can bootstrap Kubernetes Cluster on GKE Platform using context switching functionality under Docker for Mac Platform.

Pre-requisite:

  • Install/Upgrade Docker for Mac 18.03 CE Edition
  • Install google-cloud-sdk
  • Enable Google Cloud Engine API
  • Authenticate Your Google Cloud using gcloud auth

Installing Docker for Mac 18.03 CE Edition

Installing Google Cloud SDK on your macOS

  • Make sure that Python 2.7 is installed on your system:

Ajeets-MacBook-Air:~ ajeetraina$ python -V
Python 2.7.10
  • Download the below package based on your system.
Platform: macOS 64-bit (x86_64)
Package: google-cloud-sdk-195.0.0-darwin-x86_64.tar.gz (15.0 MB)
SHA256 Checksum: 56d72895dfc6c4208ca6599292aff629e357ad517e6979203a68a3a8ca5f6cc8

Platform: macOS 32-bit (x86)
Package: google-cloud-sdk-195.0.0-darwin-x86.tar.gz (15.0 MB)
SHA256 Checksum: e389ec98b65a0dbfc3f2c2637b9e3a375913b39d50e668fecb07cd04474fc080
  • Extract the archive to any location on your file system.
./google-cloud-sdk/install.sh
  • Restart your terminal for the changes to take effect.

Initializing the SDK

gcloud init

In your browser, log in to your Google user account when prompted and click Allow to grant permission to access Google Cloud Platform resources.

Enabling Kubernetes Engine API

You need to enable the Kubernetes Engine API to bootstrap a K8s cluster on Google Cloud Platform. To do so, open up this link.
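
If you prefer staying on the command line, recent versions of the Google Cloud SDK also let you enable the API with gcloud (a convenience sketch; the link above achieves the same):

$ gcloud services enable container.googleapis.com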

Authenticate Your Google Cloud

Next, you need to authenticate your Google Cloud using gcloud auth

gcloud auth login

Done. We are all set to bootstrap K8s cluster…

Creating GKE Cluster Node

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw
WARNING: The behavior of --scopes will change in a future gcloud release: service-control and service-management scopes will no longer be added to what is specified in --scopes. To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster k8s-lab1...done.
Created [https://container.googleapis.com/v1/projects/spheric-temple-187614/zones/asia-east1-a/clusters/k8s-lab1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-east1-a/k8s-lab1?project=spheric-temple-187614
kubeconfig entry generated for k8s-lab1.
NAME      LOCATION      MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
k8s-lab1  asia-east1-a  1.7.11-gke.1    35.201.215.156  n1-standard-2  1.7.11-gke.1  3          RUNNING

Viewing it on Docker for Mac UI

Click on the whale icon on the top right of Docker for Mac and, by now, you should be able to see the new context appear.

 

Listing the Nodes

 

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get nodes
NAME                                      STATUS    ROLES     AGE       VERSION
gke-k8s-lab1-default-pool-042d2598-591g   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-c633   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-q603   Ready     <none>    7m        v1.7.11-gke.1

Viewing it directly under GCP Platform

 

 

Connecting to Your GKE Cluster

There are 2 ways to do this:

Method-1: Click on “Connection” button to see how to connect to K8s-lab1.

 

Method-2:

You can connect to your cluster via command-line or using a dashboard.

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters get-credentials k8s-lab1 --zone asia-east1-a --project captain-199803
Fetching cluster endpoint and auth data.
kubeconfig entry generated for k8s-lab1.

Listing the Nodes under Google Cloud Platform

 

Deploy Nginx on GKE Cluster

Let us see how to deploy Nginx on a remote GKE cluster using Docker for Mac. This requires two commands: run and expose.

Step 1: Deploy nginx

$ kubectl run nginx --image=nginx --replicas=3

deployment "nginx" created

This will create a deployment that spins up 3 pods, each running the nginx container.
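
For reference, the equivalent declarative approach is a Deployment manifest applied with kubectl – a minimal sketch using the apps/v1 API (older clusters such as the 1.7.x one above would use apps/v1beta1 instead):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF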

Step 2: Verify that the pods are running.

You can see the status of deployment by running:

kubectl get pods -owide
NAME                    READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-7c87f569d-glczj   1/1       Running   0          8s        10.12.2.6   gke-k8s-lab1-default-pool-b2aaa29b-w904
nginx-7c87f569d-pll76   1/1       Running   0          8s        10.12.0.8   gke-k8s-lab1-default-pool-b2aaa29b-2gzh
nginx-7c87f569d-sf8z9   1/1       Running   0          8s        10.12.1.8   gke-k8s-lab1-default-pool-b2aaa29b-qpc7

You can see that each nginx pod is now running on a different node (virtual machine).

Once all pods have the Running status, you can then expose the nginx cluster as an external service.

Step 3: Expose the nginx cluster as an external service.

$ kubectl expose deployment nginx --port=80 --target-port=80 \
--type=LoadBalancer

service "nginx" exposed

This command will create a network load balancer to load balance traffic to the three nginx instances.

Step 4: Find the network load balancer address:

kubectl get service nginx
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx     LoadBalancer   10.15.247.8   <pending>     80:30253/TCP   12s

It may take several minutes to see the value of EXTERNAL_IP. If you don’t see it the first time with the above command, retry every minute or so until the value of EXTERNAL_IP is displayed.
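
Instead of retrying manually, you can also watch the service until the external IP is assigned:

$ kubectl get service nginx -w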

You can then visit http://EXTERNAL_IP/ to see the server being served through network load balancing.

GKE provides an amazing platform to view workloads & load balancers, as shown below:

GKE also provides a UI for displaying the load balancer:

In my upcoming blog post, I will showcase how context switching can help you in switching your project between Dev, QA & Production environment flawlessly.

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

If you are looking out for contribution, join me at Docker Community Slack Channel.


Test-Drive Continuous Integration Pipeline using Docker, Jenkins & GitHub under $0

Estimated Reading Time: 7 minutes

Are you new to CI/CD?

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. CI doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.

CI/CD merges development with testing, allowing developers to build code collaboratively, submit it to the master branch, and have it checked for issues. This allows developers not only to build their code, but also to test it in any environment type and as often as possible, to catch bugs early in the application's development lifecycle. Since Docker can integrate with tools like Jenkins and GitHub, developers can submit code to GitHub, test the code and automatically trigger a build using Jenkins, and, once the image is complete, add it to Docker registries. This streamlines the process and saves time on build and setup, all while allowing developers to run tests in parallel and automate them so that they can continue to work on other projects while tests are being run.

A typical CI workflow looks like this:

1. Developer pushes a commit to GitHub
2. GitHub uses a webhook to notify Jenkins of the update
3. Jenkins pulls the GitHub repository, including the Dockerfile describing the image, as well as the application and test code.
4. Jenkins builds a Docker image on the Jenkins slave node
5. Jenkins instantiates the Docker container on the slave node, and executes the appropriate tests
6. If the tests are successful the image is then pushed up to Dockerhub or Docker Trusted Registry.

In this series of blog posts, I will help you get started with Continuous Integration with Jenkins, Docker & GitHub under $0. I will be using the Play with Docker platform, which is available for free to the general public.

Let’s get started..

Go to your browser and type –  https://labs.play-with-docker.com/

 

Open up a “New Instance” on the left hand side of the screen.

 

 

Cloning the Repository

Let us start with a simple HTML webpage application. I have a sample GitHub repository already created for you, which contains a Dockerfile and a Jenkinsfile under the root of the repository, as shown below:

 

 

$ git clone https://github.com/ajeetraina/webpage

 

What is Jenkinsfile & Jenkins Pipeline?

A Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline and is checked into source control.

Jenkins Pipeline (or simply “Pipeline”) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. It provides an extensible set of tools for modeling simple-to-complex delivery pipelines “as code”. The definition of a Jenkins Pipeline is typically written into a text file (called a Jenkinsfile) which in turn is checked into a project’s source control repository.
I have created a sample Jenkinsfile which clones the repository, builds the Docker image, tests it and then pushes it to Docker Hub automatically using Jenkins.
node {
    def app

    stage('Clone repository') {
        /* Let's make sure we have the repository cloned to our workspace */

        checkout scm
    }

    stage('Build image') {
        /* This builds the actual image; synonymous to
         * docker build on the command line */

        app = docker.build("ajeetraina/webpage")
    }

    stage('Test image') {
        /* Ideally, we would run a test framework against our image.
         * Just an example */

        app.inside {
            sh 'echo "Tests passed"'
        }
    }

    stage('Push image') {
        /* Finally, we'll push the image with two tags:
         * First, the incremental build number from Jenkins
         * Second, the 'latest' tag.
         * Pushing multiple tags is cheap, as all the layers are reused. */
        docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
            app.push("${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}

In the above Jenkinsfile, there are various stages: cloning the SCM, building, testing and pushing the Docker image. You will need to change it to your respective GitHub repo if you want to build your own Docker image.

Let us quickly set up Jenkins on the PWD platform – believe me, it’s just a matter of a single docker-compose command. Before we proceed, ensure that the below permission is granted, so as to get the Docker plugin working flawlessly –

$chmod 666 /var/run/docker.sock

It’s time to clone the repository:

$ git clone https://github.com/ajeetraina/jenkins
Cloning into ‘jenkins’…
remote: Counting objects: 18, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 18 (delta 2), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (18/18), done.
[node1] (local) root@192.168.0.53 ~/webpage
$ cd jenkins
[node1] (local) root@192.168.0.53 ~/webpage/jenkins
$docker-compose up

Copy the Jenkins administrator password starting with “5ee571..”, as shown in the above screenshot.

Click on the 8080 port which appears at the top of the screen; this will redirect you to the Jenkins page.

Once you enter the administrator password which we copied earlier, it will ask to install plugins. Choose “Install Suggested Plugins” on the left hand side, as shown below:

Go back to the Jenkins dashboard and click “New Item”. Supply any name (i.e. demo-webpage) and select “Pipeline” as the type of the project.

This will show a page with Build Triggers as one of the options. Select “Pipeline script from SCM” and choose Git under SCM. This will allow you to enter your GitHub URL, which in my case is https://github.com/ajeetraina/webpage. This will automatically add Jenkinsfile under Script Path. Click on “Apply” and then Save.

Click on the “Build Now” option on the left side of the Jenkins screen. This traverses through the build pipeline, i.e. cloning the repository, building the Docker image, testing it and pushing it to your Docker Hub account, as shown below:

Once you click on “Build Now” you will see that it initiates the build pipeline.

 

Jenkins Pipeline Stages View:

 

Click on “Console Output” to show detailed build process as shown below:

 

A Quick View of Stage View:

 

 

By now, we have a new Docker image pushed to Docker Hub automatically. Let’s go ahead and verify it by bringing up the webpage container in a separate instance window, as shown below:

 

 

Here you see… a static HTML page appears, running on port 80. This was built as part of the Jenkins pipeline.

In order to trigger this pipeline every 15 minutes, you can go back to the Jenkins page for the specific job and then choose “Build Periodically” under the Configure page, as shown below:
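
For reference, the “Build Periodically” schedule field uses Jenkins cron syntax; an every-15-minutes schedule looks like this (the H token lets Jenkins spread the exact minute to balance load):

H/15 * * * *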

 

In a future blog post, I will show you how CI-CD works with Docker Swarm Mode and Kubernetes.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0

Estimated Reading Time: 7 minutes

Say Bye to Kubectx!

I have been a great fan of kubectx and kubectl, which have been a fast way to switch between clusters and namespaces, until I came across Docker for Mac 18.02. With the newer Docker for Mac 18.02 RC build, it is just a matter of a “toggle”. Switching between dev, QA & production environments has become really easy.

[Updated (2-Feb-2018): Docker for Mac 18.02.0 CE RC2 build now comes with Kubernetes v1.9.2 for the first time. I upgraded my macOS High Sierra machine to the RC2 build today and it just works flawlessly. Check it out]

 

New to Kubernetes Namespace Vs Context ?

Generally, software development teams partition their development pipelines into discrete units. These units take various forms in a discrete layout –

          Dev  >> Testing|QA >> Staging >> Production

The resulting layouts are ideally suited to Kubernetes Namespaces. Each environment or stage in the pipeline becomes a unique namespace.

In Kubernetes terminology, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. Namespaces are a logical partitioning capability that enable one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications without concern for undesired interaction. Each user, team of users, or application may exist within its Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster.

A major benefit of applying namespaces to the development cycle is that the naming of software components (e.g. micro-services/endpoints) can be maintained without collision across the different environments. This is due to the isolation of the Kubernetes namespaces. The fact that each namespace is logically discrete allows the development teams to work within an isolated “development” namespace.
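
As a quick sketch (the stage names below are purely illustrative), carving a cluster into one namespace per pipeline stage is a one-liner each:

$ kubectl create namespace dev
$ kubectl create namespace qa
$ kubectl create namespace staging
$ kubectl create namespace production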

Say, you have two clusters, one for development work and one for scratch work. In the development cluster, your frontend developers work in a namespace called frontend, and your storage developers work in a namespace called storage. In your scratch cluster, developers work in the default namespace, or they create auxiliary namespaces as they see fit. Access to the development cluster requires authentication by certificate. Access to the scratch cluster requires authentication by username and password.

Shown below is an example file, config-demo, with this content:

apiVersion: v1
kind: Config
preferences: {}

clusters:
- cluster:
  name: development
- cluster:
  name: scratch

users:
- name: developer
- name: experimenter

contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch

As shown above, a configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.
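
To flesh out that framework, the context entries can be added with kubectl itself; a minimal sketch using the names above:

$ kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
$ kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
$ kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter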

Under this blog post, I will showcase how to create 3 different contexts – Google Cloud, Docker for Desktop & Minikube – and then show how easy it is to toggle between them under the Docker for Mac platform. Let’s get started –

Pre-requisite:

  • Docker For Mac 18.02 RC2 build

 

  • Enable Kubernetes under Preference Pane

Installing Minikube

Minikube requires that VT-x/AMD-v virtualization is enabled in BIOS. To check that this is enabled on OSX / macOS run:
sysctl -a | grep machdep.cpu.features | grep VMX

Installing Minikube via brew

Ajeets-MacBook-Air:~ ajeetraina$ brew update && brew install kubectl && brew cask install minikube

Starting Minikube

Ajeets-MacBook-Air:~ ajeetraina$ minikube start
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 162.41 MB / 162.41 MB [============================================] 100.00% 0s
 0 B / 65 B [----------------------------------------------------------]   0.00%
 65 B / 65 B [======================================================] 100.00% 0sSetting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Viewing Kubernetes context selection pane

By now, you should see the Minikube context appear.

Verifying it using CLI

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER                      AUTHINFO             NAMESPACE
         docker-for-desktop            docker-for-desktop-cluster   docker-for-desktop
         gce                                                        cluster-admin
         kubernetes-admin@kubernetes   kubernetes                   kubernetes-admin
*         minikube                      minikube                     minikube

Viewing Minikube Dashboard

You can just type the below command to bring up the Minikube dashboard in a second.

minikube dashboard

Initializing Docker Swarm

Ajeets-MacBook-Air:testenviron ajeetraina$ docker swarm init
Swarm initialized: current node (zfxiqqjpjmwbvhm1ahjwio3s7) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4vnxn6cbq4gtsjjvaluucncc8m71aexe11dhbm40aoxfqnr7s3-bevjmv2qpklluuhm6ufrfoas2 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Ajeets-MacBook-Air:testenviron ajeetraina$ docker node ls
ID                            HOSTNAME                STATUS              AVAILABILITY        MANAGER STATUS
zfxiqqjpjmwbvhm1ahjwio3s7 *   linuxkit-025000000001   Ready               Active              Leader
Ajeets-MacBook-Air:testenviron ajeetraina$ docker stack deploy -c docker-compose.yml myapp3
Creating network myapp3_default
Creating service myapp3_db1
Creating service myapp3_web1
Ajeets-MacBook-Air:testenviron ajeetraina$ docker stack ls
NAME                SERVICES
myapp3              2

Switching the context from Minikube to docker-for-desktop

Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER                      AUTHINFO             NAMESPACE
          docker-for-desktop            docker-for-desktop-cluster   docker-for-desktop
          gce                                                        cluster-admin
          kubernetes-admin@kubernetes   kubernetes                   kubernetes-admin
*         minikube                      minikube                     minikube

Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Verifying through Pane UI

Open up the whale icon under D4M and see if the context switched successfully.

Enabling Kubernetes

Go to the whale icon > Click on Preferences > Click on Kubernetes > Enable Kubernetes > Show System Containers

It will take a few minutes to get Kubernetes up and running. Expect it to take longer if you are enabling Kubernetes for the first time, depending on your internet speed.

Clone the Repository

$git clone https://github.com/ajeetraina/docker101

Change to the right location

$cd docker101/play-with-kubernetes/examples/stack-deploy-on-mac/

Example-1 : Demonstrating a Simple Web Application

Building the Web Application Stack

$docker stack deploy -c docker-stack1.yml myapp1

Verifying the Stack

$docker stack ls

Verifying using Kubectl

$kubectl get pods

Verifying if the web application is accessible

$curl localhost:8083

Cleaning up the Stack

$docker stack rm myapp1

Example-2 : Demonstrating ReplicaSet

Building the Web Application Stack

$docker stack deploy -c docker-stack2.yml myapp2

Verifying the Stack

$docker stack ls

Verifying using Kubectl

$kubectl get pods
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get stacks
NAME      AGE
myapp2    22m
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
db1-d977d5f48-l6v9d     1/1       Running   0          22m
db1-d977d5f48-mpd25     1/1       Running   0          22m
web1-6886bb478f-s7mvz   1/1       Running   0          22m
web1-6886bb478f-wh824   1/1       Running   0          22m

Adding Context for Google Cloud

Pre-requisites:

  • Install google-cloud-sdk on macOS
  • Enable Google Cloud Engine API
  • Authenticate Your Google Cloud using gcloud auth

Creating GKE Cluster Node

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw
WARNING: The behavior of --scopes will change in a future gcloud release: service-control and service-management scopes will no longer be added to what is specified in --scopes. To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster k8s-lab1...done.
Created [https://container.googleapis.com/v1/projects/spheric-temple-187614/zones/asia-east1-a/clusters/k8s-lab1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-east1-a/k8s-lab1?project=spheric-temple-187614
kubeconfig entry generated for k8s-lab1.
NAME      LOCATION      MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
k8s-lab1  asia-east1-a  1.7.11-gke.1    35.201.215.156  n1-standard-2  1.7.11-gke.1  3          RUNNING

Verify it on Google Cloud

Cluster details –

Master version: 1.7.11-gke.1
Endpoint: 35.201.215.156
Client certificate: Enabled
Kubernetes alpha features: Disabled
Total size: 3
Master zone: ...

Connecting to Your GKE Cluster

You can connect to your cluster via command-line or using a dashboard too.

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters get-credentials k8s-lab1 --zone asia-east1-a --project spheric-temple-187614

Fetching cluster endpoint and auth data. kubeconfig entry generated for k8s-lab1.

Listing the Nodes

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get nodes
NAME                                      STATUS    ROLES     AGE       VERSION
gke-k8s-lab1-default-pool-042d2598-591g   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-c633   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-q603   Ready     <none>    7m        v1.7.11-gke.1

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.
