Under the Hood: Demystifying Docker For Mac CE Edition

Estimated Reading Time: 6 minutes

 

Docker is a full development platform for creating containerized apps, and Docker for Mac is the most efficient way to start and run Docker on your MacBook. It runs on a LinuxKit VM and NOT on VirtualBox or VMware Fusion. It embeds a hypervisor (based on xhyve), a Linux distribution which runs on LinuxKit and filesystem & network sharing that is much more Mac native. It is a Mac native application, that you install in /Applications. At installation time, it creates symlinks in /usr/local/bin for docker & docker-compose and others, to the commands in the application bundle, in /Applications/Docker.app/Contents/Resources/bin.

One of the most amazing feature about Docker for Mac is “drag & Drop” the Mac application to /Applications to run Docker CLI and it just works flawlessly. The way the filesystem sharing maps OSX volumes seamlessly into Linux containers and remapping macOS UIDs into Linux is one of the most anticipated feature.

Few Notables Features of Docker for Mac:

  • Docker for Mac runs in a LinuxKit VM.
  • Docker for Mac uses HyperKit instead of Virtual Box. Hyperkit is a lightweight macOS virtualization solution built on top of Hypervisor.framework in macOS 10.10 Yosemite and higher.
  • Docker for Mac does not use docker-machine to provision its VM. The Docker Engine API is exposed on a socket available to the Mac host at /var/run/docker.sock. This is the default location Docker and Docker Compose clients use to connect to the Docker daemon, so you to use docker and docker-compose CLI commands on your Mac.
  • When you install Docker for Mac, machines created with Docker Machine are not affected.
  • There is no docker0 bridge on macOS. Because of the way networking is implemented in Docker for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
  • Docker for Mac has now Multi-Architectural support. It provides binfmt_misc multi architecture support, so you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.

Top 5 Exclusive Features of Docker For Mac That You Can’t Afford to Ignore

Under this blog, I will deep dive into Docker for Mac architecture and show how to access service containers running on top of LinuxKit VM.

At the base of architecture, we have hypervisor called Hyperkit which is derived from xhyve. The xhyve hypervisor is a port of bhyve to OS X. It is built on top of Hypervisor.framework in OS X 10.10 Yosemite and higher, runs entirely in userspace, and has no other dependencies. HyperKit is basically a toolkit for embedding hypervisor capabilities in your application. It includes a complete hypervisor optimized for lightweight virtual machines and container deployment. It is designed to be interfaced with higher-level components such as the VPNKit and DataKit.

Just sitting next to HyperKit is Filesystem sharing solution. The osxfs is a new shared file system solution, exclusive to Docker for Mac. osxfs provides a close-to-native user experience for bind mounting macOS file system trees into Docker containers. To this end, osxfs features a number of unique capabilities as well as differences from a classical Linux file system.On macOS Sierra and lower, the default file system is HFS+. On macOS High Sierra, the default file system is APFS.With the recent release, NFS Volume sharing has been enabled both for Swarm & Kubernetes.

There is one more important component sitting next to Hyperkit, rightly called as VPNKit. VPNKit is a part of HyperKit attempts to work nicely with VPN software by intercepting the VM traffic at the Ethernet level, parsing and understanding protocols like NTP, DNS, UDP, TCP and doing the “right thing” with respect to the host’s VPN configuration. VPNKit operates by reconstructing Ethernet traffic from the VM and translating it into the relevant socket API calls on OSX. This allows the host application to generate traffic without requiring low-level Ethernet bridging support.

On top of these open source components, we have LinuxKit VM which runs containerd and service containers which includes Docker Engine to run service containers. LinuxKit VM is built based on YAML file. The docker-for-mac.yml contains an example use of the open source components of Docker for Mac. The example has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit. It requires HyperKit, VPNKit and a Docker client on the host to run.

Sitting next to Docker CE service containers, we have kubelet binaries running inside LinuxKit VM. If you are new to K8s, kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a pod. It basically takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.On top of Kubelet, we have kubernetes services running. We can either run Swarm Cluster or Kubernetes Cluster. We can use the same Compose YAML file to bring up both the clusters side by side.

Peeping into LinuxKit VM

Curious about VM and how Docker for Mac CE Edition actually look like?

Below are the list of commands which you can leverage to get into LinuxKit VM and see kubernetes services up and running. Here you go..

How to enter into LinuxKit VM?

Open MacOS terminal and run the below command to enter into LinuxKit VM:

$screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty

Listing out the service containers:

Earlier the ctr tasks ls used to list the service containers running inside LinuxKit VM but in the recent release, namespace concept has been introduced, hence you might need to run the below command to list out the service containers:

$ ctr -n services.linuxkit tasks ls
TASK                    PID     STATUS
acpid                   854     RUNNING
diagnose                898     RUNNING
docker-ce               936     RUNNING
host-timesync-daemon    984     RUNNING
ntpd                    1025    RUNNING
trim-after-delete       1106    RUNNING
vpnkit-forwarder        1157    RUNNING
vsudd                   1198    RUNNING

How to display containerd version?

Under Docker for Mac 18.05 RC1, containerd version 1.0.1 is available as shown below:

linuxkit-025000000001:~# ctr version
Client:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55

Server:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55
linuxkit-025000000001:~#

How shall I enter into docker-ce service container using containerd?

ctr -n services.linuxkit tasks exec -t --exec-id 936 docker-ce sh
/ # docker version
Client:
 Version:      18.05.0-ce-rc1
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   33f00ce
 Built:        Thu Apr 26 00:58:14 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce-rc1
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   33f00ce
  Built:        Thu Apr 26 01:06:49 2018
  OS/Arch:      linux/amd64
  Experimental: true
/ #

How to verify Kubernetes Single Node Cluster?

/ # kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-23T09:38:59Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
/ # kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    26d       v1.9.6
/ #

 

Interested to read further? Check out my curated list of blog posts –

Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes

 

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Top 5 Exclusive Features of Docker For Mac That You Can’t Afford to Ignore

Estimated Reading Time: 7 minutes

 

Docker for Mac 18.04.0 CE Edge Release went GA early last month. This was the first time Kubernetes version 1.9.6 & Docker Compose 1.21.0 was introduced under any Docker Desktop edition. ICYMI – Docker for Mac VM is entirely built with LinuxKit, hence this was the first release which enabled the RBD and CephFS kernel modules under LinuxKit VM. In case you’re new to RBD, the linux kernel RBD (rados block device) driver allows striping a linux block device over multiple distributed object store data objects. Usually the libceph module takes care of that.This release brought a number of fixes around upgrades from Docker for Mac 17.12, synchronisation between CLI `docker login` & GUI login, support for AUFS and much more.

Under this blog post, I will talk about top 5 exclusive and very useful features of Docker of Mac that you can’t afford to miss out.

#1: Docker for Mac support Docker Swarm, Swarm Mode & Kubernetes

Starting from Docker for Mac 17.12 CE Edge Release, Docker Inc introduced a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.

One of the most anticipated feature introduced with this release was the Kubernetes server running within a Docker container on your local system. When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.

You can use Docker for Mac to test single-node features of swarm mode introduced with Docker Engine 17.12, including initializing a swarm with a single node, creating services, and scaling services. Docker “Moby” on Hyperkit serves as the single swarm node. You can also use Docker Machine, which comes with Docker for Mac, to create and experiment a multi-node swarm.

While testing Kubernetes, you may want to deploy some workloads in swarm mode. You can use the DOCKER_ORCHESTRATOR variable to override the default orchestrator for a given terminal session or a single Docker command. This variable can be unset (the default, in which case Kubernetes is the orchestrator) or set to swarm or kubernetes.

 

Check out my blog post:

2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI

 

#2:  You can use the same Docker Compose to build Swarm & Kubernetes Cluster

Yes, you read it correct. Starting from Docker for Mac 17.12, Docker introduced a new type called “Stack” under compose.docker.com. This object, that you can create with kubectl or more easily with docker stack deploy, contains the compose file.Behind the scene, a controller watches for stacks and create/update corresponding kubernetes objets (deployments, services, etc). The job of the controller is to reconcile the stacks (stored in the api-server or crd) with k8s native object.

The docker stack deploy manages to deploy to K8s. It convert docker-compose files to k8s manifests (something like kompose) before deployment. Let me showcase an example which shows how one can use the same YAML file to build Swarm Mode as well as K8s cluster.

Clone the Repository

$git clone https://github.com/ajeetraina/docker101

Change to the right location

$cd docker101/play-with-kubernetes/examples/stack-deploy-on-mac/

Example-1 : Demonstrating a Simple Web Application

Building the Web Application Stack

$docker stack deploy -c docker-stack1.yml myapp1

Verifying the Stack

$docker stack ls

Verifying using Kubectl

$kubectl get pods

Verifying if the web application is accessible

$curl localhost:8083

Cleaning up the Stack

$docker stack rm myapp`

Example:2 – Demonstrating ReplicaSet

Building the Web Application Stack

$docker stack deploy -c docker-stack2.yml myapp2

Verifying the Stack

$docker stack ls

Verifying using Kubectl

$kubectl get pods
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get stacks
NAME      AGE
myapp2    22m
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
db1-d977d5f48-l6v9d     1/1       Running   0          22m
db1-d977d5f48-mpd25     1/1       Running   0          22m
web1-6886bb478f-s7mvz   1/1       Running   0          22m
web1-6886bb478f-wh824   1/1       Running   0          22m
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get stacks myapp2 -o yaml
apiVersion: compose.docker.com/v1beta2
kind: Stack
metadata:
  creationTimestamp: 2018-01-28T02:55:28Z
  name: myapp2
  namespace: default
  resourceVersion: "3186"
  selfLink: /apis/compose.docker.com/v1beta2/namespaces/default/stacks/myapp2
  uid: b25bf776-03d6-11e8-8d4c-025000000001
spec:
  stack:
    Configs: {}
    Networks: {}
    Secrets: {}
    Services:
    ..
      WorkingDir: ""
    Volumes: {}
status:
  message: Stack is started
  phase: Available

Verifying if the web application is accessible

$curl localhost:8083

Cleaning up the Stack

$docker stack rm myapp2

An Interesting Read:

5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0

#3:  Docker for Mac provides Multi-Architecture Support

Docker for Mac provides binfmt_misc multi architecture support. This means that now you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.

#4:  Support for NFS Volume sharing under Swarm as well as Kubernetes

With Docker for Mac 18.03 release, NFS Volume sharing support for both Swarm & Kubernetes was introduced. To demonstrate this feature, follow the below steps:

Pre-Requisite:

  • Install Docker for Mac 18.03 and future version
  • Enable Kubernetes under Preference Pane UI

Cloning the Repository

git clone https://github.com/ajeetraina/docker101/
cd docker101/for-mac/nfs

Execute the below script on your macOS system

sh env_vars.sh
sh setup_native_nfs_docker_osx.sh

 +-----------------------------+
 | Setup native NFS for Docker |
 +-----------------------------+

WARNING: This script will shut down running containers.

-n Do you wish to proceed? [y]:
y

== Stopping running docker containers...
== Resetting folder permissions...
Password:
== Setting up nfs...
== Restarting nfsd...
The nfsd service does not appear to be running.
Starting the nfsd service
== Restarting docker...

SUCCESS! Now go run your containers 🐳

Bringing up Your Application

docker stack deploy -c docker-compose.yml myapp2
 docker stack ls
NAME                SERVICES
myapp2                1
[Captains-Bay]🚩 >  kubectl get po
NAME      READY     STATUS    RESTARTS   AGE
web-0     1/1       Running   0          3m
[Captains-Bay]🚩 >  kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP     1d
web          ClusterIP   None         <none>        55555/TCP   3m
[Captains-Bay]🚩 >  kubectl describe po web-0
Name:           web-0
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Wed, 11 Apr 2018 23:00:18 +0530
Labels:         com.docker.service.id=up2u-web
                com.docker.service.name=web
                com.docker.stack.namespace=up2u
                controller-revision-hash=web-7dbbf8689d
                statefulset.kubernetes.io/pod-name=web-0
Annotations:    <none>
Status:         Running
IP:             10.1.0.34
Controlled By:  StatefulSet/web
Containers:
  web:
    Container ID:  docker://ec9ad2a3192bdeb0cc5028453310f40fd0ac3595021b070465c4e7725f626d63
    Image:         alpine:3.6
    Image ID:      docker-pullable://alpine@sha256:3d44fa76c2c83ed9296e4508b436ff583397cac0f4bad85c2b4ecc193ddb5106
    Port:          <none>
    Args:
      ping
      127.0.0.1
    State:          Running
      Started:      Wed, 11 Apr 2018 23:00:19 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      #{CONTAINER_DIR} from nfsmount (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n8trf (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  nfsmount:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfsmount-web-0
    ReadOnly:   false
  default-token-n8trf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n8trf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  Scheduled              5m    default-scheduler            Successfully assigned web-0 to docker-for-desktop
  Normal  SuccessfulMountVolume  5m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pvc-bbdc7903-3dad-11e8-a612-025000000001"
  Normal  SuccessfulMountVolume  5m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-n8trf"
  Normal  Pulled                 5m    kubelet, docker-for-desktop  Container image "alpine:3.6" already present on machine
  Normal  Created                5m    kubelet, docker-for-desktop  Created container
  Normal  Started                5m    kubelet, docker-for-desktop  Started container

 

 

#5:  Docker for Mac support context switching from docker-for-desktop to Cloud instances in a matter of a Click

Starting from Docker for Mac 18.02 RC release, the context switching feature was introduced which helped developers and operators to switch from docker-for-desktop to any Cloud environment in just a matter of a “toggle”.

 

I have a detailed blog post published early this year which demonstrates this feature with crystal clear examples. Check it out.

Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0

Other attractive features of Docker for Mac 18.04 includes –

  • Docker for Mac VM is entirely built with LinuxKit

How to enter into LinuxKit VM?

$screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
/ # cat /etc/issue

Welcome to LinuxKit

                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
          {                       /  ===-
           \______ O           __/
             \    \         __/
              \____\_______/
/ # cat /etc/os-release
PRETTY_NAME="Docker for Mac"
/ #
linuxkit-025000000001:~# cat /etc/os-release
PRETTY_NAME="Docker for Mac"
linuxkit-025000000001:~# runc list
ID                 PID         STATUS      BUNDLE                                CREATED                          OWNER
000-metadata       0           stopped     /containers/onboot/000-metadata       2018-05-05T06:27:44.345735031Z   root
001-sysfs          0           stopped     /containers/onboot/001-sysfs          2018-05-05T06:27:44.768313965Z   root
002-binfmt         0           stopped     /containers/onboot/002-binfmt         2018-05-05T06:27:45.630283593Z   root
003-format         0           stopped     /containers/onboot/003-format         2018-05-05T06:27:46.341011253Z   root
004-extend         0           stopped     /containers/onboot/004-extend         2018-05-05T06:27:47.08889973Z    root
005-mount          0           stopped     /containers/onboot/005-mount          2018-05-05T06:27:55.334088074Z   root
006-swap           0           stopped     /containers/onboot/006-swap           2018-05-05T06:27:56.486815308Z   root
007-ip             0           stopped     /containers/onboot/007-ip             2018-05-05T06:28:03.894591249Z   root
008-move-logs      0           stopped     /containers/onboot/008-move-logs      2018-05-05T06:28:05.980232896Z   root
009-sysctl         0           stopped     /containers/onboot/009-sysctl         2018-05-05T06:28:06.15775421Z    root
010-mount-vpnkit   0           stopped     /containers/onboot/010-mount-vpnkit   2018-05-05T06:28:06.356833391Z   root
011-bridge         0           stopped     /containers/onboot/011-bridge         2018-05-05T06:28:06.551619273Z   root
linuxkit-025000000001:~# ctr tasks ls
  • Docker for Mac uses raw format VM disks for systems running APFS on SSD on High Sierra by default
  • DNS name docker.for.mac.host.internal should be used instead of docker.for.mac.localhost (still valid) for host resolution from containers.

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

Top 10 Reasons why LinuxKit is better than the traditional OS distribution

Estimated Reading Time: 9 minutes

 

“LinuxKit is NOT designed with an intention to replace any of traditional OS like Alpine, Ubuntu, Red Hat etc. It is an open-source toolbox for building fine-tuned Linux-based operating systems that run in containers and hence, lean, portable & secure out-of-the-box.”

                                                                                                                     ~ Collabnix

 

How is LinuxKit different from other available GNU operating system? How is it different from Hashicorp Packer & Cloud instances? Why should I consider LinuxKit? I came across these queries in various community forums and discussion and thought to come up a write-up which talks about the factor which differentiate LinuxKit from other available operating system, tools & utilities.

In case you’re new, LinuxKit is a toolkit for building minimal Linux distributions that is lean, secure and portable. LinuxKit is built with an intention to build a distributions that has just enough Linux to accomplish a task and just do it in a way that is more secure. It uses Moby to build the distribution images and the LinuxKit tool to run them on the local hypervisor, cloud and bare metal environment.

Under this blog post, I will go through 10 good reasons why LinuxKit is better than other available GNU / open source distributions:

1. Smaller in size(< 200MB) compared to GBs size of traditional OS

The idea behind LinuxKit is that you start with a minimal Linux kernel — the base distro is only 35MB — and add literally only what you need. Once you have that, you can build your application on it, and run it wherever you need to. 

                                                                                                                     ~ source 

LinuxKit is not a full host operating system. It primarily has two jobs to accomplish- first to  run containerd containers and secondly, be secure. Linuxkit provides the ability to build an extremely small distribution (~50m) stripped of everything but that which is required to run containers.The init system and all other userland system components (e.g. dhcp, sshd) are containers themselves and as such can be swapped out, or others be plugged in. The whole OS is immutable (unless data volumes are configured), and will run anywhere one finds silicon or a hypervisor: Windows, MacOS, IoT devices, the Cloud etc.The system does not contain extraneous packages or drivers by default. Because LinuxKit is customizable, it is up to individual operators to include any additional bits they may require.

Let us try building LinuxKit ISO out of the minimal.yaml (shown below) which is also available under LinuxKit repository and verify its size.

git clone https://github.com/linuxkit/linuxkit
cd linuxkit && make
cp -rf bin/linuxkit /usr/local/linuxkit

Building LinuxKit ISO

Ajeets-MacBook-Air:examples ajeetraina$  linuxkit build -format iso-bios -name kitiso minimal.yml

This will build up minimal LinuxKit ISO. 

Let us verify its size as shown below:

Ajeets-MacBook-Air:examples ajeetraina$ du -sh * | grep linuxkit.iso 
133M linuxkit.iso
Ajeets-MacBook-Air:examples ajeetraina$ file linuxkit.iso 
linuxkit.iso: DOS/MBR boot sector; partition 1 : ID=0x17, active, start-CHS (0x0,0,1), end-CHS (0x84,63,32), startsector 0, 272384 sectors
Ajeets-MacBook-Air:examples ajeetraina$

 

A Traditional OS, in the other hand, is a complete operating system and includes hundreds and thousands of application programs. 

2. Minimal Provisioning/Boot Time(1.5 – 3 minutes)

LinuxKit hardly takes 1.5 to 3 minutes to get up and running on either local hypervisor/cloud or bare metal system. This is way better than 3 to 5 minutes of provisioning time as compared to traditional OS distribution. LinuxKit is an entirely immutable system, coming in at around 50MB with nothing extraneous to that which is needed to run containers. The root filesystem is read-only, making it stateless and tamper-proof. LinuxKit runs almost everything – onboot processes and continuous services- in a container which makes the boot time quite fast.  Even the init phase – where the OS image will be booted – is configured by copying files from OCI images.

While booting up the minimal LinuxKit ISO, it just took 0.034515sec for the containerd to get booted up.(shown below)

 

3. Build-time Package Updates(LinuxKit) Vs Run-time Package Updates(OS)

LinuxKit uses Moby tool which assembles a set of containerized components into an image. It uses YAML configuration which specifies the components used to build up an image . All components are downloaded at build time to create an image. The image is self-contained and immutable, so it can be tested reliably for continuous delivery. Most importantly – The build itself takes a matter of seconds, and is eminently reproducible, making it an ideal candidate to pass through a CI system. This is definitely an advantage over traditional GNU operating system.

4. Built-in Security(LinuxKit) Vs Base Security(Traditional OS)

LinuxKit allows users to create very secure Linux subsystems because it is designed around containers. All of the processes, including system daemons, run in containers, enabling users to assemble a Linux subsystem with only the needed services. As a result, systems created with LinuxKit have a smaller attack surface than general purpose systems.

LinuxKit is architected to be secure by default. LinuxKit’s build process leverages Alpine Linux’s hardened userspace tools such as Musl libc, and compiler options that include -fstack-protector and position-independent executable output. 

Most importantly, LinuxKit uses modern kernels, and updates frequently following new releases.  LinuxKit tracks new kernel releases very closely, and also follows best practice settings for the kernel configuration from the Kernel Self Protection Project and elsewhere.

The core system components included in LinuxKit userspace are written in type safe languages, such as Rust, Go and OCaml, and run with maximum privilege separation and isolation. LinuxKit’s build process heavily leverages Docker images for packaging. Of note, all intermediate build images are referenced by digest to ensures reproducibility across LinuxKit builds. Tags are mutable, and thus subject to override (intentionally or maliciously) – referencing by digest mitigates classes of registry poisoning attacks in LinuxKit’s buildchain. Certain images, such as the kernel image, is usually signed by LinuxKit maintainers using Docker Content Trust, which guarantees authenticity, integrity, and freshness of the image.

If you compare it with the traditional OS, many kernel bugs usually lurk in the codebase for years. Therefore, it is imperative to not only patch the kernel to fix individual vulnerabilities but also benefit from the upstream security measures designed to prevent classes of kernel bugs.

5. Container & Cloud Native at the same time

It is important to note that Linuxkit is built with containers, for running containers. Linuxkit today is supported on local hypervisor, cloud and bare metal system. The same minimal LinuxKit ISO image which is container native runs well on Cloud platform like Google Cloud Platform, Amazon Web Services & Microsoft Azure flawlessly.

If we talk about the traditional OS available, the distribution is usually customized by Cloud vendors to fit into the Cloud which is quite different from the one which is available for bare metal system. For example, An Amazon Machine Image (AMI) or preemptive Google Cloud instances etc.

6. Batteries included but removable/swappable

LinuxKit is built on the philosophy of “batteries included but swappable”.  Everything is replaceable and customizable under LinuxKit. That’s one of the unique feature in LinuxKit. The YAML format specifies the components used to build an ISO image. This is made possible via the moby tool which assembles a set of containerized components into an image. Let us look at the various YAML file which talks about how this philosophy really works:

  • Minimal YAML file to build LinuxKit OS
  • YAML file which builds LinuxKit OS with SSH Enabled

If you compare the above YAML file with the minimal YAML, just SSHD service containers has been added and hence new LinuxKit ISO can be booted up. You can find various other YAML files under this link.

7. Immutable in Production

Lots of companies – small or very large – have used immutability as a core concept of their infrastructure.Immutable infrastructure basically consists of immutable components which are actually  replaced for every deployment, rather than being updated. These components are usually started from a common image that is built once per deployment and can be tested and validated. The common image can be built through automation, but doesn’t have to be. Important to note that immutability is independent of any tool or workflow for building the images. Its best use case is in a cloud or virtualized environment.

Why is it important?

Immutable components as part of your infrastructure are a way to reduce inconsistency in your infrastructure and improve the trust into your deployment process. 

Toolkit like LinuxKit have made building immutable components very easy. Together with existing cloud infrastructure, it is a powerful concept to help you build better and safer infrastructure. 

8. Designed to be managed by external tooling

LinuxKit today can be managed flawlessly with popular tooling like Infrakit, Terraform or CloudFormation. It uses external tooling such as Infrakit or CloudFormation templates to manage the update process externally from LinuxKit, including doing rolling cluster upgrades to make sure distributed applications stay up and responsive.

Updates may preserve the state disk used by applications if needed, either on the same physical node, or by reattaching a virtual cloud volume to a new node.

Soon after Dockercon 2017 Austin, I wrote a blog post on how Infrakit & LinuxKit work together for building immutable infrastructure.

Why Infrakit & LinuxKit are better together for Building Immutable Infrastructure?

If you want to know how LinuxKit & Terraform can work together, you should look at this link too.

 

9. Enough to bootstrap distributed applications

LinuxKit is designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes. It is obvious that in the production environment, most of the users will use a cluster of instances, usually running distributed applications, such as etcdDockerKubernetes or distributed databases. LinuxKit provides examples of how to run these effectively, largely using the tooling likeinfrakit, although other machine orchestration systems can equally well be used, for example Terraform or VMWare.

LinuxKit is gaining momentum in terms of  a toolkit for building custom minimal, immutable Linux distributions. Integration of Infrakit with LinuxKit help users  to build and deploy custom OS images to a variety of targets – from a single vm instance on the mac (via xhyve / hyperkit, no virtualbox) to a cluster of them, as well as booting a remote ARM host on Packet.net from the local laptop via a ngrok tunnel.

 

10. Eliminates Cloud-provides specific Base Image variance

If we talk about Cloud, one serious challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision n-number of servers;what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required. Linuxkit solves this problem of base software configuration issue by allowing users to build a stripped down version of traditional OS based on their own requirement.

As said earlier, LinuxKit is not here to replace Alpine OS. LinuxKit uses the minimalist Alpine Linux distribution by default as the foundation of its official container images, and will continue to do so in future. Alpine is seen as an ideal all-purpose generic base for running heavy-lifting software like Redis and Nginx within containers. You can also switch Alpine for Debian/ubuntu/centos, if you wish.

Interested to learn more about LinuxKit, head over to these series of articles.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Running LinuxKit locally on Oracle VirtualBox Platform Made Easy

Estimated Reading Time: 6 minutes

LinuxKit GITHUB repository has already crossed 1800 commits, 3600+ stars &  been forked 420+ times since April 2017 when it was open sourced by Docker Inc for the first time. LinuxKit today support dozens of platforms which falls under Cloud, local hypervisor & Bare metal systems categories. Recently, arm64 support was added and I published a blog post which talks about building LinuxKit for the minimalist Raspberry Pi 3.  Improved Kubernetes support has been another enhancement and you can follow my blog to build Multi-Node Kubernetes  cluster using LinuxKit. In case you want to build Multi-Node Kubernetes cluster using the newly added CRI-containerd & Moby, don’t miss out this blog post.

 

 

In case you’re very new to LinuxKit , it is a toolkit for building secure, portable & lean operating systems for containers. It uses moby tooling to build system images. Everything runs in a container. LinuxKit uses the linuxkit tool for building, pushing and running Virtual Machine images. The moby tool assembles a set of containerized components into in image. The simplest type of image is just a tar file of the contents.The YAML configuration specifies the components used to build up an image . All components are downloaded at build time to create an image. The image is self-contained and immutable, so it can be tested reliably for continuous delivery.

A Look at LinuxKit Architecture

At the base of LinuxKit, there is a modern Linux kernel which specifies a kernel Docker image, containing a kernel and a filesystem tarball, eg containing modules. The minimal init is the base init process Docker image, which is unpacked as the base system, containing initcontainerd and a few tools. It is basically built from pkg/init/.  The onboot  containers are the system containers, executed sequentially in order. They should terminate quickly when done. The services is the system services, which normally run for the whole time the system is up. The .files are additional files to add to the image

 

What’s New in LinuxKit?

Below are the list of new features which has been introduced in LinuxKit recently –

 

 

Early this year, I wrote a blog post which talks about how to manually create LinuxKit ISO image and then mount it to run it under Oracle VirtualBox. The method was complicated as it required converting VMDK file into .VDI format first and then registering the VM using VBoxManage CLI.

Test-Drive LinuxKit OS on Oracle VirtualBox running on macOS Sierra

Now with the introduction of linuxkit run vbox CLI, it is just a matter of 2-3 minutes to get it  on VirtualBox up and running.

Under this blog post, we will see how LinuxKit OS can be built and run on Oracle VirtualBox in just 2 minutes.

Pre-requisites:

  • MacOS Sierra 
  • Docker for Mac installed on MacOS
  • Docker Up and Running
  • Oracle VirtualBox 

Clone the LinuxKit Repository:

git clone https://github.com/linuxkit/linuxkit

Building the LinuxKit Tool

cd linuxkit
make

Place LinuxKit under the right executable PATH:

cp bin/linuxkit /usr/local/bin/

Building ISO image for VirtulBox.

Before we go ahead and build ISO for Virtualbox, let us look at the newly introduced command line option:

 

Now you can use `LinuxKit build` option to build the ISO image. Let us look into this sub-command:

 

Let’s run the below command to build iso-bios format of docker.yml which can be found under linuxkit/examples directory under LinuxKit repository.

linuxkit build -format iso-bios --name testbox docker.yml

This builds up ISO image as shown below:

 

Running the ISO for VirtualBox

Justin Cormack, a LinuxKit maintainer did a great job in introducing a new CLI option linuxkit run box  as shown below:

Run the below command to initiate  LinuxKit OS on VirtualBox in the form of VM:

linuxkit run vbox --iso testbox.iso

This will initiate a VM called testbox under Virtualbox as shown below:

You can verify under VirtualBox Manager:

 

Open up Console to see LinuxKit running under this new VM:

 

So, you can access either through terminal or directly under the console but NOT both at the same time.

Accessing Docker Service Container

To access the Docker service container, first list out the running service containers:

ctr tasks ls

This will list out the running service containers as shown below:

 

Lets us enter into docker service container and verify Docker release version:

ctr tasks exec -t --exec-id 502 docker sh

This will allow it to enter into shell as shown. Run docker version command to verify the currently available Docker release.

 

 

Please note that networking doesn’t get enabled by default for these service container. You will need to  manually enable “Cable Connected” option under VirtualBox > Settings > Network > Advanced to get the IP address assigned to the network interface.

 

I have raised this issue with LinuxKit Team and you can track it here.

Let us go back to the terminal and try to pull few Docker images as shown below:

 

Wow ! So we have Docker service container running  inside LinuxKit OS on top of Oracle VirtualBox platform flawlessly.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

Interested to read more on LinuxKit? Check this out – 

Building a secure Docker Host VM on VMware ESXi using LinuxKit & Moby

Building a Secure VM based on LinuxKit on Microsoft Azure Platform

Building Docker For Mac 17.06 Community Edition using Moby & LinuxKit

Running LinuxKit on AWS Platform made easy

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

Building a minimalistic LinuxKit OS on Raspberry Pi 3 using Moby

Estimated Reading Time: 4 minutes

LinuxKit GITHUB repository recently crossed 3495 stars, forked around 410+ times and added 80+ contributors. Just 7 months old project and it has already gained lot of momentum across the Moby community. In case you’re new, LinuxKit is the first use case for the Moby Project, which is basically a toolkit used for building secure, portable, and lean operating systems for containers. It uses YAML files to describe the complete system, and Moby uses it to assemble the boot image and verify the signature. LinuxKit uses the Moby tool for image builds, and the LinuxKit tool for pushing and running VM images.

In the last Dockercon 2017 EU, Docker Inc. announced the latest additions which includes – 

  • Support for arm64 architecture
  • Improved platform support (Azure, IBM Bluemix, AWS, Google Cloud Platform, VMware, Hyper-V, Packets.net etc.)
  • Upcoming Kubernetes support (I tried it and believe me it was a great experience)
  • Linux containers on Windows (LCOW) preview support
  • Multi-arch build system
  • Fully immutable system images
  • Namespace sharing
  • Support for the latest Linux kernels

LinuxKit today supports building & booting images on a Raspberry Pi 3 using the mainline arm64 bit kernels. Under this blog post, I will show how to build Linuxkit OS on Raspberry Pi 3.

Pre-requisites:

  1. OpenSUSE 42.2 Leap 
  2. Raspberry Pi 3
  3. SD card
  4. Win32 Disk Imager 
  5. SD Formatter(Windows Client) or Etcher(for MacOS)

[PLEASE NOTE – In case you’re trying to build LinuxKit on Raspbian OS running on your Pi box, it is NOT going to work. Reason – There is no 32-bit kernel built for Linuxkit. As of today, only 64-bit based kernel is available for LinuxKit OS. If you’re planning to build Linuxkit with 32-bit kernel, feel free to share your findings.I picked up openSUSE OS as it has a arm64 flash for Raspberry Pi 3 ]

Early this year I wrote a blog on how to test-Drive Docker 1.12 on first 64-bit ARM OpenSUSE running on Raspberry Pi 3. You can refer it to get started.

  1. Download OpenSUSE 42.2 Leap from this link and follow the blog thoroughly to get it ready on SD card.

 

  

    2. Install Docker on OpenSUSE 42.2 

By default, OpenSUSE Leap comes with Docker 1.12 edition. Though it is pretty old, we can still build LinuxKit on the operating system. Ensure that Docker is up and running on your Raspberry Pi 3 box:

systemctl restart docker 

 

      3.  Building Moby tool on Raspberry Pi

You will need Go version 1.8 atleast to get Moby and Linuxkit built on OpenSUSE 11.3.

go get -u github.com/moby/tool/cmd/moby

 

      4.  Building LinuxKit 

 

linux:~/linuxkit # go get -u github.com/linuxkit/linuxkit/src/cmd/linuxkit
github.com/linuxkit/linuxkit/src/cmd/linuxkit
/usr/lib64/go/1.8/pkg/tool/linux_arm64/link: running gcc failed: fork/exec /usr/bin/gcc: cannot allocate memory

In case you encounter the above error message, you need to fix this issue by adding more space to swap partition.

Step:1 – Create a 1GB File

We use the DD command to create the 1G file basically writing “0″ into the file swap_1 in the “/” file system.

linux:~/linuxkit # dd if=/dev/zero of=/swap_1 bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 64.4194 s, 15.9 MB/s

Step-2 – Setting up File as a swap area.

linux:~/linuxkit # mkswap /swap_1
mkswap: /swap_1: insecure permissions 0644, 0600 suggested.
Setting up swapspace version 1, size = 976.6 MiB (1023995904 bytes)
no label, UUID=e2f577a5-ff87-40ee-adfc-78e8feef3a36

Step-3 – Enabling Swap File for system to start paging physical memory pages onto it.

linux:~/linuxkit # swapon /swap_1
swapon: /swap_1: insecure permissions 0644, 0600 suggested.

Step-4: Verify the new free memory:

linux:~/linuxkit # free -tom
             total       used       free     shared    buffers     cached
Mem:           785        745         40          3          1        688
Swap:         1462         65       1396
Total:        2247        810       1437

 

Now, you can build up LinuxKit tool flawlessly. Once installed, you can verify it with the below command:

 

Let us build minimal LinuxKit OS using the below command:

moby build -format rpi3 minimal.yml

 

 

Now it’s time to convert minimal.tar into ISO format. You can leverage the following tool:

bash-3.2# hdiutil makehybrid -o /Users/ajeetraina/Desktop/linuxkit.iso minimal -iso -joliet
Creating hybrid image…

By now, you should be able to boot the minimal LinuxKit ISO on your Raspberry Pi system via uboot tool. It can also be directly be extracted onto a FAT32 formatted SD card to boot your Raspberry Pi. 

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

How to Build Kubernetes Cluster using CRI-containerd & Moby

Estimated Reading Time: 5 minutes

Let’s talk about CRI Vs CRI-Containerd…

Container Runtime Interface(a.ka. CRI) is a standard way to integrate Container Runtime with Kubernetes. It is new plugin interface for container runtimes. It is a plugin interface which enables kubelet to use a wide variety of container runtimes, without the need to recompile.Prior to the existence of CRI, container runtimes (e.g., docker, rkt ) were integrated with kubelet through implementing an internal, high-level interface in kubelet. Formerly known as OCID, CRI-O is strictly focused on OCI-compliant runtimes and container images. 

Last month, the new CRI-O version 1.2 got announced. This is a minor release of the v1.0.x cycle supporting Kubernetes 1.7.With CRI, Kubernetes can be container runtime-agnostic. Now this provides flexibility to the providers of container runtimes who don’t need to implement features that Kubernetes already provides. CRI-O allows you to run containers directly from Kubernetes – without any unnecessary code or tooling.

For those viewers who compare Docker Runtime Engine Vs CRI-O, here is an important note – 

CRI-O is not really a competition to the docker project – in fact it shares the same OCI runC container runtime used by docker engine, the same image format, and allows for the use of docker build and related tooling. Through this new runtime, it is expected to bring developers more flexibility by adding other image builders and tooling in the future.Please remember that CRI is not an interface for full-fledge, all inclusive container runtime.

How does CRI-O workflow look like ?

When Kubernetes needs to run a container, it talks to CRI-O and the CRI-O daemon works with container runtime  to start the container. When Kubernetes needs to stop the container, CRI-O handles that. Everything  just works behind the scenes to manage Linux containers.

 

 

Kubelet is a node agent which has gRPC client which talks to gRPC server rightly called as shim. The shim then talks to container runtime. Today the default implementation is Docker Shim.Docker shim then talks to Docker daemon using the classical APIs. This works really well.

CRI consists of a protocol buffers and gRPC API, and libraries, with additional specifications and tools under active development.

Introducing CRI-containerd

CRI-containerd is containerd based implementation of CRI.  This project started in April 2017. In order to have Kubernetes consume containerd for its container runtime, containerd team implemented the CRI interface.  CRI is responsible for distribution and the lifecycle of pods and containers running on a cluster.The scope of containerd 1.0 aligns with the requirement of CRI. In case you want to deep-dive into it, don’t miss out this link.

Below is how CRI-containerd architecture look like:

                                                                                                             

In my last blog post, I talked about how to setup Multi-Node Kubernetes cluster using LinuxKit. It basically used Docker Engine to build up minimal & immutable Kubernetes OS images with LinuxKit. Under this blog post, we will see how to build it using CRI-containerd.

Infrastructure Setup:

  • OS – Ubuntu 17.04 
  • System – ESXi VM
  • Memory – 8 GB
  • SSH Key generated using ssh-keygen -t rsa and put under $HOME/.ssh/ folder

Caution – Please note that this is still under experimental version. It is currently under active development and hence don’t expect it to work as full-fledge K8s cluster.

Copy the below script to your Linux System and execute it flawlessly –

 

 

Did you notice the parameter –  KUBE_RUNTIME=cri-containerd ?

The above parameter is used to specify that we want to build minimal and immutable K8s ISO image using CRI-containerd. If you don’t specify the parameter, it will use Docker Engine to build this ISO image.

The above script is going to take sometime to finish.

It’s always a good idea to see what CRI-containerd specific files are present. 

Let us look into CRI-Containerd directory – 

Looking at the content of cri-containerd.yml where it defines a service called as critical-containerd.

The below list of files gets created with the output files like Kube-master-efi.iso right under Kubernetes directory – 

 

 

 

By the end of this script you should be able to see Kubernetes LinuxKit OS booting up – 

 

 

CRI-containerd let the user containers in the same sandbox share the network namespace and hence you will see the message ” This system is namespaced”.

You can find the overall screen logs to watch how it boots up –

 

Next, let us try to see what container services are running:

 

You will notice that cri-containerd service is up and running. 

Let us enter into one of tasks containers and initialize the master node with kubeadm-init script which comes by default –

(ns: getty) linuxkit-ee342f3aebd6:~# ctr tasks exec -t --exec-id 654 kubelet sh
/ # ls

Execute the below script –

/ # kubeadm-init.sh
/ # kubeadm-init.sh
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [linuxkit-ee342f3aebd6 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in “/etc/kubernetes/pki”
[kubeconfig] Wrote KubeConfig file to disk: “admin.conf”
[kubeconfig] Wrote KubeConfig file to disk: “kubelet.conf”
[kubeconfig] Wrote KubeConfig file to disk: “controller-manager.conf”
[kubeconfig] Wrote KubeConfig file to disk: “scheduler.conf”
[controlplane] Wrote Static Pod manifest for component kube-apiserver to “/etc/kubernetes/manifests/kube-apiserver.yaml”
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to “/etc/kubernetes/manifests/kube-controller-manager.yaml”
[controlplane] Wrote Static Pod manifest for component kube-scheduler to “/etc/kubernetes/manifests/kube-scheduler.yaml”
[etcd] Wrote Static Pod manifest for a local etcd instance to “/etc/kubernetes/manifests/etcd.yaml”
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory “/etc/kubernetes/manifests”
…..

Now you should be able to join the other worker nodes( I have discussed the further steps under  this link ).

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

Getting Started with Multi-Node Kubernetes Cluster using LinuxKit

Estimated Reading Time: 5 minutes

Here’s a BIG news for the entire Container Community – “Kubernetes Support is coming to Docker Platform“. What does this mean? This means that developers and operators can now build apps with Docker and seamlessly test and deploy them using both Docker Swarm and Kubernetes. This announcement was made on the first day of Dockercon EU 2017 Copenhagen where Solomon Hykes, Founder of Docker invited Kubernetes Co-Founder  Tim Hockin on the stage for the first time. Tim welcomed the Docker family to the vibrant Kubernetes world. 

In case you missed out the news, here are the important takeaways from Docker-Kubernetes announcement –

  • Kubernetes support will be added to Docker Enterprise Edition for the first time.
  • Kubernetes support will be added optionally to Docker Community Edition for Mac and Windows.
  • Kubernetes and Docker Swarm both orchestrators will be available under Docker Platform.
  • Kubernetes integration will be made available under Moby projects too.

Docker Inc firmly believes that bringing Kubernetes to Docker will simplify and advance the management of Kubernetes for developers, enterprise IT and hackers to deliver the advanced capabilities of Docker to a broader set of applications.

 

 

In case you want to be first to test drive, click here to sign up and be notified when the beta is ready.

Please be aware that the beta program is expected to be ready at the end of 2017.

Can’t wait for beta program? Here’s a quick guide on how to get Multi-Node Kubernetes cluster up and running using LinuxKit.

Under this blog post, we will see how to build 5-Node Kubernetes Cluster using LinuxKit. Here’s the bonus – We will try to deploy WordPress application on top of Kubernetes Cluster. I will be building it on my macOS Sierra 10.12.6 system running the latest Docker for Mac 17.09.0 Community Edition up and running.

Pre-requisite:

  1. Ensure that the latest Docker release is up and running.

       2. Clone the LinuxKit repository:

sudo git clone https://github.com/ajeetraina/linuxkit
cd projects/kubernetes/

3. Execute the below script to build Kubernetes OS images using LinuxKit & Moby:

sudo chmod +x script4k8s.sh

All I have done is put all the manual steps under a script as shown below:

 

 

This should build up Kubernetes OS image.

Run the below command to know the IP address of the kubelet container:

ip adds show dev eth0

 

You can check the list of tasks running as shown below:

Once this kubelet is ready, it’s time to login into it directly from a new terminal:

sudo ./ssh_into_kubelet.sh 192.168.65.3

So you have already SSH into the kubelet container rightly named “LinuxKit Kubernetes Project”.

We are going to mark it as “Master Node”.

Initializing the Kubernetes Master Node

Now it’s time to manually initialize master node with Kubeadm command as shown below:

sudo kubeadm-init.sh

Wait for few minutes till kubeadm command completes. Once done, you can use kubeadm join argument as shown below:

 

Your Kubernetes master node gets initialized successfully as shown below:

 

 

It’s time to run the below list of commands command to join the list of worker nodes:

pwd
/Users/ajeetraina/linuxkit/projects/kubernetes
sudo ./boot.sh 1 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 2 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 3 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 4 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443

 

It brings up 5-Node Kubernetes cluster as shown below:

 

Deploying WordPress Application on Kubernetes Cluster:

Let us try to deploy a WordPress application on top of this Kubernetes Cluster.

Head over to the below directory:

cd linuxkit/projects/kubernetes/wordpress

 

Creating a Persistent Volume

kubectl create -f local-volumes.yaml

The content of local-volumes.yml look like:


This creates 2 persistent volumes:

Verifying the volumes:

 

Creating a Secret for MySQL Password

kubectl create secret generic mysql-pass --from-literal=password=mysql123

 
Verifying the secrets using the below command:
kubectl get secrets

 

Deploying MySQL:

kubectl create -f mysql-deployment.yaml

 

Verify that the Pod is running by running the following command:

kubectl get pods

 

Deploying WordPress:

kubectl create -f wordpress-deployment.yaml

By now, you should be able to access WordPress UI using the web browser.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

Building a secure Docker Host VM on VMware ESXi using LinuxKit & Moby

Estimated Reading Time: 4 minutes

Post DockerCon 2017 in Austin, TX, I raised a feature request titled “LinuxKit command to push vmware.vmdk to remote ESXi datastore”. Within a few weeks, the feature was introduced by the LinuxKit team. Special thanks go to Daniel Finneran, who worked hard to get this feature merged into the LinuxKit main branch.

 

The LinuxKit project is 5 months old now. It has already bagged 3,100+ stars, 69 contributors and 350+ forks to date. If you are new to it: LinuxKit is not a full host operating system, as it primarily has two jobs: run containerd containers, and be secure. It uses modern kernels, and updates frequently following new releases. As such, the system does not contain extraneous packages or drivers by default. Because LinuxKit is customizable, it is up to individual operators to include any additional bits they may require.

LinuxKit is undoubtedly Secure

The core system components included in LinuxKit userspace are key to security. They are written in type-safe languages, such as Rust, Go and OCaml, and run with maximum privilege separation and isolation. The project is currently leveraging MirageOS to construct unikernels to achieve this, and that progress can be tracked here: as of this writing, dhcp is the first such type-safe program. There is ongoing work to remove more C components, and to improve, fuzz test and isolate the base daemons. Further rationale about the decision to rewrite system daemons in MirageOS is explained at length in this document. I am planning to come up with a blog post on the “LinuxKit Security” aspect, so keep an eye on this space.

Let’s talk about building a secure Docker Host VM…

I am a great fan of VMware PowerCLI. I have been using it since the time I was working full time at VMware Inc. (during the 2010-2011 timeframe). Today the quickest way to get VMware PowerCLI up and running is by using the PhotonOS-based Docker image. Just one Docker CLI command and you are already inside Photon OS, running PowerShell & PowerCLI to connect to a remote ESXi host and build up VMware infrastructure. Still, this might not give you a secure Docker host environment. If you are really interested in building a secure, portable and lean Docker host operating system, LinuxKit is the right tool. But how?

Under this blog post, I am going to show how Moby & LinuxKit can help you in building a secure Docker 17.07 Host VM on top of VMware ESXi.

Pre-requisites:

  • VMware vSphere ESXi 6.x
  • Linux or MacOS with Go packages installed
  • Docker 17.06/17.07 installed on the system

The below commands have been executed on one of my local Ubuntu 16.04 LTS systems, which can reach the ESXi system flawlessly.

Cloning the LinuxKit Repository:

git clone https://github.com/linuxkit/linuxkit

Building Moby & LinuxKit:

cd linuxkit
make

Configuring the right PATH for Moby & LinuxKit

cp bin/moby /usr/local/bin
cp bin/linuxkit /usr/local/bin

A Peep into vmware.yml File

The first 3 lines show a modern, securely configured kernel. The init section spins up containerd to run services. The onboot section runs dhcpcd for networking. The services section includes a getty service container for the shell and an nginx service container. The trust section indicates that all images are signed and verified. A hedged sketch of such a file is shown below.
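The sketch below is assembled from that description rather than copied from the repository; component hashes are elided and image tags are assumptions:

kernel:
  image: linuxkit/kernel:4.9.x        # modern, securely configured kernel
  cmdline: "console=tty0"
init:
  - linuxkit/init:<hash>
  - linuxkit/runc:<hash>
  - linuxkit/containerd:<hash>        # containerd runs the service containers
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<hash>     # one-shot DHCP client for networking
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty                       # shell on the console
    image: linuxkit/getty:<hash>
    env:
      - INSECURE=true
  - name: nginx
    image: nginx:alpine
    capabilities:
      - CAP_NET_BIND_SERVICE
trust:
  org:
    - linuxkit                        # all linuxkit images are signed & verified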

Building VMware ISO Image using Moby

moby build -output iso-bios -name vmware vmware.yml

 

Pushing VMware ISO Image to remote ESXi datastore

linuxkit push vcenter -datastore=datastore1 -hostname=myesxi.dell.com -url https://root:xxx@100.98.x.x/sdk -folder=linuxkit vmware.iso

Usage of linuxkit push vcenter:
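The full usage screen isn’t reproduced here; the flags exercised in this post (all visible in the command above) are summarised below, so treat this as a summary rather than the tool’s complete help output:

linuxkit push vcenter [options] vmware.iso
  -url        SDK URL of the ESXi/vCenter endpoint (e.g. https://root:xxx@100.98.x.x/sdk)
  -datastore  target datastore for the image (e.g. datastore1)
  -hostname   ESXi host to push to (e.g. myesxi.dell.com)
  -folder     datastore folder to hold the image (e.g. linuxkit)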

 

Running a secure VM directly from the ESXi datastore using LinuxKit

dell@redfish-ubuntu:~/linuxkit/examples$ sudo linuxkit run vcenter -cpus 8 -datastore datastore1 -mem 2048 -network 'VM Network' -hostname myesxi.dell.com -powerOn -url https://root:xxx@100.98.x.x/sdk vmware.iso
Creating new LinuxKit Virtual Machine
Adding ISO to the Virtual Machine
Adding VM Networking
Powering on LinuxKit VM

Now let us verify that the VM is up and running using either the VMware vSphere Client or the SDK URL.

You will find that the VM has booted up with the latest Docker 17.07 platform up and running.

Building a Docker Host VM using Moby & LinuxKit

In case you want to build a Docker host VM, you can refer to a docker-vmware.yml file along the lines sketched below:
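The file isn’t reproduced here; a hedged sketch, modelled on the docker.yml example in the LinuxKit repository, adds a Docker-in-Docker service on top of the base layout shown earlier (the image tag and binds are assumptions):

services:
  - name: docker
    image: docker:17.07.0-ce-dind     # Docker Engine running as a LinuxKit service
    capabilities:
      - all
    net: host
    binds:
      - /etc/resolv.conf:/etc/resolv.conf
      - /var/lib/docker:/var/lib/docker
      - /lib/modules:/lib/modules
    command: ["/usr/local/bin/dockerd"]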

Just re-run the below command to get the new VM image:

moby build -output iso-bios -name vmware docker-vmware.yml

Follow the steps above to push it to the remote datastore and run it using LinuxKit. You now have a secure Docker 17.07 host ready to build Docker images and bring up your application stack.

How about building a Photon OS based Docker image using Moby & LinuxKit? Once you build it and push it to a VM, it’s all ready to build up virtual infrastructure. Interesting, isn’t it?

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


Building a Secure VM based on LinuxKit on Microsoft Azure Platform

Estimated Reading Time: 4 minutes

The LinuxKit GitHub repository recently crossed 3,000 stars, has been forked 300+ times and has added 60+ contributors. Just 5 months old, the project has already gained a lot of momentum across the Docker community. Built to enable the community to create secure, immutable, and minimal Linux distributions, LinuxKit is mature enough to support a number of cloud platforms like Azure, AWS, Google Cloud Platform, VMware, Packet.net and many more.

 

In my recent blogs, I showcased how to get LinuxKit OS built for Google Cloud Platform, Amazon Web Services and VirtualBox. ICYMI, I recently published a few videos on LinuxKit too. Check them out.

 

Under this blog post, I will walk through how to build a secure and portable VM based on a LinuxKit image on the Microsoft Azure platform.

Pre-requisite:

I will be leveraging macOS Sierra running Docker version 17.06.1-ce-rc1-mac20. I tested it on Ubuntu 16.04 LTS too, running on one of my Azure VMs, and it went fine. Prior knowledge of Microsoft Azure / Azure CLI 2.0 will be required to configure the Service Principal so that the VHD image gets uploaded to Azure smoothly.

 

Step-1: Pulling the latest LinuxKit repository

Pull the LinuxKit repository using the below command:

git clone https://github.com/linuxkit/linuxkit

 

Step-2: Build Moby & LinuxKit tool

cd linuxkit
make

 

Step-3: Copying the tools into the right PATH

cp -rf bin/moby /usr/local/bin/
cp -rf bin/linuxkit /usr/local/bin/

 

Step-4: Preparing Azure CLI tool

curl -L https://aka.ms/InstallAzureCli | bash

 

Step-5: Run the below command to restart your shell

exec -l $SHELL

 

Step-6: Building LinuxKit OS for Azure Platform

cd linuxkit/examples/
moby build -output vhd azure.yml

This will build up a VHD image which now has to be pushed to the Azure platform.

In order to push the VHD image to Azure, you need to authenticate LinuxKit with your Azure subscription, hence you will need to set up the following environment variables:

   export AZURE_SUBSCRIPTION_ID=43b263f8-XXXX-XXXX-XXXX-XXXXXXXX
   export AZURE_TENANT_ID=633df679-XXXX-XXXX-XXXX-XXXXXXXX
   export AZURE_CLIENT_ID=c7e4631a-XXXX-XXXX-XXXX-XXXXXXXX
   export AZURE_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXX=

Alternatively, the easy way to get all the above details is through the below command:

az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code XXXXXX to authenticate.

The above command lists the subscription ID and tenant ID, which can then be exported as shown above.

Next, follow this link to create an Azure Active Directory application and service principal that can access resources. If you want to stick to the CLI rather than the UI, you can follow the steps below.
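A minimal sketch with Azure CLI 2.0 follows; the service principal name linuxkit-sp is an arbitrary example:

# Pick the subscription the service principal should cover
az account set --subscription $AZURE_SUBSCRIPTION_ID

# Create the service principal (defaults to the Contributor role)
az ad sp create-for-rbac --name linuxkit-sp

# Map the JSON output to the variables above:
#   appId    -> AZURE_CLIENT_ID
#   password -> AZURE_CLIENT_SECRET
#   tenant   -> AZURE_TENANT_ID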

Step-7: Pushing the VHD image to Azure Platform

linuxkit run azure --resourceGroupName mylinuxkit --accountName mylinuxkitstore -location eastasia azure.vhd
Creating resource group in eastasia
Creating storage account in eastasia, resource group mylinuxkit

The command will end up with the below message:

 

 Completed: 100% [     68.00 MB] RemainingTime: 00h:00m:00s Throughput: 0 Mb/sec
Creating virtual network in resource group mylinuxkitresource, in eastasia
Creating subnet linuxkitsubnet468 in resource group mylinuxkitresource, within virtual network linuxkitvirtualnetwork702
Creating public IP Address in resource group mylinuxkitresource, with name publicip159
Started deployment of virtual machine linuxkitvm941 in resource group mylinuxkitresource
Creating virtual machine in resource group mylinuxkitresource, with name linuxkitvm941, in location eastasia
NOTE: Since you created a minimal VM without the Azure Linux Agent, the portal will notify you that the deployment failed. After around 50 seconds try connecting to the VM:

ssh -i path-to-key root@publicip159.eastasia.cloudapp.azure.com

 

By this time, you should be able to see the LinuxKit VM coming up on the Azure portal.

Wait another 2-3 minutes before SSHing into this Azure instance, and it’s all set, up and running smoothly.

Known Issues:

  • Since the image currently does not contain the Azure Linux Agent, the Azure Portal will report the creation as failed.
  • The VHD is uploaded using a Docker container based on Azure VHD Utils, mainly because the tool manages fast and efficient uploads, leveraging parallelism.
  • There is work in progress to specify which ports to open on the VM (more specifically, on a network security group).
  • The metadata package does not yet support the Azure metadata service.

 

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


Building Docker For Mac 17.06 Community Edition using Moby & LinuxKit

Estimated Reading Time: 6 minutes

Docker For Mac 17.06 CE edition is the first Docker version built entirely on the Moby Project. In case you’re new to it, Moby is an open framework created by Docker, Inc. to assemble specialised container systems. It comprises three basic elements: a library of containerised backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit); a framework for assembling the components into a standalone container platform, along with tooling to build, test and deploy artifacts for these assemblies; and a reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.

Docker for Mac is a Docker Community Edition (CE) app that aims for a native OSX experience and works with existing developer workflows. The Docker for Mac install package includes everything you need to run Docker on a Mac. A few of the attractive features it includes:

  • Easy drag and drop installation, and auto-updates to get the latest Docker.
  • Secure, sandboxed virtualisation architecture without elevated privileges.
  • Native networking support, with VPN and network sharing compatibility.
  • File sharing between container and host: uid mapping, inotify events, etc.

The core building blocks for Docker for Mac include:

  • Virtualisation
  • Networking
  • Filesystem

Some notable components include:

  • HyperKit, a toolkit for embedding hypervisor capabilities in your application
  • DataKit, a tool to orchestrate applications using a 9P dataflow
  • VPNKit, a set of tools and services for helping HyperKit VMs interoperate with host VPN configurations

[Component architecture diagrams of HyperKit, DataKit and VPNKit ~ source: Docker Inc.]

If you want to learn more details about these components, this should be the perfect guide.

LinuxKit today supports multiple cloud platforms like AWS, Google Cloud Platform, Microsoft Azure, VMware etc. In terms of local hypervisors, it supports HyperKit, VMware, KVM and Microsoft Hyper-V too.

 


 

If you have closely watched the LinuxKit repository, a new directory called blueprints has been introduced, which will contain the blueprints for base systems on the platforms that will be supported with LinuxKit. These are targeted to include all the platforms that Docker has editions on, and all platforms that the Docker community supports. All the initial testing work is done under examples/ and then pushed to the blueprints/ directory.

Currently, the blueprints/ directory holds the essential files for Docker For Mac 17.06 CE:

  • base.yml => contains the open source components for Docker for Mac.
  • docker-17.06-ce.yml => the YAML file needed to build up the VM image.

The blueprint has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit. It requires HyperKit, VPNKit and a Docker client on the host to run.


File: docker-17.06-ce.yml


The VPNKit-specific enablement, along with vsudd for controlling dockerd from the host, comes from YAML along these lines:
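The exact snippet appeared as a screenshot in the original post; the sketch below shows the vsudd service, which exposes the Docker API socket from the VM to the host over a virtio socket and pairs with the -vsock-ports=2376 flag used further down (image hash elided, binds assumed):

services:
  - name: vsudd
    image: linuxkit/vsudd:<hash>
    binds:
      - /var/run:/var/run           # assumed: makes /var/run/docker.sock visible to vsudd
    command: ["/vsudd", "-inport", "2376:unix:/var/run/docker.sock"]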


File: base.yml


Use the Moby tool to build it with Docker 17.06:

moby build -name docker4mac base.yml docker-17.06-ce.yml


 

This will produce a couple of files under the docker4mac-state directory.

 


 

Next, we can run the LinuxKit command to boot the VM with a 1024M disk:

linuxkit run hyperkit -networking=vpnkit -vsock-ports=2376 -disk size=1024M docker4mac

By now, you should be able to see the docker4mac VM booting up smoothly.


You can open up a new terminal to see the overall directory/file tree structure.


 

Let us try listing the service containers using the ctr containers ls command. It should show the Docker For Mac 17.06 service container.


Run the ctr tasks ls command to get the list of running service tasks.


Now it’s easy to enter the docker-dfm service container with the below command:

ctr exec -t --exec-id 861 docker-dfm sh


You can verify further information with the docker info command.


 

How to connect to docker-dfm from another terminal?

Using another terminal, it is pretty easy to access docker via the socket guest.00000948 in the state directory (docker4mac-state/ by default) with the below command:

docker -H unix://docker4mac-state/guest.00000948 images

 

Let us create an Nginx container and see if it is accessible from the Safari browser.
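Here is a quick sketch of that; the container name is arbitrary, and it assumes the blueprint’s VPNKit port forwarding mentioned earlier is in effect:

docker -H unix://docker4mac-state/guest.00000948 run -d --name webserver -p 80:80 nginx

If VPNKit forwarding is working, browsing to http://localhost/ in Safari should show the Nginx welcome page.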

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Learn more about what’s happening in the LinuxKit project by visiting this link.