Running LinuxKit locally on Oracle VirtualBox Platform Made Easy

Estimated Reading Time: 5 minutes

The LinuxKit GitHub repository has already crossed 1,800 commits and 3,600+ stars, and has been forked 420+ times since April 2017, when it was open sourced by Docker Inc. for the first time. LinuxKit today supports dozens of platforms falling under the cloud, local hypervisor and bare-metal categories. Recently, arm64 support was added, and I published a blog post which talks about building LinuxKit for the minimalist Raspberry Pi 3. Improved Kubernetes support has been another enhancement, and you can follow my blog to build a multi-node Kubernetes cluster using LinuxKit. In case you want to build a multi-node Kubernetes cluster using the newly added CRI-containerd & Moby, don't miss out on this blog post.

In case you're very new to LinuxKit, it is a toolkit for building secure, portable & lean operating systems for containers, and everything in it runs in a container. LinuxKit uses the linuxkit tool for building, pushing and running virtual machine images, while the moby tool assembles a set of containerized components into an image. The simplest type of image is just a tar file of the contents. A YAML configuration specifies the components used to build up an image, and all components are downloaded at build time. The resulting image is self-contained and immutable, so it can be tested reliably for continuous delivery.

A Look at LinuxKit Architecture

At the base of LinuxKit there is a modern Linux kernel, specified as a kernel Docker image containing the kernel itself and a filesystem tarball (e.g., with modules). On top of that, the YAML file describes the following components:

  • init – the base init process Docker image, which is unpacked as the base system; it contains init, containerd and a few tools, and is basically built from pkg/init/.
  • onboot – system containers, executed sequentially in order; they should terminate quickly when done.
  • services – system services, which normally run for the whole time the system is up.
  • files – additional files to add to the image.
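To give you a feel for the format, here is a trimmed sketch of such a YAML file – the section names are the real ones, but the image references below are illustrative placeholders (real files pin each image to a specific tag or hash, which changes from release to release):

kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=ttyS0 console=tty0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
onboot:
  - name: sysctl
    image: linuxkit/sysctl:<tag>
services:
  - name: getty
    image: linuxkit/getty:<tag>
  - name: rngd
    image: linuxkit/rngd:<tag>
files:
  - path: etc/issue
    contents: "welcome to LinuxKit"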

 

What’s New in LinuxKit?

LinuxKit has picked up a number of new features recently – most notably arm64 support, improved Kubernetes support, and the new CRI-containerd integration, all of which we will touch on below.

Early this year, I wrote a blog post which talks about how to manually create a LinuxKit ISO image and then mount it to run under Oracle VirtualBox. The method was complicated, as it required converting the VMDK file into .VDI format first and then registering the VM using the VBoxManage CLI.

Test-Drive LinuxKit OS on Oracle VirtualBox running on macOS Sierra

Now, with the introduction of the linuxkit run vbox CLI, it is just a matter of 2-3 minutes to get it up and running on VirtualBox.

Under this blog post, we will see how a LinuxKit OS can be built and run on Oracle VirtualBox in just a couple of minutes.

Pre-requisites:

  • macOS Sierra
  • Docker for Mac installed on macOS
  • Docker up and running
  • Oracle VirtualBox

Clone the LinuxKit Repository:

git clone https://github.com/linuxkit/linuxkit

Building the LinuxKit Tool

cd linuxkit
make

Place the linuxkit binary under the right executable PATH:

cp bin/linuxkit /usr/local/bin/

Building the ISO Image for VirtualBox

Before we go ahead and build the ISO for VirtualBox, let us look at the newly introduced command-line option:

 

Now you can use the `linuxkit build` option to build the ISO image. Let us look into this sub-command:

 

Let's run the below command to build an iso-bios image from docker.yml, which can be found under the examples directory of the LinuxKit repository.

linuxkit build -format iso-bios --name testbox docker.yml

This builds up the ISO image as shown below:

 

Running the ISO for VirtualBox

Justin Cormack, a LinuxKit maintainer, did a great job introducing the new CLI option linuxkit run vbox, as shown below:

Run the below command to boot the LinuxKit OS as a VM on VirtualBox:

linuxkit run vbox --iso testbox.iso

This will initiate a VM called testbox under VirtualBox, as shown below:

You can verify under VirtualBox Manager:

 

Open up Console to see LinuxKit running under this new VM:

 

So, you can access it either through the terminal or directly under the console, but NOT both at the same time.

Accessing Docker Service Container

To access the Docker service container, first list out the running service containers:

ctr tasks ls

This will list out the running service containers as shown below:
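For a docker.yml-based image, the listing looks roughly like this – the task names come from the services section of the YAML, and the PIDs below are illustrative and will differ on your system:

ctr tasks ls
TASK      PID    STATUS
rngd      430    RUNNING
getty     461    RUNNING
docker    502    RUNNING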

 

Let us enter the docker service container and verify the Docker release version:

ctr tasks exec -t --exec-id 502 docker sh

This will drop you into a shell, as shown. Run the docker version command to verify the currently available Docker release.

Please note that networking doesn't get enabled by default for these service containers. You will need to manually enable the "Cable Connected" option under VirtualBox > Settings > Network > Advanced to get an IP address assigned to the network interface.
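If you prefer the command line, the same toggle can be flipped with VBoxManage while the VM is running (testbox being the VM name we chose earlier, and adapter 1 being the default NIC):

VBoxManage controlvm testbox setlinkstate1 on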

 

I have raised this issue with the LinuxKit team and you can track it here.

Let us go back to the terminal and try to pull a few Docker images, as shown below:
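For example, from the shell inside the docker service container (alpine and nginx here are just sample images; any public image will do):

/ # docker pull alpine
/ # docker pull nginx
/ # docker images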

 

Wow! So we have the Docker service container running inside LinuxKit OS on top of the Oracle VirtualBox platform, flawlessly.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

Interested in reading more on LinuxKit? Check these out –

Building a secure Docker Host VM on VMware ESXi using LinuxKit & Moby

Building a Secure VM based on LinuxKit on Microsoft Azure Platform

Building Docker For Mac 17.06 Community Edition using Moby & LinuxKit

Running LinuxKit on AWS Platform made easy

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

Building a minimalistic LinuxKit OS on Raspberry Pi 3 using Moby

Estimated Reading Time: 4 minutes

The LinuxKit GitHub repository recently crossed 3,495 stars, has been forked around 410+ times, and has added 80+ contributors. Just 7 months old, the project has already gained a lot of momentum across the Moby community. In case you're new to it, LinuxKit is the first use case for the Moby Project, which is basically a toolkit for building secure, portable, and lean operating systems for containers. It uses YAML files to describe the complete system, and Moby uses them to assemble the boot image and verify the signature. LinuxKit uses the Moby tool for image builds, and the LinuxKit tool for pushing and running VM images.

At the last DockerCon EU 2017, Docker Inc. announced the latest additions, which include –

  • Support for arm64 architecture
  • Improved platform support (Azure, IBM Bluemix, AWS, Google Cloud Platform, VMware, Hyper-V, Packet.net etc.)
  • Upcoming Kubernetes support (I tried it and believe me it was a great experience)
  • Linux containers on Windows (LCOW) preview support
  • Multi-arch build system
  • Fully immutable system images
  • Namespace sharing
  • Support for the latest Linux kernels

LinuxKit today supports building & booting images on a Raspberry Pi 3 using the mainline arm64 kernels. Under this blog post, I will show how to build a LinuxKit OS for the Raspberry Pi 3.

Pre-requisites:

  1. openSUSE Leap 42.2
  2. Raspberry Pi 3
  3. SD card
  4. Win32 Disk Imager
  5. SD Formatter (Windows client) or Etcher (for macOS)

[PLEASE NOTE – In case you're trying to build LinuxKit on the Raspbian OS running on your Pi box, it is NOT going to work. The reason: there is no 32-bit kernel built for LinuxKit. As of today, only a 64-bit kernel is available for LinuxKit OS. If you're planning to build LinuxKit with a 32-bit kernel, feel free to share your findings. I picked the openSUSE OS as it has an arm64 image for the Raspberry Pi 3.]

Early this year, I wrote a blog on how to test-drive Docker 1.12 on the first 64-bit ARM openSUSE running on Raspberry Pi 3. You can refer to it to get started.

  1. Download openSUSE Leap 42.2 from this link and follow the blog thoroughly to get it ready on an SD card, or flash it from a Linux box as shown below.
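If you are flashing from a Linux machine instead of the Windows/macOS tools listed above, the raw image can also be written with dd. A sketch – the filename pattern and the /dev/sdX device are placeholders for your actual download and SD card reader:

# double-check the target device first (e.g. with lsblk); dd will overwrite it
xzcat openSUSE-Leap42.2-ARM-*-raspberrypi3.aarch64.raw.xz | sudo dd of=/dev/sdX bs=4M
sync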

 

  

  2. Install Docker on openSUSE Leap 42.2

By default, openSUSE Leap comes with the Docker 1.12 edition. Though it is pretty old, we can still build LinuxKit on this operating system. Ensure that Docker is up and running on your Raspberry Pi 3 box:

systemctl restart docker 
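You can quickly confirm that the daemon is enabled and responding before moving on:

systemctl enable docker
systemctl status docker
docker version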

 

  3. Building the Moby tool on Raspberry Pi

You will need at least Go version 1.8 to get Moby and LinuxKit built on openSUSE Leap 42.2.

go get -u github.com/moby/tool/cmd/moby
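go get drops the moby binary under $GOPATH/bin, so make sure that directory is on your PATH – a sketch assuming Go 1.8's default GOPATH:

# Go 1.8 defaults GOPATH to $HOME/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
which moby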

 

  4. Building LinuxKit

 

linux:~/linuxkit # go get -u github.com/linuxkit/linuxkit/src/cmd/linuxkit
github.com/linuxkit/linuxkit/src/cmd/linuxkit
/usr/lib64/go/1.8/pkg/tool/linux_arm64/link: running gcc failed: fork/exec /usr/bin/gcc: cannot allocate memory

In case you encounter the above error message, you can fix it by adding more swap space.

Step 1 – Create a 1 GB file

We use the dd command to create the 1 GB file, basically writing zeroes into the file /swap_1 in the "/" filesystem.

linux:~/linuxkit # dd if=/dev/zero of=/swap_1 bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 64.4194 s, 15.9 MB/s

Step 2 – Set up the file as a swap area.

linux:~/linuxkit # mkswap /swap_1
mkswap: /swap_1: insecure permissions 0644, 0600 suggested.
Setting up swapspace version 1, size = 976.6 MiB (1023995904 bytes)
no label, UUID=e2f577a5-ff87-40ee-adfc-78e8feef3a36

Step 3 – Enable the swap file so the system starts paging physical memory pages onto it.

linux:~/linuxkit # swapon /swap_1
swapon: /swap_1: insecure permissions 0644, 0600 suggested.

Step 4 – Verify the new free memory:

linux:~/linuxkit # free -tom
             total       used       free     shared    buffers     cached
Mem:           785        745         40          3          1        688
Swap:         1462         65       1396
Total:        2247        810       1437
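Note that swapon only lasts until the next reboot. To make the swap file persistent (and to silence the permissions warning mkswap printed above), tighten its mode and add an /etc/fstab entry:

chmod 600 /swap_1
echo '/swap_1 swap swap defaults 0 0' >> /etc/fstab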

 

Now you can build the LinuxKit tool flawlessly. Once installed, you can verify it with the below command (a quick sanity check; the exact output will vary with the commit you built from):
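linuxkit version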

 

Let us build a minimal LinuxKit OS using the below command:

moby build -format rpi3 minimal.yml

 

 

Now it's time to convert the minimal.tar output into ISO format. After extracting the tarball into a directory named minimal, you can leverage the hdiutil tool on macOS:

bash-3.2# hdiutil makehybrid -o /Users/ajeetraina/Desktop/linuxkit.iso minimal -iso -joliet
Creating hybrid image…

By now, you should be able to boot the minimal LinuxKit ISO on your Raspberry Pi system via the u-boot tool. The contents can also be extracted directly onto a FAT32-formatted SD card to boot your Raspberry Pi.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

How to Build Kubernetes Cluster using CRI-containerd & Moby

Estimated Reading Time: 5 minutes

Let's talk about CRI vs. CRI-containerd…

The Container Runtime Interface (a.k.a. CRI) is a standard way to integrate container runtimes with Kubernetes. It is a new plugin interface which enables the kubelet to use a wide variety of container runtimes, without the need to recompile. Prior to the existence of CRI, container runtimes (e.g., docker, rkt) were integrated with the kubelet by implementing an internal, high-level interface in the kubelet. Formerly known as OCID, CRI-O is strictly focused on OCI-compliant runtimes and container images.

Last month, the new CRI-O version 1.2 got announced. This is a minor release of the v1.0.x cycle, supporting Kubernetes 1.7. With CRI, Kubernetes can be container runtime-agnostic, which gives flexibility to the providers of container runtimes, who no longer need to implement features that Kubernetes already provides. CRI-O allows you to run containers directly from Kubernetes – without any unnecessary code or tooling.

For those readers who compare the Docker runtime engine vs. CRI-O, here is an important note –

CRI-O is not really a competitor to the Docker project – in fact, it shares the same OCI runC container runtime used by the Docker engine and the same image format, and it allows for the use of docker build and related tooling. Through this new runtime, it is expected to bring developers more flexibility by adding other image builders and tooling in the future. Please remember that CRI is not an interface for a full-fledged, all-inclusive container runtime.

What does the CRI-O workflow look like?

When Kubernetes needs to run a container, it talks to CRI-O, and the CRI-O daemon works with the container runtime to start the container. When Kubernetes needs to stop the container, CRI-O handles that. Everything just works behind the scenes to manage Linux containers.

The kubelet is a node agent with a gRPC client which talks to a gRPC server, rightly called the shim. The shim then talks to the container runtime. Today, the default implementation is the Docker shim, which talks to the Docker daemon using the classical APIs. This works really well.

CRI consists of a protocol buffers- and gRPC-based API, plus libraries, with additional specifications and tools under active development.

Introducing CRI-containerd

CRI-containerd is a containerd-based implementation of CRI. The project started in April 2017. In order to have Kubernetes consume containerd for its container runtime, the containerd team implemented the CRI interface. CRI is responsible for the distribution and lifecycle of pods and containers running on a cluster. The scope of containerd 1.0 aligns with the requirements of CRI. In case you want to deep-dive into it, don't miss out on this link.

Below is how the CRI-containerd architecture looks:

In my last blog post, I talked about how to set up a multi-node Kubernetes cluster using LinuxKit. It basically used Docker Engine to build minimal & immutable Kubernetes OS images with LinuxKit. Under this blog post, we will see how to build them using CRI-containerd.

Infrastructure Setup:

  • OS – Ubuntu 17.04 
  • System – ESXi VM
  • Memory – 8 GB
  • SSH key generated using ssh-keygen -t rsa and put under the $HOME/.ssh/ folder, as shown below
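Generating the key is a one-liner; accept the defaults when prompted:

ssh-keygen -t rsa
# press Enter at the prompts to keep the default path; the pair lands here:
ls $HOME/.ssh/id_rsa $HOME/.ssh/id_rsa.pub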

Caution – Please note that this is still an experimental version. It is currently under active development, and hence don't expect it to work as a full-fledged K8s cluster.

Copy the below script to your Linux system and execute it –

Did you notice the parameter KUBE_RUNTIME=cri-containerd?

The above parameter is used to specify that we want to build the minimal and immutable K8s ISO image using CRI-containerd. If you don't specify the parameter, Docker Engine is used to build the ISO image.
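To give a feel for the flow, here is a sketch of its core steps, assuming the kubernetes project directory inside the LinuxKit repository – the Makefile target and boot helper names here are from memory, so double-check them against the project README:

git clone https://github.com/linuxkit/linuxkit
cd linuxkit/projects/kubernetes
# select CRI-containerd instead of the default Docker Engine runtime
KUBE_RUNTIME=cri-containerd make build-vm-images
# boot the master VM once the images are built
./boot.sh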

The above script is going to take some time to finish.

It's always a good idea to see which CRI-containerd-specific files are present.

Let us look into the CRI-containerd directory –

Looking at the content of cri-containerd.yml, it defines a service called cri-containerd.

The below list of files gets created, with output files like kube-master-efi.iso landing right under the kubernetes directory –

By the end of this script, you should be able to see the Kubernetes LinuxKit OS booting up –

CRI-containerd lets the user containers in the same sandbox share the network namespace, hence you will see the message "This system is namespaced".

You can go through the overall screen logs to watch how it boots up –

 

Next, let us try to see what container services are running:

 

You will notice that the cri-containerd service is up and running.

Let us enter one of the task containers and initialize the master node with the kubeadm-init script, which comes by default –

(ns: getty) linuxkit-ee342f3aebd6:~# ctr tasks exec -t --exec-id 654 kubelet sh
/ # ls

Execute the below script –

/ # kubeadm-init.sh
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [linuxkit-ee342f3aebd6 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
…..

Now you should be able to join the other worker nodes, as sketched below (I have discussed the further steps under this link).
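For reference, the join command you run on each worker has the usual kubeadm shape; the token, master IP and CA cert hash are placeholders to be copied from your own kubeadm-init output above:

kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>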

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.