Top 10 Reasons Why LinuxKit is Better than a Traditional OS Distribution

 

“LinuxKit is NOT designed to replace any traditional OS like Alpine, Ubuntu, Red Hat etc. It is an open-source toolbox for building fine-tuned Linux-based operating systems that run in containers and are hence lean, portable & secure out of the box.”

                                                                                                                     ~ Collabnix

 

How is LinuxKit different from other available GNU/Linux operating systems? How is it different from HashiCorp Packer & cloud instances? Why should I consider LinuxKit? I came across these queries in various community forums and discussions, and thought of coming up with a write-up which talks about the factors that differentiate LinuxKit from other available operating systems, tools & utilities.

In case you’re new, LinuxKit is a toolkit for building minimal Linux distributions that are lean, secure and portable. LinuxKit is built with the intention of producing a distribution that has just enough Linux to accomplish a task, and does it in a way that is more secure. It uses Moby to build the distribution images and the linuxkit tool to run them on local hypervisors, in the cloud and on bare metal.

Under this blog post, I will go through 10 good reasons why LinuxKit is better than other available GNU/Linux open source distributions:

1. Smaller in size (< 200 MB) compared to the GB-scale size of a traditional OS

The idea behind LinuxKit is that you start with a minimal Linux kernel — the base distro is only 35MB — and add literally only what you need. Once you have that, you can build your application on it, and run it wherever you need to. 

                                                                                                                     ~ source 

LinuxKit is not a full host operating system. It primarily has two jobs to accomplish: first, run containerd containers, and second, be secure. LinuxKit provides the ability to build an extremely small distribution (~50 MB) stripped of everything but that which is required to run containers. The init system and all other userland system components (e.g. dhcpd, sshd) are containers themselves and as such can be swapped out, or others plugged in. The whole OS is immutable (unless data volumes are configured), and will run anywhere one finds silicon or a hypervisor: Windows, macOS, IoT devices, the cloud etc. The system does not contain extraneous packages or drivers by default. Because LinuxKit is customizable, it is up to individual operators to include any additional bits they may require.

Let us try building a LinuxKit ISO out of minimal.yml, which is available under the LinuxKit repository, and verify its size.
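For reference, here is a sketch of what minimal.yml contains – the image tags below are placeholders rather than the exact pinned versions, so consult the repository copy before building:

kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0 console=ttyS0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty
    image: linuxkit/getty:<tag>
    env:
      - INSECURE=true
trust:
  org:
    - linuxkit

First, build the linuxkit tool itself: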

git clone https://github.com/linuxkit/linuxkit
cd linuxkit && make
cp bin/linuxkit /usr/local/bin/

Building LinuxKit ISO

Ajeets-MacBook-Air:examples ajeetraina$  linuxkit build -format iso-bios -name kitiso minimal.yml

This builds the minimal LinuxKit ISO.

Let us verify its size as shown below:

Ajeets-MacBook-Air:examples ajeetraina$ du -sh * | grep linuxkit.iso 
133M linuxkit.iso
Ajeets-MacBook-Air:examples ajeetraina$ file linuxkit.iso 
linuxkit.iso: DOS/MBR boot sector; partition 1 : ID=0x17, active, start-CHS (0x0,0,1), end-CHS (0x84,63,32), startsector 0, 272384 sectors
Ajeets-MacBook-Air:examples ajeetraina$

 

A traditional OS, on the other hand, is a complete operating system and includes hundreds or even thousands of application packages.

2. Minimal Provisioning/Boot Time (1.5 – 3 minutes)

LinuxKit takes barely 1.5 to 3 minutes to get up and running on a local hypervisor, in the cloud or on a bare metal system. This compares well against the typical 3 to 5 minutes of provisioning time for a traditional OS distribution. LinuxKit is an entirely immutable system, coming in at around 50 MB with nothing extraneous to what is needed to run containers. The root filesystem is read-only, making it stateless and tamper-proof. LinuxKit runs almost everything – onboot processes and continuous services – in a container, which makes the boot quite fast. Even the init phase – where the OS image is booted – is configured by copying files from OCI images.

While booting up the minimal LinuxKit ISO, containerd came up in just 0.034515 seconds.

 

3. Build-time Package Updates (LinuxKit) vs. Run-time Package Updates (Traditional OS)

LinuxKit uses the Moby tool, which assembles a set of containerized components into an image. It uses a YAML configuration which specifies the components used to build up the image. All components are downloaded at build time to create the image. The image is self-contained and immutable, so it can be tested reliably for continuous delivery. Most importantly, the build itself takes a matter of seconds and is eminently reproducible, making it an ideal candidate to pass through a CI system. This is a definite advantage over a traditional GNU/Linux operating system.

4. Built-in Security (LinuxKit) vs. Base Security (Traditional OS)

LinuxKit allows users to create very secure Linux subsystems because it is designed around containers. All of the processes, including system daemons, run in containers, enabling users to assemble a Linux subsystem with only the needed services. As a result, systems created with LinuxKit have a smaller attack surface than general purpose systems.

LinuxKit is architected to be secure by default. LinuxKit’s build process leverages Alpine Linux’s hardened userspace tools such as Musl libc, and compiler options that include -fstack-protector and position-independent executable output. 

Most importantly, LinuxKit uses modern kernels, and updates frequently following new releases.  LinuxKit tracks new kernel releases very closely, and also follows best practice settings for the kernel configuration from the Kernel Self Protection Project and elsewhere.

The core system components included in the LinuxKit userspace are written in type-safe languages, such as Rust, Go and OCaml, and run with maximum privilege separation and isolation. LinuxKit’s build process heavily leverages Docker images for packaging. Of note, all intermediate build images are referenced by digest to ensure reproducibility across LinuxKit builds. Tags are mutable, and thus subject to override (intentionally or maliciously) – referencing by digest mitigates classes of registry poisoning attacks in LinuxKit’s buildchain. Certain images, such as the kernel image, are signed by LinuxKit maintainers using Docker Content Trust, which guarantees authenticity, integrity, and freshness of the image.
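For illustration only – the digest below is a placeholder, not a real hash – the difference between digest pinning and tag pinning in a YAML file looks like this:

# immutable reference, resolved purely by content hash
image: linuxkit/kernel@sha256:<digest>

# mutable reference, the tag can later be pointed at different content
image: linuxkit/kernel:4.10.x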

If you compare it with the traditional OS, many kernel bugs usually lurk in the codebase for years. Therefore, it is imperative to not only patch the kernel to fix individual vulnerabilities but also benefit from the upstream security measures designed to prevent classes of kernel bugs.

5. Container & Cloud Native at the same time

It is important to note that LinuxKit is built with containers, for running containers. LinuxKit today is supported on local hypervisors, in the cloud and on bare metal systems. The same minimal LinuxKit ISO image, which is container native, runs flawlessly on cloud platforms like Google Cloud Platform, Amazon Web Services & Microsoft Azure.

If we talk about the traditional OSes available, the distribution is usually customized by cloud vendors to fit their cloud, and is quite different from the one available for bare metal systems – for example, an Amazon Machine Image (AMI) or preemptible Google Cloud instances.

6. Batteries included but removable/swappable

LinuxKit is built on the philosophy of “batteries included but swappable”. Everything is replaceable and customizable under LinuxKit; that’s one of its unique features. The YAML format specifies the components used to build an ISO image. This is made possible via the moby tool, which assembles a set of containerized components into an image. Let us look at two YAML files which show how this philosophy really works:

  • Minimal YAML file to build LinuxKit OS
  • YAML file which builds LinuxKit OS with SSH Enabled

If you compare the SSH-enabled YAML file with the minimal YAML, just an sshd service container has been added, after which a new LinuxKit ISO can be built and booted. You can find various other YAML files under this link.
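As a hedged sketch of that delta (tags and the key path are assumptions – consult the repository’s sshd example for the authoritative version):

services:
  - name: sshd
    image: linuxkit/sshd:<tag>
files:
  - path: root/.ssh/authorized_keys
    source: ~/.ssh/id_rsa.pub
    mode: "0600"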

7. Immutable in Production

Lots of companies – small or very large – have used immutability as a core concept of their infrastructure. Immutable infrastructure basically consists of immutable components which are replaced on every deployment, rather than being updated in place. These components are usually started from a common image that is built once per deployment and can be tested and validated. The common image can be built through automation, but doesn’t have to be. It is important to note that immutability is independent of any tool or workflow for building the images. Its best use case is in a cloud or virtualized environment.

Why is it important?

Immutable components as part of your infrastructure are a way to reduce inconsistency in your infrastructure and improve trust in your deployment process.

Toolkits like LinuxKit have made building immutable components very easy. Together with existing cloud infrastructure, it is a powerful concept that helps you build better and safer infrastructure.

8. Designed to be managed by external tooling

LinuxKit today can be managed flawlessly with popular tooling like InfraKit, Terraform or CloudFormation. Such external tooling manages the update process from outside LinuxKit, including doing rolling cluster upgrades, to make sure distributed applications stay up and responsive.

Updates may preserve the state disk used by applications if needed, either on the same physical node, or by reattaching a virtual cloud volume to a new node.

Soon after DockerCon 2017 Austin, I wrote a blog post on how InfraKit & LinuxKit work together for building immutable infrastructure:

Why Infrakit & LinuxKit are better together for Building Immutable Infrastructure?

If you want to know how LinuxKit & Terraform can work together, you should look at this link too.

 

9. Enough to bootstrap distributed applications

LinuxKit is designed for building and running clustered applications, including but not limited to container orchestrators such as Docker Swarm or Kubernetes. In a production environment, most users will run a cluster of instances, usually hosting distributed applications such as etcd, Docker, Kubernetes or distributed databases. LinuxKit provides examples of how to run these effectively, largely using tooling like InfraKit, although other machine orchestration systems can equally well be used, for example Terraform or VMware.

LinuxKit is gaining momentum as a toolkit for building custom minimal, immutable Linux distributions. The integration of InfraKit with LinuxKit helps users build and deploy custom OS images to a variety of targets – from a single VM instance on a Mac (via xhyve/HyperKit, no VirtualBox) to a cluster of them, as well as booting a remote ARM host on Packet.net from the local laptop via an ngrok tunnel.

 

10. Eliminates Cloud-Provider-Specific Base Image Variance

If we talk about the cloud, one serious challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision n number of servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required. LinuxKit solves the base software configuration problem by allowing users to build a stripped-down version of a traditional OS based on their own requirements.

As said earlier, LinuxKit is not here to replace Alpine. LinuxKit uses the minimalist Alpine Linux distribution by default as the foundation of its official container images, and will continue to do so in the future. Alpine is seen as an ideal all-purpose generic base for running heavy-lifting software like Redis and Nginx within containers. You can also swap Alpine for Debian/Ubuntu/CentOS, if you wish.

Interested to learn more about LinuxKit? Head over to this series of articles.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


Running LinuxKit locally on Oracle VirtualBox Platform Made Easy

The LinuxKit GitHub repository has already crossed 1800 commits, 3600+ stars & been forked 420+ times since April 2017, when it was open sourced by Docker Inc. for the first time. LinuxKit today supports dozens of platforms falling under the cloud, local hypervisor & bare metal categories. Recently, arm64 support was added, and I published a blog post which talks about building LinuxKit for the minimalist Raspberry Pi 3. Improved Kubernetes support has been another enhancement, and you can follow my blog to build a Multi-Node Kubernetes cluster using LinuxKit. In case you want to build a Multi-Node Kubernetes cluster using the newly added CRI-containerd & Moby, don’t miss out on this blog post.

 

 

In case you’re very new to LinuxKit, it is a toolkit for building secure, portable & lean operating systems for containers. It uses moby tooling to build system images, and everything runs in a container. LinuxKit uses the linuxkit tool for building, pushing and running Virtual Machine images. The moby tool assembles a set of containerized components into an image; the simplest type of image is just a tar file of the contents. The YAML configuration specifies the components used to build up an image. All components are downloaded at build time to create the image. The image is self-contained and immutable, so it can be tested reliably for continuous delivery.

A Look at LinuxKit Architecture

At the base of LinuxKit there is a modern Linux kernel, specified as a kernel Docker image containing the kernel and a filesystem tarball, e.g. containing modules. The minimal init is the base init process Docker image, which is unpacked as the base system, containing init, containerd and a few tools; it is basically built from pkg/init/. The onboot containers are system containers, executed sequentially in order; they should terminate quickly when done. The services are the system services, which normally run for the whole time the system is up. The files section lists additional files to add to the image.
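As a rough skeleton (tags are placeholders), a LinuxKit YAML mirrors this architecture one-to-one:

kernel:
  image: linuxkit/kernel:<version>   # kernel plus a filesystem tarball of modules
init:
  - linuxkit/init:<tag>              # unpacked as the base system
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  - name: dhcpcd                     # runs sequentially, should exit quickly
    image: linuxkit/dhcpcd:<tag>
services:
  - name: getty                      # long-running system services
    image: linuxkit/getty:<tag>
files:
  - path: etc/issue                  # additional files baked into the image
    contents: "welcome to LinuxKit"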

 

What’s New in LinuxKit?

Recent additions include arm64 support and improved Kubernetes support, both covered above.

 

 

Early this year, I wrote a blog post which talks about how to manually create a LinuxKit ISO image and then mount it to run under Oracle VirtualBox. The method was complicated, as it required converting the VMDK file into .VDI format first and then registering the VM using the VBoxManage CLI.

Test-Drive LinuxKit OS on Oracle VirtualBox running on macOS Sierra

Now, with the introduction of the linuxkit run vbox CLI, it is just a matter of 2-3 minutes to get it up and running on VirtualBox.

Under this blog post, we will see how LinuxKit OS can be built and run on Oracle VirtualBox in just 2 minutes.

Pre-requisites:

  • MacOS Sierra 
  • Docker for Mac installed on MacOS
  • Docker Up and Running
  • Oracle VirtualBox 

Clone the LinuxKit Repository:

git clone https://github.com/linuxkit/linuxkit

Building the LinuxKit Tool

cd linuxkit
make

Place LinuxKit under the right executable PATH:

cp bin/linuxkit /usr/local/bin/

Building the ISO image for VirtualBox

Before we go ahead and build the ISO for VirtualBox, let us look at the newly introduced command line option.

 

You can now use the linuxkit build sub-command to build the ISO image.

 

Let’s run the below command to build the iso-bios format of docker.yml, which can be found under the linuxkit/examples directory in the LinuxKit repository.

linuxkit build -format iso-bios --name testbox docker.yml

This builds up the ISO image.
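In case you are wondering what sets docker.yml apart from the minimal example: it adds a Docker-in-Docker service on top of the base system. Roughly – the tag, binds and command here are recollections rather than the exact file, so check examples/docker.yml for the real thing:

services:
  - name: docker
    image: docker:17.06.0-ce-dind
    capabilities:
      - all
    net: host
    binds:
      - /etc/resolv.conf:/etc/resolv.conf
      - /var/lib/docker:/var/lib/docker
      - /lib/modules:/lib/modules
    command: ["/usr/local/bin/docker-init", "/usr/local/bin/dockerd"]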

 

Running the ISO for VirtualBox

Justin Cormack, a LinuxKit maintainer, did a great job introducing the new CLI option linuxkit run vbox, shown below:

Run the below command to boot the LinuxKit OS as a VM on VirtualBox:

linuxkit run vbox --iso testbox.iso

This will create a VM called testbox under VirtualBox as shown below:

You can verify under VirtualBox Manager:

 

Open up Console to see LinuxKit running under this new VM:

 

So, you can access it either through the terminal or directly via the console, but NOT both at the same time.

Accessing Docker Service Container

To access the Docker service container, first list out the running service containers:

ctr tasks ls

This will list out the running service containers as shown below:

 

Let us enter the docker service container and verify the Docker release version:

ctr tasks exec -t --exec-id 502 docker sh

This drops you into a shell, as shown. Run the docker version command to verify the currently available Docker release.

 

 

Please note that networking is not enabled by default for these service containers. You will need to manually enable the “Cable Connected” option under VirtualBox > Settings > Network > Advanced to get an IP address assigned to the network interface.

 

I have raised this issue with the LinuxKit team and you can track it here.

Let us go back to the terminal and try to pull a few Docker images as shown below:

 

Wow! So we have the Docker service container running inside LinuxKit OS on top of the Oracle VirtualBox platform, flawlessly.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

Interested to read more on LinuxKit? Check this out – 

Building a secure Docker Host VM on VMware ESXi using LinuxKit & Moby

Building a Secure VM based on LinuxKit on Microsoft Azure Platform

Building Docker For Mac 17.06 Community Edition using Moby & LinuxKit

Running LinuxKit on AWS Platform made easy

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Building a minimalistic LinuxKit OS on Raspberry Pi 3 using Moby

The LinuxKit GitHub repository recently crossed 3495 stars, has been forked around 410+ times and has added 80+ contributors. A project just 7 months old, it has already gained a lot of momentum across the Moby community. In case you’re new, LinuxKit is the first use case for the Moby Project, which is basically a toolkit used for building secure, portable, and lean operating systems for containers. It uses YAML files to describe the complete system, and Moby uses them to assemble the boot image and verify the signature. LinuxKit uses the Moby tool for image builds, and the linuxkit tool for pushing and running VM images.

At the last DockerCon 2017 EU, Docker Inc. announced the latest additions, which include –

  • Support for arm64 architecture
  • Improved platform support (Azure, IBM Bluemix, AWS, Google Cloud Platform, VMware, Hyper-V, Packet.net etc.)
  • Upcoming Kubernetes support (I tried it and believe me it was a great experience)
  • Linux containers on Windows (LCOW) preview support
  • Multi-arch build system
  • Fully immutable system images
  • Namespace sharing
  • Support for the latest Linux kernels

LinuxKit today supports building & booting images on a Raspberry Pi 3 using the mainline arm64 kernels. Under this blog post, I will show how to build a LinuxKit OS on the Raspberry Pi 3.

Pre-requisites:

  1. OpenSUSE 42.2 Leap 
  2. Raspberry Pi 3
  3. SD card
  4. Win32 Disk Imager 
  5. SD Formatter(Windows Client) or Etcher(for MacOS)

[PLEASE NOTE – In case you’re trying to build LinuxKit on Raspbian OS running on your Pi box, it is NOT going to work. Reason – there is no 32-bit kernel built for LinuxKit; as of today, only a 64-bit kernel is available for LinuxKit OS. If you’re planning to build LinuxKit with a 32-bit kernel, feel free to share your findings. I picked openSUSE as it has an arm64 image for the Raspberry Pi 3.]

Early this year I wrote a blog on how to test-drive Docker 1.12 on the first 64-bit ARM openSUSE running on the Raspberry Pi 3. You can refer to it to get started.

  1. Download OpenSUSE 42.2 Leap from this link and follow the blog thoroughly to get it ready on the SD card.

 

  

  2. Install Docker on OpenSUSE 42.2

By default, OpenSUSE Leap comes with Docker 1.12 edition. Though it is pretty old, we can still build LinuxKit on the operating system. Ensure that Docker is up and running on your Raspberry Pi 3 box:

systemctl restart docker 

 

  3. Building the Moby tool on the Raspberry Pi

You will need at least Go version 1.8 to get Moby and LinuxKit built on openSUSE 42.2.

go get -u github.com/moby/tool/cmd/moby

 

  4. Building LinuxKit

 

linux:~/linuxkit # go get -u github.com/linuxkit/linuxkit/src/cmd/linuxkit
github.com/linuxkit/linuxkit/src/cmd/linuxkit
/usr/lib64/go/1.8/pkg/tool/linux_arm64/link: running gcc failed: fork/exec /usr/bin/gcc: cannot allocate memory

In case you encounter the above error message, you can fix it by adding more swap space.

Step 1: Create a 1 GB file

We use the dd command to create a 1 GB file, basically writing zeroes into the file /swap_1 in the “/” filesystem.

linux:~/linuxkit # dd if=/dev/zero of=/swap_1 bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 64.4194 s, 15.9 MB/s

Step 2: Set up the file as a swap area

linux:~/linuxkit # mkswap /swap_1
mkswap: /swap_1: insecure permissions 0644, 0600 suggested.
Setting up swapspace version 1, size = 976.6 MiB (1023995904 bytes)
no label, UUID=e2f577a5-ff87-40ee-adfc-78e8feef3a36

Step 3: Enable the swap file so the system can start paging physical memory pages onto it

linux:~/linuxkit # swapon /swap_1
swapon: /swap_1: insecure permissions 0644, 0600 suggested.

Step 4: Verify the new free memory:

linux:~/linuxkit # free -tom
             total       used       free     shared    buffers     cached
Mem:           785        745         40          3          1        688
Swap:         1462         65       1396
Total:        2247        810       1437
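Note that swapon does not survive a reboot. If you want the swap file re-enabled automatically, the standard Linux convention (generic, not openSUSE-specific) is an /etc/fstab entry like:

/swap_1  swap  swap  defaults  0  0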

 

Now you can build the LinuxKit tool flawlessly. Once installed, you can verify it with the below command:

 

Let us build a minimal LinuxKit OS using the below command:

moby build -format rpi3 minimal.yml

 

 

Now it’s time to convert minimal.tar into ISO format. You can leverage the following tool:

bash-3.2# hdiutil makehybrid -o /Users/ajeetraina/Desktop/linuxkit.iso minimal -iso -joliet
Creating hybrid image…

By now, you should be able to boot the minimal LinuxKit ISO on your Raspberry Pi system via the u-boot tool. It can also be extracted directly onto a FAT32-formatted SD card to boot your Raspberry Pi.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


How to Build Kubernetes Cluster using CRI-containerd & Moby

Let’s talk about CRI vs. CRI-containerd…

The Container Runtime Interface (a.k.a. CRI) is a standard way to integrate a container runtime with Kubernetes. It is a plugin interface which enables kubelet to use a wide variety of container runtimes, without the need to recompile. Prior to the existence of CRI, container runtimes (e.g., docker, rkt) were integrated with kubelet by implementing an internal, high-level interface in kubelet. Formerly known as OCID, CRI-O is strictly focused on OCI-compliant runtimes and container images.

Last month, the new CRI-O version 1.0.2 got announced – a minor release of the v1.0.x cycle supporting Kubernetes 1.7. With CRI, Kubernetes can be container runtime-agnostic. This provides flexibility to the providers of container runtimes, who don’t need to implement features that Kubernetes already provides. CRI-O allows you to run containers directly from Kubernetes – without any unnecessary code or tooling.

For those readers who compare the Docker runtime engine vs. CRI-O, here is an important note –

CRI-O is not really competition to the Docker project – in fact it shares the same OCI runc container runtime used by the Docker engine and the same image format, and allows for the use of docker build and related tooling. Through this new runtime, it is expected to bring developers more flexibility by adding other image builders and tooling in the future. Please remember that CRI is not an interface for a full-fledged, all-inclusive container runtime.

What does the CRI-O workflow look like?

When Kubernetes needs to run a container, it talks to CRI-O, and the CRI-O daemon works with the container runtime to start the container. When Kubernetes needs to stop the container, CRI-O handles that. Everything just works behind the scenes to manage Linux containers.

 

 

Kubelet is a node agent with a gRPC client which talks to a gRPC server, rightly called the shim. The shim then talks to the container runtime. Today the default implementation is the Docker shim, which talks to the Docker daemon using the classic APIs. This works really well.

CRI consists of a protocol buffers and gRPC API, and libraries, with additional specifications and tools under active development.

Introducing CRI-containerd

CRI-containerd is a containerd-based implementation of CRI. The project started in April 2017. In order to have Kubernetes consume containerd for its container runtime, the containerd team implemented the CRI interface. CRI is responsible for distribution and the lifecycle of pods and containers running on a cluster. The scope of containerd 1.0 aligns with the requirements of CRI. In case you want to deep-dive into it, don’t miss out on this link.

Below is how the CRI-containerd architecture looks:

                                                                                                             

In my last blog post, I talked about how to set up a Multi-Node Kubernetes cluster using LinuxKit. It basically used Docker Engine to build minimal & immutable Kubernetes OS images with LinuxKit. Under this blog post, we will see how to build them using CRI-containerd.

Infrastructure Setup:

  • OS – Ubuntu 17.04 
  • System – ESXi VM
  • Memory – 8 GB
  • SSH Key generated using ssh-keygen -t rsa and put under $HOME/.ssh/ folder

Caution – Please note that this is still experimental. It is currently under active development, hence don’t expect it to work as a full-fledged K8s cluster.

Copy the below script to your Linux system and execute it –

 

 

Did you notice the parameter KUBE_RUNTIME=cri-containerd?

The above parameter specifies that we want to build a minimal and immutable K8s ISO image using CRI-containerd. If you don’t specify the parameter, Docker Engine is used to build the ISO image instead.
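Since the script itself appears only as an image above, here is the shape of the invocation – the script name below is a stand-in, the KUBE_RUNTIME variable is the point:

KUBE_RUNTIME=cri-containerd ./build-k8s-image.sh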

The above script is going to take some time to finish.

It’s always a good idea to see what CRI-containerd specific files are present. 

Let us look into the cri-containerd directory –

Looking at the content of cri-containerd.yml, it defines a service called cri-containerd.

The below list of files gets created, with output files like kube-master-efi.iso landing right under the kubernetes directory –

 

 

 

By the end of this script, you should be able to see the Kubernetes LinuxKit OS booting up –

 

 

CRI-containerd lets user containers in the same sandbox share the network namespace, hence you will see the message “This system is namespaced”.

You can find the overall screen logs to watch how it boots up –

 

Next, let us try to see what container services are running:

 

You will notice that the cri-containerd service is up and running.

Let us enter one of the task containers and initialize the master node with the kubeadm-init script, which comes by default –

(ns: getty) linuxkit-ee342f3aebd6:~# ctr tasks exec -t --exec-id 654 kubelet sh
/ # ls

Execute the below script –

/ # kubeadm-init.sh
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [linuxkit-ee342f3aebd6 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
...

Now you should be able to join the other worker nodes (I have discussed the further steps under this link).

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Getting Started with Multi-Node Kubernetes Cluster using LinuxKit

Here’s BIG news for the entire container community – “Kubernetes support is coming to the Docker Platform”. What does this mean? It means that developers and operators can now build apps with Docker and seamlessly test and deploy them using both Docker Swarm and Kubernetes. This announcement was made on the first day of DockerCon EU 2017 Copenhagen, where Solomon Hykes, Founder of Docker, invited Kubernetes Co-Founder Tim Hockin on stage for the first time. Tim welcomed the Docker family to the vibrant Kubernetes world.

In case you missed the news, here are the important takeaways from the Docker-Kubernetes announcement –

  • Kubernetes support will be added to Docker Enterprise Edition for the first time.
  • Kubernetes support will be added optionally to Docker Community Edition for Mac and Windows.
  • Kubernetes and Docker Swarm both orchestrators will be available under Docker Platform.
  • Kubernetes integration will be made available under Moby projects too.

Docker Inc. firmly believes that bringing Kubernetes to Docker will simplify and advance the management of Kubernetes for developers, enterprise IT and hackers, delivering the advanced capabilities of Docker to a broader set of applications.

 

 

In case you want to be first to test drive, click here to sign up and be notified when the beta is ready.

Please be aware that the beta program is expected to be ready at the end of 2017.

Can’t wait for the beta program? Here’s a quick guide on how to get a Multi-Node Kubernetes cluster up and running using LinuxKit.

Under this blog post, we will see how to build a 5-Node Kubernetes cluster using LinuxKit. Here’s the bonus – we will try to deploy a WordPress application on top of the Kubernetes cluster. I will be building it on a macOS Sierra 10.12.6 system running the latest Docker for Mac 17.09.0 Community Edition.

Pre-requisite:

  1. Ensure that the latest Docker release is up and running.

  2. Clone the LinuxKit repository:

sudo git clone https://github.com/ajeetraina/linuxkit
cd linuxkit/projects/kubernetes/

  3. Execute the below script to build the Kubernetes OS images using LinuxKit & Moby:

sudo chmod +x script4k8s.sh
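Then run it (a straightforward execution of the script made executable above):

sudo ./script4k8s.sh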

All I have done is put all the manual steps under a script as shown below:

 

 

This should build the Kubernetes OS image.

Run the below command to know the IP address of the kubelet container:

ip addr show dev eth0

 

You can check the list of tasks running as shown below:

Once the kubelet is ready, it’s time to log into it directly from a new terminal:

sudo ./ssh_into_kubelet.sh 192.168.65.3

You have now SSHed into the kubelet container, rightly named “LinuxKit Kubernetes Project”.

We are going to mark it as “Master Node”.

Initializing the Kubernetes Master Node

Now it’s time to manually initialize the master node with the kubeadm command as shown below:

sudo kubeadm-init.sh

Wait a few minutes till the kubeadm command completes. Once done, you can use the kubeadm join argument as shown below:
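The join command printed by kubeadm follows the standard form. Using the token and master IP that appear in the boot.sh commands later in this post, it looks roughly like:

kubeadm join --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443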

 

Your Kubernetes master node gets initialized successfully as shown below:

 

 

It’s time to run the below list of commands to join the worker nodes:

pwd
/Users/ajeetraina/linuxkit/projects/kubernetes
sudo ./boot.sh 1 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 2 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 3 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443
sudo ./boot.sh 4 --token 4cec03.1a7ccb44115f427a 192.168.65.3:6443

 

This brings up a 5-Node Kubernetes cluster as shown below:

 

Deploying WordPress Application on Kubernetes Cluster:

Let us try to deploy a WordPress application on top of this Kubernetes Cluster.

Head over to the below directory:

cd linuxkit/projects/kubernetes/wordpress

 

Creating a Persistent Volume

kubectl create -f local-volumes.yaml

The content of local-volumes.yaml looks roughly like the sketch below.
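Names, sizes and hostPath locations here are assumptions, so treat the repository copy as authoritative; it defines two hostPath-backed PersistentVolumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-2
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-2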


This creates 2 persistent volumes:

Verifying the volumes:

 

Creating a Secret for MySQL Password

kubectl create secret generic mysql-pass --from-literal=password=mysql123

 
Verifying the secrets using the below command:
kubectl get secrets

 

Deploying MySQL:

kubectl create -f mysql-deployment.yaml

 

Verify that the Pod is running by running the following command:

kubectl get pods

 

Deploying WordPress:

kubectl create -f wordpress-deployment.yaml

By now, you should be able to access the WordPress UI using a web browser.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Building a secure Docker Host VM on VMware ESXi using LinuxKit & Moby

Post DockerCon 2017 @ Austin TX, I raised a feature request titled “LinuxKit command to push vmware.vmdk to remote ESXi datastore”. Within a few weeks, the feature was introduced by the LinuxKit team. Special thanks go to Daniel Finneran, who worked hard to get this feature merged into the LinuxKit main branch.

 

The LinuxKit project is 5 months old now. It has already bagged 3100+ stars, added 69 contributors and 350+ forks to date. If you are pretty new, LinuxKit is not a full host operating system, as it primarily has two jobs: run containerd containers, and be secure. It uses modern kernels, and updates frequently following new releases. As such, the system does not contain extraneous packages or drivers by default. Because LinuxKit is customizable, it is up to individual operators to include any additional bits they may require.

LinuxKit is undoubtedly Secure

The core system components included in the LinuxKit userspace are key to security, written in type-safe languages such as Rust, Go and OCaml, and run with maximum privilege separation and isolation. The project is currently leveraging MirageOS to construct unikernels to achieve this, and that progress can be tracked here: as of this writing, dhcp is the first such type-safe program. There is ongoing work to remove more C components, and to improve, fuzz test and isolate the base daemons. Further rationale about the decision to rewrite system daemons in MirageOS is explained at length in this document. I am planning to come up with a blog post on the “LinuxKit Security” aspect, so keep an eye on this space.

Let’s talk about building a secure Docker Host VM…

I am a great fan of VMware PowerCLI. I have been using it since the time I was working full time at VMware Inc. (during the 2010-2011 timeframe). Today the quickest way to get VMware PowerCLI up and running is by using the PhotonOS-based Docker image. Just one Docker CLI command and you are already inside Photon OS, running PowerShell & PowerCLI to connect to a remote ESXi host and build up VMware infrastructure. Still, this might not give you a secure Docker host environment. If you are really interested in building a secure, portable and lean Docker host operating system, LinuxKit is the right tool. But how?

Under this blog post, I am going to show how Moby & LinuxKit can help you build a secure Docker 17.07 host VM on top of VMware ESXi.

Pre-requisites:

  • VMware vSphere ESXi 6.x
  • Linux or MacOS with Go packages installed
  • Docker 17.06/17.07 installed on the system

The below commands have been executed on one of my local Ubuntu 16.04 LTS systems, which can reach the ESXi system flawlessly.

Cloning the LinuxKit Repository:

git clone https://github.com/linuxkit/linuxkit

Building Moby & LinuxKit:

cd linuxkit
make

Configuring the right PATH for Moby & LinuxKit

cp bin/moby /usr/local/bin
cp bin/linuxkit /usr/local/bin

A Peep into vmware.yml File

The first 3 lines specify a modern, securely configured kernel. The init section spins up containerd to run services. The onboot section runs dhcpcd for networking. The services section includes a getty service container for the shell and runs an nginx service container. The trust section indicates that all images are signed and verified.
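As a hedged reconstruction of that file (image tags are placeholders; the authoritative copy lives under examples/ in the repository):

kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty
    image: linuxkit/getty:<tag>
    env:
      - INSECURE=true
  - name: nginx
    image: nginx:alpine
    capabilities:
      - CAP_NET_BIND_SERVICE
trust:
  org:
    - linuxkit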

Building VMware ISO Image using Moby

moby build -output iso-bios -name vmware vmware.yml

 

Pushing VMware ISO Image to remote ESXi datastore

linuxkit push vcenter -datastore=datastore1 -hostname=myesxi.dell.com -url https://root:xxx@100.98.x.x/sdk -folder=linuxkit vmware.iso

Usage of linuxkit push vcenter:

 

Running a secure VM directly from the ESXi datastore using LinuxKit

dell@redfish-ubuntu:~/linuxkit/examples$ sudo linuxkit run vcenter -cpus 8 -datastore datastore1 -mem 2048 -network 'VM Network' -hostname myesxi.dell.com -powerOn -url https://root:xxx@100.98.x.x/sdk vmware.iso
Creating new LinuxKit Virtual Machine
Adding ISO to the Virtual Machine
Adding VM Networking
Powering on LinuxKit VM

Now let us verify that the VM is up and running using either the VMware vSphere Client or the SDK URL.

You will find that the VM has already booted up with the latest Docker 17.07 platform up and running.

Building a Docker Host VM using Moby & LinuxKit

In case you want to build a Docker Host VM, you can refer to the below docker-vmware.yml file:

Just re-run the below command to get the new VM image:

moby build -output iso-bios -name vmware docker-vmware.yml

Follow the above steps to push it to the remote datastore and run it using LinuxKit. You now have a secure Docker 17.07 host, ready to build Docker images and application stacks.

How about building a Photon OS-based Docker image using Moby & LinuxKit? Once you build it and push it to a VM, it is all set to build virtual infrastructure. Interesting, isn’t it?

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

 


Building a Secure VM based on LinuxKit on Microsoft Azure Platform

The LinuxKit GitHub repository recently crossed 3000 stars, has been forked around 300+ times and has added 60+ contributors. A project just 5 months old, it has already gained a lot of momentum across the Docker community. Built with the purpose of enabling the community to create secure, immutable, and minimal Linux distributions, LinuxKit is mature enough to support a number of cloud platforms like Azure, AWS, Google Cloud Platform, VMware, Packet.net and many more.

 

In my recent blogs, I showcased how to build LinuxKit OS for Google Cloud Platform, Amazon Web Services and VirtualBox. ICYMI, I recently published a few videos on LinuxKit too. Check them out.

 

Under this blog post, I will walk through how to build a secure and portable VM based on a LinuxKit image on the Microsoft Azure platform.

Pre-requisite:

I will be leveraging macOS Sierra running the Docker 17.06.1-ce-rc1-mac20 version. I tested it on Ubuntu 16.04 LTS too, running on one of my Azure VMs, and it went fine. Prior knowledge of Microsoft Azure / Azure CLI 2.0 will be required to configure the Service Principal so the VHD image gets uploaded to Azure smoothly.

 

Step-1: Pulling the latest LinuxKit repository

Pull the LinuxKit repository using the below command:

git clone https://github.com/linuxkit/linuxkit

 

Step-2: Build Moby & LinuxKit tool

cd linuxkit
make

 

Step-3: Copying the tools into the right PATH

cp -rf bin/moby /usr/local/bin/
cp -rf bin/linuxkit /usr/local/bin/

 

Step-4: Preparing Azure CLI tool

curl -L https://aka.ms/InstallAzureCli | bash

 

Step-5: Run the below command to restart your shell

exec -l $SHELL

 

Step-6: Building LinuxKit OS for Azure Platform

cd linuxkit/examples/
moby build -output vhd azure.yml

This will build a VHD image which now has to be pushed to the Azure platform.

In order to push the VHD image to Azure, you need to authenticate LinuxKit with your Azure subscription, hence you will need to set up the following environment variables:

   export AZURE_SUBSCRIPTION_ID=43b263f8-XXXX--XXXX--XXXX--XXXXXXXX
   export AZURE_TENANT_ID=633df679-XXXX--XXXX--XXXX--XXXXXXXX
   export AZURE_CLIENT_ID=c7e4631a-XXXX--XXXX--XXXX--XXXXXXXX
   export AZURE_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXX=

Alternatively, the easy way to get all the above details is through the below command:

az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code XXXXXX to authenticate.

The above command lists the Subscription ID and Tenant ID, which can then be exported as shown above.

Next, follow this link to create an Azure Active Directory application and service principal that can access resources. If you want to stick to CLI rather than UI, you can follow the below steps:

Step-7: Pushing the VHD image to Azure Platform

linuxkit run azure --resourceGroupName mylinuxkit --accountName mylinuxkitstore -location eastasia azure.vhd
Creating resource group in eastasia
Creating storage account in eastasia, resource group mylinuxkit

The command will end up with the below message:

 

 Completed: 100% [     68.00 MB] RemainingTime: 00h:00m:00s Throughput: 0 Mb/sec    

Creating virtual network in resource group mylinuxkitresource, in eastasia

Creating subnet linuxkitsubnet468 in resource group mylinuxkitresource,

within virtual network linuxkitvirtualnetwork702

Creating public IP Address in resource group mylinuxkitresource, with name publicip159

Started deployment of virtual machine linuxkitvm941 in resource group mylinuxkitresource

Creating virtual machine in resource group mylinuxkitresource, with name linuxkitvm941, in location eastasia

NOTE: Since you created a minimal VM without the Azure Linux Agent,

the portal will notify you that the deployment failed. After around 50 seconds try connecting to the VM

ssh -i path-to-key root@publicip159.eastasia.cloudapp.azure.com

 

By this time, you should be able to see the LinuxKit VM coming up under the Azure platform as shown below:

Wait another 2-3 minutes before SSHing into this Azure instance, and it’s all set, up and running smoothly.

Known Issues:

  • Since the image currently does not contain the Azure Linux Agent, the Azure Portal will report the creation as failed.
  • The VHD is uploaded using a Docker container based on Azure VHD Utils, mainly because the tool manages fast and efficient uploads, leveraging parallelism
  • There is work in progress to specify what ports to open on the VM (more specifically on a network security group)
  • The metadata package does not yet support the Azure metadata.

 

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


Building Docker For Mac 17.06 Community Edition using Moby & LinuxKit

Docker For Mac 17.06 CE edition is the first Docker version built entirely on the Moby Project. In case you’re new, Moby is an open framework created by Docker, Inc. to assemble specialised container systems. It comprises 3 basic elements: a library of containerised backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit); a framework for assembling the components into a standalone container platform, with tooling to build, test and deploy artifacts for these assemblies; and a reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.

Docker for Mac is a Docker Community Edition (CE) app and aims for a native OSX experience that works with existing developer workflows. The Docker for Mac install package includes everything you need to run Docker on a Mac. A few of its attractive features:

  • Easy drag and drop installation, and auto-updates to get latest Docker.
  • Secure, sandboxed virtualisation architecture without elevated privileges. 
  • Native networking support, with VPN and network sharing compatibility. 
  • File sharing between container and host: uid mapping, inotify events, etc

The core building blocks for Docker for Mac includes –

  • Virtualisation
  • Networking
  • Filesystem

Some notable components include:

  • HyperKit, a toolkit for embedding hypervisor capabilities in your application
  • DataKit, a tool to orchestrate applications using a 9P dataflow
  • VPNKit, a set of tools and services for helping HyperKit VMs interoperate with host VPN configurations


If you want to learn more details about these components, this should be the perfect guide.

LinuxKit today supports multiple cloud platforms like AWS, Google Cloud Platform, Microsoft Azure, VMware etc. In terms of local hypervisors, it supports HyperKit, VMware, KVM and Microsoft Hyper-V too.

 


 

If you have closely watched the LinuxKit repository, a new directory called blueprints has been introduced, which will contain the blueprints for base systems on the platforms supported by LinuxKit. These are targeted to include all the platforms that Docker has editions on, and all platforms that the Docker community supports. All the initial testing work is done under examples/ and then pushed to the blueprints/ directory.

Currently, the blueprint/ directory holds the essential files for Docker For Mac 17.06 CE –

  • base.yml => which contains the open source components for Docker for Mac.
  • docker-17.06-ce.yml => the YAML file needed to build up the VM image

The blueprint has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit. It requires HyperKit, VPNKit and a Docker client on the host to run.

File: docker-17.06-ce.yml

The VPNKit-specific enablement comes from the service entries in these YAML files.

File: base.yml

Use the Moby tool to build it with Docker 17.06:

moby build -name docker4mac base.yml docker-17.06-ce.yml


 

This will produce a couple of files under the docker4mac-state directory as shown below:

 


 

Next, we can run the linuxkit command to boot the VM with a 1024 MB disk:

linuxkit run hyperkit -networking=vpnkit -vsock-ports=2376 -disk size=1024M docker4mac

By now, you should be able to see the docker4mac VM booting up smoothly:


You can open up a new terminal to see the overall directory/files tree structure:


 

Let us try listing the service containers using the ctr containers ls command. It should show the Docker For Mac 17.06 service container as shown below:


Run the ctr tasks ls command to get the list of service containers:


Now it’s easy to enter the docker-dfm service container with the below command:

ctr exec -t --exec-id 861 docker-dfm sh


You can verify further information with docker info command:


 

How to connect to docker-dfm from another terminal?

From another terminal, it is pretty easy to access Docker via the socket guest.00000948 in the state directory (docker4mac-state/ by default) with the below command:

docker -H unix://docker4mac-state/guest.00000948 images

 

Let us create an Nginx Docker container and see if it is accessible from the Safari browser.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about what’s happening in the LinuxKit project by visiting this link.


Walkthrough: How to build your own customised LinuxKit kernel?

 

“..It’s Time to Talk about Bring Your Own Components (BYOC) now..”

The Moby Project is gaining momentum day by day. If you are a system builder looking to build your own container-based systems, Moby is for you. You have the freedom to choose from the library of components derived from Docker, or you can elect to “bring your own components” packaged as containers, with mix-and-match options among all of the components to create your own customised container system. Moby as a tool promises to convert Docker from a monolithic engine into a toolkit of reusable components, and these individual components become the building blocks for custom solutions.


I was a speaker at the Docker Bangalore Meetup last weekend and talked about “Introduction to LinuxKit”. I walked through the need for LinuxKit in immutable infrastructure, the platforms it supports, and finally demoed PWM (Play with Moby). I spent a considerable amount of time talking about “How to Build Your Own LinuxKit OS” and showed how to build it using a single YAML file. One of the interesting questions raised was – “How shall I build a Ubuntu or Debian based OS, as our infrastructure runs mostly these distributions?” I promised to write a blog post which is easy to follow and helps them build their own customised kernel with LinuxKit.

A Brief about the LinuxKit Kernel

LinuxKit kernel images are distributed as Docker Hub images.


 

Each image contains the kernel, kernel modules, the kernel config file and, optionally, kernel headers to compile kernel modules against. It is important to note that LinuxKit kernels are based on the latest stable releases. Each kernel image is tagged with the full kernel version (e.g., linuxkit/kernel:4.10.x) and with the full kernel version plus the hash of the files it was created from.

LinuxKit offers the ability to build bootable Linux images with kernels from various distributions like CentOS, Debian and Ubuntu.

Moby uses a YAML file as input to build the LinuxKit OS image. By default, the YAML file uses the official linuxkit/kernel:<version> image in its first 3 lines.
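As an illustration (the version tag here is arbitrary), those first 3 lines follow this pattern:

kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0 console=ttyS0"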


 

Let us see how to build our own distribution Linux kernel (Ubuntu) through the below steps:

Under the LinuxKit project repository, there is a kernel build directory which holds the essential Dockerfile, configuration files and patches needed to build your own kernel for LinuxKit OS.


If you want to include your own new patch, you can fork the repository and add the patch under the right patch directory. For example, if you need to add a patch to the 4.11.x release, you can add it under the patch-4.11.x directory.


The kernel_config-4.11.x file drives the build; the newly built kernel will be pushed to Docker Hub under ORG/kernel:<version>.


Next, all you need to do is run the below command to build your own customised kernel with required patches and modules loaded:

make build_4.11.x ORG=ajeetraina

PLEASE NOTE – ORG should be replaced by your Docker Hub organisation (basically your Docker Hub ID). If you are planning to raise a PR for your patch to be included, you can use the LinuxKit ORG and then get it approved.

 


This will take some time to create a local kernel image called ajeetraina/kernel:4.11.8-8bcfec8e1f86bab7a642082aa383696c182732f5, assuming you haven’t committed your local changes.


 

You can then use this kernel in your YAML file as:

kernel:
   image: "ajeetraina/kernel:4.11.9-2fd6982c78b66dbdef84ca55935f8f2ab3d2d3e6"


 

Run the Moby tool to build the OS image with this new kernel:

moby build linuxkit.yml


 

How to Build an Ubuntu-based Kernel Image

If you are looking to build distribution-specific kernel images, then this is the right section for you. The Docker team has done a great job putting it all together under the linuxkit/scripts/kernel directory.


Let us look inside Dockerfile.deb:


The script ubuntu.sh picks up Dockerfile.deb to build and push the kernel-ubuntu image.


All you need to do is execute the script:

sh ubuntu.sh

It will take some time to complete.


Now you can leverage this kernel in your YAML file.


Let us pick up linuxkit/linuxkit.yml and add the above kernel entry to build a LinuxKit OS running Docker containers as a service.


 

Use the moby build command to build the LinuxKit OS image based on the Ubuntu kernel:

moby build linuxkit.yml


 

 

Next, execute the linuxkit run command to boot up LinuxKit.

linuxkit run linuxkit


 

Let us verify that the services are up and running.


It’s easy to enter one of the service containers.


Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more what’s happening in LinuxKit project by visiting this link.


Talking about Moby & LinuxKit Awesomeness at Docker Bangalore Meetup

Today I spoke at the Docker Bangalore Meetup, which took place at IBM India Systems Development Lab (ISL) – an R&D division located at Manyata Embassy Business Park. The Docker Bangalore Meetup Group is one of the largest in India (it currently holds around 5000+ registered Docker users). Founded on November 22, 2013, it is now 4 years old and has been led by Docker Captain & Founder of CloudYuga, Neependra Khare, who has been actively organising the event since its birth. This event was live streamed using Google Hangouts and most of the sessions are available on YouTube here.


 

This time I chose the topic “An Introduction to LinuxKit”, as I have spent a considerable amount of time writing blogs around LinuxKit, InfraKit, Moby and their supported platforms. Since DockerCon 2017, I have written around 6-7 blogs primarily on LinuxKit across various platforms, and this was a great chance to meet Docker enthusiasts and clarify Docker vs Moby vs LinuxKit.


 

Neependra started the Meetup talking about Moby. He clarified most of the facts around “Docker != Moby” and talked about the Moby tool. He touched upon the LinuxKit YAML file and demonstrated the docker-17.06-ce container service built with the Moby tool & the LinuxKit toolkit. It was overall a very interactive session, and there were a couple of great questions from the audience related to Moby assemblies. Docker Captain Sreenivas Makam from Cisco delivered a great deep-dive talk on “Docker networking – Common issues and troubleshooting techniques”. He brought up interesting troubleshooting techniques and common issues around Docker networking, and you can find his impressive slides here.



I was the 2nd speaker in the row. I started my talk with “Why LinuxKit?” and spent a considerable amount of time talking about the problem statement which led to the birth of LinuxKit. I walked the audience through LinuxKit & introduced the new Moby playground called “Play with Moby”. In case you’re new, Play with Moby (PWM) is a site made by Docker Captains Marcos Nils and Jonathan Leibiusky as an extension of PWD. PWM is a Moby playground which allows you to try different components of the platform in seconds. It gives you the experience of having a free Alpine Linux virtual machine in the cloud, where you can build and run Moby projects and even create clusters to experiment. Under the hood, DinD, or Docker-in-Docker, is used to give the effect of multiple VMs/PCs.


At the end of the session, I demonstrated “Building Docker containers using LinuxKit”. I enjoyed talking about the LinuxKit packaging system, during which there were numerous interesting queries from the audience. You can refer to the below slides to get a glimpse of the talk.

 

 

 

A few of the interesting questions around Moby & LinuxKit:

  1. How does the LinuxKit implementation work for bare metal systems?
  2. Can I build CentOS based LinuxKit Operating System as we use CentOS primarily on IBM Power System S822LC?
  3. What’s happening around LinuxKit Security areas? How secure is LinuxKit today?
  4. Can I use my own customised Kernel and get it work under LinuxKit YAML?
  5. Is LinuxKit completely open source?
  6. What is the story of volume mounts in LinuxKit? What about data persistence?

Special thanks go to all the speakers for the valuable sessions. I have been closely watching the feedback, and I can see that the audience liked the event and enjoyed the informative sessions. Thanks to the IBM ISL team for the sponsorship and a great lunch at the end of the day.

If you are keen to learn what’s happening in the LinuxKit, Moby & Docker space, don’t miss out on the below links –

 
