Let’s talk about CRI vs. CRI-containerd…
The Container Runtime Interface (a.k.a. CRI) is a standard way to integrate container runtimes with Kubernetes. It is a plugin interface that enables the kubelet to use a wide variety of container runtimes without the need to recompile. Prior to the existence of CRI, container runtimes (e.g., Docker, rkt) were integrated with the kubelet by implementing an internal, high-level interface inside the kubelet itself. With CRI, Kubernetes can be container runtime-agnostic, which gives providers of container runtimes the flexibility to skip features that Kubernetes already provides.
CRI-O, formerly known as OCID, is strictly focused on OCI-compliant runtimes and container images, and it allows you to run containers directly from Kubernetes – without any unnecessary code or tooling. Last month, the CRI-O team announced a new minor release in the v1.0.x cycle, supporting Kubernetes 1.7.
For those comparing Docker Engine vs. CRI-O, here is an important note –
CRI-O is not really a competitor to the Docker project – in fact, it shares the same OCI runC container runtime used by Docker Engine and the same image format, and it allows the use of docker build and related tooling. This new runtime is expected to give developers more flexibility by adding other image builders and tooling in the future. Please remember that CRI is not an interface for a full-fledged, all-inclusive container runtime.
What does the CRI-O workflow look like?
When Kubernetes needs to run a container, it talks to CRI-O, and the CRI-O daemon works with the container runtime to start the container. When Kubernetes needs to stop the container, CRI-O handles that. Everything just works behind the scenes to manage Linux containers.
The kubelet is a node agent with a gRPC client that talks to a gRPC server, aptly called a shim. The shim then talks to the container runtime. Today the default implementation is the Docker shim, which talks to the Docker daemon using the classic Docker APIs. This works really well.
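The shim pattern above can be illustrated with a toy shell sketch. This is purely illustrative – the real kubelet and shims speak gRPC, and the function and variable names below are made up for the example:

```shell
#!/bin/sh
# Toy sketch: the kubelet calls one uniform "CRI" entry point; which backend
# answers is a matter of configuration, not of recompiling the kubelet.
run_pod_sandbox() {
  case "$KUBE_RUNTIME" in
    docker)         echo "dockershim: starting sandbox $1 via the Docker daemon" ;;
    cri-containerd) echo "cri-containerd: starting sandbox $1 via containerd" ;;
    *)              echo "no shim registered for runtime: $KUBE_RUNTIME" ;;
  esac
}

KUBE_RUNTIME=cri-containerd
run_pod_sandbox nginx-pod   # -> cri-containerd: starting sandbox nginx-pod via containerd
```

Swapping the backend changes only the configuration, never the caller – which is exactly the flexibility CRI gives the kubelet.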
CRI-containerd is a containerd-based implementation of CRI. The project started in April 2017. To have Kubernetes consume containerd as its container runtime, the containerd team implemented the CRI interface, which covers image distribution and the lifecycle of pods and containers running on a cluster. The scope of containerd 1.0 aligns with the requirements of CRI. In case you want to deep-dive into it, don’t miss this link.
Below is how the CRI-containerd architecture looks:
In my last blog post, I talked about how to set up a multi-node Kubernetes cluster using LinuxKit. It used Docker Engine to build minimal and immutable Kubernetes OS images with LinuxKit. In this blog post, we will see how to build them using CRI-containerd.
- OS – Ubuntu 17.04
- System – ESXi VM
- Memory – 8 GB
- SSH key generated using ssh-keygen -t rsa and put under the $HOME/.ssh/ folder
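The key-generation step can be run non-interactively. A sketch assuming OpenSSH’s ssh-keygen, writing into a temporary directory purely for illustration (the actual build expects the key under $HOME/.ssh/):

```shell
# Generate an RSA key pair without a passphrase (-N "") and without prompts (-q).
keydir=$(mktemp -d)
ssh-keygen -t rsa -N "" -q -f "$keydir/id_rsa"
ls "$keydir"    # id_rsa and id_rsa.pub
```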
Caution – Please note that this is still an experimental version. It is under active development, so don’t expect it to work as a full-fledged K8s cluster.
Copy the below script to your Linux system and execute it –
Did you notice the parameter KUBE_RUNTIME=cri-containerd?
This parameter specifies that we want to build a minimal and immutable K8s ISO image using CRI-containerd. If you don’t specify it, Docker Engine will be used to build the ISO image.
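A sketch of how such a runtime switch is typically consumed inside a build script – hypothetical variable handling for illustration, not the actual LinuxKit Makefile:

```shell
#!/bin/sh
# Fall back to the Docker Engine-based build when KUBE_RUNTIME is unset.
KUBE_RUNTIME="${KUBE_RUNTIME:-docker}"
echo "building Kubernetes OS image with runtime: $KUBE_RUNTIME"
```

Invoking the build as KUBE_RUNTIME=cri-containerd ./build.sh (name hypothetical) would then select the CRI-containerd path instead of the Docker default.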
The above script is going to take some time to finish.
It’s always a good idea to see what CRI-containerd specific files are present.
Let us look into the cri-containerd directory –
Looking at the content of cri-containerd.yml, it defines a service called cri-containerd.
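For reference, a LinuxKit service entry of this kind generally looks roughly like the following. This is an illustrative sketch – the image tag and bind mounts are placeholders, not the exact contents of cri-containerd.yml:

```yaml
services:
  - name: cri-containerd
    image: linuxkit/cri-containerd:<tag>        # placeholder tag
    binds:                                      # illustrative bind mounts
      - /etc/cri-containerd:/etc/cri-containerd
      - /var/lib/containerd:/var/lib/containerd
```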
The below list of files gets created, with output files like kube-master-efi.iso placed right under the kubernetes directory –
By the end of this script you should be able to see the Kubernetes LinuxKit OS booting up –
CRI-containerd lets the user containers in the same sandbox share the network namespace, which is why you will see the message "This system is namespaced".
You can watch the overall screen logs to see how it boots up –
Next, let us try to see what container services are running:
You will notice that the cri-containerd service is up and running.
Let us enter one of the task containers and initialize the master node with the kubeadm-init script, which comes by default –
(ns: getty) linuxkit-ee342f3aebd6:~# ctr tasks exec -t --exec-id 654 kubelet sh
/ # ls
Execute the below script –
/ # kubeadm-init.sh
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [linuxkit-ee342f3aebd6 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
Now you should be able to join the other worker nodes (I have discussed the further steps under this link).
Did you find this blog helpful? Feel free to share your experience. Get in touch @ajeetsraina.
If you are looking for contribution or discussion, join me on the Docker Community Slack channel.