Multi-Node K3s Cluster on NVIDIA Jetson Nano in 5 Minutes

Estimated Reading Time: 7 minutes

If you are looking for a lightweight Kubernetes that is easy to install and perfect for Edge, IoT, CI and ARM, look no further: K3s is the right solution for you. K3s is a certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

The K3s GitHub repository has already crossed 9,000 stars. With over 600 forks and 50+ contributors, the project is gaining a lot of momentum among developers. A few of the notable features offered by K3s are:

  • Single-command installation
  • A binary of less than 40 MB
  • Both ARM64 and ARMv7 are supported, with binaries and multi-arch images available for both
  • Can be deployed seamlessly on Raspberry Pi 3 & 4
  • SQLite3 as the default storage mechanism. etcd3 is still available, but not the default.

Earlier this year, I wrote a blog post on how to build a K3s cluster on Raspberry Pi 3.

In this post, I will show how to build a 2-node K3s cluster on NVIDIA Jetson Nano without any compilation pain.

Prerequisites:

  • Unboxing the Jetson Nano pack
  • Preparing your microSD card

To prepare your microSD card, you’ll need a computer with an Internet connection and the ability to read and write SD cards, either via a built-in SD card slot or an adapter.

  1. Download the Jetson Nano Developer Kit SD Card Image, and note where it was saved on the computer.
  2. Write the image to your microSD card (at least 16 GB) by following the instructions for the type of computer you are using: Windows, Mac, or Linux. On a Windows laptop, you can use the SD Formatter software to format your microSD card and Win32DiskImager to flash the Jetson Nano image. On a Mac, you can use Etcher.

The Jetson Nano SD card image is around 12 GB (uncompressed).

Next, remove the microSD card from the card reader, plug it into the Jetson board, and let it boot.

Verifying OS running on Jetson Nano

jetson@jetson-desktop:~$ sudo cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
jetson@jetson-desktop:~$

Setting up hostname

sudo vi /etc/hostname
master1.dell.com

Reboot the system.
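
If you prefer not to edit the file by hand, the hostname can also be set with hostnamectl; a minimal sketch follows (assumes the stock Ubuntu 18.04 image with systemd; the IP address and /etc/hosts entry are illustrative, but having the master's name resolvable helps later when the worker joins):

# Alternative: set the hostname via systemd
sudo hostnamectl set-hostname master1.dell.com

# Optional: if master1.dell.com is not resolvable via DNS, add it to /etc/hosts
# on every node (replace 192.168.1.3 with your master's actual IP)
echo "192.168.1.3 master1.dell.com" | sudo tee -a /etc/hosts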

Installing K3s

jetson@master1:~$ sudo curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-arm64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s-arm64
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
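
Before listing the nodes, you can confirm that the k3s service came up cleanly using the systemd unit created above:

sudo systemctl status k3s
sudo journalctl -u k3s -f        # follow the service logs if the node does not become Ready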

Listing the Nodes

jetson@master1:~$ sudo k3s kubectl get node -o wide
NAME             STATUS   ROLES    AGE     VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
jetson-desktop   Ready    master   3m55s   v1.15.4-k3s.1   192.168.1.3   <none>        Ubuntu 18.04.2 LTS   4.9.140-tegra    containerd://1.2.8-k3s.1
jetson@master1:~$

Verifying Kubernetes Cluster Information

 sudo k3s kubectl cluster-info

Kubernetes master is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. 

Listing Resources across All Namespaces

$ sudo k3s kubectl get all --all-namespaces
NAMESPACE     NAME                             READY   STATUS              RESTARTS   AGE
kube-system   pod/coredns-66f496764-bf7qn      1/1     Running             0          79s
kube-system   pod/traefik-d869575c8-wsq2b      0/1     ContainerCreating   0          12s
kube-system   pod/svclb-traefik-gtqpd          0/3     ContainerCreating   0          12s
kube-system   pod/helm-install-traefik-4tjpc   0/1     Completed           0          79s


NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
kube-system   service/kube-dns     ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP                      99s
default       service/kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP                                     97s
kube-system   service/traefik      LoadBalancer   10.43.140.218   <pending>     80:31655/TCP,443:31667/TCP,8080:31486/TCP   13s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   1         1         0       1            0           <none>          13s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   1/1     1            1           99s
kube-system   deployment.apps/traefik   0/1     1            0           13s

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-66f496764   1         1         1       80s
kube-system   replicaset.apps/traefik-d869575c8   1         1         0       13s



NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           69s        98s

jetson@jetson-desktop:~$
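
Tip: K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml, which is why kubectl is invoked through sudo k3s above. A minimal sketch of how you could copy it for your user and run a plain kubectl instead (paths are the K3s defaults; adjust ownership to taste):

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes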

Deploying NGINX on k3s

$ sudo k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created
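
As the warning above indicates, kubectl run stops creating Deployments in later Kubernetes releases. A rough equivalent on a newer cluster (these flags are from current kubectl, not from the original walkthrough) would be:

sudo k3s kubectl create deployment mynginx --image=nginx
sudo k3s kubectl scale deployment mynginx --replicas=3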

Listing the NGINX Pods

sudo k3s kubectl get po
NAME                       READY   STATUS              RESTARTS   AGE
mynginx-568f57494d-8jpwq   0/1     ContainerCreating   0          69s
mynginx-568f57494d-czl9x   0/1     ContainerCreating   0          69s
mynginx-568f57494d-pnphb   0/1     ContainerCreating   0          69s

Viewing Nginx Pod details

sudo k3s kubectl describe po mynginx-568f57494d-8jpwq
Name:           mynginx-568f57494d-8jpwq
Namespace:      default
Priority:       0
Node:           jetson-desktop/192.168.1.3
Start Time:     Mon, 07 Oct 2019 20:57:14 +0530
Labels:         pod-template-hash=568f57494d
                run=mynginx
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/mynginx-568f57494d
Containers:
  mynginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zjsrt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-zjsrt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zjsrt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                     Message
  ----    ------     ----  ----                     -------
  Normal  Scheduled  98s   default-scheduler        Successfully assigned default/mynginx-568f57494d-8jpwq to jetson-desktop
  Normal  Pulling    94s   kubelet, jetson-desktop  Pulling image "nginx"

Ensuring that Pods are in Running State

jetson@jetson-desktop:~$ sudo k3s kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-568f57494d-pnphb   1/1     Running   0          113s
mynginx-568f57494d-czl9x   1/1     Running   0          113s
mynginx-568f57494d-8jpwq   1/1     Running   0          113s
jetson@jetson-desktop:~$

Exposing NGINX Port

$ sudo k3s kubectl expose deployment mynginx --port 80
service/mynginx exposed
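
The command above creates a ClusterIP service, reachable only from inside the cluster. If you want to reach NGINX from your LAN, one option is to expose the same deployment again as a NodePort service (a sketch; the service name is illustrative):

sudo k3s kubectl expose deployment mynginx --port 80 --type NodePort --name mynginx-nodeport
sudo k3s kubectl get svc mynginx-nodeport   # note the high port mapped to 80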

Verifying the Endpoints

:~$ sudo k3s kubectl get endpoints mynginx
NAME      ENDPOINTS                                AGE
mynginx   10.42.0.6:80,10.42.0.7:80,10.42.0.8:80   27s

Verifying if Nginx is accessible

$ sudo curl 10.42.0.6
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
jetson@jetson-desktop:~$

Joining Nodes

I assume that the worker node’s hostname is “worker1.dell.com”. First, retrieve the node token on the master node:

 sudo cat /var/lib/rancher/k3s/server/node-token

[sudo] password for jetson:
K1062243e4f7c5bef777a082ae9919d3d2bf13d446bff423275XXXXd08::node:a761a49be1XXXXXXc7c9cc7ccd9baaf

Installing K3s on Worker Node and joining to the Master Node

jetson@worker1:~$ sudo curl -sfL https://get.k3s.io | K3S_URL=https://master1.dell.com:6443 K3S_TOKEN=K10a2fa03edec41872f9b4068ddcfb9afaf329e24bf25ca33a0dc879b3e02ea9805::node:f9613e3b73d8184277493911ddca4e6a  sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-arm64.txt
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent
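
As on the master, you can check that the agent service started correctly before heading back to the master node:

sudo systemctl status k3s-agent
sudo journalctl -u k3s-agent -f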

jetson@master1:~$ sudo k3s kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
master1.dell.com   Ready    master   7m45s   v1.15.4-k3s.1
worker1.dell.com   Ready    worker   24s     v1.15.4-k3s.1
jetson@master1:~$

Uninstalling k3s

/usr/local/bin/k3s-uninstall.sh 
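
On a worker node, the installer creates an agent-specific script instead (see the install log above), so there you would run:

/usr/local/bin/k3s-agent-uninstall.sh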

Setting up Kubernetes Dashboard

jetson@master1:~$ cat k3s-dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
jetson@master1:~$ sudo k3s kubectl apply -f k3s-dashboard.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
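
Note that the manifest above only creates the admin-user ServiceAccount and binds it to the cluster-admin role; the Kubernetes Dashboard itself still has to be deployed separately from its upstream manifest. To fetch the bearer token for admin-user, the usual pattern is (a sketch using standard kubectl commands):

sudo k3s kubectl -n kube-system describe secret \
   $(sudo k3s kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')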



Verifying if Kubernetes Dashboard Pods are up and Running

jetson@master1:~$ sudo k3s kubectl get po,svc,deploy -n kube-system
NAME                             READY   STATUS      RESTARTS   AGE
pod/coredns-66f496764-dgftj      1/1     Running     0          12m
pod/helm-install-traefik-skl9c   0/1     Completed   0          12m
pod/svclb-traefik-bxxpf          3/3     Running     0          11m
pod/traefik-d869575c8-wfnfx      1/1     Running     0          11m
pod/svclb-traefik-nkfms          3/3     Running     0          5m39s

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)                                     AGE
service/kube-dns   ClusterIP      10.43.0.10      <none>                    53/UDP,53/TCP,9153/TCP                      13m
service/traefik    LoadBalancer   10.43.213.124   192.168.1.3,192.168.1.6   80:32651/TCP,443:32650/TCP,8080:30917/TCP   11m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/coredns   1/1     1            1           13m
deployment.extensions/traefik   1/1     1            1           11m

Starting kubectl proxy

sudo k3s kubectl proxy
Starting to serve on 127.0.0.1:8001
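
With the proxy running, the Kubernetes API is available on localhost; a quick sanity check from another terminal:

curl http://127.0.0.1:8001/version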

Verifying if all Pods are successfully running

 sudo kubectl get po,deploy,svc -n kube-system
NAME                             READY   STATUS      RESTARTS   AGE
pod/coredns-66f496764-dgftj      1/1     Running     0          17m
pod/helm-install-traefik-skl9c   0/1     Completed   0          17m
pod/svclb-traefik-bxxpf          3/3     Running     0          16m
pod/traefik-d869575c8-wfnfx      1/1     Running     0          16m
pod/svclb-traefik-nkfms          3/3     Running     0          10m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/coredns   1/1     1            1           17m
deployment.extensions/traefik   1/1     1            1           16m

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)                                     AGE
service/kube-dns   ClusterIP      10.43.0.10      <none>                    53/UDP,53/TCP,9153/TCP                      17m
service/traefik    LoadBalancer   10.43.213.124   192.168.1.3,192.168.1.6   80:32651/TCP,443:32650/TCP,8080:30917/TCP 

In my next blog post, I will show how GPU-enabled Kubernetes Pods can be deployed. Stay tuned!

Docker workshop on Raspberry Pi – University of Petroleum and Energy Studies, Dehradun

Estimated Reading Time: 3 minutes

On the 3rd of October, I travelled to UPES Dehradun (around 1,500 miles) for a one-day session on “The Pico Project” and to conduct a Docker workshop on Raspberry Pi. It was an amazing experience, and my first chance to interact with the university’s students.

With over 82 students in attendance, the session began at around 11:00 AM with my talk on IoT devices and how they have transformed everything from daily life to business capabilities. I was amazed to find that the majority of the audience had brought their own Raspberry Pi 3 Model B boards for the workshop.

After a 30-minute talk, I ran a hands-on session where around 6-7 university students brought their Raspberry Pi boards, installed Docker Engine 19.03.2 on them, and built a Docker Swarm cluster out of those 7 Raspberry Pis, as sketched below. We ran Portainer and Swarm Visualizer on that 7-node Swarm cluster.
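
For reference, the Swarm part of the exercise boils down to a couple of commands (a minimal sketch; the IP address and token are illustrative):

# On the manager Raspberry Pi:
docker swarm init --advertise-addr 192.168.1.100
# Run the join command that the manager prints on each of the remaining Pis:
docker swarm join --token <token-printed-by-manager> 192.168.1.100:2377
# Back on the manager, verify all nodes have joined:
docker node ls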

Link for Docker on Raspberry Pi workshop:
http://dockerlabs.collabnix.com/beginners/install/raspberrypi3/

After lunch at around 1:45 PM, we started the Docker workshop at 2:00 PM sharp. First, I asked all the students to create a Docker Hub account so they could build their first Docker image and push it to Docker Hub (a sketch of this exercise follows the section list below). We covered the beginners track at http://dockerlabs.collabnix.com/workshop/docker/, which comprises the sections below:

  • Pre-requisite
  • Getting Started with Docker Image
  • Accessing & Managing Docker Container
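
A rough sketch of the “build your first image and push it to Docker Hub” exercise the students worked through (the image name and Dockerfile contents are illustrative, not the exact lab material):

docker login                                   # log in with the Docker Hub account created earlier
cat > Dockerfile <<'EOF'
FROM alpine:3.10
CMD ["echo", "Hello from UPES!"]
EOF
docker build -t <dockerhub-username>/hello-upes .
docker run --rm <dockerhub-username>/hello-upes
docker push <dockerhub-username>/hello-upes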

Almost all the students were quick to follow the tutorials and finish the workshop. Special thanks to Marcos Liljedhal, founder of the Play with Docker platform, for ensuring that the platform was up and running, and to Jenny Burcio and Karen Bazja of Docker Inc. for all the support in making this happen. At the end of the workshop, I was handed a memento for sharing knowledge around Docker & IoT.

It was a day well spent interacting with UPES students and faculty. I also got a chance to roam around the city and capture a few of the amazing scenic spots around the UPES campus.

Outcomes & Initiatives…

This trip produced concrete outcomes and initiatives around collaboration to build the Docker community. First of all, the Docker Dehradun Meetup community is born. If you are in Dehradun during the second week of November, don’t miss the Docker Dehradun Meetup #1 event, which is slated to happen on the UPES campus.

Docker Dehradun Meetup #1 – Docker, Kubernetes & Container Security
Wednesday, Nov 13, 2019, 3:00 PM (free event – RSVP at https://events.docker.com/events/details/docker-dehradun-presents-docker-dehradun-meetup-1-docker-kubernetes-container-security/)

We have started a Docker UPES WhatsApp chat group for the students in case they have any queries around Docker & Kubernetes; it grew to 150+ members within a few hours. We have also started a dedicated Collabnix Slack channel for UPES students to collaborate on and conduct future Meetup events.

Docker 19.03 comes to NVIDIA Jetson Nano

Estimated Reading Time: 9 minutes

Did you know? In the Docker 19.03 release, a new --gpus flag was added to docker run, which lets you specify GPU resources (NVIDIA GPUs) to be passed through to the container. The latest nvidia-docker has already adopted this feature (see GitHub) and deprecates --runtime=nvidia.
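
For illustration, this is what the new flag looks like on a regular x86_64 host with the NVIDIA container toolkit installed (the image tag is illustrative; the Jetson-specific workflow is covered later in this post):

docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi            # expose all GPUs
docker run --rm --gpus '"device=0"' nvidia/cuda:10.0-base nvidia-smi   # expose only GPU 0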

At the last DockerCon, I came across a four-wheeled, knee-high, cute little food-delivery robot called Kiwibot. Built by KiwiCampus Inc., Kiwibot is the robot counterpart of the pizza delivery person. Thanks to DockerCon for bringing this amazing piece of technology in-house for distributing swag, stickers, chocolates and goodies.

What’s so cool about Kiwibots?

Just open up the Kiwi app and place an order for food. When you order online (from a participating restaurant), you get the option of delivery via Kiwi. Once you choose it, one of the company’s fleet of super cool robots with insulated, locking storage compartments swings by the restaurant, your order is placed inside, and it brings it to your location. You can even watch the last stretch live from the robot’s perspective as it rolls up to your place. Amazing, isn’t it?

Well, the super cute build and structure of these Kiwibots looked interesting to me, but what really fascinated me is the technology behind these small robots. A Kiwibot is equipped with six cameras and GPS to deliver the order to the right place. Nice! Now here comes the best part: only the person who placed the order can open the Kiwibot and retrieve it through the app, which means it is intelligent enough to do object detection and analytics too. Interesting!

Tell me more…

Source:https://www.kiwicampus.com/technology

I went through the technology stack and was amazed to learn that it uses the NVIDIA Jetson TX2 system for all of the AI processing, imaging, and related computing tasks. Jetson TX2 is a credit-card-sized platform that puts AI computing to work in the world all around us. GPU-based deep learning has given computers the ability to understand, and react to, the data streaming in from all these devices in uncanny new ways: through training, which creates smart systems, and through inference, which lets systems react intelligently to the world around them in real time.

Before I lose my patience any further: the day finally arrived when I was lucky enough to hold the most powerful AI platform in my hand.

The NVIDIA® Jetson Nano™ Developer Kit is purely an AI computer. It is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing, all in an easy-to-use platform that runs in as little as 5 watts. It is perfect for makers, learners, and developers, bringing the power of modern artificial intelligence to a low-power, easy-to-use platform.

Some Really Useful Facts around NVIDIA Jetson Nano..

  • There are two ways to power a Jetson Nano: via a micro-USB power supply or via a barrel-jack 5V 4A (20 W) power supply. I don’t recommend using the ordinary micro-USB adapter you use for your mobile phone.
  • The Jetson Nano works in two power modes: MAXN and 5 Watt. By default the Nano runs in MAXN (10 Watt) mode. If you are using a micro-USB adapter, you should switch to 5 W mode immediately; when using the barrel jack with a 5V 4A (20 W) power supply, set the Nano to 10 W mode to allow maximum power usage (see the sketch after this list).
  • No WiFi module is shipped with this board. You can easily get a certified WiFi dongle or module and plug it in.
  • The Jetson Nano is supported by the comprehensive NVIDIA® JetPack™ SDK and has the performance and capabilities needed to run modern AI workloads. JetPack includes a full desktop Linux environment with the NVIDIA driver, AI and computer vision libraries and APIs, developer tools, documentation, and sample code.
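
The power mode can be inspected and switched with the nvpmodel tool that ships with JetPack (a sketch; on the Nano, mode 0 is usually MAXN/10 W and mode 1 is 5 W, but verify the numbering on your image):

sudo nvpmodel -q        # query the current power mode
sudo nvpmodel -m 1      # switch to 5 W mode (mode numbering assumed; check with -q first)
sudo nvpmodel -m 0      # switch back to MAXN (10 W) mode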

Installing Docker 19.03 on NVIDIA Jetson Nano

Earlier in May 2019, I wrote a blog post around Docker 19.03, which ships with the new --gpus CLI capability. With the 19.03 GA release, you no longer need to spend time downloading the NVIDIA-Docker plugin and relying on the nvidia wrapper to launch GPU containers. You can now simply pass the --gpus option to docker run to allow containers to use GPU devices seamlessly.

As I wanted to try running CUDA containers on Jetson Nano, I couldn’t wait to update it to the 19.03 release and see how containers can leverage the onboard GPU. In this blog post, I will show how to get started with Docker 19.03 on Jetson Nano.

Preparing Jetson Nano

  • Unboxing the Jetson Nano pack
  • Preparing your microSD card

To prepare your microSD card, you’ll need a computer with an Internet connection and the ability to read and write SD cards, either via a built-in SD card slot or an adapter.

  1. Download the Jetson Nano Developer Kit SD Card Image, and note where it was saved on the computer.
  2. Write the image to your microSD card (at least 16 GB) by following the instructions for the type of computer you are using: Windows, Mac, or Linux. On a Windows laptop, you can use the SD Formatter software to format your microSD card and Win32DiskImager to flash the Jetson Nano image. On a Mac, you can use Etcher.

The Jetson Nano SD card image is around 12 GB (uncompressed).

Next, remove the microSD card from the card reader, plug it into the Jetson board, and let it boot.

Wow! The Jetson Nano comes with Docker 18.09 by default

Yes, you read that correctly. Let’s verify it. First we will check the OS version running on the Jetson Nano.

Verifying OS running on Jetson Nano

jetson@jetson-desktop:~$ sudo cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
jetson@jetson-desktop:~$

Verifying Docker

jetson@jetson-desktop:~$ sudo docker version
Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        6247962
 Built:             Tue Feb 26 23:51:35 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       6247962
  Built:            Wed Feb 13 00:24:14 2019
  OS/Arch:          linux/arm64
  Experimental:     false
jetson@jetson-desktop:~$

Updating OS Repository

sudo apt update

Installing Docker 19.03 Binaries

You will need the curl command to update Docker 18.09 to 19.03.

sudo apt install curl
curl -sSL https://get.docker.com/ | sh
jetson@jetson-desktop:~$ sudo docker version
Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:32:21 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:30:53 2019
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
jetson@jetson-desktop:~$

Installing Docker Compose
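
Docker Compose can be installed from the Ubuntu 18.04 repositories, which ship the 1.17.1 build seen in the version output below (a minimal sketch; the exact install command is an assumption):

sudo apt install -y docker-compose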

root@jetson-desktop:/home/jetson# /usr/bin/docker-compose version
docker-compose version 1.17.1, build unknown
docker-py version: 2.5.1
CPython version: 2.7.15+
OpenSSL version: OpenSSL 1.1.1  11 Sep 2018
root@jetson-desktop:/home/jetson#

Turn Your Jetson Nano into a CCTV Camera

You can connect a USB camera module directly to the Jetson Nano and it should work flawlessly.

All you need to do is clone the GitHub repository below and run the script.

git clone https://github.com/ajeetraina/docker-cctv-raspbian
cd docker-cctv-raspbian
sh run.sh

The script pulls the Docker image from Docker Hub and runs a container that turns your Jetson Nano into a CCTV camera.


root@jetson-desktop:~/docker-cctv-raspbian# docker ps
CONTAINER ID        IMAGE                             COMMAND             CREATED             STATUS              PORTS                    NAMES
b6ff860d4f2a        ajeetraina/docker-cctv-raspbian   "motion"            6 seconds ago       Up 2 seconds        0.0.0.0:8081->8081/tcp   hopeful_newton
root@jetson-desktop:~/docker-cctv-raspbian#
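
With the container up, the motion stream is published on port 8081 (per the docker ps output above), so it should be viewable from another machine on the same network, for example:

# Replace <jetson-ip> with the Nano's LAN address, e.g. 192.168.1.3
curl -I http://<jetson-ip>:8081/
# or simply open http://<jetson-ip>:8081/ in a browser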

With just a one-line CLI command, I was able to see my Logitech webcam in action (see below).

Figure: This image is captured by Logitech webcam mounted on Jetson Nano board

Running Hello World Example with Jetson Nano

jetson@jetson-desktop:~$ docker run arm64v8/hello-world
Unable to find image 'arm64v8/hello-world:latest' locally
latest: Pulling from arm64v8/hello-world
3b4173355427: Pull complete
Digest: sha256:5970f71561c8ff01d1d97782f37b0142315c53f31ad23c22883488e36a6dcbcb
Status: Downloaded newer image for arm64v8/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm64v8)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

jetson@jetson-desktop:~$

Verifying NVIDIA Container Runtime

NVIDIA Container Runtime with Docker integration (via the nvidia-docker2 packages) is included as part of NVIDIA JetPack. It is available for install via the NVIDIA SDK Manager along with other JetPack components as shown below:

jetson@jetson-desktop:~$ sudo docker info | grep nvidia
 Runtimes: nvidia runc
jetson@jetson-desktop:~$ sudo dpkg --get-selections | grep nvidia
libnvidia-container-tools                       install
libnvidia-container0:arm64                      install
nvidia-container-runtime                        install
nvidia-container-runtime-hook                   install
nvidia-docker2                                  deinstall
nvidia-l4t-3d-core                              install
nvidia-l4t-apt-source                           install
nvidia-l4t-bootloader                           install
nvidia-l4t-camera                               install
nvidia-l4t-ccp-t210ref                          install
nvidia-l4t-configs                              install
nvidia-l4t-core                                 install
nvidia-l4t-cuda                                 install
nvidia-l4t-firmware                             install
nvidia-l4t-graphics-demos                       install
nvidia-l4t-gstreamer                            install
nvidia-l4t-init                                 install
nvidia-l4t-kernel                               install
nvidia-l4t-kernel-dtbs                          install
nvidia-l4t-kernel-headers                       install
nvidia-l4t-multimedia                           install
nvidia-l4t-multimedia-utils                     install
nvidia-l4t-oem-config                           install
nvidia-l4t-tools                                install
nvidia-l4t-wayland                              install
nvidia-l4t-weston                               install
nvidia-l4t-x11                                  install
nvidia-l4t-xusb-firmware                        install
jetson@jetson-desktop:~$

Running CUDA Containers on Jetson Nano

jetson@jetson-desktop:~$ sudo docker run -it --runtime nvidia devicequery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.0 / 10.0
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3956 MBytes 
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
....
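
The devicequery image used above is not an official image; presumably it was built locally from the deviceQuery sample that ships with the CUDA toolkit in JetPack. A minimal sketch of how such an image could be produced (the paths and base-image tag are assumptions):

# Build the CUDA sample on the Nano itself
cd /usr/local/cuda/samples/1_Utilities/deviceQuery && sudo make

# Dockerfile sketch -- copy the resulting binary into an L4T base image
#   FROM nvcr.io/nvidia/l4t-base:r32.2
#   COPY deviceQuery /usr/local/bin/deviceQuery
#   CMD ["deviceQuery"]
# then: sudo docker build -t devicequery .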
  
  

What’s Next?

Integrating Pico, a deep learning platform, with Jetson & Docker is an exciting project I am working on. If you’re completely new to it, do check out https://github.com/collabnix/pico for further details.