The Rise of Pico: At the Grace Hopper Celebration India

Estimated Reading Time: 4 minutes

The Grace Hopper Celebration of Women in Computing (GHC) is a series of conferences designed to bring the research and career interests of women in computing to the forefront. It is the world’s largest gathering of women in computing. The celebration, named after computer scientist Grace Hopper, is organized every year by the Anita Borg Institute for Women and Technology and the Association for Computing Machinery.

This year, the conference took place at the Bangalore International Exhibition Centre (the third consecutive year at this venue) from November 6-8, 2019. It hosted around 5,000 attendees and combined technical sessions and workshops with career sessions, expos, poster sessions, a career fair, an awards ceremony, and much more.

What’s Pico all about?

Pico is an open source project that implements deep-learning-based object detection and analytics using Docker on IoT devices such as the Raspberry Pi and Jetson Nano, in just three simple steps. Imagine being able to capture live video streams, identify objects using deep learning, and then trigger actions or notifications based on the identified objects – all using Docker containers. With Pico, you can set up and run a prototype of a live video capture, analysis, and alerting solution. A camera surveils a particular area, streaming video over the network to a video capture client. The client samples video frames and sends them to AWS, where they are analyzed and stored along with metadata. If certain objects are detected in the analyzed frames, SMS alerts are sent out. Once a person receives an SMS alert, they will likely want to know what caused it; for that, sampled video frames can be monitored with low latency through a web-based user interface.


The Pico framework uses a Kafka cluster to acquire data in real time. Kafka is a distributed, message-based publish-subscribe system that offers high throughput and a robust fault-tolerance mechanism. The data source is the video generated by the cameras attached to the Raspberry Pi.
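As a rough illustration of this publish-subscribe model, the broker can be probed with Kafka's own CLI tools. The topic name below is an assumption (Pico's actual topic names live in its scripts), and older Kafka releases use --zookeeper instead of --bootstrap-server:

# Create a topic for the sampled video frames (topic name is illustrative)
bin/kafka-topics.sh --create --bootstrap-server <broker-ip>:9092 \
  --replication-factor 1 --partitions 1 --topic video-frames

# Watch messages arriving on the topic from any consumer machine
bin/kafka-console-consumer.sh --bootstrap-server <broker-ip>:9092 \
  --topic video-frames --from-beginning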

This year, I submitted a workshop proposal on Pico during June-July along with three other DellEMC engineers. The submission was selected for the GHCI conference in the first week of August 2019. This was our first workshop submission and we were thrilled with the news. We confirmed our participation in early September, and the final program material was due in the second week of September. It was submitted on Linklings and included the profiles and headshots of all the speakers in the session.

In mid-September, I started building the workshop material for Pico. The workshop consisted of four separate modules (a condensed command sketch follows the workshop link below):

  • Installing Docker on Raspberry Pi 4
  • Setting up Apache Kafka on Cloud
  • Setting up Pico
  • Testing Object Detection

You can access the workshop material at:
https://github.com/collabnix/pico/tree/master/workshop
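For a quick taste of what the modules cover, here is a condensed, hedged sketch of the flow; the exact, tested commands (including the Kafka and consumer setup) are in the workshop itself:

# Module 1: install Docker on the Raspberry Pi 4
curl -sSL https://get.docker.com | sh
sudo usermod -aG docker $USER      # log out and back in for the group change to apply

# Module 3: fetch the Pico sources (module 2, Kafka on the cloud, is omitted here)
git clone https://github.com/collabnix/pico
cd pico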

Finally… It’s Pico Day

The workshop was scheduled for 8th November and drew around 80 attendees. Around eight Raspberry Pis were arranged per table, which let attendees try out the CLI steps to install Docker on Raspberry Pi. Apache Kafka was pre-installed on AWS Cloud along with the image processor and consumer scripts. I showcased a live demo where the camera module attached to a Raspberry Pi detected the audience as bounding boxes of people.

Overall, it was an amazing experience being part of GHCI and running a workshop for such a huge audience. It was great to receive loads of positive feedback from the organizers too. Representing DellEMC at such a big conference made us proud, and we went home with plenty of appreciation, ready to prepare for forthcoming events.

Acknowledgements…

Thanks to Prashant Ksr, Priti Parate & Varalakshmi for all the support in making this happen. Thanks to the GHCI organizers for this opportunity and for believing in this promising project. Thank you.

Want to learn more about Pico?

Head over to https://github.com/collabnix/pico

If you want to collaborate on the Pico project, visit https://dockerlabs.collabnix.com and join the 1600+ community members on Slack to discuss further. Looking forward to seeing you there.

Multi-Node K3s Cluster on NVIDIA Jetson Nano in 5 Minutes

Estimated Reading Time: 7 minutes

If you are looking for a lightweight Kubernetes that is easy to install and perfect for Edge, IoT, CI and ARM, then look no further: K3s is the right solution for you. K3s is a certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

The K3s GitHub repository has already crossed 9,000 stars. With over 600 forks and 50+ contributors, the project is gaining a lot of momentum among developers. A few of the notable features offered by K3s are:

  • Single-liner installation
  • A binary of less than 40 MB
  • Both ARM64 and ARMv7 are supported with binaries and multiarch images available for both
  • Can be deployed seamlessly over Raspberry Pi 3 & 4
  • SQLite3 as the default storage mechanism. etcd3 is still available, but not the default.

Earlier this year, I wrote a blog post on how to build a K3s cluster on Raspberry Pi 3.

In this post, I will showcase how to build a 2-node K3s cluster on NVIDIA Jetson Nano without any compilation pain.

Prerequisites:

  • Unboxing Jetson Nano Pack
  • Preparing your microSD card

To prepare your microSD card, you’ll need a computer with an Internet connection and the ability to read and write SD cards, either via a built-in SD card slot or an adapter.

  1. Download the Jetson Nano Developer Kit SD Card Image, and note where it was saved on the computer.
  2. Write the image to your microSD card (at least 16 GB) by following the instructions for your operating system: Windows, Mac, or Linux. On a Windows laptop, you can use the SDFormatter software to format the microSD card and Win32DiskImager to flash the Jetson Nano image; on a Mac, you can use Etcher (a hedged Linux example follows below).

The Jetson Nano SD card image is around 12 GB (uncompressed).
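On a Linux machine, a hedged equivalent is to use Etcher or plain dd. The device name and image filename below are placeholders, so double-check them with lsblk before writing:

# Identify the card (assume it shows up as /dev/sdX – verify first!)
lsblk

# Unzip and write the image; this will erase everything on the card
unzip jetson-nano-sd-card-image.zip          # filename is illustrative
sudo dd if=<extracted-image>.img of=/dev/sdX bs=1M status=progress conv=fsync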

Next, it’s time to remove the microSD card from the card reader, plug it into the Jetson board, and let it boot.

Verifying OS running on Jetson Nano

jetson@jetson-desktop:~$ sudo cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
jetson@jetson-desktop:~$

Setting up hostname

sudo vi /etc/hostname
master1.dell.com

Reboot the system.
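On Ubuntu 18.04, the same change can usually be made without a reboot via systemd's hostnamectl; adding a matching /etc/hosts entry also helps the nodes resolve each other by name (the IP below is this node's own address):

sudo hostnamectl set-hostname master1.dell.com
echo "192.168.1.3  master1.dell.com" | sudo tee -a /etc/hosts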

Installing K3s

jetson@master1:~$ sudo curl -sfL https://get.k3s.io | sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-arm64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.9.1/k3s-arm64
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Listing the Nodes

jetson@master1:~$ sudo k3s kubectl get node -o wide
NAME             STATUS   ROLES    AGE     VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
jetson-desktop   Ready    master   3m55s   v1.15.4-k3s.1   192.168.1.3   <none>        Ubuntu 18.04.2 LTS   4.9.140-tegra    containerd://1.2.8-k3s.1
jetson@master1:~$

Verifying Kubernetes Cluster Information

 sudo k3s kubectl cluster-info

Kubernetes master is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. 

Listing all Kubernetes Resources across Namespaces

$ sudo k3s kubectl get all --all-namespaces
NAMESPACE     NAME                             READY   STATUS              RESTARTS   AGE
kube-system   pod/coredns-66f496764-bf7qn      1/1     Running             0          79s
kube-system   pod/traefik-d869575c8-wsq2b      0/1     ContainerCreating   0          12s
kube-system   pod/svclb-traefik-gtqpd          0/3     ContainerCreating   0          12s
kube-system   pod/helm-install-traefik-4tjpc   0/1     Completed           0          79s


NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
kube-system   service/kube-dns     ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP                      99s
default       service/kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP                                     97s
kube-system   service/traefik      LoadBalancer   10.43.140.218   <pending>     80:31655/TCP,443:31667/TCP,8080:31486/TCP   13s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   1         1         0       1            0           <none>          13s

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   1/1     1            1           99s
kube-system   deployment.apps/traefik   0/1     1            0           13s

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-66f496764   1         1         1       80s
kube-system   replicaset.apps/traefik-d869575c8   1         1         0       13s



NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           69s        98s

jetson@jetson-desktop:~$

Deploying NGINX on k3s

$ sudo k3s kubectl run mynginx --image=nginx --replicas=3 --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mynginx created
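Since kubectl run with --replicas is flagged as deprecated in the output above, a hedged alternative that achieves the same result on newer kubectl versions is:

sudo k3s kubectl create deployment mynginx --image=nginx
sudo k3s kubectl scale deployment mynginx --replicas=3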

Listing the NGINX Pods

sudo k3s kubectl get po
NAME                       READY   STATUS              RESTARTS   AGE
mynginx-568f57494d-8jpwq   0/1     ContainerCreating   0          69s
mynginx-568f57494d-czl9x   0/1     ContainerCreating   0          69s
mynginx-568f57494d-pnphb   0/1     ContainerCreating   0          69s

Viewing Nginx Pod details

sudo k3s kubectl describe po mynginx-568f57494d-8jpwq
Name:           mynginx-568f57494d-8jpwq
Namespace:      default
Priority:       0
Node:           jetson-desktop/192.168.1.3
Start Time:     Mon, 07 Oct 2019 20:57:14 +0530
Labels:         pod-template-hash=568f57494d
                run=mynginx
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/mynginx-568f57494d
Containers:
  mynginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zjsrt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-zjsrt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zjsrt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                     Message
  ----    ------     ----  ----                     -------
  Normal  Scheduled  98s   default-scheduler        Successfully assigned default/mynginx-568f57494d-8jpwq to jetson-desktop
  Normal  Pulling    94s   kubelet, jetson-desktop  Pulling image "nginx"

Ensuring that Pods are in Running State

jetson@jetson-desktop:~$ sudo k3s kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-568f57494d-pnphb   1/1     Running   0          113s
mynginx-568f57494d-czl9x   1/1     Running   0          113s
mynginx-568f57494d-8jpwq   1/1     Running   0          113s
jetson@jetson-desktop:~$

Exposing NGINX Port

$ sudo k3s kubectl expose deployment mynginx --port 80
service/mynginx exposed

Verifying the Endpoints

$ sudo k3s kubectl get endpoints mynginx
NAME      ENDPOINTS                                AGE
mynginx   10.42.0.6:80,10.42.0.7:80,10.42.0.8:80   27s

Verifying if Nginx is accessible

$ sudo curl 10.42.0.6
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
jetson@jetson

Joining Nodes

I assume that the worker node hostname is “worker1.dell.com”. First, grab the join token from the master node:

 sudo cat /var/lib/rancher/k3s/server/node-token

password for jetson:
K1062243e4f7c5bef777a082ae9919d3d2bf13d446bff423275XXXXd08::node:a761a49be1XXXXXXc7c9cc7ccd9baaf

Installing K3s on Worker Node and joining to the Master Node

jetson@worker1:~$ sudo curl -sfL https://get.k3s.io | K3S_URL=https://master1.dell.com:6443 K3S_TOKEN=K10a2fa03edec41872f9b4068ddcfb9afaf329e24bf25ca33a0dc879b3e02ea9805::node:f9613e3b73d8184277493911ddca4e6a  sh -
[INFO]  Finding latest release
[INFO]  Using v0.9.1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.9.1/sha256sum-arm64.txt
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

jetson@master1:~$ sudo k3s kubectl get nodes
NAME               STATUS   ROLES    AGE     VERSION
master1.dell.com   Ready    master   7m45s   v1.15.4-k3s.1
worker1.dell.com   Ready    worker   24s     v1.15.4-k3s.1
jetson@master1:~$

Uninstalling k3s

/usr/local/bin/k3s-uninstall.sh 
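On the worker node, the installer created an agent-specific uninstall script instead (see the install log above), so the equivalent clean-up there is:

/usr/local/bin/k3s-agent-uninstall.sh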

Setting up Kubernetes Dashboard

jetson@master1:~$ cat k3s-dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
jetson@master1:~$ sudo k3s kubectl apply -f k3s-dashboard.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
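Note that the manifest above only creates the admin-user service account and its cluster role binding; the dashboard itself still has to be deployed. A hedged sketch using the upstream v1.10.1 manifest (the URL and version are assumptions and may need adjusting), followed by retrieving the token used to log in:

sudo k3s kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

# Print the admin-user token to paste into the dashboard login screen
sudo k3s kubectl -n kube-system describe secret \
  $(sudo k3s kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')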

Verifying if Kubernetes Dashboard Pods are up and Running

jetson@master1:~$ sudo k3s kubectl get po,svc,deploy -n kube-system
NAME                             READY   STATUS      RESTARTS   AGE
pod/coredns-66f496764-dgftj      1/1     Running     0          12m
pod/helm-install-traefik-skl9c   0/1     Completed   0          12m
pod/svclb-traefik-bxxpf          3/3     Running     0          11m
pod/traefik-d869575c8-wfnfx      1/1     Running     0          11m
pod/svclb-traefik-nkfms          3/3     Running     0          5m39s

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)                                     AGE
service/kube-dns   ClusterIP      10.43.0.10      <none>                    53/UDP,53/TCP,9153/TCP                      13m
service/traefik    LoadBalancer   10.43.213.124   192.168.1.3,192.168.1.6   80:32651/TCP,443:32650/TCP,8080:30917/TCP   11m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/coredns   1/1     1            1           13m
deployment.extensions/traefik   1/1     1            1           11m

Enabling kubectl proxy

sudo k3s kubectl proxy
Starting to serve on 127.0.0.1:8001
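With the proxy running, the dashboard (assuming it was deployed into kube-system as sketched earlier; the exact service name and namespace vary by dashboard version) is typically reachable at a URL of this form:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/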

Verifying if all Pods are successfully running

 sudo kubectl get po,deploy,svc -n kube-system
NAME                             READY   STATUS      RESTARTS   AGE
pod/coredns-66f496764-dgftj      1/1     Running     0          17m
pod/helm-install-traefik-skl9c   0/1     Completed   0          17m
pod/svclb-traefik-bxxpf          3/3     Running     0          16m
pod/traefik-d869575c8-wfnfx      1/1     Running     0          16m
pod/svclb-traefik-nkfms          3/3     Running     0          10m

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/coredns   1/1     1            1           17m
deployment.extensions/traefik   1/1     1            1           16m

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)                                     AGE
service/kube-dns   ClusterIP      10.43.0.10      <none>                    53/UDP,53/TCP,9153/TCP                      17m
service/traefik    LoadBalancer   10.43.213.124   192.168.1.3,192.168.1.6   80:32651/TCP,443:32650/TCP,8080:30917/TCP 

In my next blog post, I will showcase how GPU-enabled Kubernetes Pods can be deployed. Stay tuned!

Docker workshop on Raspberry Pi – University of Petroleum and Energy Studies, Dehradun

Estimated Reading Time: 3 minutes

On the 3rd of October, I travelled to UPES Dehradun (around 1,500 miles) for a one-day session on “The Pico Project” as well as to conduct a Docker workshop on Raspberry Pi. It was an amazing experience where I got the chance to interact with university students for the first time.

With over 82 students in attendance, the session began at around 11:00 AM with my talk on IoT devices and how they have transformed everything from human life to business capabilities. I was amazed to find that most of the audience had brought their own Raspberry Pi 3 Model B boards for the workshop.

After the 30-minute talk, I conducted a workshop where I invited around 6-7 university students to bring their Raspberry Pi boards so we could install Docker Engine 19.03.2, and I showcased how to create a Docker Swarm cluster out of those 7 Raspberry Pi boxes. We ran Portainer and Swarm Visualizer on that 7-node Swarm cluster.
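A hedged sketch of the Swarm setup we ran (the IPs, tokens, and the ARM-compatible visualizer image are illustrative; adapt them to your own boards):

# On the first Raspberry Pi (manager)
docker swarm init --advertise-addr <manager-ip>

# On each of the remaining boards (workers) – paste the token printed above
docker swarm join --token <worker-token> <manager-ip>:2377

# Visualize the cluster from the manager (image name is an assumption;
# pick an ARM-compatible visualizer build)
docker service create --name visualizer --publish 8080:8080 \
  --constraint node.role==manager \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  alexellis2/visualizer-arm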

Link for Docker on Raspberry Pi workshop:
http://dockerlabs.collabnix.com/beginners/install/raspberrypi3/

After lunch at around 1:45 PM, we started the Docker workshop at sharp 2:00 PM. First, I asked all the students to create a Docker Hub account so they could build their first Docker image and push it to Docker Hub (a hedged sketch of that flow follows the section list below). We covered the Beginners track at http://dockerlabs.collabnix.com/workshop/docker/, which comprised the sections below:

  • Pre-requisite
  • Getting Started with Docker Image
  • Accessing & Managing Docker Container
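A hedged sketch of the “first image” exercise (the repository and tag names are illustrative – each student used their own Docker Hub ID):

docker login                                   # use your Docker Hub credentials
docker build -t <dockerhub-id>/hello-collabnix:1.0 .
docker push <dockerhub-id>/hello-collabnix:1.0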

Almost all the students were really quick in following the tutorials and finishing the workshop. Special thanks to Marcos Liljedhal, founder of the Play with Docker platform, for ensuring that the platform was up and running, and to Jenny Burcio and Karen Bazja of Docker Inc. for all the support in making this happen. At the end of the workshop, I was handed a memento for sharing knowledge around Docker & IoT.

It was a day well spent interacting with UPES students and faculty. I also got the chance to roam around the city and capture a few of the amazing scenic spots around the UPES campus.

Outcomes & Initiatives…

This trip brought effective outcomes and initiatives around collaboration to build the Docker community. First of all, the Docker Dehradun Meetup community is born. If you are in Dehradun during the 2nd week of November, don’t miss the Docker Dehradun Meetup #1 event, which is slated to happen inside the UPES campus.

Docker Dehradun Meetup #1 – Docker, Kubernetes & Container Security

Wednesday, Nov 13, 2019, 1:00 PM

University Centre for Innovation and Entrepreneurship
Room Number 8007 , University of Petroleum and Energy Studies, Energy Acres, Bidholi Campus Dehra Dun, IN

https://events.docker.com/events/details/docker-dehradun-presents-docker-dehradun-meetup-1-docker-kubernetes-container-security/

We have started a Docker UPES WhatsApp chat group for the students in case they have any queries around Docker & Kubernetes; it grew to 150+ members within a few hours. We have also started a dedicated Collabnix Slack channel for UPES students to collaborate and plan future Meetup events.