Object Detection with Yolo Made Simple using Docker on NVIDIA Jetson Nano

Object Detection using Dockerized Yolo

If you are looking for the most effective real-time object detection algorithm that is open source and free to use, then YOLO (You Only Look Once) is the perfect answer. YOLO encompasses many of the most innovative ideas coming out of the computer vision research community. Object detection has also become a critical capability of autonomous vehicle technology, and Kiwibot is one interesting example I have been talking about. A Kiwibot is a food delivery robot equipped with six cameras and GPS to deliver food orders at the right place & at the right time; last year it served around 40,000 deliveries. Only the person who placed the order can open the Kiwibot and retrieve it through the app – all powered by object detection. Isn’t it amazing?

YOLO is a really clever convolutional neural network (CNN) for doing object detection, and it does so in real time. YOLO learns generalizable representations of objects, so when trained on natural images and tested on artwork, it outperforms other top detection methods. It is extremely fast. Because it sees the entire image during training and test time, it implicitly encodes contextual information about classes as well as their appearance. It is widely adopted today because it achieves high accuracy while still being able to run in real time. If you want to dive deeper into YOLO, I recommend reading the original paper:
https://arxiv.org/pdf/1506.02640v5.pdf
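
If you just want to see YOLO in action before containerizing anything, the reference Darknet implementation can run a one-shot detection from the command line. A minimal sketch, assuming you clone and build Darknet yourself and fetch the pre-trained Tiny YOLOv3 weights (the build is CPU-only unless you enable GPU=1 in the Makefile):

git clone https://github.com/pjreddie/darknet
cd darknet && make
wget https://pjreddie.com/media/files/yolov3-tiny.weights
# Detect objects in a sample image shipped with the repository
./darknet detect cfg/yolov3-tiny.cfg yolov3-tiny.weights data/dog.jpg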

Let’s talk about Pico for On-Premises..

Pico is an interesting project I have been working on for the past 3 months. It is all about implementing object detection & analytics (deep learning) using Docker on IoT devices like Raspberry Pi & Jetson Nano in just 3 simple steps. Imagine being able to capture live video streams, identify objects using deep learning, and then trigger actions or notifications based on the identified objects – all using Docker containers. With Pico, you can set up and run a live video capture, analysis, and alerting solution prototype. A camera surveils a particular area, streaming video over the network to a video capture client. The client samples video frames and sends them over to AWS, where they are analyzed and stored along with metadata. If certain objects are detected in the analyzed video frames, SMS alerts are sent out. Once a person receives an SMS alert, they will likely want to know what caused it; for that, sampled video frames can be monitored with low latency using a web-based user interface. One limitation of this approach is that it uses a cloud deep learning service, Amazon Rekognition, which is free only for the first 5,000 API calls. Once you cross that limit, you get charged. Hence, I started looking for an AI platform that can run modern deep learning algorithms fast.
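
To give a feel for what the cloud side of that pipeline boils down to per sampled frame, here is roughly the equivalent call shown with the AWS CLI. This is only an illustration (Pico calls the Rekognition API from its own containers), and the bucket and object names below are hypothetical placeholders:

# Ask Rekognition to label a single captured frame stored in S3 (hypothetical bucket/key)
aws rekognition detect-labels \
  --image '{"S3Object":{"Bucket":"my-pico-frames","Name":"frame-0001.jpg"}}' \
  --max-labels 10 --min-confidence 75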

Why Yolo on Jetson Nano?

Deep learning is a field with intense computational requirements, and your choice of GPU fundamentally determines your deep learning experience. A fast GPU is very important when you begin to learn deep learning, because it allows you to gain practical experience rapidly, which is key to building the expertise needed to apply deep learning to new problems. Without this rapid feedback, it simply takes too long to learn from your mistakes, and it can be discouraging and frustrating to keep going. Emerging modern libraries like TensorFlow and PyTorch are great for parallelizing recurrent and convolutional networks, and for convolution you can expect a speedup of about 1.9x/2.8x/3.5x for 2/3/4 GPUs.

At just 70 x 45 mm, the Jetson Nano module is the smallest Jetson device with AI capability. This production-ready System on Module (SOM) delivers big when it comes to deploying AI to devices at the edge across multiple industries, from smart cities to robotics. Jetson Nano delivers 472 GFLOPS for running modern AI algorithms fast. It runs multiple neural networks in parallel and processes several high-resolution sensors simultaneously, making it ideal for applications like entry-level Network Video Recorders (NVRs), home robots, and intelligent gateways with full analytics capabilities. You get powerful and efficient AI, computer vision, and high-performance computing at just 5 to 10 watts.

If you have ever set up YOLO on Jetson Nano, I am sure you have faced several challenges compiling Python, OpenCV & Darknet. In this blog post, I will showcase how object detection can be simplified by using a Docker container.

Preparing Jetson Nano

  • Unboxing Jetson Nano Pack
  • Preparing your microSD card

To prepare your microSD card, you’ll need a computer with Internet connection and the ability to read and write SD cards, either via a built-in SD card slot or adapter.

  1. Download the Jetson Nano Developer Kit SD Card Image, and note where it was saved on the computer.
  2. Write the image to your microSD card (at least 16GB in size) by following the instructions below according to the type of computer you are using: Windows, Mac, or Linux. If you are using a Windows laptop, you can use the SDFormatter software to format your microSD card and Win32DiskImager to flash the Jetson Nano image. If you are using a Mac, you will need the Etcher software.

The Jetson Nano SD card image is around 12GB (uncompressed).
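
For Linux users, NVIDIA’s instructions boil down to streaming the unzipped image onto the card with dd. A rough sketch; the zip file name and /dev/sdX below are placeholders, so double-check the device with lsblk before writing, as writing to the wrong disk is destructive:

# Identify the SD card device node first (e.g. /dev/sdX)
lsblk
# Write the image straight out of the downloaded zip onto the card
unzip -p ~/Downloads/jetson-nano-sd-card-image.zip | sudo dd of=/dev/sdX bs=1M status=progress conv=fsync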

Next, it’s time to remove the tiny microSD card from the card reader, plug it into the Jetson board, and let it boot.

Wow! Jetson Nano comes with Docker 18.09 by default

Yes, you read that right: Jetson Nano ships with Docker Engine 18.09 pre-installed. Let us verify it, starting with the OS version running on Jetson Nano.

Verifying OS running on Jetson Nano

jetson@jetson-desktop:~$ sudo cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
jetson@jetson-desktop:~$

Verifying Docker

jetson@jetson-desktop:~$ sudo docker version
Client:
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        6247962
 Built:             Tue Feb 26 23:51:35 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       6247962
  Built:            Wed Feb 13 00:24:14 2019
  OS/Arch:          linux/arm64
  Experimental:     false
jetson@jetson-desktop:~$

Updating OS Repository

sudo apt update

Installing Docker 19.03 Binaries

You will need the curl command to update Docker from 18.09 to 19.03 flawlessly.

sudo apt install curl
curl -sSL https://get.docker.com/ | sh
jetson@jetson-desktop:~$ sudo docker version
Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:32:21 2019
 OS/Arch:           linux/arm64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:30:53 2019
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
jetson@jetson-desktop:~$
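
Optionally, you can add your user to the docker group so that docker commands no longer require sudo (log out and back in for the change to take effect). The rest of this post keeps using sudo, so this is purely a convenience:

sudo usermod -aG docker $USER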

Running Jetson on 5W

Jetson Nano has two power modes, 5W and 10W. Set the power mode of the Jetson Nano to 5W by running the CLI below:

sudo nvpmodel -m 1

Please note that I encountered an issue while operating on 10W: every time I started the OpenDataCam container, the board simply rebooted.
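
You can confirm which power mode is currently active at any time:

# Query the active nvpmodel profile (on Jetson Nano, 0 = 10W MAXN, 1 = 5W)
sudo nvpmodel -q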

Setting up a swap partition

To reduce memory pressure (and crashes), it is a good idea to set up a 6GB swap partition, since the Nano has only 4GB of RAM.

git clone https://github.com/collabnix/installSwapfile
cd installSwapfile
chmod 777 installSwapfile.sh
./installSwapfile.sh

Don’t forget to reboot the Jetson Nano.
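
After the reboot, you can confirm that the extra swap space is active; both commands should report roughly 6GB of swap:

free -h
swapon --show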

Verify that your USB camera is connected

I tested it with a Logitech webcam; Jetson Nano already ships with the driver required for it to work.

ls /dev/video*

Output should be: /dev/video0
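
If you want more detail than just the device node, the v4l2-ctl utility from the v4l-utils package lists the camera and the pixel formats it supports. This is optional, but handy when tuning the GStreamer pipeline later:

sudo apt install -y v4l-utils
v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --list-formats-ext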

Running the scripts

I will be using the OpenDataCam tool for object detection. It is an open source tool that quantifies and tracks moving objects with live video analysis. It runs flawlessly on Linux with CUDA GPU enabled hardware. Good news for NVIDIA Jetson fans: it is optimized for the NVIDIA Jetson board series.

Interestingly, OpenDataCam is shipped as a Docker container. Let us go ahead and try it out for the first time on the Jetson Nano board. Follow the steps below to pull the shell script and run it for Jetson Nano.

wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/v2.1.0/docker/install-opendatacam.sh
chmod 777 install-opendatacam.sh
./install-opendatacam.sh --platform nano

Listing the container



jetson@worker1:~$ sudo docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                                                               NAMES
aae5117a06c6        opendatacam/opendatacam:v2.1.0-nano   "/bin/sh -c ./docker…"   15 minutes ago      Up 5 minutes        0.0.0.0:8070->8070/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8090->8090/tcp, 27017/tcp   heuristic_bardeen
jetson@worker1:~$ sudo docker logs -f aae
2020-01-05T10:24:01.840+0000 I STORAGE  [main] Max cache overflow file size custom option: 0
2020-01-05T10:24:01.845+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten] MongoDB starting : pid=8 port=27017 dbpath=/data/db 64-bit host=aae5117a06c6
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten] db version v4.0.12
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten] git version: 5776e3cbf9e7afe86e6b29e22520ffb6766e95d4
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2n  7 Dec 2017
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten] modules: none
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten] build environment:
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten]     distarch: aarch64
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten]     target_arch: aarch64
2020-01-05T10:24:01.853+0000 I CONTROL  [initandlisten] options: {}
2020-01-05T10:24:01.854+0000 I STORAGE  [initandlisten]
2020-01-05T10:24:01.854+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2020-01-05T10:24:01.854+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2020-01-05T10:24:01.854+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1470M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2020-01-05T10:24:03.612+0000 I STORAGE  [initandlisten] WiredTiger message [1578219843:612093][8:0x7fb6246440], txn-recover: Set global recovery timestamp: 0
2020-01-05T10:24:03.669+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2020-01-05T10:24:03.730+0000 I CONTROL  [initandlisten]
2020-01-05T10:24:03.730+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-01-05T10:24:03.730+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2020-01-05T10:24:03.730+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-01-05T10:24:03.730+0000 I CONTROL  [initandlisten]
2020-01-05T10:24:03.731+0000 I CONTROL  [initandlisten] ** WARNING: This server is bound to localhost.
2020-01-05T10:24:03.731+0000 I CONTROL  [initandlisten] **          Remote systems will be unable to connect to this server.
2020-01-05T10:24:03.731+0000 I CONTROL  [initandlisten] **          Start the server with --bind_ip <address> to specify which IP
2020-01-05T10:24:03.731+0000 I CONTROL  [initandlisten] **          addresses it should serve responses from, or with --bind_ip_all to
2020-01-05T10:24:03.731+0000 I CONTROL  [initandlisten] **          bind to all interfaces. If this behavior is desired, start the
2020-01-05T10:24:03.732+0000 I CONTROL  [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
2020-01-05T10:24:03.732+0000 I CONTROL  [initandlisten]
2020-01-05T10:24:03.733+0000 I CONTROL  [initandlisten]
2020-01-05T10:24:03.734+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-01-05T10:24:03.734+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2020-01-05T10:24:03.734+0000 I CONTROL  [initandlisten]
2020-01-05T10:24:03.738+0000 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: 2ecaac66-8c6f-403e-b789-2a69113c59fd
2020-01-05T10:24:03.802+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 4.0
2020-01-05T10:24:03.810+0000 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: 847e0215-cc4d-4f84-8bbe-0bccb2f9dfd3
2020-01-05T10:24:03.858+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2020-01-05T10:24:03.862+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
2020-01-05T10:24:03.863+0000 I STORAGE  [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: 1e2b3be5-a92a-4eb8-b8a5-c6d10cfaadb7
2020-01-05T10:24:03.961+0000 I INDEX    [LogicalSessionCacheRefresh] build index on: config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 }
2020-01-05T10:24:03.961+0000 I INDEX    [LogicalSessionCacheRefresh]     building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2020-01-05T10:24:03.965+0000 I INDEX    [LogicalSessionCacheRefresh] build index done.  scanned 0 total records. 0 secs
2020-01-05T10:24:03.965+0000 I COMMAND  [LogicalSessionCacheRefresh] command config.$cmd command: createIndexes { createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], $db: "config" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 102ms

> OpenDataCam@2.1.0 start /opendatacam
> PORT=8080 NODE_ENV=production node server.js

Please specify the path to the raw detections file
-----------------------------------
-     Opendatacam initialized     -
- Config loaded:                  -
{
  "OPENDATACAM_VERSION": "2.1.0",
  "PATH_TO_YOLO_DARKNET": "/darknet",
  "VIDEO_INPUT": "usbcam",
  "NEURAL_NETWORK": "yolov3-tiny",
  "VIDEO_INPUTS_PARAMS": {
    "file": "opendatacam_videos/demo.mp4",
    "usbcam": "v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink",
    "usbcam_no_gstreamer": "-c 0",
    "experimental_raspberrycam_docker": "v4l2src device=/dev/video2 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink",
    "raspberrycam_no_docker": "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink",
    "remote_cam": "YOUR IP CAM STREAM (can be .m3u8, MJPEG ...), anything supported by opencv"
  },
  "VALID_CLASSES": [
    "*"
  ],
  "DISPLAY_CLASSES": [
    {
      "class": "bicycle",
      "icon": "1F6B2.svg"
    },
    {
      "class": "person",
      "icon": "1F6B6.svg"
    },
    {
      "class": "truck",
      "icon": "1F69B.svg"
    },
    {
      "class": "motorbike",
      "icon": "1F6F5.svg"
    },
    {
      "class": "car",
      "icon": "1F697.svg"
    },
    {
      "class": "bus",
      "icon": "1F68C.svg"
    }
  ],
  "PATHFINDER_COLORS": [
    "#1f77b4",
    "#ff7f0e",
    "#2ca02c",
    "#d62728",
    "#9467bd",
    "#8c564b",
    "#e377c2",
    "#7f7f7f",
    "#bcbd22",
    "#17becf"
  ],
  "COUNTER_COLORS": {
    "yellow": "#FFE700",
    "turquoise": "#A3FFF4",
    "green": "#a0f17f",
    "purple": "#d070f0",
    "red": "#AB4435"
  },
  "NEURAL_NETWORK_PARAMS": {
    "yolov3": {
      "data": "cfg/coco.data",
      "cfg": "cfg/yolov3.cfg",
      "weights": "yolov3.weights"
    },
    "yolov3-tiny": {
      "data": "cfg/coco.data",
      "cfg": "cfg/yolov3-tiny.cfg",
      "weights": "yolov3-tiny.weights"
    },
    "yolov2-voc": {
      "data": "cfg/voc.data",
      "cfg": "cfg/yolo-voc.cfg",
      "weights": "yolo-voc.weights"
    }
  },
  "TRACKER_ACCURACY_DISPLAY": {
    "nbFrameBuffer": 300,
    "settings": {
      "radius": 3.1,
      "blur": 6.2,
      "step": 0.1,
      "gradient": {
        "1": "red",
        "0.4": "orange"
      },
      "canvasResolutionFactor": 0.1
    }
  },
  "MONGODB_URL": "mongodb://127.0.0.1:27017"
}
-----------------------------------
Process YOLO initialized
2020-01-05T10:24:09.844+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:33770 #1 (1 connection now open)
> Ready on http://localhost:8080
> Ready on http://172.17.0.2:8080
2020-01-05T10:24:09.878+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:33770 conn1: { driver: { name: "nodejs", version: "3.2.5" }, os: { type: "Linux", name: "linux", architecture: "arm64", version: "4.9.140-tegra" }, platform: "Node.js v10.16.3, LE, mongodb-core: 3.2.5" }
2020-01-05T10:24:09.915+0000 I STORAGE  [conn1] createCollection: opendatacam.recordings with generated UUID: 0b545873-c40f-4232-8803-9c7c0cbd0ec4
2020-01-05T10:24:09.917+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:33772 #2 (2 connections now open)
Success init db
2020-01-05T10:24:09.919+0000 I NETWORK  [conn2] received client metadata from 127.0.0.1:33772 conn2: { driver: { name: "nodejs", version: "3.2.5" }, os: { type: "Linux", name: "linux", architecture: "arm64", version: "4.9.140-tegra" }, platform: "Node.js v10.16.3, LE, mongodb-core: 3.2.5" }
2020-01-05T10:24:09.969+0000 I INDEX    [conn1] build index on: opendatacam.recordings properties: { v: 2, key: { dateEnd: -1 }, name: "dateEnd_-1", ns: "opendatacam.recordings" }
2020-01-05T10:24:09.969+0000 I INDEX    [conn1]          building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2020-01-05T10:24:09.971+0000 I INDEX    [conn1] build index done.  scanned 0 total records. 0 secs
2020-01-05T10:24:09.971+0000 I STORAGE  [conn2] createCollection: opendatacam.tracker with generated UUID: 58e46bc1-6f22-4b3b-9e3f-42201351e5b4
2020-01-05T10:24:10.040+0000 I INDEX    [conn2] build index on: opendatacam.tracker properties: { v: 2, key: { recordingId: 1 }, name: "recordingId_1", ns: "opendatacam.tracker" }
2020-01-05T10:24:10.040+0000 I INDEX    [conn2]          building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2020-01-05T10:24:10.042+0000 I INDEX    [conn2] build index done.  scanned 0 total records. 0 secs
2020-01-05T10:24:10.043+0000 I COMMAND  [conn2] command opendatacam.$cmd command: createIndexes { createIndexes: "tracker", indexes: [ { name: "recordingId_1", key: { recordingId: 1 } } ], lsid: { id: UUID("afed3446-90a2-4a09-b03b-ba2e9e3aa76f") }, $db: "opendatacam" } numYields:0 reslen:114 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2, W: 1 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 47016 } }, Collection: { acquireCount: { w: 2 } } } storage:{} protocol:op_msg 118ms
Process YOLO started
{ OPENDATACAM_VERSION: '2.1.0',
  PATH_TO_YOLO_DARKNET: '/darknet',
  VIDEO_INPUT: 'usbcam',
  NEURAL_NETWORK: 'yolov3-tiny',
  VIDEO_INPUTS_PARAMS:
   { file: 'opendatacam_videos/demo.mp4',
     usbcam:
      'v4l2src device=/dev/video0 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink',
     usbcam_no_gstreamer: '-c 0',
     experimental_raspberrycam_docker:
      'v4l2src device=/dev/video2 ! video/x-raw, framerate=30/1, width=640, height=360 ! videoconvert ! appsink',
     raspberrycam_no_docker:
      'nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280, height=720, framerate=30/1, format=NV12 ! nvvidconv ! video/x-raw, format=BGRx, width=640, height=360 ! videoconvert ! video/x-raw, format=BGR ! appsink',
     remote_cam:
      'YOUR IP CAM STREAM (can be .m3u8, MJPEG ...), anything supported by opencv' },
  VALID_CLASSES: [ '*' ],
  DISPLAY_CLASSES:
   [ { class: 'bicycle', icon: '1F6B2.svg' },
     { class: 'person', icon: '1F6B6.svg' },
     { class: 'truck', icon: '1F69B.svg' },
     { class: 'motorbike', icon: '1F6F5.svg' },
     { class: 'car', icon: '1F697.svg' },
     { class: 'bus', icon: '1F68C.svg' } ],
  PATHFINDER_COLORS:
   [ '#1f77b4',
     '#ff7f0e',
     '#2ca02c',
     '#d62728',
     '#9467bd',
     '#8c564b',
     '#e377c2',
     '#7f7f7f',
     '#bcbd22',
     '#17becf' ],
  COUNTER_COLORS:
   { yellow: '#FFE700',
     turquoise: '#A3FFF4',
     green: '#a0f17f',
     purple: '#d070f0',
     red: '#AB4435' },
  NEURAL_NETWORK_PARAMS:
   { yolov3:
      { data: 'cfg/coco.data',
        cfg: 'cfg/yolov3.cfg',
        weights: 'yolov3.weights' },
     'yolov3-tiny':
      { data: 'cfg/coco.data',
        cfg: 'cfg/yolov3-tiny.cfg',
        weights: 'yolov3-tiny.weights' },
     'yolov2-voc':
      { data: 'cfg/voc.data',
        cfg: 'cfg/yolo-voc.cfg',
        weights: 'yolo-voc.weights' } },
  TRACKER_ACCURACY_DISPLAY:
   { nbFrameBuffer: 300,
     settings:
      { radius: 3.1,
        blur: 6.2,
        step: 0.1,
        gradient: [Object],
        canvasResolutionFactor: 0.1 } },
  MONGODB_URL: 'mongodb://127.0.0.1:27017' }
layer     filters    size              input                output
   0 (node:55) [DEP0001] DeprecationWarning: OutgoingMessage.flush is deprecated. Use flushHeaders instead.
conv     16  3 x 3 / 1   416 x 416 x   3   ->   416 x 416 x  16 0.150 BF
   1 max          2 x 2 / 2   416 x 416 x  16   ->   208 x 208 x  16 0.003 BF
   2 conv     32  3 x 3 / 1   208 x 208 x  16   ->   208 x 208 x  32 0.399 BF
   3 max          2 x 2 / 2   208 x 208 x  32   ->   104 x 104 x  32 0.001 BF
   4 conv     64  3 x 3 / 1   104 x 104 x  32   ->   104 x 104 x  64 0.399 BF
   5 max          2 x 2 / 2   104 x 104 x  64   ->    52 x  52 x  64 0.001 BF
   6 conv    128  3 x 3 / 1    52 x  52 x  64   ->    52 x  52 x 128 0.399 BF
   7 max          2 x 2 / 2    52 x  52 x 128   ->    26 x  26 x 128 0.000 BF
   8 conv    256  3 x 3 / 1    26 x  26 x 128   ->    26 x  26 x 256 0.399 BF
   9 max          2 x 2 / 2    26 x  26 x 256   ->    13 x  13 x 256 0.000 BF
  10 conv    512  3 x 3 / 1    13 x  13 x 256   ->    13 x  13 x 512 0.399 BF
  11 max          2 x 2 / 1    13 x  13 x 512   ->    13 x  13 x 512 0.000 BF
  12 conv   1024  3 x 3 / 1    13 x  13 x 512   ->    13 x  13 x1024 1.595 BF
  13 conv    256  1 x 1 / 1    13 x  13 x1024   ->    13 x  13 x 256 0.089 BF
  14 conv    512  3 x 3 / 1    13 x  13 x 256   ->    13 x  13 x 512 0.399 BF
  15 conv    255  1 x 1 / 1    13 x  13 x 512   ->    13 x  13 x 255 0.044 BF
  16 yolo

By now, you should be able to access the OpenDataCam UI at http://<IP-ADDRESS>:8080.
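
If the page does not load in your browser, a quick headless check from the Jetson itself (or any machine on the LAN, replacing localhost with the board’s IP) tells you whether the web server inside the container is answering:

# Expect an HTTP 200 response from the OpenDataCam web UI
curl -I http://localhost:8080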


Pico goes Cloudless: Running RTMP & Nginx for Video Streaming using Docker on Jetson Nano locally

I conducted a Pico workshop for university students (Vellore Institute of Technology, Vellore & the University of Petroleum & Energy Studies, Dehradun) back in October 2019, where I demonstrated live object detection and analytics using Docker, the AWS Rekognition service and Apache Kafka. The whole idea of the Pico project is to simplify the object detection and analytics process using a handful of Docker containers. A cluster of Raspberry Pi nodes installed at various locations is coupled with camera modules and sensors with motion detection activated on them. Docker containers running on these Raspberry Pis turn the nodes into CCTV cameras. The camera-captured video streams are processed by Apache Kafka, and because of Kafka’s replication factor the real-time data can be consumed from any of the five containers. The data is consumed inside a different container that runs on all of these nodes, and AWS Rekognition analyses the real-time video data & searches the objects on screen against a collection of objects.

Time to go Cloudless..

Cloudless computing is all about allowing your workloads, computing, and data to roam around & run wherever & whenever they need to. With the advent of powerful AI products like Jetson Nano, it is now possible to run object detection and analytics locally. The Jetson Nano is built around a 64-bit quad-core Arm Cortex-A57 CPU running at 1.43GHz alongside an NVIDIA Maxwell GPU with 128 CUDA cores capable of 472 GFLOPS (FP16), and has 4GB of 64-bit LPDDR4 RAM onboard along with 16GB of eMMC storage, running Linux for Tegra. The 70 × 45 mm module has a 260-pin SODIMM connector which breaks out interfaces including video, audio, USB, and networking, and allows it to be connected to a compatible carrier board. This board enables the development of millions of new small, low-cost, low-power AI systems. It opens new worlds of embedded IoT applications, including entry-level Network Video Recorders (NVRs), home robots, and intelligent gateways with full analytics capabilities.

To implement Pico for on-premises deployments, I planned the following high-level architecture: the RTMP server, Nginx & the YOLO framework run on one or more Jetson Nanos, while the existing stack of Raspberry Pis is leveraged for capturing the video frames.


Why I chose RTMP Server?

Real-Time Messaging Protocol (RTMP) is an open protocol owned by Adobe that is designed to stream audio, video, and data over the Internet. It is TCP-based and built to maintain low-latency connections for audio and video streaming.

To increase the amount of data that can be smoothly transmitted, streams are split into smaller fragments called packets. RTMP also defines several virtual channels that work independently of each other for packets to be delivered on. This means that video and audio are delivered on separate channels simultaneously. Clients use a handshake to form a connection with an RTMP server which then allows users to stream video and audio. RTMP live streaming generally requires a media server and a content delivery network, but by leveraging StackPath EdgeCompute you can remove the need for a CDN and drastically reduce latency and costs.

Infrastructure Setup:

  • Attach Raspberry Pi with Camera Module
  • Turn Your Raspberry Pi into CCTV Camera
  • Run RTMP + Nginx inside Docker container on Jetson Nano
  • Run Yolo inside Docker container on Jetson Nano

Turn Your Raspberry Pi into CCTV Camera

Refer to this link.
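
The linked guide goes into the details, but the essence is to push the Pi camera feed to the RTMP server running on the Jetson Nano. A rough sketch, assuming raspivid and ffmpeg are installed on the Pi; the IP address and stream key below are placeholders:

# Capture H.264 from the Pi camera and push it to nginx-rtmp on the Jetson
raspivid -o - -t 0 -w 640 -h 360 -fps 25 -b 1000000 | \
  ffmpeg -re -i - -c:v copy -an -f flv rtmp://192.168.0.30/live/test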

How to run RTMP inside Docker Container on Jetson Nano

docker run -d -p 1935:1935 --name nginx-rtmp ajeetraina/nginx-rtmp-arm:latest

If you want to build the Docker image from the Dockerfile yourself, follow the steps below:

git clone https://github.com/collabnix/pico
cd pico/rtmp/
docker build -t ajeetraina/nginx-rtmp-arm .
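
Either way, once the nginx-rtmp container is up you can quickly check that it is listening on the standard RTMP port (1935) before moving on to OBS Studio (install the netcat package if nc is missing):

sudo docker ps --filter name=nginx-rtmp
nc -zv localhost 1935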

Testing RTMP with OBS Studio and VLC

This can be tested either on your laptop or on a Raspberry Pi (using omxplayer).

Follow the steps below if you have a Windows laptop with OBS Studio and VLC installed.

  • Open OBS Studio
  • Click the “Settings” button
  • Go to the “Stream” section
  • In “Stream Type” select “Custom Streaming Server”
  • In the “URL” field, enter rtmp://<ip_of_host>/live, replacing <ip_of_host> with the IP of the host on which the container is running. For example: rtmp://192.168.0.30/live
  • In the “Stream key” use a “key” that will be used later in the client URL to display that specific stream. For example: test
  • Click the “OK” button
  • In the “Sources” section, click the “Add” button (+), select a source (for example “Screen Capture”) and configure it as you need
  • Click the “Start Streaming” button
  • Open a VLC player (it also works in Raspberry Pi using omxplayer)
  • Click in the “Media” menu
  • Click in “Open Network Stream”
  • Enter the URL as rtmp://<ip_of_host>/live/<key>, replacing <ip_of_host> with the IP of the host on which the container is running and <key> with the stream key you created in OBS Studio. For example: rtmp://192.168.0.30/live/test
  • Click “Play”
  • Now VLC should start playing whatever you are transmitting from OBS Studio
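
If you would rather test from a terminal than from the VLC GUI, ffplay (part of the ffmpeg package) can play the same URL. This is just an alternative check, assuming ffmpeg is installed on the machine doing the playback:

ffplay rtmp://192.168.0.30/live/test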

By now, you should have the RTMP server up and configured. In my next post, I will show you how to run the YOLO framework inside a Docker container. Stay tuned!

References:

https://github.com/collabnix/pico

Redis running inside Docker container on NVIDIA Jetson Nano

If you are looking for a small, affordable, low-powered system that brings the power of modern AI to your developers by default, then NVIDIA Jetson Nano is the answer. NVIDIA Jetson Nano is an embedded system-on-module (SoM) and developer kit from NVIDIA, including an integrated 128-core Maxwell GPU, a quad-core ARM A57 64-bit CPU, 4GB of LPDDR4 memory, and support for MIPI CSI-2 and PCIe Gen2 high-speed I/O, all within a $99 price tag. Amazing, isn’t it?

The NVIDIA® Jetson Nano™ Developer Kit is purely an AI computer: a small, powerful board that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing, all on an easy-to-use platform that runs in as little as 5 watts. It is perfect for makers, learners, and developers, bringing the power of modern artificial intelligence to a low-power form factor.

Why Redis on Jetson Nano?


The major problem with existing IoT devices like the Raspberry Pi or the Jetson Nano board is that they use a removable microSD card as their boot device and storage, which raises the question of where to temporarily store data. Imagine the data received from sensors, or the 4K video frames arriving every second, that these IoT devices need to hold for on-device computation. For the majority of IoT projects, a message queuing system like MQTT is all that is needed to connect sensors, devices and graphic interfaces together. But if you have hard requirements for high throughput, or you are storing special data types like binary data or image files, then you should start considering Redis.

Redis is an open source, in-memory Data Structure Store, used as a database, a caching layer or a message broker. Today Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLog, bitmaps, streams, and spatial indexes.

As per this link, Redis is ideal for IoT and Embedded devices for several reasons:

  • Redis has a very small memory footprint and CPU requirements. It can run in small devices like the Raspberry Pi Zero without impacting the overall performance, using a small amount of memory, while delivering good performance for many use cases.
  • The data structures of Redis are often a good way to model IoT/embedded use cases. For example in order to accumulate time series data, to receive or queue commands to execute or responses to send back to the remote servers and so forth.
  • Modeling data inside Redis can be very useful in order to make in-device decisions for appliances that must respond very quickly or when the remote servers are offline.
  • Redis can be used as an interprocess communication system between the processes running in the device.
  • The append-only file storage of Redis is well suited for SSD cards.
  • The Redis 5 stream data structure was specifically designed for time series applications and has a very low memory overhead.

It is important to note that both Redis 4 and Redis 5 support the ARM processor in general. I have been playing around with running containerized applications on Jetson Nano and couldn’t wait to try out Redis on top of NVIDIA Jetson Nano.

Preparing Jetson Nano

The preparation steps (unboxing the kit, flashing the microSD card, verifying the Ubuntu 18.04 image and the pre-installed Docker Engine 18.09, and upgrading Docker to 19.03 via get.docker.com) are exactly the same as the ones described earlier in this article, so I won’t repeat them here.

Installing Docker Compose
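
On the Jetson’s Ubuntu 18.04 image, the simplest route is the distribution package, which is what produces the 1.17.1 build shown in the output below:

sudo apt install -y docker-compose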

root@jetson-desktop:/home/jetson# /usr/bin/docker-compose version
docker-compose version 1.17.1, build unknown
docker-py version: 2.5.1
CPython version: 2.7.15+
OpenSSL version: OpenSSL 1.1.1  11 Sep 2018
root@jetson-desktop:/home/jetson#
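
Nothing in this post strictly requires Compose, but as a quick illustration, the Redis container started in the next section could equally be described declaratively. A minimal sketch of a docker-compose.yml written from the shell (the file location and port mapping are up to you):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  redis:
    image: arm64v8/redis
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"
EOF
# sudo docker-compose up -d    # would start it; the rest of this post uses plain docker run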

Run Redis Server inside Docker

Jetson Nano is ARMv8 (64-bit), hence we need to verify whether an ARM64v8 Redis image is available.

jetson@master1:~$ docker run --name redis-server -d arm64v8/redis redis-server --appendonly yes
6b80312b1e05499d565c6962b03f852db7064d5be97acb11dae31791b55ef320
jetson@master1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
6b80312b1e05        arm64v8/redis       "docker-entrypoint.s…"   6 seconds ago       Up 3 seconds        6379/tcp            redis-server
jetson@master1:~$
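
Note that the run command above does not publish the Redis port outside the Docker host. If you want other machines on your network to reach it (as in the docker ps output shown later, where the port is bound to the board’s IP), add a port mapping when starting the container; a sketch, with 192.168.1.7 standing in for your board’s IP and assuming no container with that name already exists:

sudo docker run --name redis-server -d -p 192.168.1.7:6379:6379 \
  arm64v8/redis redis-server --appendonly yes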

Verify if Redis Server is running or not

jetson@master1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
340437cc7c7c        arm64v8/redis       "docker-entrypoint.s…"   35 seconds ago      Up 32 seconds       6379/tcp            myredis

Checking the Redis Logs

jetson@master1:~$ docker logs -f 4e194
1:C 23 Dec 2019 15:49:21.819 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 23 Dec 2019 15:49:21.819 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 23 Dec 2019 15:49:21.819 # Configuration loaded
1:M 23 Dec 2019 15:49:21.828 * Running mode=standalone, port=6379.
1:M 23 Dec 2019 15:49:21.828 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 23 Dec 2019 15:49:21.828 # Server initialized
1:M 23 Dec 2019 15:49:21.828 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 23 Dec 2019 15:49:21.829 * Ready to accept connections

Running the Redis CLI

jetson@master1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                        NAMES
4e1941c5be9b        arm64v8/redis       "docker-entrypoint.s…"   5 minutes ago       Up 4 minutes        192.168.1.7:6379->6379/tcp   redis-server
jetson@master1:~$ docker exec -it 4e1941 sh
# redis-cli
127.0.0.1:6379>

Redis PING-PONG Test

# redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>

Verifying Redis Command Line Interface

The redis-cli is the Redis command line interface, a simple program that allows you to send commands to Redis and read the replies sent by the server, directly from the terminal.

# redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> set name collabnix
OK
127.0.0.1:6379> get name
"collabnix"

Redis CLI Counter Test

127.0.0.1:6379> incr counter
(integer) 1
127.0.0.1:6379> incr counter
(integer) 2
127.0.0.1:6379>
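
Since the Redis 5 stream data structure was called out earlier as a good fit for sensor time series, here is a quick smoke test from the same redis-cli session. XADD with * auto-generates an entry ID, and XRANGE - + returns every entry in the stream; the key name sensor:temp is just an example:

127.0.0.1:6379> XADD sensor:temp * value 21.5
127.0.0.1:6379> XADD sensor:temp * value 21.7
127.0.0.1:6379> XRANGE sensor:temp - +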

Connecting from another linked container

jetson@master1:~$ docker run -it --rm --link redis-server:redis --name client1 arm64v8/redis sh
# redis-cli -h redis
redis:6379> get name
"collabnix"
redis:6379>
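
Note that --link is a legacy Docker feature. On Docker 19.03, the more idiomatic alternative is a user-defined bridge network, which gives containers the same name-based resolution; a minimal sketch, with the network and container names chosen purely for illustration:

docker network create redisnet
docker run --name redis-on-net --network redisnet -d arm64v8/redis redis-server --appendonly yes
docker run -it --rm --network redisnet arm64v8/redis redis-cli -h redis-on-net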
