If you’re an IoT Edge developer looking to build and deploy production-grade, end-to-end AI robotics applications, check out the highly powerful and robust NVIDIA Jetson AGX Xavier developer platform. The NVIDIA® Jetson AGX Xavier™ Developer Kit provides a full-featured development platform designed for IoT Edge developers to easily create and deploy end-to-end AI robotics applications. It is supported by the NVIDIA JetPack and DeepStream SDKs, as well as the CUDA®, cuDNN, and TensorRT software libraries.
AGX loosely stands for “Autonomous Machines Accelerator Technology”, and the developer kit provides all the tools you need to get started right away. Because it’s powered by the new NVIDIA Xavier processor, you get more than 20x the performance and 10x the energy efficiency of its predecessor, the NVIDIA Jetson TX2.
At just 100 x 87 mm, Jetson AGX Xavier delivers big workstation performance at one-tenth the size of a workstation. This makes it ideal for autonomous machines like delivery and logistics robots, factory systems, and large industrial UAVs. NVIDIA® Jetson™ brings accelerated AI performance to the edge in a power-efficient and compact form factor. Together with the NVIDIA JetPack™ SDK, these Jetson modules open the door for you to develop and deploy innovative products across all industries.
Top 5 Compelling Features of the AGX Xavier Kit
- Unlike the NVIDIA Jetson Nano 2GB/4GB, the NVIDIA Jetson AGX Xavier comes with built-in 32 GB eMMC 5.1 storage, so you don’t need to buy a separate SD card to install and run the operating system. That saves you time getting started with the developer kit (you can verify this from a running board, as shown after this list).
- Compared to the NVIDIA Jetson Nano, the Jetson AGX Xavier module makes AI-powered autonomous machines possible, running in as little as 10 W and delivering up to 32 TOPS.
- AGX Xavier supports up to 6 cameras (36 via virtual channels). Cool, isn’t it?
- AGX Xavier comes with built-in 32 GB 256-bit LPDDR4x memory at 136.5 GB/s, powerful enough to run applications like DeepStream.
- Check out production-ready products based on Jetson AGX Xavier available from Jetson ecosystem partners.
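If you want to sanity-check a couple of these claims on a running board, the stock L4T tools make it easy. A minimal sketch (on stock images the built-in eMMC typically shows up as mmcblk0, and nvpmodel is NVIDIA’s power-mode tool):

# List block devices; the built-in eMMC typically appears as mmcblk0
lsblk
# Query the currently active power mode (e.g. MODE_15W)
sudo nvpmodel -q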
A Bonus…
Along with the Jetson AGX Xavier module with thermal solution, the box includes:
- Reference carrier board
- 65W power supply with AC cord
- Type C to Type A cable (USB 3.1 Gen2)
- Type C to Type A adapter (USB 3.1 Gen 1)
Comparing Jetson Nano vs. Jetson AGX Xavier
| Features | Jetson Nano | Jetson AGX Xavier |
| --- | --- | --- |
| GPU | 128-core Maxwell @ 921 MHz | 512-core Volta @ 1.37 GHz |
| Memory | 4 GB LPDDR4, 25.6 GB/s | 16 GB 256-bit LPDDR4x, 137 GB/s |
| Storage | MicroSD | 32 GB eMMC 5.1 |
| USB | 4x USB 3.0 + Micro-USB 2.0 | 3x USB 3.1 + 4x USB 2.0 |
| Power | 5 W / 10 W | 10 W / 15 W / 30 W |
| PCI-Express lanes | 4 lanes PCIe Gen 2 | 16 lanes PCIe Gen 4 |
| CPU (Arm) | 4-core Arm Cortex-A57 @ 1.43 GHz | 8-core NVIDIA Carmel (Armv8.2) @ 2.26 GHz |
| Tensor Cores | — | 64 |
| Video encoding | 1x 4K30 (H.265), 2x 1080p60 (H.265) | 4x 4K60 (H.265), 16x 1080p60 (H.265), 32x 1080p30 (H.265) |
Getting Started
If you’re in India, I recommend buying it from an authorized dealer and not directly from Amazon, which sells it at a higher price; I recommend buying it from here. Thanks to Arm for delivering this powerful kit as part of the Arm Innovator programme.
Installing Docker
By default, a recent version of Docker ships with the development platform. You can verify the installed version by running the command below:
xavier@xavier-desktop:~$ sudo docker version
[sudo] password for xavier:
Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:37 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:46 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
xavier@xavier-desktop:~$
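As a quick smoke test (assuming the board has internet access), you can run Docker’s multi-arch hello-world image; it pulls the arm64 variant automatically:

# Pull and run the hello-world image to verify the Docker engine works end to end
sudo docker run --rm hello-world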
Identifying the Jetson Board
Clone the repository:
git clone https://github.com/jetsonhacks/jetsonUtilities
Execute the Python script:
python3 jetsonInfo.py
NVIDIA Jetson AGX Xavier [16GB]
L4T 32.3.1 [ JetPack 4.3 ]
Ubuntu 18.04.3 LTS
Kernel Version: 4.9.140-tegra
CUDA NOT_INSTALLED
CUDA Architecture: 7.2
OpenCV version: NOT_INSTALLED
OpenCV Cuda: NO
CUDNN: NOT_INSTALLED
TensorRT: NOT_INSTALLED
Vision Works: NOT_INSTALLED
VPI: NOT_INSTALLED
Vulkan: 1.1.70
xavier@xavier-desktop:~/jetsonUtilities$
Installing jtop
Lucky you! I created a Docker image for the Jetson Nano a few weeks back that you can leverage on the Xavier developer kit too. Check this out:
docker run --rm -it --gpus all -v /run/jtop.sock:/run/jtop.sock ajeetraina/jetson-stats-nano jtop
If you’re new to Docker and want to keep it simple, no worries. Just install the Python module and you are good to go.
sudo -H pip install -U jetson-stats
Collecting jetson-stats
Downloading https://files.pythonhosted.org/packages/70/57/ce1aec95dd442d94c3bd47fcda77d16a3cf55850fa073ce8c3d6d162ae0b/jetson-stats-3.1.1.tar.gz (85kB)
100% |████████████████████████████████| 92kB 623kB/s
Building wheels for collected packages: jetson-stats
Running setup.py bdist_wheel for jetson-stats ... done
Stored in directory: /root/.cache/pip/wheels/5e/b0/97/f0f8222e76879bf04b6e8c248154e3bb970e0a2aa6d12388f9
Successfully built jetson-stats
Installing collected packages: jetson-stats
Successfully installed jetson-stats-3.1.1
xavier@xavier-desktop:~/jetsonUtilities$
Don’t be surprised if you encounter the message below. Reboot your system and re-run the command:
$jtop
I can't access jetson_stats.service.
Please logout or reboot this board.
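If you’d rather not reboot, restarting the jetson_stats service may also clear the error (the service is installed by the jetson-stats package):

# Restart the jetson-stats background service, then re-run jtop
sudo systemctl restart jetson_stats.service
jtop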
Using jtop to see the GPU and CPU details
Displaying Xavier Information
Displaying the Xavier Release Info
xavier@xavier-desktop:~$ jetson_release -v
- NVIDIA Jetson AGX Xavier [16GB]
* Jetpack 4.3 [L4T 32.3.1]
* NV Power Mode: MODE_15W - Type: 2
* jetson_stats.service: active
- Board info:
* Type: AGX Xavier [16GB]
* SOC Family: tegra194 - ID:25
* Module: P2888-0001 - Board: P2822-0000
* Code Name: galen
* CUDA GPU architecture (ARCH_BIN): 7.2
* Serial Number: 1420921055981
- Libraries:
* CUDA: NOT_INSTALLED
* cuDNN: NOT_INSTALLED
* TensorRT: NOT_INSTALLED
* Visionworks: NOT_INSTALLED
* OpenCV: NOT_INSTALLED compiled CUDA: NO
* VPI: NOT_INSTALLED
* Vulkan: 1.1.70
- jetson-stats:
* Version 3.1.1
* Works on Python 2.7.17
xavier@xavier-desktop:~$
Displaying Jetson Variables
export | grep JETSON
declare -x JETSON_BOARD="P2822-0000"
declare -x JETSON_BOARDIDS=""
declare -x JETSON_CHIP_ID="25"
declare -x JETSON_CODENAME="galen"
declare -x JETSON_CUDA="NOT_INSTALLED"
declare -x JETSON_CUDA_ARCH_BIN="7.2"
declare -x JETSON_CUDNN="NOT_INSTALLED"
declare -x JETSON_JETPACK="4.3"
declare -x JETSON_L4T="32.3.1"
declare -x JETSON_L4T_RELEASE="32"
declare -x JETSON_L4T_REVISION="3.1"
declare -x JETSON_MACHINE="NVIDIA Jetson AGX Xavier [16GB]"
declare -x JETSON_MODULE="P2888-0001"
declare -x JETSON_OPENCV="NOT_INSTALLED"
declare -x JETSON_OPENCV_CUDA="NO"
declare -x JETSON_SERIAL_NUMBER="1420921055981"
declare -x JETSON_SOC="tegra194"
declare -x JETSON_TENSORRT="NOT_INSTALLED"
declare -x JETSON_TYPE="AGX Xavier [16GB]"
declare -x JETSON_VISIONWORKS="NOT_INSTALLED"
declare -x JETSON_VPI="NOT_INSTALLED"
declare -x JETSON_VULKAN_INFO="1.1.70"
xavier@xavier-desktop:~$
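These variables come in handy in scripts. For example, here is a sketch that pulls an l4t-base container image matching your L4T release (this assumes a tag for your release exists on NGC, and that the NVIDIA container runtime from the next section is set up):

# $JETSON_L4T is "32.3.1" here, so this resolves to nvcr.io/nvidia/l4t-base:r32.3.1
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r$JETSON_L4T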
Installing nvidia-docker
Install the nvidia-docker2 package:
sudo apt install nvidia-docker2
Install the nvidia-container-runtime package:
sudo apt install nvidia-container-runtime
Update the Docker daemon configuration:
sudo vim /etc/docker/daemon.json
Ensure that /etc/docker/daemon.json registers the nvidia runtime with the path to nvidia-container-runtime:
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
Then signal the Docker daemon to reload its configuration:
sudo pkill -SIGHUP dockerd
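To confirm that the nvidia runtime was registered, inspect the daemon info:

# The Runtimes line should now list nvidia alongside runc
sudo docker info | grep -i runtimes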
Running the DeepStream Container
DeepStream is a streaming analytics toolkit for building AI-powered applications. It takes streaming data as input (from a USB/CSI camera, video files, or streams over RTSP) and uses AI and computer vision to generate insights from pixels for a better understanding of the environment. The DeepStream SDK can be the foundation layer for a number of video analytics solutions, such as understanding traffic and pedestrians in a smart city, health and safety monitoring in hospitals, self-checkout and analytics in retail, and detecting component defects at a manufacturing facility.
- DeepStream 5.1 provides Docker containers for both dGPU and Jetson platforms.
- These containers provide a convenient, out-of-the-box way to deploy DeepStream applications by packaging all associated dependencies within the container.
- The associated Docker images are hosted on the NVIDIA container registry in the NGC web portal at https://ngc.nvidia.com.
- They use the nvidia-docker package, which enables access to the required GPU resources from containers.
- DeepStream builds on top of several NVIDIA libraries from the CUDA-X stack such as CUDA, TensorRT, Triton Inference server and multimedia libraries.
- TensorRT accelerates AI inference on NVIDIA GPUs. DeepStream abstracts these libraries behind DeepStream plugins, making it easy for developers to build video analytics pipelines without having to learn all the individual libraries (see the pipeline sketch after this list).
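Since DeepStream plugins are ordinary GStreamer plugins, a plain decode pipeline is a useful sanity check of the underlying stack. Here is a sketch assuming you run it inside the DeepStream container (where the sample stream ships) on a Jetson, whose hardware decode and display elements are nvv4l2decoder, nvegltransform, and nveglglessink:

# Decode a sample H.264 stream with the hardware decoder and render it via EGL
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! nvegltransform ! nveglglessink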
Please Note:
The dGPU container is called deepstream and the Jetson container is called deepstream-l4t.
- Unlike the container in DeepStream 3.0, the dGPU DeepStream 5.1 container supports DeepStream application development within the container.
- It contains the same build tools and development libraries as the DeepStream 5.1 SDK.
- In a typical scenario, you build, execute and debug a DeepStream application within the DeepStream container.
- Once your application is ready, you can use the DeepStream 5.1 container as a base image to create your own Docker container holding your application files (binaries, libraries, models, configuration files, etc.), as sketched after this list.
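As a sketch of that last step (the directory name myapp and the entrypoint run.sh are made up for illustration), you could generate a Dockerfile like this and build your own image on top of the DeepStream base:

# Write a minimal Dockerfile that layers your app onto the DeepStream L4T image
cat > Dockerfile <<'EOF'
FROM nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples
# Copy your application files (binaries, models, configs) into the image
COPY myapp/ /opt/myapp/
CMD ["/opt/myapp/run.sh"]
EOF
sudo docker build -t my-deepstream-app .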
To run the container:
Allow external applications to connect to the host’s X display:
xhost +
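Note that xhost + disables X access control entirely; a slightly more restrictive alternative is to allow only local connections:

xhost +local: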
Running DeepStream Docker container
DeepStream applications can be deployed in containers using the NVIDIA Container Runtime. The containers are available on NGC, the NVIDIA GPU Cloud registry.
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ad38d8f4612d nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples "/bin/bash" 10 seconds ago Up 9 seconds romantic_hopper
xavier@xavier-desktop:~$
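If you launched the container in another terminal, you can attach to it from here using the name docker ps reported (romantic_hopper in the output above):

# Open a shell inside the running DeepStream container
sudo docker exec -it romantic_hopper /bin/bash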
Once inside the DeepStream container, explore the sample app structure:
root@xavier-desktop:/opt/nvidia/deepstream/deepstream-5.1# tree -L 2
.
|-- LICENSE.txt
|-- LicenseAgreement.pdf
|-- README
|-- bin
| |-- deepstream-app
| |-- deepstream-appsrc-test
| |-- deepstream-audio
| |-- deepstream-dewarper-app
| |-- deepstream-gst-metadata-app
| |-- deepstream-image-decode-app
| |-- deepstream-image-meta-test
| |-- deepstream-infer-tensor-meta-app
| |-- deepstream-mrcnn-app
| |-- deepstream-nvdsanalytics-test
| |-- deepstream-nvof-app
| |-- deepstream-opencv-test
| |-- deepstream-perf-demo
| |-- deepstream-segmentation-app
| |-- deepstream-test1-app
| |-- deepstream-test2-app
| |-- deepstream-test3-app
| |-- deepstream-test4-app
| |-- deepstream-test5-app
| |-- deepstream-testsr-app
| |-- deepstream-transfer-learning-app
| `-- deepstream-user-metadata-app
|-- doc
| `-- nvidia-tegra
|-- install.sh
|-- lib
| |-- gst-plugins
| |-- libiothub_client.so
| |-- libiothub_client.so.1 -> libiothub_client.so
| |-- libnvbufsurface.so -> /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so
| |-- libnvbufsurftransform.so -> /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so
| |-- libnvds_amqp_proto.so
| |-- libnvds_audiotransform.so
| |-- libnvds_azure_edge_proto.so
| |-- libnvds_azure_proto.so
| |-- libnvds_batch_jpegenc.so
| |-- libnvds_csvparser.so
| |-- libnvds_dewarper.so
| |-- libnvds_dsanalytics.so
| |-- libnvds_infer.so
| |-- libnvds_infer_custom_parser_audio.so
| |-- libnvds_infer_server.so
| |-- libnvds_infercustomparser.so
| |-- libnvds_inferutils.so
| |-- libnvds_kafka_proto.so
| |-- libnvds_logger.so
| |-- libnvds_meta.so
| |-- libnvds_mot_iou.so
| |-- libnvds_mot_klt.so
| |-- libnvds_msgbroker.so
| |-- libnvds_msgconv.so -> libnvds_msgconv.so.1.0.0
| |-- libnvds_msgconv.so.1.0.0
| |-- libnvds_msgconv_audio.so -> libnvds_msgconv_audio.so.1.0.0
| |-- libnvds_msgconv_audio.so.1.0.0
| |-- libnvds_nvdcf.so
| |-- libnvds_nvtxhelper.so
| |-- libnvds_opticalflow_dgpu.so
| |-- libnvds_opticalflow_jetson.so
| |-- libnvds_osd.so
| |-- libnvds_redis_proto.so
| |-- libnvds_utils.so
| |-- libnvdsgst_helper.so
| |-- libnvdsgst_inferbase.so
| |-- libnvdsgst_meta.so
| |-- libnvdsgst_smartrecord.so
| |-- libnvdsgst_tensor.so
| |-- libtritonserver.so
| |-- pyds.so
| |-- setup.py
| `-- triton_backends
|-- samples
| |-- configs
| |-- models
| |-- prepare_classification_test_video.sh
| |-- prepare_ds_trtis_model_repo.sh
| |-- streams
| `-- trtis_model_repo
|-- sources
| |-- SONYCAudioClassifier
| |-- apps
| |-- gst-plugins
| |-- includes
| |-- libs
| |-- objectDetector_FasterRCNN
| |-- objectDetector_SSD
| |-- objectDetector_Yolo
| `-- tools
|-- uninstall.sh
`-- version
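From here you can try one of the reference apps. Here is a sketch using deepstream-app with one of the bundled configs (exact config names vary by release, so list samples/configs/deepstream-app to see what is available in yours):

# Run the reference app against a bundled 4-stream sample configuration
deepstream-app -c samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt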
Did you know?
DeepStream applications can be orchestrated on the edge using Kubernetes on GPUs. A sample Helm chart for deploying a DeepStream application is available on NGC.
What’s Next?
In my next blog post, we will deep dive into the DeepStream sample apps and see how to implement a face-mask detection system using NVIDIA DeepStream.