This blog is authored by Manish L. in collaboration with Jalaj Krishna B.S. and Jeevitha S. from Raja Rajeshwari College of Engineering.
The NVIDIA Jetson Nano is a small, powerful computer that brings the power of modern artificial intelligence (AI) to a compact, energy-efficient platform. It is an ideal device for developers, makers, and students, providing a complete environment for building and deploying AI-powered applications in areas such as image classification, object detection, segmentation, and speech processing.
Part of the NVIDIA Jetson family, the Nano is designed to run multiple neural networks in parallel, allowing for real-time processing of complex data. It utilizes the same CUDA-X accelerated computing stack that powers the world’s fastest supercomputers, making it a highly capable platform for edge AI projects. Its combination of performance, low power consumption, and ease of use makes it a perfect tool for developing embedded and IoT (Internet of Things) applications with computer vision and machine learning capabilities.
Why buy Jetson Nano?
The NVIDIA Jetson Nano is the ideal platform for anyone looking to develop or deploy AI applications at the edge. Its combination of performance, size, and ecosystem makes it a highly compelling choice for a wide range of projects:
- Powerful Edge AI Performance: It delivers 472 GFLOPS of compute performance, allowing it to run modern AI algorithms and multiple neural networks in parallel. This capability is essential for real-time applications like object detection, image classification, and segmentation.
- Full NVIDIA Software Support: It is supported by the same CUDA-X accelerated computing stack that powers NVIDIA’s supercomputers. The included JetPack SDK provides a complete AI software stack, including accelerated libraries for deep learning and computer vision, accelerating development time.
- Compact and Energy-Efficient: The module is smaller than a credit card and operates with a low power demand of just 5 to 10 watts. This makes it perfect for power-constrained, embedded, and IoT (Internet of Things) applications, such as home robots or intelligent surveillance systems.
- Affordable AI Development: The Jetson Nano Developer Kit offers the power of modern AI at a low-cost price point, making advanced AI accessible to students, makers, and small development teams.
- Versatile Connectivity: It offers robust connectivity options, including Gigabit Ethernet, USB 3.0 ports for high-speed peripherals, and a dedicated Camera Serial Interface (CSI) for high-resolution cameras. It also features a 40-pin GPIO header for interfacing with various sensors and actuators.
NVIDIA Jetson Nano (4GB) Specifications
| Feature | Specification |
|---|---|
| GPU Architecture | NVIDIA Maxwell architecture with 128 CUDA cores |
| AI Performance | 472 GFLOPS (FP16) |
| CPU | Quad-core ARM Cortex-A57 MPCore processor @ 1.43 GHz |
| Memory | 4 GB 64-bit LPDDR4; 25.6 GB/s bandwidth |
| Storage | 16 GB eMMC 5.1 (module) or microSD slot (Dev Kit) |
| Module Power | 5 W to 10 W (configurable modes) |
| Video Encode | 4K @ 30, 4x 1080p @ 30, 9x 720p @ 30 (H.264/H.265) |
| Video Decode | 4K @ 60, 2x 4K @ 30, 8x 1080p @ 30, 18x 720p @ 30 |
Model Comparison for Jetson Nano (4GB RAM)
The following table highlights models that can realistically run on the Nano using 4-bit quantization (INT4) or highly efficient architectures.
| Model | Parameters | Best Use Case | Expected Speed (tokens/sec) | Memory Required |
|---|---|---|---|---|
| MobileLLM-350M | 350 million | Simple Q&A, IoT triggers | ~80–120 t/s | < 1 GB |
| Qwen 2.5-0.5B | 500 million | Multilingual tasks, basic triage | ~50–70 t/s | ~1.2 GB |
| TinyLlama-1.1B | 1.1 billion | Conversational assistants | ~20–35 t/s | ~1.8 GB |
| Phi-2 (quantized) | 2.7 billion | Reasoning, logic, medical summaries | ~10–15 t/s | ~2.8 GB |
| Llama 3.2-1B | 1.0 billion | Instruction following, agents | ~18–25 t/s | ~1.6 GB |
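The memory figures above can be sanity-checked with simple arithmetic: at INT4, each parameter occupies half a byte, and the runtime (KV cache, activations, library overhead) adds substantially more on top. A rough sketch in shell, using TinyLlama-1.1B as the example (the numbers are ballpark estimates, not measurements):

```shell
# Rough INT4 weight size: parameters * 0.5 bytes
params=1100000000            # TinyLlama-1.1B parameter count
weight_bytes=$(( params / 2 ))

# Convert to MiB (integer arithmetic is fine for a ballpark figure)
echo "INT4 weights: $(( weight_bytes / 1024 / 1024 )) MiB"

# The table's ~1.8 GB figure also includes KV cache and runtime
# overhead, which is why it sits well above the raw weight size.
```

This prints roughly 524 MiB for the weights alone, which shows why a 1.1B-parameter model fits comfortably in the Nano's 4 GB only when quantized.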
Performance Gains: Standard vs. TensorRT Optimized
Using TensorRT (NVIDIA’s deep learning optimizer) instead of raw PyTorch or CPU-based inference provides a massive jump in speed and power efficiency, which is critical for real-time workloads such as medical monitoring.
| Optimization Layer | Speed Gain | Latency | Why it happens |
|---|---|---|---|
| Native PyTorch | $1\times$ (baseline) | High (laggy) | Uses standard Python loops; slow memory management. |
| Quantization (INT4) | $2.5\times$ to $4\times$ | Medium | Shrinks weights from 16-bit to 4-bit; fits more in RAM. |
| TensorRT / TensorRT-LLM | $5\times$ to $10\times$ | Ultra-low | Layer fusion combines math operations to reduce trips to GPU memory. |
| Kernel Auto-Tuning | $1.2\times$ | Low | Selects the fastest math algorithm for the Nano’s specific GPU. |
Getting Started
- Hardware Setup
- Software Setup
- Installing Linux on NVIDIA Jetson Nano

Prerequisites:
- Jetson Nano Developer Kit
- A DC power supply
- 32GB/64GB microSD card
- Monitor (HDMI or DisplayPort)
- WiFi adapter
- Wireless keyboard
- Wireless mouse
Hardware Setup (Connecting Cables, etc.)
The hardware setup requires several components: the NVIDIA Jetson Nano Developer Kit, a DC power supply (4 A recommended for heavier loads), a 32GB/64GB microSD card, a WiFi adapter, a monitor, and a wireless keyboard and mouse. The process involves physically installing the prepared microSD card into the slot on the underside of the Nano module. You then connect the monitor using an HDMI or DisplayPort cable, plug in the keyboard and mouse to the USB ports, and finally connect the DC power supply, which will automatically power on the device.
Software:
- Download Jetson Nano Developer Kit SD card image
- Raspberry Pi Imager / Etcher installed on your local system
Before assembling the hardware, we must prepare the operating system image. This setup involves two main steps on our host computer: first, downloading the Jetson SD card image (the Ubuntu-based Linux operating system specific to the Nano), and second, installing a utility such as Raspberry Pi Imager or Etcher, which is required to write the operating system image file onto the microSD card.
Preparing Your Jetson Nano:
- Unzip the SD card image
- Insert SD card into your system.
- Bring up Raspberry Pi Imager tool to flash image into the SD card
Installing Linux on Jetson Nano
This step focuses on creating the bootable system drive. You begin by unzipping the downloaded SD card image file and inserting the blank microSD card into your host computer’s card reader. You then launch the Raspberry Pi Imager tool (or similar flashing app), select the Jetson Nano image file, choose your microSD card as the target drive, and start the flashing process. Once the writing and verification are complete, you safely eject the microSD card, which is now ready to be inserted into the Jetson Nano to begin the first boot.
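On a Linux host, the same flashing step can also be done from the command line. A sketch using dd follows; note that the image filename and /dev/sdX are placeholders, and you must confirm the actual device with lsblk first, since writing to the wrong disk is destructive:

```shell
# Unzip the downloaded Jetson Nano SD card image (filename is illustrative)
unzip jetson-nano-sd-card-image.zip

# Identify the SD card's device node first; do NOT guess
lsblk

# Write the image to the card; replace /dev/sdX with your card's device
sudo dd if=sd-blob.img of=/dev/sdX bs=1M status=progress conv=fsync
```

The Raspberry Pi Imager route described above is the safer option for most users, since it selects and verifies the target drive for you.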
The command sudo apt update refreshes the list of available software and their versions from the repositories defined in the system’s sources.list. It does not install or upgrade any packages; it only fetches the latest package information, including security updates and new applications. The output confirms that the package lists were successfully read, and one package was found that can be upgraded.
$ sudo apt update
Upgrading the Ubuntu system
The command sudo apt upgrade installs the latest versions of all packages currently installed on your Ubuntu system. It uses the package list information gathered in the preceding sudo apt update step to decide which software components need upgrading. Since this command modifies system files, it is run with sudo (superuser do), which grants the necessary administrative privileges. The goal is to keep your software up to date with the latest features and security fixes.
$ sudo apt upgrade
Install the update manager core:
The command sudo apt install update-manager-core installs, or confirms the presence of, the core system update-management package. This package is essential for handling system updates and release upgrades. In this case, the output confirms that update-manager-core is already the newest version on the system, so nothing was installed or upgraded in this step.
$ sudo apt install update-manager-core
Checking the Ubuntu release:
The command lsb_release -a displays Linux Standard Base (LSB) information about the distribution. The -a flag, standing for “all,” shows every available field. Its primary use is to quickly identify the Linux distribution, version number (e.g., 18.04), and codename (e.g., bionic). We frequently use this to make sure we are installing the correct packages for our specific operating system version.
$ lsb_release -a
Getting started with Docker:
Checking for an existing Docker installation:
$ docker version
This command checks the version of the Docker software installed on your system. If Docker is present, it reports the client version and the server (engine) configuration installed on the Jetson Nano.
The output:
bash: docker: command not found
indicates that the Docker package has not yet been successfully installed. This is an expected result at this stage of the installation process.
Updating the package list:
$ sudo apt-get update
This command updates the local list of available software and their versions from all configured package repositories, including the newly added Docker repository. It is a crucial step to ensure that the system knows where to find the latest version of the Docker package for installation.
sudo grants the necessary administrative privileges to perform this system update.
Install Docker:
$ sudo apt-get install -y docker.io
This command is used to download and install the official Docker community edition package, named docker.io, on your Ubuntu system. The sudo prefix grants administrative privileges necessary for installing system-level software. The -y flag automatically confirms any prompts, ensuring a non-interactive installation process. This step is the core of getting the Docker container runtime onto your Jetson Nano.
$ sudo apt-get install -y docker.io
Start and enable the Docker service
This step is essential to manage the Docker runtime as a background application on your Ubuntu system. The first command sudo systemctl enable docker configures the Docker service to automatically start every time the Jetson Nano boots up. Next, sudo systemctl start docker immediately launches the Docker service without requiring a system reboot, making it active right away. Finally, sudo systemctl status docker verifies the operation, showing that the Docker service is loaded and currently running, which confirms the successful setup and readiness for using containers.
$ sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
manish@manish-desktop:~$ sudo systemctl start docker
manish@manish-desktop:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
Active: active (running) since Sun 2026-01-18 21:57:06 IST; 2min 51s ago
Docs: https://docs.docker.com
Main PID: 10201 (dockerd)
Tasks: 8
Verifying the Docker version:
The command sudo docker version is used for verifying the Docker installation and checking its status on the Jetson Nano.
What it does: It runs the Docker command-line utility to query both the Client (the command-line interface) and the Server (the Docker daemon/engine) to report their version numbers, API versions, and other technical details.
Why you do it: This step is crucial for confirmation after installation; it ensures that the Docker service is running correctly, is accessible to the user (via sudo), and confirms the exact software versions for troubleshooting and compatibility checks.
The use of sudo grants the necessary administrative privileges to communicate with the Docker daemon, which runs as a system service.
The detailed output verifies that the entire Docker environment is operational and ready to manage containers.
$ sudo docker version
Configure the NVIDIA Runtime:
Install the NVIDIA Container Toolkit (usually comes with JetPack):
The command sudo apt-get install -y nvidia-docker2 installs the NVIDIA Container Toolkit package.
This step is required because standard Docker is not natively aware of NVIDIA GPUs. The toolkit acts as a bridge, enabling your Docker containers to access and utilize the Jetson Nano’s powerful NVIDIA GPU for hardware-accelerated tasks like deep learning and computer vision. The sudo part grants administrative permissions for the system installation, while -y bypasses confirmation prompts. This configuration is essential for running AI applications efficiently within Docker containers.
The commands are used to configure Docker to officially recognize and use the NVIDIA GPU runtime.
The command: sudo vi /etc/docker/daemon.json uses administrative privileges (sudo) and the vi editor to open the primary configuration file for the Docker daemon.
The file content: The JSON code adds a new runtime named “nvidia” and sets it as the “default-runtime”.
Why it’s required: This step is essential because it tells the Docker engine that any container started by default should use the NVIDIA Container Runtime.
This configuration ensures your Docker containers can access and utilize the Jetson Nano’s GPU for hardware-accelerated tasks, which is necessary for running deep learning and computer vision applications efficiently.
Without this explicit setting, Docker would use the standard runtime and would not be able to leverage the NVIDIA GPU, even though the nvidia-docker2 package is installed.
$ sudo apt-get install -y nvidia-docker2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
nvidia-docker2
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/5,536 B of archives.
After this operation, 27.6 kB of additional disk space will be used.
Selecting previously unselected package nvidia-docker2.
(Reading database ... 175420 files and directories currently installed.)
Preparing to unpack .../nvidia-docker2_2.8.0-1_all.deb ...
Unpacking nvidia-docker2 (2.8.0-1) ...
Setting up nvidia-docker2 (2.8.0-1) ...
manish@manish-desktop:~$ sudo vi /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
Save the file and exit vi with :wq.
Restart Docker:
$ sudo systemctl restart docker
The sudo systemctl restart docker command restarts the Docker service on your Jetson Nano. It is essential after modifying the Docker daemon configuration (e.g., /etc/docker/daemon.json for the NVIDIA runtime) to ensure the Docker engine reloads its settings and applies the changes.
Verifying the Installation:
Run a test container to confirm that Docker can see the NVIDIA GPU:
This command is the final verification test to confirm that the entire Docker and NVIDIA GPU runtime setup is working correctly on your Jetson Nano.
The command performs several key actions:
- sudo docker run: Starts a new container with administrative privileges.
- --rm: Automatically removes the container after it exits.
- --runtime nvidia: Crucially, this explicitly tells Docker to use the NVIDIA Container Runtime (which you configured in the previous step) instead of the standard runtime.
- nvcr.io/nvidia/l4t-base:r32.7.1: Specifies the official NVIDIA “Linux for Tegra” base image for Jetson devices.
- nvidia-smi: This is the command executed inside the container, which is a utility that displays information about NVIDIA GPUs.
Why it is required: Running nvidia-smi successfully inside a Docker container that uses the custom NVIDIA runtime proves that Docker can correctly leverage the Jetson Nano’s GPU. This is essential for running hardware-accelerated AI and machine learning applications.
Pulling nvidia L4T image:
The nvidia/l4t-base image, which stands for Linux for Tegra (L4T), is the official NVIDIA base Docker image specifically designed for Jetson devices like the Jetson Nano.
It was used because:
- Verification: It provides a trusted, minimal environment based on the Jetson operating system where you can be certain all necessary NVIDIA drivers and libraries are available.
- Testing: By running the nvidia-smi command inside this specific container, you perform the final, definitive test to confirm that your entire setup—Docker, the NVIDIA runtime, and the Jetson’s GPU—are all working together correctly and are ready for hardware-accelerated AI applications.
$ sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.7.1 nvidia-smi
Unable to find image 'nvcr.io/nvidia/l4t-base:r32.7.1' locally
r32.7.1: Pulling from nvidia/l4t-base
f46992f278c2: Pull complete
151eb940bbec: Pull complete
0e9dda2495b9: Pull complete
0e78bdc2f297: Pull complete
8dc68d594a4e: Pull complete
Digest: sha256:a374d81695f172fcda9da8db23f60d8bc35948762f71f3ee69564b4f6be5ef1c
Status: Downloaded newer image for nvcr.io/nvidia/l4t-base:r32.7.1
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "nvidia-smi": executable file not found in $PATH: unknown.
The error message is highly specific and points to a failure in the final stage of your Docker and NVIDIA configuration.
The line Status: Downloaded newer image… confirms that Docker successfully found and pulled the correct NVIDIA base image (nvcr.io/nvidia/l4t-base:r32.7.1), meaning your network and registry access are correct.
The critical part is the second line: …exec: “nvidia-smi”: executable file not found in $PATH: unknown. This indicates that when the container tried to execute the nvidia-smi command, the system running inside the container could not locate it. This is a common symptom when the NVIDIA Container Runtime is not correctly configured or activated, even if the nvidia-docker2 package is installed. The Docker daemon is either not using the new NVIDIA runtime you configured in /etc/docker/daemon.json, or the Docker service was not properly restarted afterward to load the new configuration. You should ensure you successfully ran sudo systemctl restart docker after editing daemon.json.
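Based on that diagnosis, a quick recovery sequence looks like the following sketch; the grep step is a sanity check that the daemon actually registered the "nvidia" runtime after the restart:

```shell
# Reload the daemon so the /etc/docker/daemon.json changes take effect
sudo systemctl restart docker

# "Runtimes:" should list nvidia, and "Default Runtime:" should be nvidia
sudo docker info | grep -i runtime

# Retry the verification container
sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.7.1 nvidia-smi
```

If the grep output does not mention nvidia, the daemon.json edit did not load; re-check the file for JSON syntax errors before retrying.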
Verifying the JetPack version:
We need to verify the JetPack version for the following critical reasons:
Software Compatibility: The nvidia-jetpack is a meta-package that bundles the core NVIDIA drivers, CUDA, and AI libraries like cuDNN and TensorRT. Knowing the exact version ensures that any deep learning containers or software you use (like the NVIDIA L4T image) are compatible with the specific software stack on your Jetson Nano.
Troubleshooting: If you encounter errors, the JetPack version is the first piece of information needed. It confirms the baseline configuration of your system and allows you to check for known issues or required patches associated with that specific release (e.g., R32.7).
System Integrity Check: The command confirms that your system is correctly configured to see and access the official NVIDIA software repositories. An unexpected output (like an inability to find the package) would indicate a problem with your repository setup.
The command apt-cache policy nvidia-jetpack is used to verify the status and version of the NVIDIA JetPack SDK package on your system.
- What it does: It queries the Ubuntu package manager (APT) to show the currently Installed version (shown as (none) in the output below) and all Candidate versions available in the NVIDIA repositories (e.g., 4.6.6-b24).
- Why it is required: The nvidia-jetpack is a meta-package that bundles all the necessary NVIDIA drivers and AI libraries. This check confirms that your system can see the official software and helps you determine the specific JetPack version (R32.7 in your case, based on the version table URL) that your system is configured to use, which is crucial for troubleshooting and compatibility with deep learning containers.
$ apt-cache policy nvidia-jetpack
nvidia-jetpack:
Installed: (none)
Candidate: 4.6.6-b24
Version table:
4.6.6-b24 500
500 https://repo.download.nvidia.com/jetson/t210 r32.7/main arm64 Packages
4.6.5-b29 500
500 https://repo.download.nvidia.com/jetson/t210 r32.7/main arm64 Packages
4.6.4-b39 500
500 https://repo.download.nvidia.com/jetson/t210 r32.7/main arm64 Packages
4.6.3-b17 500
500 https://repo.download.nvidia.com/jetson/t210 r32.7/main arm64 Packages
4.6.2-b5 500
500 https://repo.download.nvidia.com/jetson/t210 r32.7/main arm64 Packages
4.6.1-b110 500
500 https://repo.download.nvidia.com/jetson/t210 r32.7/main arm64 Packages
Installing Docker Compose:
You are installing Docker Compose because it is an indispensable tool for complex, multi-container applications. While Docker itself is used to run a single container, Docker Compose allows you to define, launch, and manage entire application stacks that consist of multiple separate containers (e.g., an AI model container, a database container, and a web interface container). It simplifies your workflow by letting you manage your full project environment with a single configuration file ( docker-compose.yml ) and a single command.
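As a sketch of what such a stack looks like, here is a hypothetical docker-compose.yml for two containers; the image names and port are illustrative placeholders, not from this guide:

```yaml
# Hypothetical stack: an inference service plus a web UI in front of it.
# Because daemon.json sets "default-runtime": "nvidia", every container
# started here gets GPU access without extra flags.
version: "2.4"
services:
  model:
    image: my-registry/slm-inference:latest   # illustrative image name
    restart: unless-stopped
  web:
    image: my-registry/web-ui:latest          # illustrative image name
    ports:
      - "8080:8080"
    depends_on:
      - model
```

With a file like this in place, a single `sudo docker-compose up -d` starts both containers together.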
The command sudo apt-get install -y libffi-dev installs the libffi-dev development package. It is a required dependency for building and installing other software, in particular Python packages and their dependencies. It is needed here for the next steps: installing python3-pip (the Python package installer) and then Docker Compose, which is typically installed via pip on systems like the Jetson Nano. Running it with sudo and the -y flag grants administrative permission for a non-interactive, automatic installation of this underlying library.
$ sudo apt-get install -y libffi-dev python3-pip
Reading package lists... Done
Building dependency tree
Conclusion:
The NVIDIA Jetson Nano is an effective, affordable, and energy-efficient platform for Edge AI and IoT, but its deployment requires meticulous software configuration. This setup involves installing the Ubuntu-based OS via JetPack SDK, then setting up Docker and the NVIDIA Container Toolkit to enable GPU-accelerated containers. To ensure efficient, real-time performance of models like Small Language Models (SLMs) despite the limited 4GB RAM, optimization techniques such as 4-bit quantization and TensorRT are essential.