Explain Jetson Nano

I would like to apologize in advance for this long and non-technical post, and I am sorry if these were basic questions.

I bought the Jetson Nano Developer Kit recently and went through the “Hello AI World” tutorial. I have heard and read that this device is particularly useful for AI tasks, and I have been lucky enough to try some of them out (object detection and image recognition from the tutorial).

However, I was not able to install DIGITS since I was not using headless mode. I got errors while installing the drivers and Docker and could not find solutions online. Then I saw a post from one of the NVIDIA staff saying that DIGITS is not supported on the Nano platform yet (only works if we use headless mode, I guess?)…

So I was wondering: what in particular is the Jetson Nano useful for? I tried to find descriptions of the device online, but none seemed specific enough to clear up my confusion. Is it only used to run inference, with the ML models still trained on other computers? Or is the Jetson Nano also capable of training models? If so, how do people usually do it? (Any useful tools like DIGITS, or just frameworks like TensorFlow?)

I guess I would just like a brief description of why the Jetson Nano is so powerful and, specifically, how people use it. Also, would y’all recommend using it in headless mode or with a monitor attached, running on its own? Is there a “better choice”?

Sorry again for the long question post. Thanks in advance!

Some of the software you might be interested in is more geared toward training. The Nano (like any lower-power embedded device) is intended for running a pre-trained model, not for training one. Train on a PC or in the cloud, then deploy to the Nano.

A Nano could train, but it would be terribly slow. I suspect some of the software that isn’t implemented on the Nano is missing for that reason.

Thank you linuxdev for answering!

So a Jetson Nano would mainly be used to run models already trained on some other PC or in the cloud, and I would imagine loading this Nano device (with the pretrained model) onto some kind of robot. Am I interpreting this correctly? Are there any other features the Nano has that are particularly useful?

Also, would you recommend using headless mode, or the mode that runs on its own with a monitor attached? What are the main differences between them?


That’s correct, the Nano is mostly intended for inference. You can technically run limited training with PyTorch or TensorFlow if you have swap mounted; that is the approach used in the Nano DLI course: https://courses.nvidia.com/courses/course-v1:DLI+C-RX-02+V1/about
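For reference, “swap mounted” here just means a plain Linux swap file. A minimal sketch using standard Ubuntu commands (the path and 4 GB size are examples, not what any particular course mandates) looks like:

```shell
# Create and enable a 4 GB swap file (size/path are examples; adjust to your SD card)
sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
sudo swapon /mnt/4GB.swap

# Optional: make it persistent across reboots
echo '/mnt/4GB.swap none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify the swap is active
free -h
```

Keep in mind swap on an SD card is slow, so this makes small training runs possible rather than fast.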

Regarding headless mode, it is common when deploying edge devices, robotics, smart cameras, and other embedded applications. It saves some memory and system resources compared with running the desktop.

Note that you can run “desktop” mode even with no display connected.
It just won’t … do much :-) And it will consume some resources, mainly RAM.
RAM is somewhat scarce on the Nano, so if you don’t need/use the display for anything, consider running headless, at least once you’ve developed your application to the point where it’s time to run it “unattended.”
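If you want a middle ground — keep the desktop installed but stop it from starting so its RAM is freed — you can switch the default systemd boot target (standard Ubuntu/L4T commands):

```shell
# Boot to a text console (headless), freeing the RAM the desktop would use
sudo systemctl set-default multi-user.target
sudo reboot

# To return to the desktop later
sudo systemctl set-default graphical.target
sudo reboot
```

This way you can flip back and forth while developing, then leave it headless once the application runs unattended.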