Hello AI World - now supports Python and onboard training with PyTorch!

Hi all, just merged a large set of updates and new features into jetson-inference master:

  • Python API support for imageNet, detectNet, and camera/display utilities
  • Python examples for processing static images and live camera streaming
  • Support for interacting with numpy ndarrays from CUDA
  • Onboard re-training of ResNet-18 models with PyTorch
  • Example datasets: 800MB Cat/Dog and 1.5GB PlantCLEF
  • Camera-based tool for collecting and labeling custom datasets
  • Text UI tool for selecting/downloading pre-trained models
  • New pre-trained image classification models (on 1000-class ImageNet ILSVRC)
    • ResNet-18, ResNet-50, ResNet-101, ResNet-152
    • VGG-16, VGG-19
    • Inception-v4
  • New pre-trained object detection models (on 90-class MS-COCO)
    • SSD-Mobilenet-v1
    • SSD-Mobilenet-v2
    • SSD-Inception-v2
  • API Reference documentation for C++ and Python
  • Command line usage info for all examples, run with --help
  • Output of network profiler times, including pre/post-processing
  • Improved font rasterization using system TTF fonts
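The numpy interop above works by mapping the image memory into an ndarray rather than copying it, so Python code and CUDA see the same pixels. As a minimal plain-numpy illustration of the zero-copy idea (this is just the concept, not the actual jetson.utils call, and the buffer below stands in for mapped CUDA memory):

```python
import numpy as np

buf = bytearray(8)                        # stand-in for mapped image memory
arr = np.frombuffer(buf, dtype=np.uint8)  # ndarray view over the same bytes, no copy
arr[:] = 255                              # writes through the array...
print(buf[0])                             # ...are visible in the underlying buffer
```

Because no copy is made, per-frame overhead stays constant regardless of image size.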

Screencast video - Realtime Object Detection in 10 Lines of Python Code on Jetson Nano


Here’s an object detection example in 10 lines of Python code, using SSD-Mobilenet-v2 (90-class MS-COCO) with TensorRT. It runs at 25 FPS on Jetson Nano on a live camera stream with OpenGL visualization:

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2")	# load the detection network with TensorRT
camera = jetson.utils.gstCamera()	# open the default MIPI CSI camera
display = jetson.utils.glDisplay()	# create an OpenGL window

while display.IsOpen():
	img, width, height = camera.CaptureRGBA()	# capture a frame in CUDA memory
	detections = net.Detect(img, width, height)	# run inference (overlays boxes on img)
	display.RenderOnce(img, width, height)	# render the frame
	display.SetTitle("Object Detection | Network {:.0f} FPS".format(1000.0 / net.GetNetworkTime()))
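Each entry returned by net.Detect() carries a class ID, a confidence score, and a bounding box. As a hedged sketch of how you might print them (the Detection fields below are stand-ins assumed to mirror the jetson.inference API, and the sample values are made up):

```python
from collections import namedtuple

# Stand-in for jetson.inference.detectNet.Detection; the real objects
# expose attributes with these names (ClassID, Confidence, box coords)
Detection = namedtuple("Detection", "ClassID Confidence Left Top Right Bottom")

def describe(detections, class_desc):
    """Format each detection as 'label confidence% (left,top)-(right,bottom)'."""
    return ["{} {:.0f}% ({:.0f},{:.0f})-({:.0f},{:.0f})".format(
                class_desc.get(d.ClassID, "unknown"), d.Confidence * 100,
                d.Left, d.Top, d.Right, d.Bottom)
            for d in detections]

# made-up sample detection for illustration
sample = [Detection(1, 0.92, 10, 20, 110, 220)]
print(describe(sample, {1: "person"})[0])  # person 92% (10,20)-(110,220)
```

In the real loop you would pass the list returned by net.Detect() instead of the sample, with class labels looked up from the network.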

Thanks to all the beta testers of the new features from here on the forums!

Project Link: https://github.com/dusty-nv/jetson-inference/
Model Mirror: https://github.com/dusty-nv/jetson-inference/releases

Hello, when I tried this PyTorch install, it failed while installing numpy. It seems that the torch wheel file depends on a newer version of numpy.

My Jetson Xavier already has numpy installed via “sudo apt-get install libpython3-dev python3-numpy”.

I also tried “sudo pip3 install numpy” manually. Either way, the install stalls at the step “Building wheel for numpy…” and cannot move on.

Collecting numpy
Downloading https://files.pythonhosted.org/packages/ff/59/d3f6d46aa1fd220d020bdd61e76ca51f6548c6ad6d24ddb614f4037cf49d/numpy-1.17.4.zip (6.4MB)
|████████████████████████████████| 6.4MB 58kB/s
Building wheels for collected packages: numpy
Building wheel for numpy (setup.py) … -

Hi 290844930, sometimes it takes a while to build the required version of numpy - are you sure the process is not still running? Is there another error message?

Hi all, we’ve just posted a screencast tutorial for Hello AI World - check it out!

Realtime Object Detection in 10 Lines of Python Code on Jetson Nano


Hi, can you explain or give me some links on how to train my own ssd inception model optimized with TensorRT ?


Hi @marconi.k, please refer to this post: DIGITS or something else

Also, I am currently working on adding a section to the Hello AI World tutorial where you can re-train SSD-Mobilenet with PyTorch onboard your Jetson. It should be ready in the coming weeks, so stay tuned.

Happy to hear that! Thanks for the quick reply, I will stay tuned for this :)

Hi! Thank you for this great kick-start material.

I would appreciate it if you could add a way to stream the output video, so it can be monitored on a laptop/cellphone screen rather than on the Jetson itself.