Hi all, just merged a large set of updates and new features into jetson-inference master:
- Python API support for imageNet, detectNet, and camera/display utilities
- Python examples for processing static images and live camera streaming
- Support for interoperating between CUDA memory and numpy ndarrays (see the sketch after this list)
- Onboard re-training of ResNet-18 models with PyTorch
- Example datasets: 800MB Cat/Dog and 1.5GB PlantCLEF
- Camera-based tool for collecting and labeling custom datasets
- Text UI tool for selecting/downloading pre-trained models
- New pre-trained image classification models (on 1000-class ImageNet ILSVRC)
  - ResNet-18, ResNet-50, ResNet-101, ResNet-152
  - VGG-16, VGG-19
  - Inception-v4
- New pre-trained object detection models (on 90-class MS-COCO)
  - SSD-Mobilenet-v1
  - SSD-Mobilenet-v2
  - SSD-Inception-v2
- API Reference documentation for C++ and Python
- Command-line usage info for all the examples (run them with --help)
- Network profiling output, including pre- and post-processing times
- Improved font rasterization using system TTF fonts
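To illustrate the numpy interop above, here is a minimal sketch that copies an ndarray into CUDA memory and maps it back. It assumes the cudaFromNumpy()/cudaToNumpy() helpers from jetson.utils, so check the API reference for the exact signatures:
import numpy as np
import jetson.utils
# a hypothetical 1280x720 RGBA float32 image as a numpy ndarray
array = np.zeros((720, 1280, 4), dtype=np.float32)
# copy the ndarray into CUDA memory that the networks consume
cuda_img = jetson.utils.cudaFromNumpy(array)
# map the CUDA memory back into a numpy ndarray (width, height, channels)
mapped = jetson.utils.cudaToNumpy(cuda_img, 1280, 720, 4)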
Screencast video - Realtime Object Detection in 10 Lines of Python Code on Jetson Nano
Here’s an object detection example in 10 lines of Python code, using SSD-Mobilenet-v2 (90-class MS-COCO) with TensorRT. It runs at 25 FPS on Jetson Nano on a live camera stream with OpenGL visualization:
import jetson.inference
import jetson.utils
net = jetson.inference.detectNet("ssd-mobilenet-v2")
camera = jetson.utils.gstCamera()
display = jetson.utils.glDisplay()
while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)
    display.RenderOnce(img, width, height)
    display.SetTitle("Object Detection | Network {:.0f} FPS".format(1000.0 / net.GetNetworkTime()))
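Image classification with the new pre-trained models follows the same pattern. Here is a minimal sketch using the ResNet-18 model; it assumes the loadImageRGBA() helper from jetson.utils and a placeholder filename, so check the API reference for details:
import jetson.inference
import jetson.utils
# load one of the new ImageNet classification models with TensorRT
net = jetson.inference.imageNet("resnet-18")
# load an image from disk into shared CPU/GPU memory (placeholder filename)
img, width, height = jetson.utils.loadImageRGBA("my_image.jpg")
# classify the image and print the top class description with its confidence
class_idx, confidence = net.Classify(img, width, height)
print("image is recognized as '{:s}' ({:.2f}% confidence)".format(net.GetClassDesc(class_idx), confidence * 100))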
Thanks to all the beta testers here on the forums who tried out these new features!
Project Link: https://github.com/dusty-nv/jetson-inference/
Model Mirror: https://github.com/dusty-nv/jetson-inference/releases