Real time object detection on Jetson Nano

Hello. I am trying to do real-time object detection (i.e., on a live video feed coming from a Logitech CS922) using the SSD MobileNet model provided by the TensorFlow Object Detection API. I used the tf_trt_models API to build the frozen graph and then used TensorRT to further optimize it for use on my Nano.

I am able to load the TRT graph file in a reasonable time, but I am not able to run inference on the live video. I was also not able to find good resources on this.
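For context, this is roughly the loop I am trying to build. It is only a sketch: it assumes the standard TF Object Detection API tensor names (`image_tensor`, `detection_boxes`, `detection_scores`), uses OpenCV for capture, and `trt_graph.pb` is a placeholder path.

```python
# Sketch only: assumes a TF 1.x frozen TF-TRT graph with the standard
# TF Object Detection API tensor names; trt_graph.pb is a placeholder path.

def boxes_to_pixels(boxes, width, height):
    """Convert normalized [ymin, xmin, ymax, xmax] boxes to pixel (x1, y1, x2, y2)."""
    pixel_boxes = []
    for ymin, xmin, ymax, xmax in boxes:
        pixel_boxes.append((int(xmin * width), int(ymin * height),
                            int(xmax * width), int(ymax * height)))
    return pixel_boxes

def run_live_detection():
    # Heavy imports kept inside the function so the helper above stays importable.
    import cv2
    import tensorflow as tf

    graph_def = tf.GraphDef()
    with tf.gfile.GFile("trt_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")

    image_tensor = graph.get_tensor_by_name("image_tensor:0")
    det_boxes = graph.get_tensor_by_name("detection_boxes:0")
    det_scores = graph.get_tensor_by_name("detection_scores:0")

    cap = cv2.VideoCapture(0)  # assuming the webcam enumerates as /dev/video0
    with tf.Session(graph=graph) as sess:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            boxes, scores = sess.run([det_boxes, det_scores],
                                     feed_dict={image_tensor: rgb[None, ...]})
            h, w = frame.shape[:2]
            for (x1, y1, x2, y2), s in zip(boxes_to_pixels(boxes[0], w, h), scores[0]):
                if s > 0.5:
                    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.imshow("detections", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
```

Calling `run_live_detection()` on the Nano is where I get stuck: loading works, but the per-frame inference is the part I cannot get running.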

Help would be appreciated.

Hi spsayakpaul, you can run the SSD-Mobilenet models in the Hello AI World detectnet-camera program; see here:

To run it with SSD-Mobilenet-v2 for example, you can launch it like:

detectnet-camera --network=ssd-mobilenet-v2

Okay. That uses the Jetson inference engine. I am actually trying to load a TRT graph of SSD MobileNet and then use it for object detection. The speed is pretty reasonable.

That code is using TensorRT and the same UFF models for SSD-Mobilenet that are exported by this sample.

If you prefer to use only your existing Python project, see the jetcam repo for camera streaming.
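A sketch of the jetcam route, assuming the NVIDIA-AI-IOT/jetcam package is installed; the resolution and device index below are placeholders, and the frames come back as numpy arrays you can feed into your existing TensorFlow session:

```python
# Sketch only: assumes the NVIDIA-AI-IOT/jetcam package is installed;
# the resolution and device index here are placeholders.

def capture_settings(width, height, device=0):
    """Keyword arguments for jetcam's USBCamera constructor."""
    return {"capture_device": device, "width": width, "height": height}

def stream_frames(num_frames):
    # Import inside the function so the helper above has no hard dependency.
    from jetcam.usb_camera import USBCamera

    camera = USBCamera(**capture_settings(640, 480))
    for _ in range(num_frames):
        yield camera.read()  # numpy frame, ready for your detection graph
```

That way the camera streaming is decoupled from your TRT graph, and you keep the rest of your Python project unchanged.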

Thanks for the references. Also, is there any documentation for the Jetson inference engine? I am specifically interested in the following lines of code:

# load the object detection network
net = jetson.inference.detectNet(opt.network, argv, opt.threshold)

# create the camera and display
camera = jetson.utils.gstCamera(opt.width, opt.height, opt.camera)

FYI, I am able to use my Logitech CS922 and feed the streaming video to the model. But I am unable to get multiple bounding boxes even when multiple objects are present.

You can find the API reference documentation here:
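Those two calls fit together roughly like this, a sketch based on the legacy `jetson.inference` Python API used by the detectnet-camera sample; the network name, resolution, and device path below are placeholders:

```python
# Sketch of how the detectnet-camera sample's pieces fit together, based on
# the legacy jetson.inference Python API; resolution and device are placeholders.

def describe(class_name, confidence):
    """Overlay-style label string for a detection."""
    return "{} ({:.0f}%)".format(class_name, confidence * 100)

def run_camera_loop():
    # Imports kept inside the function: these modules only exist on a Jetson.
    import jetson.inference
    import jetson.utils

    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
    camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
    display = jetson.utils.glDisplay()

    while display.IsOpen():
        img, width, height = camera.CaptureRGBA()    # frame in GPU memory
        detections = net.Detect(img, width, height)  # TensorRT runs under the hood
        display.RenderOnce(img, width, height)
```

So `detectNet` builds/loads the TensorRT engine for the chosen network, and `gstCamera` sets up a GStreamer capture pipeline that keeps frames in GPU memory.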

You might want to try lowering the threshold value to see if it produces more bounding boxes. If not, compare the output against your previous app.
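The reason lowering the threshold can help: the network reports only detections whose confidence clears it, so with a high threshold a second, weaker object is silently dropped. The filtering amounts to something like this standalone sketch (not the library's actual code):

```python
# Standalone sketch of confidence-threshold filtering, not jetson-inference's
# actual code. Each detection is (class_name, confidence, box), confidence in [0, 1].

def filter_detections(detections, threshold):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d[1] >= threshold]

detections = [("person", 0.92, (10, 20, 110, 220)),
              ("dog", 0.41, (150, 40, 260, 200))]

print(len(filter_detections(detections, 0.5)))  # only the person survives
print(len(filter_detections(detections, 0.3)))  # lowering it keeps both
```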

Thank you very much :)