Jetson Nano best person detector model

Hi,
I see that there are many examples of object detection on the NVIDIA platform, so I don’t know where to start. I would like to use my Jetson Nano to detect and track only persons from a webcam. Could you suggest a model? I see that with ssd-mobilenet-v2 I can get good FPS. Is there a way to select just the person class using detectnet in jetson-inference, or do I need to re-train ssd-mobilenet-v1 only for person?

Thanks


If you want to detect only people, you can retrain the model. Otherwise, you can filter the detections by label in your script (discard the labels of the other detected objects).
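A minimal sketch of the filtering approach. The assumptions here are the jetson-inference Python API (detection objects from `net.Detect()` expose a `ClassID` field, and `net.GetClassDesc()` maps class IDs to label strings; for the COCO-trained SSD models, "person" is typically class 1); the filter itself is plain Python, shown below with stand-in detection objects:

```python
from collections import namedtuple

# Stand-in for jetson.inference detection results; the real objects
# returned by net.Detect() expose ClassID, Confidence, Left, Top, etc.
Detection = namedtuple("Detection", ["ClassID", "Confidence"])

def filter_people(detections, get_class_desc, label="person"):
    """Keep only detections whose class label matches `label`."""
    return [d for d in detections if get_class_desc(d.ClassID) == label]

# In detectnet.py this would plug in roughly as (untested sketch):
#   net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
#   detections = net.Detect(img)
#   people = filter_people(detections, net.GetClassDesc)

# Illustration with dummy labels (class 1 = person in the COCO-trained models):
labels = {0: "background", 1: "person", 2: "bicycle", 3: "car"}
dets = [Detection(1, 0.9), Detection(3, 0.8), Detection(1, 0.6)]
people = filter_people(dets, labels.get)
print(len(people))  # 2
```

This way the network still detects every class, and the script simply ignores everything that is not a person, so no retraining is needed.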

Hi,

There is a post below which compares the speed and accuracy of the different models.

YOLO seems to be fast and more accurate when you compare the models. You can train your own custom YOLO model. You can check this link for training a custom model.


You can check out the production-quality PeopleNet model here: https://ngc.nvidia.com/catalog/models/nvidia:tlt_peoplenet

It was trained/pruned using Transfer Learning Toolkit and you can run it with DeepStream for the best performance.


Thank you so much for the fast reply :) . Another question: I’m trying to crop a cudaImage in the jetson-inference example (detectnet.py) to get a 360-degree image from an Insta360 ONE X2.
This is what I get from /dev/video1:

I need to crop the image to half its height and then concatenate the two halves horizontally. Using cv2 and NumPy I can do it simply, but I don’t know how to do it using cudaCrop. I can successfully crop a cudaImage, but I don’t know how to concatenate the two images horizontally. I tried converting to NumPy and then back to cudaImage and feeding it to the detector, but I got many errors. Is there a simple way to modify the input source for detectnet.py?

Hi @codeforge, you can use the cudaOverlay() function to composite two images into an output image. See here for an example - https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-image.md#overlay
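As a sketch of that crop-and-compose step: the geometry for turning a vertically stacked W×H frame into a side-by-side 2W×(H/2) frame is just two ROIs and two paste offsets. The helper below is hypothetical (not part of jetson-inference); the assumed jetson_utils behavior is that cudaCrop() takes a (left, top, right, bottom) ROI and cudaOverlay() pastes one image into another at an (x, y) offset:

```python
def side_by_side_layout(width, height):
    """Compute crop ROIs and paste offsets for converting a vertically
    stacked dual-fisheye frame (W x H) into a side-by-side frame (2W x H/2).

    Returns (top_roi, bottom_roi, out_size, top_offset, bottom_offset),
    with ROIs as (left, top, right, bottom) and offsets as (x, y)."""
    half = height // 2
    top_roi = (0, 0, width, half)          # upper half of the input
    bottom_roi = (0, half, width, height)  # lower half of the input
    out_size = (2 * width, half)           # twice as wide, half as tall
    return top_roi, bottom_roi, out_size, (0, 0), (width, 0)

# With jetson_utils this would plug in roughly as (untested sketch):
#   import jetson.utils
#   top_roi, bot_roi, (ow, oh), off_a, off_b = side_by_side_layout(img.width, img.height)
#   top = jetson.utils.cudaAllocMapped(width=img.width, height=oh, format=img.format)
#   bot = jetson.utils.cudaAllocMapped(width=img.width, height=oh, format=img.format)
#   out = jetson.utils.cudaAllocMapped(width=ow, height=oh, format=img.format)
#   jetson.utils.cudaCrop(img, top, top_roi)
#   jetson.utils.cudaCrop(img, bot, bot_roi)
#   jetson.utils.cudaOverlay(top, out, *off_a)   # left side
#   jetson.utils.cudaOverlay(bot, out, *off_b)   # right side
#   detections = net.Detect(out)

print(side_by_side_layout(1920, 1080))
```

This keeps everything on the GPU, so there is no need for the NumPy round-trip that was causing the errors.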


I will try that, thank you.

The performance graph shows the Nano running inference with that model at zero frames per second, so maybe not a great choice for the Nano.

Thanks for pointing that out - you can find a table of the data here:

https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/purpose-built_models.html#deployment

On Nano, PeopleNet runs at 14 FPS with the ResNet18 backbone and 11 FPS with ResNet34.

That sounds much better! Thanks Dusty.