Running SSD-Mobilenet-v2 with live USB camera capture on Jetson Nano

Hi there,

I found this reply by Dustin Franklin on the NVIDIA forum for the Jetson Nano, which links to sample code for running SSD-Mobilenet-v2 for performance profiling:

[url]https://devtalk.nvidia.com/default/topic/1049802/object-detection-with-mobilenet-ssd-slower-than-mentioned-speed/[/url]

I built the sample on the Jetson Nano; it loads a PPM image, applies the SSD model to it, and shows the time spent for each inference.

I need an example which captures images from a USB camera, resizes them to 300x300, applies SSD-Mobilenet-v2, and returns the bounding boxes and their labels.

Is there any sample code that does this? It is an urgent need for a project.

I have also found this link, which runs object detection from a USB camera:
[url]https://github.com/dusty-nv/jetson-inference[/url]

but the detection accuracy of this code is not good enough.

I am not sure how to combine these two, or whether there is a better way of doing this, such as OpenCV.
I need sample code to achieve this goal.

Thanks
Amin

Hi Amin, check the python branch of jetson-inference; I have added support for these SSD models there.

For more info, see this post: [url]https://devtalk.nvidia.com/default/topic/1051389/jetson-nano/is-there-any-demos-available-for-python-jetson-inference/post/5348561/#5348561[/url]
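
Roughly, the camera detection sample in that branch looks like the sketch below (simplified from memory, so the exact argument handling in detectnet-camera.py may differ slightly). It captures frames from the camera, runs the network (which resizes the image to 300x300 internally), and prints each bounding box with its label:

import jetson.inference
import jetson.utils

# load SSD-Mobilenet-v2 with a 50% detection confidence threshold
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# open the camera ("/dev/video0" selects a V4L2 USB webcam)
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
display = jetson.utils.glDisplay()

while display.IsOpen():
    # capture the next frame as an RGBA image in GPU memory
    img, width, height = camera.CaptureRGBA()

    # run inference (detectNet resizes the image to 300x300 itself)
    detections = net.Detect(img, width, height)

    # print the class label and bounding box of each detection
    for d in detections:
        print("{:s} ({:.1f}%) box=({:.0f}, {:.0f}, {:.0f}, {:.0f})".format(
            net.GetClassDesc(d.ClassID), d.Confidence * 100,
            d.Left, d.Top, d.Right, d.Bottom))

    # render the frame with the detection overlay
    display.RenderOnce(img, width, height)
    display.SetTitle("SSD-Mobilenet-v2 | {:.0f} FPS".format(1000.0 / net.GetNetworkTime()))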

Hi Dustin,

Many thanks for quick reply.
Is the python branch the same as this?
$git clone https://github.com/dusty-nv/jetson-inference.git

I did this just now, followed by:
$ cd jetson-inference
$ git submodule update --init
$ mkdir build
$ cd build
$ cmake ../
$ make

but there is no detectnet-camera.py?

Thanks

The support for Python is currently in the python branch, because I have been developing it recently. I am getting ready to merge it back into master after I update the docs.

Clone and build it like this:

$ git clone -b python https://github.com/dusty-nv/jetson-inference jetson-inference-python
$ cd jetson-inference-python
$ git submodule update --init
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install

If you want it to build with support for Python 3.6, you should install the python3-dev package beforehand with “sudo apt-get install python3-dev”. By default it builds with support for Python 2.7, because the python-dev package is already installed by default.
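
After “sudo make install”, a quick sanity check that the Python bindings built correctly is to import them:

$ python -c "import jetson.inference; import jetson.utils"

(use python3 instead if you built against Python 3.6). If that prints no errors, the samples under build/aarch64/bin should be able to load the modules.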

Many thanks again Dustin.

Now I have the Python files, but when I run the camera detection script I get this error:

device GPU, networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
W = 7  H = 100  C = 1
detectNet -- maximum bounding boxes: 100
detectNet -- failed to find networks/SSD/ssd_coco_labels.txt
jetson.inference -- detectNet failed to load built-in network 'ssd-mobilenet-v2'
PyTensorNet_Dealloc()
Traceback (most recent call last):
  File "./detectnet-camera.py", line 42, in <module>
    net = jetson.inference.detectNet(opt.network, argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network

When I ran cmake, I saw a list of networks to download, where I scrolled down using the arrow keys and pressed enter on SSD-Mobilenet-v2. Was that not enough to download it?

Building the CUDA engine takes a few minutes; is there any way to reduce this? It could potentially be a problem for the product, which will need to run the network right after turning on.

We are going to use 100 Jetson Nanos in a new product release in mid-June, and if it goes well, we will be using them at large scale, around 12,000 a year. So this is really important for us.

Many thanks

Hi Dustin,

It seems that the COCO labels file is inside the v1 and v2 folders, while the code is looking for it inside the networks/SSD folder.

So I manually copied the COCO labels file into networks/SSD:

jetson-inference-python-build/aarch64/bin/networks$ mkdir SSD
jetson-inference-python-build/aarch64/bin/networks$ cp ./SSD-Mobilenet-v2/ssd_coco_labels.txt ./SSD/

This time it got past the error of not finding the COCO labels file, but GStreamer crashed:

[gstreamer] gstreamer msg stream-start ==> pipeline1
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes:
Segmentation fault (core dumped)

Any help would be greatly appreciated.

Thanks

Sorry about that, I just checked in an update to the python branch fixing that issue with the paths.

It only “builds the CUDA engine” the first time you run the app; after that, it saves the serialized TensorRT engine to disk. You can run it first in the lab, and then deploy the serialized engine with your units to the field. They will then load much faster, just as if you ran the app a second time.
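
For reference, what happens underneath is standard TensorRT engine serialization; jetson-inference does this for you in C++ (inside its tensorNet class), so you don't need to write any of it yourself. A rough Python sketch of the mechanism, with illustrative function names:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def save_engine(engine, path):
    # building/optimizing the engine is the slow part; serializing the
    # built plan to disk lets later startups skip it entirely
    with open(path, "wb") as f:
        f.write(engine.serialize())

def load_engine(path):
    # deserializing a saved plan takes seconds instead of minutes
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

One caveat: a serialized engine is specific to the GPU and TensorRT version it was built with, so generate it on the same configuration you deploy (which is fine here, since all your units are Nanos).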

The lack of sensor modes listed under “Available Sensor modes:” seems to indicate that the low-level camera driver is not detecting your sensor. Which camera are you using? If you run the nvgstcapture program, do you get a video feed from it?

Hi Dustin,

Many thanks for the reply.

Regarding saving the serialized TensorRT engine to disk, how can I achieve that? Is there any link explaining how to do this?

Regarding the camera, at the moment I am testing the algorithm with the following USB webcam:
[url]https://uk.rs-online.com/web/p/webcams/7950870/[/url]

gst-camera in the bin folder also crashes; however, when I run
$ ./detectnet-camera pednet

it loads the video stream fine, and the source code suggests it creates a gstCamera. Is it the same as nvgstcapture?
I could not find nvgstcapture. What is the path for it?

In production, I am planning to use this USB camera, which works with detectnet-camera:

[url]https://www.amazon.co.uk/ELP-Autofocus-degree-Webcam-Windows/dp/B07FKKJN65[/url]

How can I resolve the error? Do I need to install something?

Thanks

It is automatically done by jetson-inference the first time you run a particular network. Then if you try to load the network again, it will only take a second or so.

Can you try launching detectnet-camera.py with the --v4l2_device=0 argument?

$ ./detectnet-camera.py --v4l2_device=0 --network=ssd-mobilenet-v2

nvgstcapture is located in /usr/bin. If you are using a webcam, launch nvgstcapture with the --camsrc=0 argument (which changes it to V4L2 mode).

Many many thanks Dustin.

$ ./detectnet-camera.py --v4l2_device=0 --network=ssd-mobilenet-v2
works fine.

also $nvgstcapture --camsrc=0 loads the video stream.

The FPS is 6 or 7

Do you think if I resize the image to 300x300 I can get higher FPS?

Thanks

You should be getting closer to 20 FPS with SSD-Mobilenet, so it might be the camera that is slowing it down. The image already gets resized down to 300x300 for the network (inside detectNet class), but reducing the camera resolution might be able to speed up the camera capture.

See the --width and --height arguments to detectnet-camera.py to set a different camera resolution (the default is 1280x720). You will want to set a resolution that your camera supports, which you can check with the “v4l2-ctl --list-formats-ext” command (from the v4l-utils apt package).
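
For example, if your camera supports 640x480, something like this should work:

$ ./detectnet-camera.py --v4l2_device=0 --network=ssd-mobilenet-v2 --width=640 --height=480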

Many thanks again.

I installed the tool and found that the camera supports 640x360, which is the closest resolution to 300x300 that is still larger.
Using this resolution, the frame rate increased to 12-13 FPS.

All your help is really appreciated.