Object Detection with MobileNet-SSD slower than the stated speed

Hi, I also have a similar problem. The max confidence score is only ~24%, with the incorrect class label "motorcycle" for dog.ppm. SSD Inception v2 is able to detect the dog image correctly.

Since the number of labels in the code was 37, and after some research, I have come to the conclusion that the shared .uff is not for the default COCO model but for the pet detector.

Hi Dustin,

I followed your steps to run ./sample_uff_ssd_rect; it runs fine but outputs no results. Under data/ssd/ I have 3 ppm files, plus a list.txt file which lists the 3 ppm files. But my output shows "Num batches 1".

What did I do wrong?

…/data/ssd/sample_unpruned_mobilenet_v2.uff
Registering UFF model
Registered Input
Registered output NMS
Creating engine
Begin parsing model…
End parsing model…
Begin building engine…
Time lapsed to create an engine: 233952ms
End building engine…
Created engine
Num batches 1
Data Size 270000
*** deserializing
3 Binding
Allocating buffer sizes for binding index: 0 of size : 270000 * 4 B
Allocating buffer sizes for binding index: 1 of size : 700 * 4 B
Allocating buffer sizes for binding index: 2 of size : 1 * 4 B
Time taken for inference per run is 36.7193 ms.
Time taken for inference per run is 36.6956 ms.
Time taken for inference per run is 36.5924 ms.
Time taken for inference per run is 36.5902 ms.
Time taken for inference per run is 36.5938 ms.
Time taken for inference per run is 36.5934 ms.
Time taken for inference per run is 36.5932 ms.
Time taken for inference per run is 36.5978 ms.
Time taken for inference per run is 36.5933 ms.
Time taken for inference per run is 36.5951 ms.
Average time spent per iteration is 36.6164 ms.
Time taken for inference is 36.5951 ms.
KeepCount 100

Here is another implementation of an object detector which gives 55 fps:

Having the same problem… Can someone help, please?

I’m not sure that sample performs the full post-processing and outputs the full detection results. Rather, it’s for benchmarking the core network.

For the full SSD detection pipeline, including the pre- and post-processing, you can see these samples:

jetson-inference/detectnet-console-2.md at master · dusty-nv/jetson-inference · GitHub

Thanks for the suggestions, but what I was looking for is a TensorRT C++ implementation of the MobileNet-v2 trained model.

I'm unsure whether detectNet offers anything as optimized as TensorRT. This one is based on Python: https://github.com/AastaNV/TRT_object_detection

Please, I need something using TensorRT and C++ for object detection with MobileNet-v2.

Or at least show us how to configure C++ TensorRT to work with MobileNet-v2.

The 2nd link from my post above is in C++ (and Python) and can load SSD-Mobilenet-v2 in addition to SSD-Mobilenet-v1 and SSD-Inception-v1. See the table of pre-trained models available from the link.

Thanks Dusty_nv for the reply.

However, the link does seem to use detectNet. I am unfamiliar with detectNet; there was nothing about TensorRT on the linked page.

I am interested in a TensorRT solution as it has better performance.

Please provide something on C++ TensorRT and MobileNet-v2.

Especially, I would like to get sampleUffSSD working with MobileNet-v2.

sampleUffSSD can draw bounding boxes. All I need is for the sample to work with MobileNet-v2 like it does with Inception.

The C++ class for performing object detection in the jetson-inference repo is called detectNet, but it can load different detection networks (including SSD-Mobilenet-v1, SSD-Mobilenet-v2, and SSD-Inception-v1). Please refer to the table from the link in my previous post for the different pre-trained models available; the SSD-based models I mentioned are listed there.

That code does use TensorRT; the TensorRT code is contained in the tensorNet base class. Please refer to the main README for more info.
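
For reference, here is a minimal sketch of that through the Python bindings (the C++ API follows the same pattern). The network name string, the placeholder image path, and the exact function names are assumptions based on the older jetson.inference / jetson.utils API and may differ between releases:

# minimal sketch: load SSD-Mobilenet-v2 through the jetson-inference Python bindings
# and run a full detection (pre- and post-processing handled by detectNet/tensorNet).
# Assumes the older jetson.inference / jetson.utils API; names may differ between releases.
import jetson.inference
import jetson.utils

# the same class loads different detection networks by name,
# e.g. "ssd-mobilenet-v1" or "ssd-mobilenet-v2"
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# "dog.jpg" is just a placeholder test image
img, width, height = jetson.utils.loadImageRGBA("dog.jpg")
detections = net.Detect(img, width, height)

for d in detections:
    print(net.GetClassDesc(d.ClassID), d.Confidence, d.Left, d.Top, d.Right, d.Bottom)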

Have you tried running the sample on images of dogs? It was mentioned in another thread that this model was trained on a database of different dog breeds.

About testing with a dog: yes, I have tested with the dog image on the original sampleUffSSD from NVIDIA, and it drew the bounding box fine.

However, when I test with the modified version provided in this thread, which is meant to work with MobileNet, the final line I get is:

&&&& FAILED TensorRT.sample_uff_ssd # ./sample_uff_ssd_rect

Regarding detectNet supporting TensorRT: I just checked, and that appears accurate.
However, the online videos of detectNet testing appear slower than the quoted TensorRT videos.

That said, thanks for the advice. I shall give it a go and report back.

I got the custom ssd_mobilenet_v2_coco model working with detectNet (TensorRT) on the Jetson TX2, using the jetson-inference script.

https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-camera-2.md

It was messy, however. The working MobileNet-v2 UFF file was generated after tweaking code from the TRT_object_detection library.

https://github.com/AastaNV/TRT_object_detection

It worked fine with TRT_object_detection and generated the TensorRT engine with good inference results.

Another step I had to do was generate the MobileNet-v2 frozen_inference_graph.pb from an older models/research object_detection library; I used a commit from early 2018.
Later commits appear to add nodes that conflict with the UFF converter or TensorRT.
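
In case anyone wants to reproduce this, the conversion step looked roughly like the sketch below. It is modeled on a TRT_object_detection-style config; the plugin nodes, their parameters, and the file names are assumptions that have to be adapted to the actual frozen graph:

# rough sketch of the frozen-graph -> UFF step, modeled on a TRT_object_detection-style config.
# The plugin parameters below are incomplete placeholders (real configs also set aspectRatios,
# variances, featureMapShapes, inputOrder, the Input placeholder node, etc.) and must match
# the model that was actually trained.
import graphsurgeon as gs
import uff

# replace the TensorFlow subgraphs that TensorRT cannot parse with plugin nodes
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
                                 numLayers=6, minSize=0.2, maxSize=0.95)
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
                            topK=100, keepTopK=100, numClasses=91,
                            confidenceThreshold=1e-8, nmsThreshold=0.6)
namespace_map = {"MultipleGridAnchorGenerator": PriorBox,
                 "Postprocessor": NMS}

graph = gs.DynamicGraph("frozen_inference_graph.pb")
graph.collapse_namespaces(namespace_map)

uff.from_tensorflow(graph.as_graph_def(), output_nodes=["NMS"],
                    output_filename="ssd_mobilenet_v2.uff")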

Hope that helps.

Result: regarding the frame rate, I got over 50 fps most of the time, rising up to 85 fps.

This is strange because I was under the impression that my camera can only do up to 30 fps, so maybe TensorRT was processing some frames 2 or 3 times.

Or was it deriving its values from inferences per second?

Yes, it reports the network time; it is not limited by the camera.
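
To illustrate what that means (a hypothetical timing snippet, not the actual jetson-inference code; it assumes the older jetson.inference / jetson.utils Python API and a placeholder test image):

# hypothetical timing snippet: the reported FPS is derived from the time spent in the
# network, so it can exceed the 30 FPS the camera actually delivers.
import time
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
img, width, height = jetson.utils.loadImageRGBA("dog.jpg")

t0 = time.perf_counter()
net.Detect(img, width, height)                  # one inference pass
t1 = time.perf_counter()
print("network FPS: %.1f" % (1.0 / (t1 - t0)))  # network-only rate, independent of the camera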

Can you please elaborate? What modifications were required?

I’ve been stuck trying to get the convert_to_uff script working with the default SSD models from the model zoo.

I took the "TRT_object_detection" example code (https://github.com/AastaNV/TRT_object_detection) and implemented a Python program which can do real-time object detection using various input image/video sources.

The demo code converts the trained ssd_mobilenet_v1_coco model to UFF, then to a TensorRT engine (bin). When I tested this TRT-optimized ssd_mobilenet_v1_coco model on the Jetson Nano (JetPack-4.2.2), the frame rate was ~22.8 fps, which I think is very good.
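
For reference, the UFF-to-engine step looks roughly like the sketch below, using the (since deprecated) UFF parser of the TensorRT Python API; the "Input"/"NMS" names, the 300x300 input size, and the file names are assumptions for a standard SSD graph:

# rough sketch: build a TensorRT engine from the UFF file with the TensorRT Python API.
# Uses the old implicit-batch / UFF-parser API shipped with these JetPack releases;
# the "Input"/"NMS" names, the 3x300x300 input size, and the file names are assumptions.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")        # register the GridAnchor/NMS plugins

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28
    builder.max_batch_size = 1
    builder.fp16_mode = True                       # FP16 helps a lot on Jetson

    parser.register_input("Input", (3, 300, 300))
    parser.register_output("NMS")
    parser.parse("ssd_mobilenet_v2.uff", network)

    engine = builder.build_cuda_engine(network)
    with open("ssd_mobilenet_v2.bin", "wb") as f:
        f.write(engine.serialize())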

I also tested my custom-trained model, ssd_mobilenet_v1_egohands (a hand detector). The frame rate was even higher (27~28 fps), but detection was not good…

Testing a TensorRT optimized SSD hand detector on Jetson Nano - YouTube

Check out my GitHub repo (demo #3) for details: GitHub - jkjung-avt/tensorrt_demos: TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet

Hello!

step 1: git clone jetson-inference
step 2: cmake ../
make
sudo make install
step 3: ./detectnet-camera, but ssd-mobilenet-v2 only gets ~12 FPS.

Does anyone have any advice?

Thanks

My tensorrt_demos code also supports ssd_mobilenet_v2_coco. It runs at ~20 FPS on the Jetson Nano. Just clone the code from GitHub and follow the steps in demo #3.

https://github.com/jkjung-avt/tensorrt_demos

Could you explain the reason?

Why can you get ~20 FPS? Is it any different from the jetson-inference repo code?

I will give it a try.

I am a newbie.

Thank you very much!

My ssd_mobilenet_v2_coco implementation references NVIDIA's "TRT_object_detection" sample. It converts the trained SSD model into UFF, and then optimizes it with TensorRT. You can refer to the original GitHub repository and the TensorRT documentation for more information. I might find time to write a blog post about it later on.

https://github.com/AastaNV/TRT_object_detection
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
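
If it helps, the inference side of that workflow looks roughly like the sketch below, following the buffer-allocation pattern used in the TensorRT Python samples; the engine file name, the binding order, and the dummy input are assumptions for illustration only:

# minimal sketch: deserialize the saved engine and run one inference with pycuda,
# following the buffer-allocation pattern from the TensorRT Python samples.
# The engine file name, binding order, and dummy input are assumptions for illustration.
import numpy as np
import pycuda.autoinit                             # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with open("ssd_mobilenet_v2.bin", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
stream = cuda.Stream()

# allocate host/device buffers for every binding (input image, NMS output, keep count)
host_mem, dev_mem, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    h = cuda.pagelocked_empty(size, dtype)
    d = cuda.mem_alloc(h.nbytes)
    host_mem.append(h)
    dev_mem.append(d)
    bindings.append(int(d))

# dummy input in place of a preprocessed 3x300x300 image (normalization omitted here)
host_mem[0][:] = np.random.rand(host_mem[0].size).astype(host_mem[0].dtype)

cuda.memcpy_htod_async(dev_mem[0], host_mem[0], stream)
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for h, d in zip(host_mem[1:], dev_mem[1:]):
    cuda.memcpy_dtoh_async(h, d, stream)
stream.synchronize()
# host_mem[1] now holds the NMS detections and host_mem[2] the keep count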

Thank you very much!