Inference on a video file using TensorRT with the sampleUffSSD model (C++)

Description

I have successfully executed sampleUffSSD using the frozen graph of ssd_mobilenet_v2. Inference works for dog.ppm and bus.ppm (which are provided with the sample).
How do I perform inference on a video file with this sample in C++?
Could you please let me know what needs to be added or changed to make it work for a video file?

Thank you

Environment

TensorRT Version: 6.0.2
GPU Type: using Jetson Nano device
Nvidia Driver Version:
CUDA Version: 10.0.1
CUDNN Version:
Operating System + Version: ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.13
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Please refer to the link below for samples of image detection using live camera images:

Thanks

I want to use the sampleUffSSD repo in the TensorRT samples directory.

Is it not possible to produce the results with that?

Could you provide me some lines of code to invoke a video file using that repo?

Thank you

Maybe you can refer to the blog link below, which has a Python implementation using live camera images, and replicate it in your code:
https://devblogs.nvidia.com/object-detection-gpus-10-minutes/
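A minimal sketch of that blog's per-frame approach: the pixel scaling to [-1, 1] matches sampleUffSSD's Inception preprocessing, while the TensorRT call itself is left as a hypothetical `infer` placeholder and `input.mp4` is a placeholder path. The resize here is plain NumPy nearest-neighbour; in a real pipeline you would use `cv2.resize`.

```python
import numpy as np

INPUT_H, INPUT_W = 300, 300  # input size of the sample's SSD network

def preprocess(frame_bgr):
    """Convert one BGR video frame to the CHW float tensor the sample expects.

    Nearest-neighbour resize in pure NumPy, BGR -> RGB swap, HWC -> CHW
    transpose, then scaling to [-1, 1] as in sampleUffSSD's preprocessing.
    """
    h, w, _ = frame_bgr.shape
    rows = np.arange(INPUT_H) * h // INPUT_H
    cols = np.arange(INPUT_W) * w // INPUT_W
    resized = frame_bgr[rows][:, cols]           # (300, 300, 3), still BGR
    rgb = resized[:, :, ::-1]                    # BGR -> RGB
    chw = rgb.transpose(2, 0, 1).astype(np.float32)
    return chw * (2.0 / 255.0) - 1.0             # scale to [-1, 1]

# Hypothetical frame loop (requires OpenCV and a built TensorRT engine;
# `infer` stands in for the TensorRT execution call):
#
#   import cv2
#   cap = cv2.VideoCapture("input.mp4")
#   while True:
#       ok, frame = cap.read()
#       if not ok:
#           break
#       detections = infer(preprocess(frame))    # placeholder
#   cap.release()
```

The same loop ports to C++ almost line for line with `cv::VideoCapture`, feeding each preprocessed frame into the sample's input buffer in place of the PPM data.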

Thanks

Hi,

I have done this, but I need something for the C++ repo.
Please provide me some lines of code to execute it in C++.

Currently I don’t have any open-source C++ sample available that handles video files and can be shared.

Thanks

Can I use the ssd_inception_v2 2018 model to run inference using sampleUffSSD?
Currently it supports the inception v2 2017 model.

Or what versions of TensorFlow and the TensorFlow Object Detection API should be checked out to train an ssd_inception_v2 model and run inference using sampleUffSSD?

I haven’t tried that, but I think if the ops are supported then it should work. You might have to update the config.py file accordingly.
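For a model retrained with a different class count, the relevant part of sampleUffSSD's config.py is the NMS_TRT plugin node. A sketch, assuming the stock 2017 graph layout: the field values shown mirror the shipped config.py from memory and may need adjusting for a 2018 graph.

```python
import graphsurgeon as gs

# In config.py, the NMS_TRT plugin node carries the class count.
# numClasses must be (your classes + 1 for background), e.g. 38 for
# a 37-class pet dataset. The remaining fields mirror the stock sample.
NMS = gs.create_plugin_node(
    name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,      # change for a retrained model (classes + background)
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)
```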

Since the UFF and Caffe parsers are deprecated as of TensorRT 7, I would recommend trying the tf2onnx → onnx2trt workflow in case of any issues.

Thanks

I tried tf2onnx and got the ONNX model.
Then I used the trtexec command to load the ONNX model, but I got the error: unsupported datatype UINT8 (2).

I have already asked about this in the forum, but I have not got a response yet.

How do I update config.py to support an ssd_inception_v2 2018 trained model?
I am using TensorFlow 1.14.0 to train my network.
sampleUffSSD has no support for the standard ssd_inception 2018 model (without retraining).

Please let me know a possible solution.
I just want to use the ssd_inception_v2 2018 model, train it using TensorFlow 1.14.0, and run inference using sampleUffSSD or the trtexec command (if using an ONNX model).

Hi,

I have used sampleUffSSD.cpp to run inference on the ssd_inception 2017 model.

It worked perfectly.

Then I retrained the model on the pet dataset with 37 classes.
I got good results in TensorFlow; I tested it,
and the bounding boxes are also correct.

I used TensorFlow 1.14.0
and the Object Detection API from November 17, 2017.

Everything is correct for now

But when I port it to UFF, I get 795 nodes.
Then when I use that UFF in sampleUffSSD.cpp,

I get wrong detections.

I get something like :

Image name:/english_cocker_spaniel_24.jpg, Label: Abyssinian, confidence: 93% xmin: 0 ymin: 8500 xmax: 4.48887 ymax: 27.7736

Bounding box and label info is not correct.

I get results like english_cocker_spaniel_24_inferenced.

But it works well with the standard ssd_inception 2017-11-17 model with 91 classes.

What could be the problem?

Hi,

Just wanted to check: did you update the config file with the new class count, along with the labels.txt file?
The sample also requires a labels.txt file listing all labels used to train the model. The labels file for the sample network is <TensorRT Install>/data/ssd/ssd_coco_labels.txt.

https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/sampleUffSSD/config.py#L39
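A small sketch of generating that labels.txt from a TF Object Detection label-map .pbtxt. The regex-based parsing is an assumption about the usual pbtxt layout (some label maps use `name:` rather than `display_name:`); the sample expects one label per line in class-id order.

```python
import re

def labelmap_to_lines(pbtxt_text):
    """Extract (id, display_name) pairs from a label-map pbtxt and return
    the label names sorted by class id, one list entry per labels.txt line."""
    ids = [int(m) for m in re.findall(r"id:\s*(\d+)", pbtxt_text)]
    names = re.findall(r"display_name:\s*\"([^\"]+)\"", pbtxt_text)
    pairs = sorted(zip(ids, names))
    return [name for _, name in pairs]

# Usage (path is a placeholder):
#   with open("pet_label_map.pbtxt") as f:
#       lines = labelmap_to_lines(f.read())
#   with open("labels.txt", "w") as f:
#       f.write("\n".join(lines))
```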

Thanks

I have done that.
The standard version works perfectly, but not the custom-trained version.
Do you know which version of the Object Detection API was used to train the standard ssd_inception 2017-11-17 model,
so that I can replicate the same?

When I convert the standard ssd_inception 2017-11-17 model to UFF,
I get 563 nodes.

But when I convert my custom-trained version, I get 795 nodes.

I think this is the problem.

But which TensorFlow Object Detection API version should I use to train the model to reproduce the ssd_inception_coco_2017_11_17 version?

Currently I am using the TensorFlow Object Detection API at
hash 11e9c7adfbf7d50dd9ef4442cf7806cdb2ee2368,
released on November 17, 2017.

Please let me know the version, or the Object Detection API commit hash, you used to train the model.

Hello,

Please let me know the solution to this problem.

I have everything correct in my config file.
It must be something related to the versions I used to train the network,
and the config file needs to be updated accordingly for the C++ version.

Please help me out with this.

I know the Python version works well, but I want the C++ version, sampleUffSSD.cpp, to run inference.

Moving to Jetson Nano forum for resolution.

@kayccc,

Hope to get some solution to this problem.

Please check this topic Custom trained SSD inception model in tensorRT c++ version - #3 by god_ra for the following status.

Thanks.

Hi @god_ra, I have the same problem. Did you fix it already?

Yes, I successfully implemented my custom-trained ssd_inception_v2 model in TensorFlow, ported it to UFF, and ran inference on a Jetson Nano using C++.