Inference on a video file using TensorRT sampleUffSSD (C++)

Description

I have successfully executed sampleUffSSD using the frozen ssd_mobilenet_v2 graph. Inference works for dog.ppm and bus.ppm (the images provided with the sample).
How do I perform inference on a video file with this sample in C++?
Could you please let me know what needs to be added or changed to make it work for a video file?
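For context, the sample reads PPM images into a planar (NCHW) float buffer before inference; a video path would do the same conversion per frame. Below is a minimal sketch of that conversion in plain C++, assuming OpenCV delivers interleaved 8-bit BGR frames and that the sample normalizes pixels as 2*x/255 - 1 (verify both against your copy of sampleUffSSD); the OpenCV capture calls themselves are shown only as comments:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Convert one interleaved 8-bit BGR frame (the layout cv::VideoCapture
// produces) into the planar RGB float layout the sample feeds to the
// network. Normalization follows the sample's PPM path: 2*pixel/255 - 1.
std::vector<float> bgrToPlanarInput(const uint8_t* bgr, int width, int height)
{
    const int plane = width * height;
    std::vector<float> out(3 * plane);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            const int px = y * width + x;
            const uint8_t* p = bgr + 3 * px; // interleaved B, G, R
            // Planar RGB, channel-major, matching the sample's NCHW input.
            out[0 * plane + px] = (2.0f / 255.0f) * p[2] - 1.0f; // R
            out[1 * plane + px] = (2.0f / 255.0f) * p[1] - 1.0f; // G
            out[2 * plane + px] = (2.0f / 255.0f) * p[0] - 1.0f; // B
        }
    }
    return out;
}

// Sketch of the video loop (OpenCV calls left as comments so the snippet
// stays self-contained; kInputW/kInputH are the network's input dims):
//
//   cv::VideoCapture cap("input.mp4");
//   cv::Mat frame, resized;
//   while (cap.read(frame))
//   {
//       cv::resize(frame, resized, cv::Size(kInputW, kInputH));
//       std::vector<float> input =
//           bgrToPlanarInput(resized.data, kInputW, kInputH);
//       // Copy `input` into the sample's host input buffer, then call
//       // the same doInference(...) path used for the PPM images.
//   }
```

In other words, only the image-loading step of the sample changes; the engine build and inference call stay exactly as they are for the PPM images.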

Thank you

Environment

TensorRT Version: 6.0.2
GPU Type: Jetson Nano
Nvidia Driver Version:
CUDA Version: 10.0.1
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.13
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Please refer to the link below for samples of image detection using live camera images:

Thanks

I want to use the sampleUffSSD sample in the TensorRT samples directory.

Is it not possible to produce the results with that?

Could you provide me some lines of code to run inference on a video file using that sample?

Thank you

Maybe you can refer to the blog link below, which has a Python implementation using live camera images, and replicate it in your code:

Thanks

Hi,

I have done this, but I need something for the C++ sample.
Please provide me some lines of code to run it in C++.

Currently I don’t have any open-source C++ sample for handling video files that can be shared.

Thanks

Can I use the ssd_inception_v2 2018 model to run inference using sampleUffSSD?
Currently it supports the inception_v2 2017 model.

Or what versions of TensorFlow and the TensorFlow Object Detection API should be checked out to train an ssd_inception_v2 model and run inference using sampleUffSSD?

I haven’t tried that, but I think if the ops are supported then it should work. You might have to update the config.py file accordingly.

Since the UFF and Caffe parsers are deprecated from TensorRT 7, I recommend trying the tf2onnx → onnx2trt workflow in case of any issues.
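On the C++ side, the ONNX route replaces the UFF parser with nvonnxparser. A minimal engine-build sketch using the TensorRT 7-era API is below; "model.onnx" and the workspace size are placeholders, and the error handling is reduced to the essentials:

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <iostream>

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main()
{
    Logger logger;
    auto builder = nvinfer1::createInferBuilder(logger);

    // ONNX models require an explicit-batch network definition.
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);
    auto parser = nvonnxparser::createParser(*network, logger);

    // "model.onnx" is a placeholder for the tf2onnx output.
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28); // 256 MiB; adjust for Jetson Nano
    auto engine = builder->buildEngineWithConfig(*network, *config);
    // ... serialize `engine`, or create an execution context and run
    // inference the same way the UFF-based sample does.
    return engine != nullptr ? 0 : 1;
}
```

This is roughly what trtexec does internally when given --onnx, so it hits the same parser limitations (such as the UINT8 input issue discussed below).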

Thanks

I tried tf2onnx and got the ONNX model.
Then I used the trtexec command to load the ONNX model, but I got the error "unsupported datatype UINT8 (2)".

I have already asked about it on the forum, but I have not received any response yet.

How do I update config.py to support an ssd_inception_v2 2018 trained model?
I am using TensorFlow 1.14.0 to train my network.
sampleUffSSD has no support for the standard ssd_inception_v2 2018 model (without retraining).

Please let me know the possible solution.
I just want to use the ssd_inception_v2 2018 model, train it using TensorFlow 1.14.0, and run inference using sampleUffSSD or the trtexec command (for the ONNX model).