I have successfully run sampleUffSSD with the frozen SSD MobileNet v2 graph. Inference works for dog.ppm and bus.ppm (the images provided with the sample).
How do I perform inference on a video file with this C++ sample?
Could you please let me know what needs to be added or changed to make it work with a video file?
Thank you
Environment
TensorRT Version: 6.0.2
GPU Type: Jetson Nano
Nvidia Driver Version:
CUDA Version: 10.0.1
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.13
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
How do I update config.py to support the ssd_inception_v2 2018 trained model?
I am using TensorFlow 1.14.0 to train my network.
sampleUffSSD has no support for the standard ssd_inception_v2 2018 model (even without retraining).
Please let me know a possible solution.
I just want to use the ssd_inception_v2 2018 model, train it with TensorFlow 1.14.0, and run inference using sampleUffSSD or the trtexec command (if an ONNX model).
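For reference, the config.py shipped with sampleUffSSD (written for ssd_inception_v2_coco_2017_11_17) follows the sketch below. For the 2018 checkpoints, the commonly reported changes are mapping the "Cast" node (which replaced "ToFloat" in newer graphs) to the Input node, and for a custom dataset updating numClasses to your class count + 1 (background). Treat the exact plugin parameter values here as assumptions to verify against your own pipeline.config.

```python
# Sketch of sampleUffSSD's config.py, adapted for a 2018-era graph.
# Assumptions: 300x300 input, 6 feature maps, COCO-style 91 classes.
import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_node("Input", op="Placeholder", dtype=tf.float32,
                       shape=[1, 3, 300, 300])
PriorBox = gs.create_plugin_node(
    name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6, minSize=0.2, maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
NMS = gs.create_plugin_node(
    name="NMS", op="NMS_TRT",
    shareLocation=1, varianceEncodedInTarget=0, backgroundLabelId=0,
    confidenceThreshold=1e-8, nmsThreshold=0.6, topK=100, keepTopK=100,
    numClasses=91,          # change to your classes + 1 (background)
    inputOrder=[0, 2, 1],   # 2018 graphs may need a different order
    confSigmoid=1, isNormalized=1)
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2",
                                 dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc",
                                       op="FlattenConcat_TRT",
                                       dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf",
                                        op="FlattenConcat_TRT",
                                        dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "Cast": Input,            # was "ToFloat" in 2017 graphs
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse the namespaces above into TensorRT plugin nodes and
    # drop the original graph outputs so NMS becomes the sole output.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)
```

Then regenerate the UFF with something like `convert-to-uff frozen_inference_graph.pb -O NMS -p config.py`, and make sure the labels file and the class count in the sample source match your model.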
Just wanted to check: did you update the config file with the new class count, along with the labels.txt file?
The sample also requires a labels.txt file listing all labels used to train the model. The labels file for the sample network is <TensorRT Install>/data/ssd/ssd_coco_labels.txt.
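For illustration, a custom labels file follows the same one-label-per-line layout, with the background class first (matching backgroundLabelId=0) and then your classes in training-id order; the class names below are placeholders:

```
background
dog
cat
```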
I have done it.
The standard version works perfectly, but not the custom trained version.
Do you know which version of the Object Detection API was used to train the standard ssd_inception 2017-11-17 model?
Then I can replicate the same setup.
Yes, I have successfully trained my custom ssd_inception_v2 model in TensorFlow, ported it to UFF, and run inference on the Jetson Nano using C++.