god_ra
July 30, 2020, 12:04pm
Hi @kayccc ,
I opened a new topic a while back, but I have not received any update from NVIDIA.
It has been almost a month of waiting.
Please look into these topics and resolve the issues.
I need a solution for converting my custom-trained SSD Inception v2 (2017_11_17) network to ONNX / UFF and then to TensorRT (C++ version).
Please solve this issue.
How can I change the UINT8 input to INT32 in the TensorFlow SSD Inception v2 model?
Is there any workaround for this model?
Here is the model: http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz
I used the tf2onnx command to convert the saved model to ONNX, which worked.
Then I got the error below when I ran ./trtexec --onnx=inception.onnx
The error occurred for both the custom-trained and the standard SSD Inception v2 model:
WARNING: ONNX model has a newer…
Hi @god_ra ,
We are currently checking on this. Please allow us some time.
Thanks!
Description
I have successfully executed sampleUffSSD using the frozen-graph SSD MobileNet v2 model. Inference works for dog.ppm and bus.ppm (the images provided with the sample).
How do I perform inference on a video file with this sample in C++?
Could you please let me know what needs to be added or changed to make it work for a video file?
Thank you
Environment
TensorRT Version: 6.0.2
GPU Type: using Jetson Nano device
Nvidia Driver Version:
CUDA Version: 10.0.1
CUDNN Version:
Operating Syst…