Provide details on the platforms you are using:
Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version
If Jetson, OS and hardware versions
Describe the problem
Files
Include any logs, source, or models (.uff, .pb) that would be helpful to diagnose the problem.
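For reference, most of these version details can be collected from a shell along these lines (paths are typical for an Ubuntu install and may differ on your system):

cat /etc/os-release                           # Linux distro and version
nvidia-smi                                    # GPU type and NVIDIA driver version
nvcc --version                                # CUDA toolkit version
grep -A 2 CUDNN_MAJOR /usr/include/cudnn.h    # cuDNN version (header location may vary)
python3 --version                             # Python version
python3 -c "import tensorflow as tf; print(tf.__version__)"   # TensorFlow version
dpkg -l | grep TensorRT                       # TensorRT packages (Debian/Ubuntu installs)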
I’ve tried this with Ubuntu 16.04 / CUDA 9 / cuDNN 7.3 / Python 3.6 / TensorFlow 1.12 / TensorRT 5.0.2.6, and also with the 18.12 container from the NGC container registry (https://ngc.nvidia.com). It works in both, so you might want to pull nvcr.io/nvidia/tensorrt:18.12-py3 and look for differences from there. Most likely something didn’t get installed, or didn’t get installed correctly.
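If you haven’t used NGC containers before, pulling and entering this one looks roughly like the following. This assumes Docker with the NVIDIA runtime (nvidia-docker2) is set up; the mounted host path is just an example:

docker pull nvcr.io/nvidia/tensorrt:18.12-py3
# --runtime=nvidia exposes the GPU; -v shares your current directory with the container
docker run --runtime=nvidia -it --rm -v $(pwd):/workspace/host nvcr.io/nvidia/tensorrt:18.12-py3
# Inside the container, look for the TensorRT samples (e.g. under /workspace/tensorrt/samples;
# the exact location may vary by release)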
Is that on the GTX 850M? Let’s focus on that and use the TensorRT 18.12 container from the NGC registry (step-by-step is below). When you run the “convert-to-uff --input-file frozen_inference_graph.pb -O NMS -p config.py” instruction from sampleUffSSD/README.txt, what’s your output?
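For context, the conversion step in sampleUffSSD looks roughly like this. The tarball name follows the TensorRT 5.x README, which points at the ssd_inception_v2_coco_2017_11_17 checkpoint (adjust if your README differs), and config.py is the preprocessor script shipped with the sample:

# Fetch and unpack the SSD model linked from the sample README
wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
tar xzf ssd_inception_v2_coco_2017_11_17.tar.gz
cd ssd_inception_v2_coco_2017_11_17
# config.py maps TensorFlow ops that UFF can't import natively onto TensorRT plugins (e.g. the NMS plugin);
# the path to the sample directory depends on your install
cp /path/to/sampleUffSSD/config.py .
convert-to-uff --input-file frozen_inference_graph.pb -O NMS -p config.py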
I get the same error as shown in the post. If the model is wrong, which updated model should I use? Currently I am using the SSD model whose link is provided in the README file from TensorRT.