I have an ONNX model that has to be used in DeepStream. The ONNX model runs fine when I carry out inference in Python, but when I convert the model to TensorRT, the output is always (0, 1) regardless of the input.
The same happens when I use trtexec to convert the ONNX model to TensorRT: the output is always (0, 1).
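For reference, the conversion command I run looks roughly like this (file names here are placeholders):

```
trtexec --onnx=model.onnx --saveEngine=model.engine --verbose
```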
Environment
TensorRT Version: 7.2.2
GPU Type: RTX 2060
Nvidia Driver Version: 470.63.01
CUDA Version: 11.1
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.8.5
TensorFlow Version (if applicable): 2.3.1
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:20.12-tf2-py3
You can refer to the link below for the full list of supported operators; if any operator is not supported, you will need to create a custom plugin for that operation.
Also, please share your model and script, if not shared already, so that we can help you better.
Meanwhile, for some common errors and queries, please refer to the link below:
It looks like you’re using an old version of TensorRT. We recommend trying the latest TensorRT version, 8.2 GA, and letting us know if you still face this issue.
Hey spolisetty,
It turns out I had to set net-scale-factor to 1/255, since the model requires the input tensor to be in the range 0-1. After adding this, I started getting proper inference results.
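For anyone hitting the same problem, the relevant part of my nvinfer config looks roughly like this (file names are placeholders, and the rest of the config is omitted):

```
[property]
onnx-file=model.onnx
model-engine-file=model.engine
# 1/255: scales 0-255 pixel values into the 0-1 range the model expects
net-scale-factor=0.0039215686
```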