TensorRT model giving constant output

Description

I have an ONNX model that needs to be used in DeepStream. The ONNX model runs fine when I carry out inference in Python, but when I convert the model to TensorRT, the outputs are always (0, 1) regardless of what the input is.

The same thing happens when I use trtexec to convert the ONNX model to TensorRT: the output is always (0, 1).
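
For reference, the Python check is along these lines (a minimal sketch using onnxruntime; the input name permute_input is the same one I pass to trtexec below):

import numpy as np
import onnxruntime as ort

# load the ONNX model and run it on a few random inputs
sess = ort.InferenceSession("model/permuted-antispoofing.onnx")
for _ in range(3):
    x = np.random.rand(1, 3, 112, 112).astype(np.float32)  # values in [0, 1]
    print(sess.run(None, {"permute_input": x}))  # outputs vary with the input here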

Environment

TensorRT Version: 7.2.2
GPU Type: RTX 2060
Nvidia Driver Version: 470.63.01
CUDA Version: 11.1
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.8.5
TensorFlow Version (if applicable): 2.3.1
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:20.12-tf2-py3

Relevant Files

GitHub repo: firekind/realtime-face-liveness-detector

The ONNX model is model/permuted-antispoofing.onnx. The notebook that was used to create the ONNX model is notebooks/to-onnx.ipynb.

Steps To Reproduce

Here’s the trtexec command I used:

$ trtexec --onnx=model/permuted_antispoofing.onnx --shapes=permute_input:1x3x112x112 --saveEngine=model/permuted_antispoofing_x86_64.engine --dumpOutput
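
To rule DeepStream out, the saved engine can also be exercised directly from Python. A rough sketch of that check (assumptions: pycuda is available, the input binding comes first, and the output is a single 1x2 tensor):

import numpy as np
import pycuda.autoinit  # noqa: F401 -- sets up a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model/permuted_antispoofing_x86_64.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

x = np.random.rand(1, 3, 112, 112).astype(np.float32)
out = np.empty((1, 2), dtype=np.float32)  # assumed two-class (live/spoof) output

d_in = cuda.mem_alloc(x.nbytes)
d_out = cuda.mem_alloc(out.nbytes)
cuda.memcpy_htod(d_in, x)
context.execute_v2([int(d_in), int(d_out)])  # assumes binding order: input, output
cuda.memcpy_dtoh(out, d_out)
print(out)  # check whether this stays at (0, 1) regardless of x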

Hi,
Can you try running your model with the trtexec command, and share the "--verbose" log in case the issue persists:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

You can refer to the link below for the full list of supported operators. In case any operator is not supported, you will need to create a custom plugin to support that operation.

Also, we request you to share your model and script if not shared already, so that we can help you better.

Meanwhile, for some common errors and queries, please refer to the link below:

Thanks!

@NVES
Here are the verbose logs of the trtexec command: trtexec verbose logs

The model only has convolution, ReLU, batch norm, and max pool layers. The model itself is relatively small.

Yes, I have shared the GitHub link with the model; the path to the model in the repo is model/permuted_antispoofing.onnx.

Hi,

Looks like you're using an old version of TensorRT. We recommend that you try the latest TensorRT version, 8.2 GA, and let us know if you still face this issue.

Thank you.

Hey spolisetty,
It turns out I had to set net-scale-factor to 1/255, since the model expects the input tensor to be in the range 0-1. After adding this, I started getting proper inference results.
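
For anyone hitting the same issue, the relevant part of the nvinfer config looks roughly like this (a sketch; the other keys and paths depend on your pipeline, and 0.0039215686 is just 1/255 written out):

[property]
# nvinfer preprocesses as y = net-scale-factor * (x - mean); with no mean offset
# and a factor of 1/255, the 0-255 pixel values land in the 0-1 range the model expects
net-scale-factor=0.0039215686
model-engine-file=model/permuted_antispoofing_x86_64.engine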

Thanks for your help @spolisetty @NVES
