TensorRT model returns only zero outputs

Description

I have built a pipeline that exports an EfficientDet PyTorch implementation first to ONNX and then from ONNX to TensorRT with onnx-tensorrt. The conversion seems to work fine; I only get the following warning during the TensorRT export, which shouldn't be the cause of the problem:

Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

But when running inference with TensorRT the output is always zero, while with ONNX and PyTorch I get non-zero outputs for the same input.
I also tested my TensorRT inference script with a minimal example model, and it produced non-zero outputs through the same pipeline. Am I missing something? Why am I only getting zero outputs?
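
For context, the export step looks roughly like this (a minimal sketch, not my exact code; the model loader, input shape, and file paths are placeholders):

```python
import torch

# Hypothetical loader: stands in for whatever EfficientDet implementation is used
model = load_efficientdet_model()
model.eval()

# Dummy input; 1x3x512x512 is a placeholder, use the network's real input shape
dummy_input = torch.randn(1, 3, 512, 512)

torch.onnx.export(
    model,
    dummy_input,
    "efficientdet.onnx",      # placeholder output path
    opset_version=11,
    input_names=["input"],    # placeholder binding names
    output_names=["output"],
)
```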

Environment

**TensorRT Version**: 7.1.3.0
**GPU Type**: NVIDIA Jetson Xavier
**Nvidia Driver Version**: N/A
**CUDA Version**: 10.2
**CUDNN Version**: N/A
**Operating System + Version**: Ubuntu 18.04
**Python Version (if applicable)**: Python 3.7
**PyTorch Version (if applicable)**: 1.7

Relevant Files

TRT model: https://drive.google.com/file/d/1kfRHI31kZfZWbfd37khhxrg8bKzmb63r/view?usp=sharing
ONNX model: https://drive.google.com/file/d/1NoawdtB2BG2Myz9dycQjjSyPm_DzyuGT/view?usp=sharing
Sample image: https://drive.google.com/file/d/1Nb_QeL96Xzrp5GsltCPTZQuCutVwiBKh/view?usp=sharing
ONNX inference script: https://drive.google.com/file/d/1Wmgo32lb2POpFqOfJdbhYaDbs9LrgwvV/view?usp=sharing
TRT inference script: https://drive.google.com/file/d/1EFj1O5IT7ZVZQGFY0E2Vu0Qn9y51i3uN/view?usp=sharing

Steps To Reproduce

For the TensorRT conversion I used onnx-tensorrt.
Use the ONNX inference script to run successful inference on the sample image.
Use the TRT inference script to reproduce the failing (all-zero) inference on the same sample image; a minimal sketch of such a script is shown below.
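
For reference, a minimal TensorRT 7.x Python inference sketch along the lines of what the TRT script does (assuming static input shapes; the engine path and preprocessing are placeholders, not the exact script linked above):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize a prebuilt engine ("efficientdet.trt" is a placeholder path)
with open("efficientdet.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pinned host buffers and device buffers for every binding
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host_bufs.append(cuda.pagelocked_empty(size, dtype))
    dev_bufs.append(cuda.mem_alloc(host_bufs[-1].nbytes))
    bindings.append(int(dev_bufs[-1]))

# Placeholder preprocessing: replace the random data with the real image pipeline
for i in range(engine.num_bindings):
    if engine.binding_is_input(i):
        host_bufs[i][:] = np.random.rand(host_bufs[i].size).astype(host_bufs[i].dtype)
        cuda.memcpy_htod(dev_bufs[i], host_bufs[i])

context.execute_v2(bindings)

# Copy results back and inspect their range (all zeros in the failing case)
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
        print(engine.get_binding_name(i), host_bufs[i].min(), host_bufs[i].max())
```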

Hi @koch.sebastian

TensorRT does not natively support INT64, whereas in ONNX some operators require INT64_MAX or INT64_MIN as special values to denote 'infinity' (e.g. the Slice operator), which is probably where the large integer values are coming from.
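
One way to check whether this applies to your model is to scan the ONNX graph for INT64 constants that don't fit into INT32 and would therefore be clipped by the cast-down (a sketch; "model.onnx" is a placeholder path):

```python
import numpy as np
import onnx
from onnx import numpy_helper

INT32_MIN, INT32_MAX = np.iinfo(np.int32).min, np.iinfo(np.int32).max

model = onnx.load("model.onnx")  # placeholder path

# In opset >= 10, Slice starts/ends are INT64 initializers and may hold
# INT64_MAX as a stand-in for "until the end of the axis".
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        vals = numpy_helper.to_array(init)
        if np.any((vals > INT32_MAX) | (vals < INT32_MIN)):
            print(f"{init.name} overflows INT32: {vals}")
```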

Also, kindly provide access to your model.

Thanks!

I also encountered the same problem. I am using mmsegmentation's ocrnet-hr18.

Same issue, any progress?

I am having the same issue in the exact same scenario; only the network is different from the one in the original post. Is there any solution so far?

Hi,
We recommend that you check which features and operators TensorRT supports.

You can refer to the supported-operators list in the onnx-tensorrt documentation (docs/operators.md in the onnx-tensorrt repository).
For unsupported operators, you need to create a custom plugin to support the operation; a quick way to see which operators your model uses is sketched below.
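
A short sketch that prints the distinct op types in a model so you can compare them against that list ("model.onnx" is a placeholder path):

```python
import onnx

model = onnx.load("model.onnx")  # placeholder path

# Collect the distinct op types in the graph and print them for comparison
# against onnx-tensorrt's supported-operators list.
for op in sorted({node.op_type for node in model.graph.node}):
    print(op)
```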

Thanks!