ONNX to TRT Engine Conversion Error

Description

When running trtexec to convert an ONNX model to a TRT engine with this command:
trtexec --onnx=/home/anurag/NVME/overhead-detector/rapid_32.onnx --saveEngine=/home/anurag/NVME/overhead-detector/rapid_32.engine --explicitBatch

we ran into an error:
[07/27/2021-19:14:11] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: builtin_op_importers.cpp:2371 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
[07/27/2021-19:14:11] [E] Failed to parse onnx file
[07/27/2021-19:14:11] [E] Parsing model failed
[07/27/2021-19:14:11] [E] Engine creation failed
[07/27/2021-19:14:11] [E] Engine set up failed

Environment

L4T 32.4.4 [ JetPack 4.4.1 ]
Ubuntu 18.04.5 LTS
Kernel Version: 4.9.140-tegra
CUDA 10.2.89
CUDA Architecture: 7.2
OpenCV version: 4.1.1
OpenCV Cuda: NO
CUDNN: 8.0.0.180
TensorRT: 7.1.3.0
VisionWorks: 1.6.0.501
VPI: 4.4.1-b50
Vulkan: 1.2.70

Python Version (if applicable): Python 3
PyTorch Version (if applicable): 1.6.0
Baremetal or Container (if container which image + tag):
I’ve uploaded the script that I used to convert from PyTorch to ONNX.

I can share my model if needed.

More information about the model: https://github.com/duanzhiihao/RAPiD (RAPiD: Rotation-Aware People Detection in Overhead Fisheye Images, CVPR 2020 Workshops)

export_to_onnx.py (2.8 KB)
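
For readers without the attachment, the export boils down to roughly the following (the model below is a stand-in and the input resolution is a placeholder; the attached export_to_onnx.py has the exact code):

import torch

# Stand-in module; the real RAPiD detector is loaded in the attached script.
model = torch.nn.Conv2d(3, 16, 3).eval()
dummy = torch.randn(1, 3, 1024, 1024)  # placeholder input resolution

torch.onnx.export(
    model, dummy, "rapid_32.onnx",
    opset_version=11,  # the ONNX Range op exists from opset 11 onward
    input_names=["input"], output_names=["output"],
)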

Hi,
Please share the ONNX model and the script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import sys
import onnx

# Load the model given on the command line and run the checker;
# check_model() raises an exception if the model is invalid.
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Thank you for the prompt reply.

Summary:

  1. We used the link you provided to convert ONNX to TRT (the trtexec command is in the original post).
  2. The error we got is also mentioned in the original post.
  3. The ONNX model seems to be correct, because it passes the onnx.checker.check_model() check.
  4. I’ve attached the PyTorch to ONNX conversion script in the original post.
  5. The ONNX model can be found here:
    https://drive.google.com/file/d/1P2gJf2hFlcgXju49TmZep5XPnBnnucMV/view?usp=sharing

Hope this helps.

@hao.cheam,

Thank you for sharing the ONNX model. We were able to reproduce this issue.
It looks like you’re using the Range op. As the error message suggests, TRT currently only accepts INT32 inputs to Range.
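
In PyTorch exports, a Range node usually comes from a torch.arange over a dynamic tensor size. To confirm which nodes are involved, you can list the Range nodes in the exported graph with a quick sketch like this (the model path is taken from your trtexec command):

import onnx

# Print every Range node in the graph along with its input tensor names.
model = onnx.load("rapid_32.onnx")
for node in model.graph.node:
    if node.op_type == "Range":
        print(node.name or "<unnamed Range>", "inputs:", list(node.input))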

How do I resolve this issue? Is there a way to configure the conversion to ONNX such that only INT32 is used for the Range op?
Or is there a manual TRT plugin that we can write that will work with INT32 as the Range input?

Thanks,
Hao

@hao.cheam,

It is a known limitation of how we handle Range; this may be fixed in a future release.
If your data needs floating-point precision, then using INT32 values would probably not work. A custom plugin can be written to handle the FLOAT case, and it should be relatively easy to write.
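
If INT32 precision is in fact enough for your indices, one possible workaround is to post-process the exported graph and cast the Range inputs down to INT32 before building the engine. A rough sketch using onnx-graphsurgeon (it assumes the Range inputs fit in INT32 without overflow; file names are taken from your post):

import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("rapid_32.onnx"))

for node in graph.nodes:
    if node.op != "Range":
        continue
    for i, inp in enumerate(node.inputs):
        # Insert a Cast-to-INT32 node in front of each Range input.
        cast_out = gs.Variable(inp.name + "_int32", dtype=np.int32)
        cast = gs.Node(op="Cast", attrs={"to": onnx.TensorProto.INT32},
                       inputs=[inp], outputs=[cast_out])
        graph.nodes.append(cast)
        node.inputs[i] = cast_out

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "rapid_32_int32range.onnx")

The rewritten model can then be retried with the same trtexec command as before.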

Thank you.

Is there a tutorial or a template for the custom plugin? If you could point us to a relevant document, that would be very helpful.

Hi,
Please refer to the below link for a custom plugin implementation and sample:

https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleOnnxMnistCoordConvAC
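
Once your plugin is compiled into a shared library, you can load it when building the engine; recent trtexec builds accept a --plugins option for this (the library name below is hypothetical):

trtexec --onnx=rapid_32.onnx --saveEngine=rapid_32.engine --plugins=./libfloat_range_plugin.so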

Thanks!

@spolisetty I’m facing the same issue. I don’t understand how floating-point values could be used for the Range op. Is this something specific to the OP’s model? Second, will I have to create a special layer just to convert the INT64 input to INT32? Would it be a lot simpler if I opt for Torch-TensorRT?
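
For context, the Torch-TensorRT path I have in mind is roughly the following (the model below is a stand-in and the input shape is a placeholder; whether this route handles the Range op any better is exactly what I am asking):

import torch
import torch_tensorrt

# Stand-in module; the real detector would go here.
model = torch.nn.Conv2d(3, 16, 3).eval().cuda()

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 608, 608))],  # placeholder shape
    enabled_precisions={torch.float32},
)
out = trt_model(torch.randn(1, 3, 608, 608).cuda())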

Thank you for the assistance.