When running the trtexec command to convert an ONNX model to a TensorRT engine, using this command:
trtexec --onnx=/home/anurag/NVME/overhead-detector/rapid_32.onnx --saveEngine=/home/anurag/NVME/overhead-detector/rapid_32.engine --explicitBatch
we ran into an error:
[07/27/2021-19:14:11] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: builtin_op_importers.cpp:2371 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
[07/27/2021-19:14:11] [E] Failed to parse onnx file
[07/27/2021-19:14:11] [E] Parsing model failed
[07/27/2021-19:14:11] [E] Engine creation failed
[07/27/2021-19:14:11] [E] Engine set up failed
Python Version (if applicable): Python 3
PyTorch Version (if applicable): 1.6.0
Baremetal or Container (if container which image + tag):
I’ve uploaded the script that I used to convert from PyTorch to ONNX.
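For reference, a typical PyTorch-to-ONNX export looks like the sketch below. This is not the OP's uploaded script; the model class, input shape, and opset version are placeholders:

import torch

model = MyDetector()  # hypothetical model class standing in for the OP's network
model.eval()
dummy_input = torch.randn(1, 3, 640, 640)  # assumed input shape

torch.onnx.export(
    model,
    dummy_input,
    "rapid_32.onnx",
    opset_version=11,  # assumed; use an opset your TensorRT version supports
    input_names=["input"],
    output_names=["output"],
)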
Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import onnx

filename = "yourONNXmodel.onnx"  # path to your ONNX model file
model = onnx.load(filename)
onnx.checker.check_model(model)
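If the model is invalid, onnx.checker.check_model raises onnx.checker.ValidationError with a description of the problem; if the script finishes silently, the model passed the check.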
2) Try running your model with the trtexec command: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
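For example, the same command as above with verbose logging enabled:

trtexec --onnx=/home/anurag/NVME/overhead-detector/rapid_32.onnx --saveEngine=/home/anurag/NVME/overhead-detector/rapid_32.engine --explicitBatch --verbose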
Thanks!
Thank you for sharing the ONNX model. We could reproduce this issue.
Looks like you’re using the Range op. As the error message suggests, TRT currently only accepts INT32 as the Range input.
How do I resolve this issue? Is there a way to configure the conversion to ONNX such that only INT32 is used for the Range op?
Or is there a custom TRT plugin that we can write that will work with INT32 as the Range input?
It is a known limitation of how we handle Range. This may be fixed in future releases.
If your data needs floating-point precision, then using INT32 values would probably not work. A custom plugin can be written to handle the FLOAT case, and it should be relatively easy to write.
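As an illustration (this workaround is not from the thread itself): when the Range inputs are INT64 shape values whose magnitudes fit in INT32, the graph can be patched offline with ONNX GraphSurgeon by casting the Range inputs down before building the engine. A minimal sketch, assuming the onnx-graphsurgeon package is installed and the file names are placeholders:

import numpy as np
import onnx
import onnx_graphsurgeon as gs  # pip install onnx-graphsurgeon

graph = gs.import_onnx(onnx.load("rapid_32.onnx"))

for node in graph.nodes:
    if node.op != "Range":
        continue
    for i, inp in enumerate(node.inputs):
        # Insert a Cast-to-INT32 node in front of each Range input
        # (start, limit, delta) so TensorRT sees INT32 values.
        cast_out = gs.Variable(name=inp.name + "_int32", dtype=np.int32)
        cast = gs.Node(op="Cast", attrs={"to": onnx.TensorProto.INT32},
                       inputs=[inp], outputs=[cast_out])
        graph.nodes.append(cast)
        node.inputs[i] = cast_out

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "rapid_32_int32range.onnx")

This only helps when the values actually fit in INT32; it does not address the FLOAT case that the plugin suggestion covers.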
@spolisetty I’m facing the same issue. I don’t understand how floating-point values could be used for the Range op. Is this something specific to the OP’s model? Second, will I have to create a special layer just to convert the INT64 input to INT32? Would it be a lot simpler if I opt for Torch-TensorRT?
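For reference, Torch-TensorRT compiles directly from the PyTorch module and bypasses the ONNX path entirely. A minimal sketch, assuming the torch_tensorrt package is installed; the model and input shape are placeholders:

import torch
import torch_tensorrt

model = MyDetector().eval().cuda()  # hypothetical model, as above

trt_module = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 640, 640))],  # assumed input shape
    enabled_precisions={torch.float32},
)

Whether this sidesteps the Range limitation depends on how Torch-TensorRT partitions the graph, so it is not guaranteed to be simpler for this particular model.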