UNet inference on Jetson Nano: problem compiling the ONNX file to a TensorRT engine

Hello all,

I am using a Jetson Nano to convert my UNet ONNX file to a TensorRT engine. However, I get the following error:
[08/03/2023-15:47:01] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: Failed to parse the ONNX file.
In node 0 (parseGraph): INVALID_NODE: Invalid Node StatefulPartitionedCall/model_1/conv2d/BiasAdd__6
Attribute not found: allowzero

NVIDIA Driver Version: Jetson Nano
CUDA Version: 10.2.300
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): Python 3.6.9

Could you help me solve this problem?


This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we will move this post to the Jetson-related forum.