WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 313 [Range -> "/0/model.22/Range_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.22/Constant_23_output_0"
input: "/0/model.22/Cast_output_0"
input: "/0/model.22/Constant_24_output_0"
output: "/0/model.22/Range_output_0"
name: "/0/model.22/Range"
op_type: "Range"
I exported the .pt model to .onnx on Google Colab and on the Jetson Nano … After that, I used the ONNX model in my app, but it still failed with the errors above. However, on DeepStream 7.1 (DS7.1) with a dGPU, the engine builds correctly (TensorRT casts the INT64 weights down to INT32, as in the warning) and my app runs. Can you help me?
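For reference, this is roughly what my export step looked like (a minimal sketch; "best.pt", opset=12, and simplify=True are assumptions about my exact settings, not the literal command I ran):

from ultralytics import YOLO

# Assumed weight file; the /0/model.22/... node names in the log suggest a
# YOLOv8-style Detect head, so the Ultralytics exporter is shown here.
model = YOLO("best.pt")

# simplify=True runs onnx-simplifier during export, which constant-folds
# shape arithmetic and can remove the Range node that the older TensorRT
# parser on the Jetson Nano rejects. opset=12 is an assumed value.
model.export(format="onnx", opset=12, simplify=True)  # writes best.onnx

If simplification folds the Range node away, the older parser may no longer hit node 313 at all; on DS7.1 the newer parser instead handles it and only warns about the INT64-to-INT32 downcast.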