Hello,
I am running inference on a Jetson Orin Nano using the container image dustynv/onnxruntime:r35.3.1 on JetPack 5.1.1.
I converted the PyTorch model to an ONNX model on an amd64 machine using the following line of code: torch.onnx.export(model, dummy_input, onnx_path, verbose=True). The script launched on the Jetson boils down to the following (a minimal sketch; the model path and input shape are placeholders for the real values):
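```python
import numpy as np
import onnxruntime as ort

# Placeholder path to the model exported with torch.onnx.export on amd64
onnx_path = "model.onnx"

# Request TensorRT first, then fall back to CUDA and CPU
session = ort.InferenceSession(
    onnx_path,
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)

# Placeholder input shape; the real script uses the model's actual input
input_name = session.get_inputs()[0].name
x = np.random.randn(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```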
This code works on CPU-only IoT devices, but on the Jetson Orin Nano it never finishes: it hangs as if in an infinite loop and freezes the screen. The exact error is shown in the attached image.
Hello,
Thanks for your response. Can you give me some guidance on how to find out which provider is raising the error? My screen freezes when I run the code, so I cannot continue debugging.
Thanks for your response. I executed the code with the TensorRT provider and it produced the error; my screen froze again. There is no problem when I execute with just the CUDA provider: it works that way, but not with the TensorRT provider. For reference, this is the kind of comparison I ran (a sketch; the model path is a placeholder):
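```python
import onnxruntime as ort

# Works: CUDA provider only
sess_cuda = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider"],
)

# Freezes: TensorRT provider (ONNX Runtime falls back to CUDA
# for any nodes TensorRT does not support)
sess_trt = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)
```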
I hope this information helps to solve the problem.