Builder.build_cuda_engine fails to build engine but prints no errors

Hello!

I am trying to convert an ONNX version of a ReIdentification vehicle network to TensorRT. I am running this script for the conversion from ONNX to TensorRT, but the engine is never created. I have found this error in other posts, and the suggested solution is to use trtexec; however, I was wondering how to solve it with Python.
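For context, my conversion follows the usual TensorRT 7.x Python API pattern, roughly like this (a minimal sketch, not my exact script; the model path and workspace size are placeholders). One thing worth noting: with a verbose logger and explicit parser error printing, a silent `None` engine usually turns into a readable error message.

```python
import tensorrt as trt

# Verbose logger so build/parse failures are actually printed
TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

# ONNX models require an explicit-batch network in TensorRT 7
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

def build_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Print parser errors -- this is usually why the engine is None
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB, adjust for your board
    # build_cuda_engine is deprecated for explicit-batch networks;
    # build_engine(network, config) is the TRT 7 path
    return builder.build_engine(network, config)
```

If `build_engine` returns `None`, the verbose log and the parser errors above should say why.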

My ONNX model is here.

I am running this on a Jetson Xavier NX:
TensorRT = 7.1.3.0
PyCUDA = built from this wheel
Pytorch = 1.6.0

I appreciate your help =)

Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below:

check_model.py

import onnx

filename = "your_model.onnx"  # placeholder: path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
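For example, a typical invocation looks like this (file names are placeholders):

```shell
# Parse the ONNX model, build an engine, and serialize it to disk,
# with verbose logging so any build failure is visible
trtexec --onnx=model.onnx --saveEngine=model.trt --verbose
```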
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

@NVES Thanks for the quick reply. As you can see in my original post, I shared my ONNX file from the beginning.

Today I was able to successfully convert the model using trtexec. However, if I try to load it into PyTorch (torch.load), I get the following error:

“UnpicklingError: unpickling stack underflow”

Could you please tell me the easiest way to load my trained trt model into pytorch?

Thanks! :)

Hi,

Hope the following sample will help you; it shows how to use a TRT engine for inference.
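One note on the unpickling error: a serialized TensorRT engine is not a PyTorch checkpoint, so torch.load cannot read it. Instead, deserialize it with the TensorRT runtime and run inference through an execution context. A minimal sketch, assuming TensorRT 7.x and PyCUDA, with a placeholder engine file name and a single input/output binding:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by trtexec (path is a placeholder)
with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for each binding
bindings, host_bufs, dev_bufs = [], [], []
for binding in engine:
    shape = engine.get_binding_shape(binding)
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append(host)
    dev_bufs.append(dev)

# Fill the input (random data here as a stand-in), run, and copy output back
np.copyto(host_bufs[0],
          np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype))
stream = cuda.Stream()
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
cuda.memcpy_dtoh_async(host_bufs[-1], dev_bufs[-1], stream)
stream.synchronize()

output = host_bufs[-1]  # flat numpy array; reshape to the binding shape if needed
```

If you need the result in PyTorch, wrap the numpy output with torch.from_numpy after inference; the engine itself never goes through torch.load.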

Thank you.