&&&& FAILED TensorRT.trtexec [TensorRT v8502]

Hi. I'm trying to convert an ONNX model to a TensorRT engine, but when I run trtexec I get this error: "&&&& FAILED TensorRT.trtexec [TensorRT v8502]". How can I fix it?
I'm using a Jetson Orin Nano.
CUDA: 10.4
L4T: 35.3.1
And my situation is complicated. I installed TensorRT 8.5.2.2 correctly from the NVIDIA packages. Then, while trying to run object detection code with Ultralytics, a prompt told me to install nvidia-tensorrt, so I installed the latest available version of that package, which was 5.1.1. After these installations, the TensorRT version shown in jtop changed to "5.1.1" and the cuDNN version to "1.0". I don't understand what's going on. I guess that's why I can't use trtexec, but I'm not sure. Please help.
Thanks.
$ /usr/src/tensorrt/bin/trtexec --onnx=v8n.onnx --saveEngine=nano.engine
&&&& FAILED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=v8n.onnx --saveEngine=nano.engine
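The likely conflict here is that the trtexec binary was built against the JetPack-installed TensorRT (8.5.2.2), while the pip install pulled in a different package whose version jtop now reports. A minimal sketch of the check, assuming the version strings reported above (the helper names are illustrative, not part of any NVIDIA tool; on a real board you would read the values from `dpkg -l tensorrt` and `pip show nvidia-tensorrt`):

```python
def major_minor(version: str) -> tuple:
    """Return the (major, minor) components of a dotted version string."""
    parts = version.split(".")
    return tuple(int(p) for p in parts[:2])

def versions_match(system_version: str, pip_version: str) -> bool:
    """TensorRT engines and runtimes must agree on major.minor to interoperate."""
    return major_minor(system_version) == major_minor(pip_version)

system_trt = "8.5.2.2"   # from the JetPack apt packages
pip_trt = "5.1.1"        # what jtop reported after the pip install

print(versions_match(system_trt, pip_trt))  # False: the two installs conflict
```

If the two versions disagree like this, removing the pip-installed package so the JetPack TensorRT is the only one on the board is usually the first thing to try.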


Dear @furkant,
Have you compiled the trtexec sample on the target? Could the failure be due to a trtexec binary mismatch?

I couldn't understand the question, sorry. Can you explain it?

Dear @furkant,
Are you having an issue running trtexec on a board flashed with JetPack 5.1.1?

I can use trtexec for model conversion (ONNX to engine). I can also do this conversion via Ultralytics, but I don't think I got TensorRT's full performance when I built the engine with Ultralytics. That's why I think it will work better if I export it with trtexec. However, the engines I get from trtexec don't work in my code, and I'm hitting the error shown above.

Dear @furkant,
So you are able to convert ONNX → TRT using trtexec, but you see an issue when using the TRT model in your code. Is your code based on the TRT API?

No, I don't have code based on the TRT API. I have a simple Python script.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.