[TensorRT] Trying to convert an onnx model to tensorrt

Hi everyone,

I am currently working on converting a YOLOv5 ONNX model to TensorRT on my AGX Xavier running JetPack 5.1. Unfortunately, I have run into an issue that I am struggling to resolve.

Here’s the error message I’m encountering:

[12/12/2023-09:55:41] [TRT] [E] 10: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node /model.0/conv/Conv.)

I am using TensorRT version 8.5.2.2 with CUDA 11.4. If necessary, I can share the ONNX model.

Any insights or guidance on how to resolve this issue would be greatly appreciated. Thank you in advance for your time and assistance.
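For reference, on Jetson this error is often reproduced (and sometimes resolved) with `trtexec`. A minimal sketch, assuming the model is named `yolov5s.onnx` and `trtexec` is in the default JetPack location; on memory-constrained devices like the AGX Xavier, "could not find any implementation" can stem from too little builder workspace, so the sketch raises it explicitly:

```shell
# Hypothetical file names; adjust paths to your setup.
# A larger workspace gives the builder more room to find a tactic
# for each layer; --verbose prints per-node tactic selection logs.
/usr/src/tensorrt/bin/trtexec \
  --onnx=yolov5s.onnx \
  --saveEngine=yolov5s.engine \
  --workspace=4096 \
  --verbose
```

The verbose log shows which tactics were tried for `/model.0/conv/Conv`, which helps narrow down whether the failure is memory-related or a genuinely unsupported layer configuration.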

Hi, can you share your onnx file so that I can try it from here?
How did you create this ONNX file?

In the meantime, can you follow this site to create the ONNX file? I did this a couple of weeks ago and was able to generate the TensorRT engine.

This one was created by using the above method.
yolov5s.onnx.zip (23.6 MB)
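For completeness, the attached file was produced with the standard YOLOv5 export flow. A minimal sketch, assuming the `ultralytics/yolov5` repository and the pretrained `yolov5s.pt` weights; opset 12 is widely compatible with TensorRT 8.x:

```shell
# Clone the official YOLOv5 repo and install its dependencies.
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt

# Export the pretrained small model to ONNX.
# --opset 12 keeps the graph within operators TensorRT 8.x supports.
python export.py --weights yolov5s.pt --include onnx --opset 12
```

This writes `yolov5s.onnx` next to the weights file, which can then be passed to `trtexec` on the Jetson.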

Hi,

Please check our doc for more information:

https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#error-messaging

TensorRT Core Library Error Messages
> Builder Errors
>> Internal error: could not find any implementation for node . …

Thanks.
