] Error Code 1: Myelin (autotuning: CUDA error 3 allocating 0-byte buffer: )

I got this error on an ONNX model which works perfectly on TensorRT 7.

After upgrading to TensorRT 8.0 or TensorRT 8.2, I always hit this error.

My model is DETR, a transformer ONNX model, which can be obtained from the facebookresearch repo.

Hi @LucasJin , welcome back to the NVIDIA Developer forums!

Since you describe your issue as being connected to TensorRT, I moved this topic to that specific category.

I hope you will get it resolved soon.


Thank you @MarkusHoHo.

Hi @LucasJin,

Could you please share the ONNX model that reproduces the issue, along with the trtexec --verbose logs, so we can help you better?

Thank you.

Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet:

     import onnx

     filename = "your_model.onnx"  # replace with the path to your ONNX model
     model = onnx.load(filename)
     onnx.checker.check_model(model)  # raises an exception if the model is invalid

  2. Try running your model with the trtexec command.

In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
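For reference, a trtexec run that parses an ONNX model with verbose logging might look like this (the model filename is hypothetical; trtexec ships with the TensorRT installation under its bin directory):

```shell
# Parse the ONNX model, build a TensorRT engine, and capture the verbose log.
# "detr.onnx" is a placeholder; point it at your exported model.
trtexec --onnx=detr.onnx --verbose 2>&1 | tee trtexec_verbose.log
```

The captured trtexec_verbose.log is what the support team asks for when debugging parser or builder errors like the Myelin autotuning failure above.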

Oh, after several years I just got the message.
However, it seems I already got DETR inference working via TensorRT a long time ago.