] Error Code 1: Myelin (autotuning: CUDA error 3 allocating 0-byte buffer: )

I got this error on an ONNX model which works perfectly on TensorRT 7.

After upgrading to TensorRT 8.0 or TensorRT 8.2, I always hit this error.

My model is DETR, a transformer ONNX model, which can be obtained from the facebookresearch repo.

Hi @LucasJin , welcome back to the NVIDIA Developer forums!

Judging from how you describe your issue being connected to TensorRT, I moved this topic to that specific category.

I hope you will get it resolved soon.

Markus

Thank you @MarkusHoHo.

Hi @LucasJin,

Could you please share the ONNX model that reproduces the issue and the trtexec --verbose logs so that we can help better.

Thank you.

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the below snippet:

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command:

https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
Thanks!