CUDA Runtime error when inferring ONNX model


We developed a version of YOLOv3 in ONNX format to run inference with TensorRT.
After serializing the model, we randomly get this CUDA error
1: [gatherRunner.cpp::execute::104] Error Code 1: Cuda Runtime (invalid configuration argument)
when executing the enqueueV2 call.
This error occurs with different GPUs and on different computers.

If you have any idea that could help us resolve this issue, thank you.


TensorRT Version: 8.0.0
GPU Type: 1080 - 2080 Ti
Nvidia Driver Version: 465.19.01
CUDA Version: 11.3
CUDNN Version:
Operating System + Version: Ubuntu 18.04 (Docker)
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvidia/cuda:11.3.0-cudnn8-devel-ubuntu18.04

Could you please share the ONNX model and the script, if not shared already, so that we can assist you better?
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

import onnx
filename = yourONNXmodel  # path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

If you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
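For step 2, here is a minimal sketch of assembling the trtexec command line in Python (the file and library paths are placeholders, not from the original post; the --plugins flag is only needed when the model uses a custom TensorRT plugin op):

```python
import subprocess

def build_trtexec_cmd(onnx_path, verbose=True, plugin_lib=None):
    """Assemble a trtexec command line for an ONNX model.

    plugin_lib: optional path to a TensorRT plugin shared library
    (needed when the model uses custom ops such as BatchedNMSDynamic_TRT).
    """
    cmd = ["trtexec", f"--onnx={onnx_path}"]
    if verbose:
        cmd.append("--verbose")  # full build/runtime log for debugging
    if plugin_lib:
        cmd.append(f"--plugins={plugin_lib}")
    return cmd

# Example (paths are placeholders):
cmd = build_trtexec_cmd("model.onnx", plugin_lib="libnvinfer_plugin.so")
print(" ".join(cmd))
# To actually run it and capture the verbose log:
# log = subprocess.run(cmd, capture_output=True, text=True).stdout
```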

Hi. Thanks for your answer.
Unfortunately, I cannot share my model because it is the property of my company.

I tried the snippet and got this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/onnx/", line 104, in check_model
onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for BatchedNMSDynamic_TRT with domain_version of 11

==> Context: Bad node spec for node. Name: onnx_graphsurgeon_node_0 OpType: BatchedNMSDynamic_TRT

Otherwise, running the trtexec command does not reveal any issue with my model.


Please make sure you are using enqueueV2 correctly.
Please refer to the following sample for reference:

BatchedNMS plugin - TensorRT/plugin/batchedNMSPlugin at master · NVIDIA/TensorRT · GitHub

We also recommend trying the latest TensorRT version, as the BatchedNMS plugin has been updated.
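A common cause of "invalid configuration argument" inside a runner is a kernel launched with a zero or unresolved size, e.g. a dynamic binding shape that was never set, or a null device pointer. Below is a minimal, framework-free sketch of pre-flight checks worth doing before enqueueV2; validate_bindings is a hypothetical helper, and the TensorRT calls appear only in comments since they require a GPU:

```python
def validate_bindings(num_engine_bindings, bindings, binding_shapes):
    """Sanity-check buffers before context.execute_async_v2(bindings, ...).

    num_engine_bindings: engine.num_bindings in the TensorRT Python API.
    bindings: list of device pointers (ints), one per binding index.
    binding_shapes: one shape tuple per index, from context.get_binding_shape(i).
    """
    if len(bindings) != num_engine_bindings:
        raise ValueError(f"expected {num_engine_bindings} bindings, got {len(bindings)}")
    for i, (ptr, shape) in enumerate(zip(bindings, binding_shapes)):
        if not ptr:
            raise ValueError(f"binding {i}: null device pointer")
        if any(d <= 0 for d in shape):
            # A -1 here means a dynamic dimension was never resolved via
            # context.set_binding_shape(); a 0-sized dimension can produce an
            # invalid CUDA launch configuration inside a plugin or runner.
            raise ValueError(f"binding {i}: unresolved/empty shape {shape}")
    return True

# Usage sketch (TensorRT objects shown as comments):
# shapes = [tuple(context.get_binding_shape(i)) for i in range(engine.num_bindings)]
# validate_bindings(engine.num_bindings, bindings, shapes)
# context.execute_async_v2(bindings, stream.handle)
```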

Thank you.