CUDA Runtime Error when inferring ONNX model

Description

We developed a YOLOv3 model in ONNX format to run inference with TensorRT.
After serializing the model, we randomly get this CUDA error:
1: [gatherRunner.cpp::execute::104] Error Code 1: Cuda Runtime (invalid configuration argument)
during execution of the enqueueV2 call.
This error occurs with different GPUs and on different computers.

If you have any ideas that could help us resolve this issue, thank you.
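
For context, here is a minimal sketch of the call path where the error appears, assuming the Python API (execute_async_v2 is the Python counterpart of the C++ enqueueV2); the file name, shapes, and buffer handling are illustrative, not our actual code:

infer_sketch.py

import pycuda.autoinit  # creates and activates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize a previously serialized engine (path is a placeholder).
with open("yolov3.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

with engine.create_execution_context() as context:
    stream = cuda.Stream()
    # One device buffer per binding, sized from the engine/context shapes.
    bindings, buffers = [], []
    for i in range(engine.num_bindings):
        shape = context.get_binding_shape(i)
        dtype = trt.nptype(engine.get_binding_dtype(i))
        host = cuda.pagelocked_empty(trt.volume(shape), dtype)
        device = cuda.mem_alloc(host.nbytes)
        bindings.append(int(device))
        buffers.append((host, device))
    # The failing call: execute_async_v2 maps to enqueueV2 in C++.
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    stream.synchronize()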

Environment

TensorRT Version: 8.0.0
GPU Type: GTX 1080 / RTX 2080 Ti
Nvidia Driver Version: 465.19.01
CUDA Version: 11.3
CUDNN Version: 8.2.0.53
Operating System + Version: Ubuntu 18.04 (Docker)
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvidia/cuda:11.3.0-cudnn8-devel-ubuntu18.04

Hi,
Please share the ONNX model and the script, if not already shared, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

filename = "your_model.onnx"  # placeholder: path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2. Try running your model with the trtexec command (an example invocation is shown after this list):
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
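
For example, a minimal invocation (the model filename is a placeholder):

trtexec --onnx=your_model.onnx --verbose

The --verbose flag prints the detailed parser and builder logs, which is what we need for debugging.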
Thanks!

Hi. Thanks for your answer.
Unfortunately, I cannot share my model because it is the property of my company.

I tried the check_model.py snippet and got this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/onnx/checker.py", line 104, in check_model
    C.check_model(protobuf_string)
onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for BatchedNMSDynamic_TRT with domain_version of 11

==> Context: Bad node spec for node. Name: onnx_graphsurgeon_node_0 OpType: BatchedNMSDynamic_TRT

Running the trtexec command, on the other hand, does not reveal any issue with my model.

Hi,

We recommend you make sure you're using enqueueV2 correctly.
Please refer to the following sample for reference:

BatchedNMS plugin: https://github.com/NVIDIA/TensorRT/tree/master/plugin/batchedNMSPlugin

Regarding the onnx.checker error: BatchedNMSDynamic_TRT is a TensorRT plugin op rather than a standard ONNX op, so the ONNX checker cannot validate it, and that failure by itself is expected.

Also, we recommend trying the latest TensorRT version, as the BatchedNMS plugin has been updated.
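
In the meantime, a quick sanity check is to make sure TensorRT's built-in plugins (which include BatchedNMS) are registered before deserializing the engine; a minimal sketch, assuming the Python API and an illustrative engine path:

load_plugins.py

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Register TensorRT's built-in plugins, including BatchedNMSDynamic_TRT,
# with the global plugin registry; "" selects the default namespace.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with open("yolov3.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())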

Thank you.