Example with addPluginV3 and EfficientNMS_TRT

Description

We are trying to switch from TensorRT 8.6.1 to TensorRT 10.0.1.6-1+cuda12.4, and we need more details about addPluginV3. We use a Docker image based on nvidia/cuda:12.4.1-cudnn-devel-ubuntu20.04, where TensorRT 10.0.1.6-1+cuda12.4 is installed. We use C++; I have uploaded our .cpp file. The issue is inside buildTrtModel: we don't know what to pass for inputShapes. A sketch of what we think the call should look like is included below.
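The following is a minimal, hedged sketch (not taken from the attached tensorRTWrapper.txt) of how addPluginV3 might be called for EfficientNMS_TRT in TensorRT 10. It assumes the registered EfficientNMS_TRT creator is exposed through the V3 creator interface (IPluginCreatorV3One); the tensor names `boxes`/`scores` and the attribute values are placeholders. The key point for the inputShapes question: addPluginV3 takes two tensor lists, the regular execution inputs and the shape-tensor inputs, and EfficientNMS_TRT has no shape-tensor inputs, so nullptr and 0 can be passed for that second list.

```cpp
// Hypothetical sketch: adding EfficientNMS_TRT via the V3 plugin API in TensorRT 10.
// `network`, `boxes`, `scores`, and the attribute values are placeholders/assumptions.
#include <NvInfer.h>
#include <vector>

using namespace nvinfer1;

ILayer* addEfficientNms(INetworkDefinition& network, ITensor* boxes, ITensor* scores)
{
    // EfficientNMS_TRT plugin attributes (values here are illustrative, not from the original post).
    int32_t backgroundClass = -1;
    int32_t maxOutputBoxes  = 100;
    float   scoreThreshold  = 0.25f;
    float   iouThreshold    = 0.5f;
    int32_t scoreActivation = 0;
    int32_t boxCoding       = 0;

    std::vector<PluginField> fields{
        {"background_class", &backgroundClass, PluginFieldType::kINT32,   1},
        {"max_output_boxes", &maxOutputBoxes,  PluginFieldType::kINT32,   1},
        {"score_threshold",  &scoreThreshold,  PluginFieldType::kFLOAT32, 1},
        {"iou_threshold",    &iouThreshold,    PluginFieldType::kFLOAT32, 1},
        {"score_activation", &scoreActivation, PluginFieldType::kINT32,   1},
        {"box_coding",       &boxCoding,       PluginFieldType::kINT32,   1},
    };
    PluginFieldCollection fc{static_cast<int32_t>(fields.size()), fields.data()};

    // Look up the creator in the registry. Whether the bundled EfficientNMS_TRT creator
    // is V2- or V3-based depends on the TensorRT release; this cast assumes V3.
    auto* creatorIface = getPluginRegistry()->getCreator("EfficientNMS_TRT", "1", "");
    auto* creator = static_cast<IPluginCreatorV3One*>(creatorIface);
    IPluginV3* plugin = creator->createPlugin("efficient_nms", &fc, TensorRTPhase::kBUILD);

    // addPluginV3 takes execution inputs and shape-tensor inputs separately.
    // EfficientNMS_TRT has no shape-tensor inputs, so the second list is empty.
    ITensor* inputs[] = {boxes, scores};
    return network.addPluginV3(inputs, 2, /*shapeInputs=*/nullptr, /*nbShapeInputs=*/0, *plugin);
}
```

If this matches what buildTrtModel is supposed to do, is passing nullptr/0 for the shape inputs the intended usage here?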

Environment

TensorRT Version: 10.0.1.6-1+cuda12.4
GPU Type: NVIDIA RTX A4000 / A5000
Nvidia Driver Version: 535.183.01
CUDA Version: 12.4 (nvidia-smi reports 12.2)
CUDNN Version: 9.1.0
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): -
TensorFlow Version (if applicable): -
PyTorch Version (if applicable): -
Baremetal or Container (if container which image + tag): container, nvidia/cuda:12.4.1-cudnn-devel-ubuntu20.04

Relevant Files

tensorRTWrapper.txt (23.6 KB)
output.txt (148.1 KB)

Can someone help us? We have already read the Developer Guide :: NVIDIA Deep Learning TensorRT Documentation, section 9.1.4, "Adding a Plugin Instance to a TensorRT Network", but it doesn't help.

Best regards,
Brice

I am also having the same problem; please help me solve it.