Description
Hi,
I’m rebuilding custom ops that were originally built against an older TensorRT version, and I get deprecation warnings during the build. Does IPluginV2Ext still work in TRT 10.x, or do I have to override some functions using its replacement (IPluginV3)? IPluginV2Ext is marked as deprecated in TRT 10.x, but I’m not sure if it is still working. Thanks!
Environment
TensorRT Version: 10.3
GPU Type:
Nvidia Driver Version:
CUDA Version: 12.4
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
/azureuser/mmdeploy/csrc/mmdeploy/backend_ops/tensorrt/common/trt_plugin_base.hpp:11:22: warning: 'IPluginV2Ext' is deprecated [-Wdeprecated-declarations]
11 | #define TRT_NOEXCEPT noexcept
| ^~~~~~~~
/azureuser/mmdeploy/csrc/mmdeploy/backend_ops/tensorrt/batched_rotated_nms/trt_batched_rotated_nms.cpp:180:56: note: in expansion of macro 'TRT_NOEXCEPT'
180 | const char* name, const PluginFieldCollection* fc) TRT_NOEXCEPT {
| ^~~~~~~~~~~~
In file included from /azureuser/TensorRT-10.3.0.26/include/NvInferRuntimeCommon.h:34,
from /azureuser/TensorRT-10.3.0.26/include/NvInferPluginUtils.h:21,
from /azureuser/mmdeploy/csrc/mmdeploy/backend_ops/tensorrt/batched_rotated_nms/trt_batched_rotated_nms.hpp:8,
from /azureuser/mmdeploy/csrc/mmdeploy/backend_ops/tensorrt/batched_rotated_nms/trt_batched_rotated_nms.cpp:3:
/azureuser/TensorRT-10.3.0.26/include/NvInferRuntimePlugin.h:463:22: note: declared here
463 | class TRT_DEPRECATED IPluginV2Ext : public IPluginV2
Based on the provided context, you are building custom TensorRT ops written against an older release and are seeing deprecation warnings under TRT 10.x, and you are unsure whether the ops still work or need to be migrated to the newer IPluginV3 interface.
TensorRT 10.x deprecates the older plugin interfaces, including IPluginV2 and its descendant IPluginV2Ext, in favor of IPluginV3. Deprecated does not mean removed: in 10.x these interfaces still compile and run, and the warnings you see are only -Wdeprecated-declarations diagnostics. Deprecated APIs are, however, candidates for removal in a future major release.
To determine the best course of action, let’s compare the two plugin interfaces:
IPluginV2Ext: This is the older plugin interface that your custom ops currently use. Although it is marked as deprecated in TRT 10.x, it still works; you will encounter deprecation warnings at build time, and it may be removed in a future major release.
IPluginV3: This is the newer plugin interface, introduced in TensorRT 10.0. Instead of one monolithic class, it splits plugin responsibilities into capability interfaces (IPluginV3OneCore, IPluginV3OneBuild, IPluginV3OneRuntime) that the plugin hands out through getCapabilityInterface(). If you want to stay compatible with future TensorRT releases, you should plan to migrate your custom ops to IPluginV3.
To answer your questions:
- Does IPluginV2Ext still work in TRT 10.x? Yes, it is still functional in 10.x; the deprecation warnings are only diagnostics. However, NVIDIA may remove it in a future major release, so it is not a long-term option.
- Do you need to override some functions using IPluginV3? To stay compatible with future releases and avoid potential issues, it is recommended to migrate your custom ops to IPluginV3 (and the matching IPluginCreatorV3One creator). This gives you a more future-proof plugin and access to the features added in TRT 10.x.
In summary, while IPluginV2Ext still works in TRT 10.x, it is recommended to migrate your custom ops to IPluginV3 to stay compatible with future releases. If you are unsure about the migration process, the TensorRT documentation includes a plugin migration guide covering the IPluginV2-to-IPluginV3 transition.