How to set parameters of BatchedNMSPlugin?

I have a model exported to ONNX with BatchedNMSPlugin; I then load this ONNX and convert it to a TensorRT engine for execution. Is it possible to change the parameters (e.g. the confidence threshold) of BatchedNMSPlugin at runtime, i.e. after the TRT engine has been built?

I see there is the TensorRT `nvinfer1::plugin::NMSParameters` struct reference, but I can't find any examples of its use, and it's not clear whether it applies only at export time.


Please refer to the links below related to custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins, or refactor existing ones, to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.


Thanks for the reply. I'm not trying to implement or modify a custom plugin; I'm trying to set the parameters of an existing plugin after the model has been exported to ONNX, or even after it has been built into a TRT engine. I'm not sure these links are relevant to that?

Hi @lukee2ni6 ,

You can try reading the parameter from an environment variable inside the enqueue function, or from a configuration file.
Please let us know if this works.
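Note that the plugin's enqueue is C++, so following this suggestion means patching the open-source plugin code and rebuilding it. The lookup pattern being suggested is just "environment-variable override with a compiled-in default"; a minimal sketch of that pattern (shown in Python for brevity, where in the plugin itself it would be `std::getenv` in C++, and the variable name is made up for illustration):

```python
# Sketch of the suggested pattern: read a threshold override from an
# environment variable, falling back to the value baked into the engine.
# The variable name NMS_SCORE_THRESHOLD is hypothetical.
import os

def score_threshold(baked_in: float, env_var: str = "NMS_SCORE_THRESHOLD") -> float:
    raw = os.environ.get(env_var)
    if raw is None:
        return baked_in  # no override set: use the build-time value
    try:
        return float(raw)
    except ValueError:
        return baked_in  # ignore malformed overrides

# Usage:
# os.environ["NMS_SCORE_THRESHOLD"] = "0.4"
# score_threshold(0.25)  # returns the override, 0.4
```

Reading the variable on every enqueue call also lets the threshold change between inferences without restarting the process, at the cost of a getenv per call.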


My question is how to do this. Is there any documentation or an example? Just to re-emphasise: this is not a plugin I have written; it is an off-the-shelf plugin that ships with TRT, and my question is how it can be configured without having to re-export the ONNX from PyTorch and rebuild the TRT engine.

Hi @lukee2ni6 ,
Apologies for the delayed response.
You may need to access and modify the existing plugin's source code to make the desired changes take effect at inference time; the stock plugin's parameters are fixed when the engine is built.