Hi,
Is there any implementation of batchedNMSPlugin in Python? Is there any method through which I can use something like batchedNMSPlugin during the TRT conversion step itself? I'm converting an ONNX model to TensorRT.
Hi,
Just want to clarify first.
Do you need the batchedNMSPlugin for your TensorRT model, or do you want to use the layer-level implementation directly?
To use it as a plugin for TensorRT, you just need to replace the plugin library with the one built from our OSS GitHub and run inference on the model with the Python interface.
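A minimal sketch of that step in Python, assuming the plugin library was rebuilt from the TensorRT OSS repo (the library path below is a placeholder, not an official location):

```python
# Minimal sketch (assumptions noted): preloading a libnvinfer_plugin.so rebuilt
# from the TensorRT OSS repo so batchedNMSPlugin is visible to the Python API.
# The path is a placeholder -- point it at your own OSS build output.
import ctypes
import tensorrt as trt

ctypes.CDLL("/path/to/TensorRT-OSS/build/out/libnvinfer_plugin.so")

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Register all TensorRT plugins (including BatchedNMS_TRT) with the plugin registry.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Sanity check: confirm a batched NMS plugin creator is now available.
creators = trt.get_plugin_registry().plugin_creator_list
print([c.name for c in creators if "NMS" in c.name])
```

Preloading the OSS build before creating the builder or runtime makes the rebuilt plugin creators the ones the registry sees.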
Thanks.
I only want to replace my post-processing step with your batchedNMSPlugin so that I can reduce the post-processing time spent on NMS. My code is implemented in Python.
My original model is available in both ONNX and TensorRT, and I'm using a common post-processing script for the two. Should I use the layer-level implementation directly (for both ONNX and TensorRT), or go straight to batchedNMSPlugin for TensorRT?
Thanks!
Hi,
Yes. Feeding the corresponding inputs into the layer should be enough.
May I know which framework you are using for the model?
If it is TensorFlow, you can add the layer to config.py directly.
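For a model built with the TensorRT Python network-definition API (rather than the TensorFlow/UFF config.py route), feeding the inputs into the layer could look roughly like the sketch below. The field names follow the batchedNMSPlugin documentation; the shapes, thresholds, and class counts are illustrative assumptions to adapt to your model.

```python
# Hedged sketch: adding BatchedNMS_TRT to a TensorRT network via the Python
# network-definition API. Assumes trt.init_libnvinfer_plugins() has already been
# called (see the earlier sketch). Field values and expected tensor shapes
# (boxes: [batch, num_boxes, 1, 4], scores: [batch, num_boxes, num_classes])
# are assumptions -- verify them against the plugin docs for your TRT version.
import numpy as np
import tensorrt as trt

def add_batched_nms(network, boxes, scores, num_classes=80, top_k=1000, keep_top_k=100):
    """boxes/scores are ITensor outputs of the detection head."""
    registry = trt.get_plugin_registry()
    creator = registry.get_plugin_creator("BatchedNMS_TRT", "1")

    fields = trt.PluginFieldCollection([
        trt.PluginField("shareLocation", np.array([1], dtype=np.int32), trt.PluginFieldType.INT32),
        trt.PluginField("backgroundLabelId", np.array([-1], dtype=np.int32), trt.PluginFieldType.INT32),
        trt.PluginField("numClasses", np.array([num_classes], dtype=np.int32), trt.PluginFieldType.INT32),
        trt.PluginField("topK", np.array([top_k], dtype=np.int32), trt.PluginFieldType.INT32),
        trt.PluginField("keepTopK", np.array([keep_top_k], dtype=np.int32), trt.PluginFieldType.INT32),
        trt.PluginField("scoreThreshold", np.array([0.3], dtype=np.float32), trt.PluginFieldType.FLOAT32),
        trt.PluginField("iouThreshold", np.array([0.5], dtype=np.float32), trt.PluginFieldType.FLOAT32),
        trt.PluginField("isNormalized", np.array([1], dtype=np.int32), trt.PluginFieldType.INT32),
    ])
    plugin = creator.create_plugin(name="batched_nms", field_collection=fields)

    # "Feed the corresponding inputs into the layer": boxes first, then scores.
    nms_layer = network.add_plugin_v2(inputs=[boxes, scores], plugin=plugin)
    # Layer outputs: num_detections, nmsed_boxes, nmsed_scores, nmsed_classes.
    return nms_layer
```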
Thanks.
Hi,
I trained my model in PyTorch and converted it to ONNX, followed by TensorRT, for the final inference using Python. Is there any way I can do this while bypassing the UFF conversion step and going straight from PyTorch/ONNX to TensorRT?
Hi,
Yes. You can use the plugin with the information shared here:
But there is only a C++ interface available.
Thanks.
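As a Python-side workaround (a hedged sketch under several assumptions, not an official recipe): the BatchedNMS_TRT node can be appended to the exported ONNX graph with onnx-graphsurgeon so the ONNX parser maps it to the plugin during the TensorRT conversion, with no UFF step. Tensor names, shapes, and attribute values below are illustrative and need to match your detection head.

```python
# Hedged sketch: inserting a BatchedNMS_TRT node into an exported ONNX graph
# with onnx-graphsurgeon. File names, shapes, and attribute values are
# assumptions -- adapt them to your model before converting with TensorRT.
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))

# Assumption: the exported model already ends in two tensors shaped for the
# plugin -- boxes: [batch, num_boxes, 1, 4], scores: [batch, num_boxes, num_classes].
boxes, scores = graph.outputs

keep_top_k = 100
outputs = [
    gs.Variable("num_detections", dtype=np.int32, shape=["batch", 1]),
    gs.Variable("nmsed_boxes", dtype=np.float32, shape=["batch", keep_top_k, 4]),
    gs.Variable("nmsed_scores", dtype=np.float32, shape=["batch", keep_top_k]),
    gs.Variable("nmsed_classes", dtype=np.float32, shape=["batch", keep_top_k]),
]

nms = gs.Node(
    op="BatchedNMS_TRT",  # must match the plugin creator name in the registry
    name="batched_nms",
    attrs={
        "shareLocation": 1,
        "backgroundLabelId": -1,
        "numClasses": 80,
        "topK": 1000,
        "keepTopK": keep_top_k,
        "scoreThreshold": 0.3,
        "iouThreshold": 0.5,
        "isNormalized": 1,
        "clipBoxes": 1,
    },
    inputs=[boxes, scores],
    outputs=outputs,
)
graph.nodes.append(nms)
graph.outputs = outputs
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_with_nms.onnx")
```

The modified ONNX file can then be converted as usual (for example with the ONNX parser or trtexec), provided the plugin library is loaded so the parser can resolve the BatchedNMS_TRT op.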