Code sample to add custom importer for BatchedNMS_TRT in builtin_op_importers.cpp

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: docker image nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
• TensorRT Version: 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only): 450.51.06

ONNX -> ONNX GS -> TRT INT8

I am trying to optimize an ONNX YOLOv3 object detection model to a TRT INT8 engine with trtexec, to deploy it later with DS-Triton (which supports only TRT 7.0.x, see here). To do so, I first used ONNX GraphSurgeon to modify the ONNX model and replace the NMS nodes with the TRT plugin BatchedNMS_TRT. Now I need to know what code block must be added to builtin_op_importers.cpp to create a custom importer for BatchedNMS_TRT. Can somebody help, please?

I don’t understand your question.
BatchedNMS_TRT is a TRT plugin that you can use if you want to run your model with the nvinfer plugin.
Since the TRT used by the nvinfer plugin already supports BatchedNMS_TRT, what else do you want to add?
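
You can double-check that the plugin side is fine by querying the plugin registry directly. Below is a minimal, untested sketch assuming the standard TRT 7.x C++ API (link against nvinfer and nvinfer_plugin):

    #include <iostream>
    #include "NvInfer.h"
    #include "NvInferPlugin.h"

    // Minimal logger required by initLibNvInferPlugins().
    class Logger : public nvinfer1::ILogger
    {
        void log(Severity severity, const char* msg) override
        {
            if (severity <= Severity::kWARNING)
                std::cout << msg << std::endl;
        }
    } gLogger;

    int main()
    {
        // Registers the standard TRT plugins, BatchedNMS_TRT among them.
        initLibNvInferPlugins(&gLogger, "");

        // Look the creator up in the global plugin registry.
        auto* creator = getPluginRegistry()->getPluginCreator("BatchedNMS_TRT", "1");
        std::cout << (creator != nullptr ? "BatchedNMS_TRT plugin is registered"
                                         : "BatchedNMS_TRT plugin is NOT registered")
                  << std::endl;
        return 0;
    }

If this reports the plugin as registered but trtexec still fails with [8] No importer registered for op, the gap is in the ONNX parser rather than in libnvinfer_plugin.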

BTW, if the answer here addressed your question, can you please mark it closed?

Hi @mchi, before running the ONNX object detection model, I need to optimize it to INT8 first, but I haven’t been able to do it yet. With TRT 7.0.0, I applied ONNX GS to replace the NMS nodes in the YOLOv3 model with the BatchedNMS_TRT plugin, and then used trtexec to convert the updated ONNX model to a TRT engine, but I am getting the error [8] No importer registered for op: BatchedNMS_TRT even though the plugin is supported in TRT 7.0.0. I reported the issue here, and the recommendation was to add a custom importer for BatchedNMS_TRT in builtin_op_importers.cpp. So that is my question: what code block needs to be added to builtin_op_importers.cpp to register the custom importer for BatchedNMS_TRT, so I can fix the importer registration issue and complete the optimization?

$ trtexec --onnx=onnx-tensorrt/models/yolov3-10-with-plugin.onnx --saveEngine=/workspace/onnx-tensorrt/models/optimized_yolov3.trt --int8

Error:
    [02/03/2021-22:41:50] [W] [TRT] /workspace/onnx-tensorrt/onnx2trt_utils.cpp:235: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    [02/03/2021-22:41:50] [W] [TRT] /workspace/onnx-tensorrt/onnx2trt_utils.cpp:261: One or more weights outside the range of INT32 was clamped
    [02/03/2021-22:41:50] [W] [TRT] /workspace/onnx-tensorrt/onnx2trt_utils.cpp:261: One or more weights outside the range of INT32 was clamped
    While parsing node number 467 [BatchedNMS_TRT]:
    ERROR: /workspace/onnx-tensorrt/ModelImporter.cpp:134 In function parseGraph:
    [8] No importer registered for op: BatchedNMS_TRT
    [02/03/2021-22:41:50] [E] Failed to parse onnx file
    [02/03/2021-22:41:50] [E] Parsing model failed
    [02/03/2021-22:41:50] [E] Engine creation failed
    [02/03/2021-22:41:50] [E] Engine set up failed
    &&&& FAILED TensorRT.trtexec # trtexec --onnx=onnx-tensorrt/models/yolov3-10-with-plugin.onnx --saveEngine=/workspace/onnx-tensorrt/models/optimized_yolov3.trt --int8

Hi @virsg,
Please try the prebuilt TRT plugin lib in deepstream_tao_apps/TRT-OSS/x86 at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub, or rebuild the TRT plugin lib following that guidance.

Thanks!

Hi @mchi, I have replaced the TRT plugin library with the prebuilt lib, following the instructions described in step 3. My environment matches the one it was built for (running from the docker image nvcr.io/nvidia/tensorrt:20.03-py3), but I got the same error [8] No importer registered for op: BatchedNMS_TRT.

Replacing the lib
// Back up the original libnvinfer_plugin.so.7.0.0
$ mv libnvinfer_plugin.so.7.0.0 /workspace/libnvinfer_plugin.so.7.0.0.back
// Download the prebuilt plugin
$ wget https://nvidia.box.com/shared/static/o4gt2b50qfga71qd3kognf0v9iv6o2hx.1 -O libnvinfer_plugin.so.7.0.0.1
// Only replace the real file; don’t touch the symlinks, e.g. libnvinfer_plugin.so, libnvinfer_plugin.so.7
$ cp libnvinfer_plugin.so.7.0.0.1 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0
$ sudo ldconfig

Converting the ONNX model with trtexec

// Run the trtexec command with the new plugin
$ trtexec --onnx=onnx-tensorrt/models/yolov3-10-with-plugin.onnx --saveEngine=/workspace/onnx-tensorrt/models/optimized_yolov3.trt --int8 --verbose
Error:

[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::Normalize_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::RPROI_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::BatchedNMS_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::FlattenConcat_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::CropAndResize
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::Proposal
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::BatchTilePlugin_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::DetectionLayer_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::ProposalLayer_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::PyramidROIAlign_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::ResizeNearest_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::SpecialSlice_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::InstanceNormalization_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::GenerateDetection_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::MultilevelProposeROI_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ::MultilevelCropAndResize_TRT

Input filename: onnx-tensorrt/models/yolov3-10-with-plugin.onnx
ONNX IR version: 0.0.7
Opset version: 10
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:

[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::GridAnchor_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::NMS_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Reorg_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Region_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::PriorBox_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Normalize_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::RPROI_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::BatchedNMS_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::FlattenConcat_TRT
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::CropAndResize
[02/04/2021-21:07:19] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Proposal

[02/04/2021-18:21:56] [W] [TRT] /workspace/onnx-tensorrt/onnx2trt_utils.cpp:235: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/04/2021-18:21:56] [W] [TRT] /workspace/onnx-tensorrt/onnx2trt_utils.cpp:261: One or more weights outside the range of INT32 was clamped
[02/04/2021-18:21:56] [W] [TRT] /workspace/onnx-tensorrt/onnx2trt_utils.cpp:261: One or more weights outside the range of INT32 was clamped
While parsing node number 467 [BatchedNMS_TRT]:
ERROR: /workspace/onnx-tensorrt/ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: BatchedNMS_TRT
[02/04/2021-18:21:56] [E] Failed to parse onnx file
[02/04/2021-18:21:56] [E] Parsing model failed
[02/04/2021-18:21:56] [E] Engine creation failed
[02/04/2021-18:21:56] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=onnx-tensorrt/models/yolov3-10-with-plugin.onnx --saveEngine=/workspace/onnx-tensorrt/models/optimized_yolov3.trt --int8 --verbose

It seems that the custom importer for BatchedNMS_TRT is still missing from the parser, even though the plugin creator itself registers successfully (see the verbose log above). What about the solution proposed by @pranavm-nvidia and pointed to above: adding a custom importer for BatchedNMS_TRT in builtin_op_importers.cpp with TRT 7.0.0?

Maybe that’s a solution.

If you go this way, the InstanceNormalization importer in onnx-tensorrt/builtin_op_importers.cpp at main · onnx/onnx-tensorrt · GitHub can serve as a reference.
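
For illustration, here is a rough, untested sketch of what such a block in builtin_op_importers.cpp might look like, modeled on that InstanceNormalization importer. The attribute names are assumptions and must match whatever you attached to the node with ONNX GraphSurgeon; the PluginField names are the ones the BatchedNMS plugin creator expects:

    DEFINE_BUILTIN_OP_IMPORTER(BatchedNMS_TRT)
    {
        // Gather the plugin inputs (boxes, scores) as TRT tensors.
        std::vector<nvinfer1::ITensor*> tensors;
        for (auto& input : inputs)
        {
            tensors.push_back(&convertToTensor(input, ctx));
        }

        // Read the attributes that ONNX GraphSurgeon attached to the node
        // (names assumed; adjust them to match your modified model).
        OnnxAttrs attrs(node);
        int shareLocation = attrs.get<int>("shareLocation");
        int backgroundLabelId = attrs.get<int>("backgroundLabelId");
        int numClasses = attrs.get<int>("numClasses");
        int topK = attrs.get<int>("topK");
        int keepTopK = attrs.get<int>("keepTopK");
        float scoreThreshold = attrs.get<float>("scoreThreshold");
        float iouThreshold = attrs.get<float>("iouThreshold");
        int isNormalized = attrs.get<int>("isNormalized");

        // Repackage them as the PluginFields the plugin creator expects.
        std::vector<nvinfer1::PluginField> fields;
        fields.emplace_back("shareLocation", &shareLocation, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("backgroundLabelId", &backgroundLabelId, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("numClasses", &numClasses, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("topK", &topK, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("keepTopK", &keepTopK, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("scoreThreshold", &scoreThreshold, nvinfer1::PluginFieldType::kFLOAT32, 1);
        fields.emplace_back("iouThreshold", &iouThreshold, nvinfer1::PluginFieldType::kFLOAT32, 1);
        fields.emplace_back("isNormalized", &isNormalized, nvinfer1::PluginFieldType::kINT32, 1);

        // Look up the creator registered by libnvinfer_plugin and build the layer.
        auto* creator = getPluginRegistry()->getPluginCreator("BatchedNMS_TRT", "1");
        ASSERT(creator != nullptr, ErrorCode::kUNSUPPORTED_NODE);

        nvinfer1::PluginFieldCollection fc{static_cast<int>(fields.size()), fields.data()};
        nvinfer1::IPluginV2* plugin = creator->createPlugin(node.name().c_str(), &fc);
        ASSERT(plugin != nullptr, ErrorCode::kUNSUPPORTED_NODE);

        RETURN_ALL_OUTPUTS(
            ctx->network()->addPluginV2(tensors.data(), static_cast<int>(tensors.size()), *plugin));
    }

The DEFINE_BUILTIN_OP_IMPORTER macro takes care of registering the importer; after adding the block you would rebuild the onnx-tensorrt parser and make sure trtexec picks up the rebuilt libnvonnxparser.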

But there are many other plugins that are not registered in builtin_op_importers.cpp, so I’ll check how they work and get back to you here.

Hi @mchi, I need the code snippet to register BatchedNMS_TRT in builtin_op_importers.cpp in the same way as the InstanceNormalization importer you referenced above. According to @pranavm-nvidia, the generic plugin importer was introduced in 7.1, so with TRT 7.1 and later there is no need to modify builtin_op_importers.cpp at all (for any plugin, not just NMS), but I am limited to TRT 7.0.0 since DS-Triton only supports this version, as you confirmed here.

Alternatively, is there any other method to optimize object detection models with custom NMS ops using TRT 7.0.0, for later deployment with DS-Triton?
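
For example, would something along these lines work: export the ONNX model without the NMS node, parse it normally, and then append the BatchedNMS_TRT plugin to the parsed network through the C++ builder API before building the INT8 engine? A rough, untested sketch (the function name, the field values, and the assumption that the parsed model exposes exactly two outputs, boxes then scores, are all mine):

    #include <vector>
    #include "NvInfer.h"
    #include "NvInferPlugin.h"

    // Appends a BatchedNMS_TRT layer to an already-parsed network (sketch).
    bool appendBatchedNMS(nvinfer1::INetworkDefinition* network)
    {
        auto* creator = getPluginRegistry()->getPluginCreator("BatchedNMS_TRT", "1");
        if (creator == nullptr)
            return false;

        // Illustrative values only; tune them to the model.
        int shareLocation = 1, backgroundLabelId = -1, numClasses = 80;
        int topK = 1000, keepTopK = 100, isNormalized = 1;
        float scoreThreshold = 0.25f, iouThreshold = 0.5f;

        std::vector<nvinfer1::PluginField> fields;
        fields.emplace_back("shareLocation", &shareLocation, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("backgroundLabelId", &backgroundLabelId, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("numClasses", &numClasses, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("topK", &topK, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("keepTopK", &keepTopK, nvinfer1::PluginFieldType::kINT32, 1);
        fields.emplace_back("scoreThreshold", &scoreThreshold, nvinfer1::PluginFieldType::kFLOAT32, 1);
        fields.emplace_back("iouThreshold", &iouThreshold, nvinfer1::PluginFieldType::kFLOAT32, 1);
        fields.emplace_back("isNormalized", &isNormalized, nvinfer1::PluginFieldType::kINT32, 1);

        nvinfer1::PluginFieldCollection fc{static_cast<int>(fields.size()), fields.data()};
        nvinfer1::IPluginV2* nms = creator->createPlugin("batched_nms", &fc);
        if (nms == nullptr)
            return false;

        // Feed the detector's box/score outputs into the plugin layer.
        nvinfer1::ITensor* pluginInputs[] = {network->getOutput(0), network->getOutput(1)};
        auto* nmsLayer = network->addPluginV2(pluginInputs, 2, *nms);

        // Expose the NMS results instead of the raw tensors. Note that after
        // the first unmarkOutput(), the former output 1 becomes output 0.
        network->unmarkOutput(*network->getOutput(0));
        network->unmarkOutput(*network->getOutput(0));
        for (int i = 0; i < nmsLayer->getNbOutputs(); ++i)
        {
            network->markOutput(*nmsLayer->getOutput(i));
        }
        return true;
    }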

Your help is much appreciated!

Or you can wait for the next DS release, which is coming very soon (maybe in the next one or two weeks). The new DS will support TRT 7.2.x on the dGPU platform.

Thanks!

Hi @mchi, do you have any update on when the next DS-Triton release will be available?

On another note, I was revisiting the instructions, and maybe it didn’t work because the prebuilt libnvinfer_plugin.so available for download is specifically for Jetson rather than x86? Both README files say it is for Jetson. I will build the TRT OSS plugin lib myself and try again.

Hi @virsg,
DS 5.1 is live.

Thanks so much @mchi, I am working with it now. I have some performance issues deploying the model with DS-Triton 5.1; where should I post them?

If it’s DS related, you could create a new topic in this forum.

Thanks!


Hi @mchi. Yes, it is DS related. Could you please take a look at the new issue here?