Can I connect the BatchedNMSPlugin to a deserialized TensorRT engine?

* Name:           NVIDIA Jetson
* Type:           AGX Xavier
* Jetpack:        UNKNOWN [L4T 32.2.2] (JetPack 4.3 DP)
* GPU-Arch:       7.2
* Libraries:
  * CUDA: 10.0.326
  * cuDNN: 7.6.3.28-1+cuda10.0
  * TensorRT: 6.0.1.5-1+cuda10.0
  * VisionWorks: NOT_INSTALLED
  * OpenCV: 4.0.0 compiled CUDA: YES

Hi,

I want to connect the BatchedNMSPlugin to my detector's TensorRT engine.

I have the detector as an ONNX file, which I convert to a TRT engine file using onnx2trt.

How can I connect the NMS plugin to the built engine?

If that isn't possible, can I instead connect the NMS plugin to the network before building the engine with onnx2trt?

Thanks.

Hi,

There’s an example of using an NMS plugin in C++ in our open source samples here: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffSSD

There is also a Python version of the sample that ships with the TensorRT installation, in tensorrt/samples/python/uff_ssd.

You can also refer to the BatchedNMSPlugin in the documentation for more information: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/_nv_infer_plugin_8h.html#aa2f0a45df014ea2e3810f9df821d0514

Or perhaps this repo can provide some helpful information: https://github.com/dmikushin/detectrt/tree/master/src/plugin/nmsPlugin
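For reference, attaching the plugin to the parsed network before building the engine might look roughly like the sketch below. This is untested and the output indices, topK/keepTopK values, and function name are illustrative assumptions, not values from your model:

```cpp
#include <NvInfer.h>
#include <NvInferPlugin.h>  // createBatchedNMSPlugin, NMSParameters

// Sketch: wire the BatchedNMSPlugin into a network that was populated
// by the ONNX parser, then re-mark the network outputs so the engine
// exposes the NMS results instead of the raw detector tensors.
void attachBatchedNMS(nvinfer1::INetworkDefinition* network)
{
    nvinfer1::plugin::NMSParameters params{};
    params.shareLocation     = true;
    params.backgroundLabelId = -1;
    params.numClasses        = 1;
    params.topK              = 512;   // illustrative; tune for your model
    params.keepTopK          = 100;   // illustrative; tune for your model
    params.scoreThreshold    = 0.1f;
    params.iouThreshold      = 0.5f;
    params.isNormalized      = false;

    nvinfer1::IPluginV2* nmsPlugin = createBatchedNMSPlugin(params);

    // Feed the detector's raw box/score outputs into the plugin...
    nvinfer1::ITensor* inputs[] = {network->getOutput(7),   // bboxes
                                   network->getOutput(8)};  // scores
    nvinfer1::IPluginV2Layer* nmsLayer =
        network->addPluginV2(inputs, 2, *nmsPlugin);

    // ...then expose the plugin's outputs in place of the raw ones.
    network->unmarkOutput(*inputs[0]);
    network->unmarkOutput(*inputs[1]);
    for (int i = 0; i < nmsLayer->getNbOutputs(); ++i)
        network->markOutput(*nmsLayer->getOutput(i));
}
```

After this, build the engine from the modified network as usual; the serialized engine then contains the NMS stage, so nothing needs to be attached after deserialization.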

Thanks,
NVIDIA Enterprise Support

Hi,

Is there any example for batchedNMSPlugin?

I can't find one anywhere.

I tried the following, but it isn't working.

[Sample Code]

nvinfer1::plugin::NMSParameters nms_parameter;
nms_parameter.shareLocation = true;
nms_parameter.backgroundLabelId = -1;
nms_parameter.numClasses = 1;
nms_parameter.scoreThreshold = 0.1f;
nms_parameter.iouThreshold = 0.5f;
nms_parameter.isNormalized = false;

auto nms_plugin = createBatchedNMSPlugin(nms_parameter);

std::vector<nvinfer1::ITensor*> nms_inputs(2);
// trt_network : my original network
nms_inputs[0] = trt_network->getOutput(7);   // bboxes output for nms
nms_inputs[1] = trt_network->getOutput(8);   // scores output for nms

trt_network->addPluginV2(nms_inputs.data(), 2, *nms_plugin);

[Error Log]

bbox: 4: 1, 1393, 1, 4, // dimensions of bboxes nms input
score: 3: 1, 1393, 1,    // dimensions of scores nms input

[2019-10-29 01:26:16     BUG] Assertion failed: dims.d[i] >= 1
../builder/cudnnBuilderGraph.cpp:605
Aborting...

[2019-10-29 01:26:16   ERROR] ../builder/cudnnBuilderGraph.cpp (605) - Assertion Error in checkDimsSanity: 0 (dims.d[i] >= 1)
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to create object
Aborted (core dumped)

I think there was just a fix to the batchedNMS OSS plugin on GitHub regarding dimensions; maybe that will solve your problem. See here: https://github.com/NVIDIA/TensorRT/pull/204
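If the fix in that PR alone doesn't help: my understanding (an assumption, worth checking against the plugin documentation) is that with an implicit-batch network the plugin expects per-sample boxes of shape [numBoxes, 1, 4] (with shareLocation = true) and scores of shape [numBoxes, numClasses], whereas your log shows an extra singleton dimension. An IShuffleLayer in front of the plugin could reshape the tensors; an untested sketch, with the helper name and target dimensions as illustrative assumptions:

```cpp
#include <NvInfer.h>

// Hypothetical helper: reshape a detector output before it reaches the
// BatchedNMS plugin, e.g. to drop the extra singleton dimension seen in
// the log ("1, 1393, 1, 4" for boxes, "1, 1393, 1" for scores).
nvinfer1::ITensor* reshapeForNMS(nvinfer1::INetworkDefinition* network,
                                 nvinfer1::ITensor* tensor,
                                 nvinfer1::Dims newDims)
{
    nvinfer1::IShuffleLayer* shuffle = network->addShuffle(*tensor);
    // e.g. Dims3{1393, 1, 4} for boxes, Dims2{1393, 1} for scores
    shuffle->setReshapeDimensions(newDims);
    return shuffle->getOutput(0);
}
```

The reshaped tensors would then be passed to addPluginV2 in place of the raw network outputs.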