Detector parameter tuning

• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version DS 6.0
• JetPack Version (valid for Jetson only) 4.6 (L4T 32.6.1)
• TensorRT Version 8.0.1
• Issue Type: question

Hello!

I wanted to ask about detector parameter tuning in DeepStream.
I train a .pt model and find the optimal thresholds (confidence threshold and IoU, for example), then I export it to a static .onnx graph (with these values baked in) and then to a .trt engine, also with the Efficient NMS plug-in.

My question is: which parameters should I specify in the .txt config of the DeepStream nvinfer element (for example nms-iou-threshold, confidence-threshold, cluster-mode…), and how do they affect real-time inference of the model? As I understand it, these are already specified in the architecture of the model during export.
Why do I need to specify these arguments again in DeepStream?
Also, I am using the customParserEfficientNMS function to map the output of the Efficient NMS plug-in to the DeepStream buffer format. Does this function take the confidence from the architecture or from the .txt config file? If I want to change the NMS logic, can I do it from there?

Thank you very much

For questions 1 & 2:

These are the post-processing parameters; you can refer to this link:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#id3

Some models require post-processing similar to NMS/DBSCAN in DeepStream, so the parameters of the algorithm need to be specified.

If your model does not require post-processing, you can set cluster-mode to 4 in the configuration file.

It’s open source; you can refer to the DetectPostprocessor::fillDetectionOutput function in /opt/nvidia/deepstream/deepstream-7.0/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp

You can set cluster-mode to 4 and then add NMS in your custom post-processing, such as:

parse-bbox-instance-mask-func-name=NvDsInferParseCustomMrcnnTLTV2
# do nms at libnvds_infercustomparser.so
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
#no cluster
cluster-mode=4

You can get more information from the code below:
/opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream-mrcnn-test
/opt/nvidia/deepstream/deepstream-7.0/sources/libs/nvdsinfer_customparser
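The "add NMS in your custom post-processing" step above boils down to a greedy IoU-based suppression over the parsed boxes. A minimal illustrative sketch follows; the types and function names are simplified stand-ins, not the actual code in libnvds_infercustomparser.so (which works on NvDsInferObjectDetectionInfo):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Simplified stand-in for a parsed detection (illustrative only).
struct Box {
    float x1, y1, x2, y2;  // corner coordinates
    float score;           // detection confidence
};

// Intersection-over-union of two axis-aligned boxes.
static float iou(const Box &a, const Box &b) {
    float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
    float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
    float iw = std::max(0.0f, ix2 - ix1), ih = std::max(0.0f, iy2 - iy1);
    float inter = iw * ih;
    float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
    float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
    return inter <= 0.0f ? 0.0f : inter / (areaA + areaB - inter);
}

// Greedy NMS: keep the highest-scoring box, drop any remaining box
// whose IoU with an already-kept box exceeds iouThresh.
std::vector<Box> nms(std::vector<Box> boxes, float iouThresh) {
    std::sort(boxes.begin(), boxes.end(),
              [](const Box &a, const Box &b) { return a.score > b.score; });
    std::vector<Box> kept;
    for (const Box &candidate : boxes) {
        bool suppressed = false;
        for (const Box &k : kept) {
            if (iou(candidate, k) > iouThresh) { suppressed = true; break; }
        }
        if (!suppressed) kept.push_back(candidate);
    }
    return kept;
}
```

In a real custom parser this loop would run over the object list the parser builds, after confidence filtering and before handing the detections back to nvinfer.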

Thank you very much for your immediate response.

So just to make it clear.

  1. The Efficient NMS plug-in is part of the YOLOv7 (.trt) architecture and is not part of the post-processing. This model's only post-processing is the CustomNMSparser, right?

  2. My model (YOLOv7) does not require post-processing, and if I set cluster-mode = 4 it will take all the parameters from its architecture. What happens if I don't set it to 4?

  3. Also, for example, if I want to set a specific threshold per function, can I do it from the config or not?

Thank you very much

Yes.

If you don't set it to 4, nvinfer will run the code I mentioned above to filter the bboxes.

If you mean per class, the answer is yes.
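Per-class thresholds are normally set in the nvinfer config through class-attrs groups. A minimal sketch (the class id 2 is just a placeholder; note that, as explained later in this thread, these pre-cluster thresholds are skipped when cluster-mode=4):

```ini
[class-attrs-all]
# default threshold for every class
pre-cluster-threshold=0.4

[class-attrs-2]
# override for class id 2 only (hypothetical class id)
pre-cluster-threshold=0.6
```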

Here is the community implementation of YOLOv7:

So if I declare cluster-mode = 4, for example, and let's say my model is trained and exported to .trt with a threshold of 0.4:
if I then set pre-cluster-threshold = 0.3, or a per-class threshold to something below or higher than 0.4, which threshold will it take?
Is the answer the same for IoU?

If you set cluster-mode to 4, pre-cluster-threshold will no longer take effect, because the nvinfer plugin will skip the DBSCAN/NMS step.
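In other words, with cluster-mode=4 any confidence filtering has to happen either inside the engine (the Efficient NMS plug-in's own threshold) or inside the custom parser. A minimal sketch of the latter, using a simplified stand-in type rather than the real NvDsInferObjectDetectionInfo:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for NvDsInferObjectDetectionInfo (illustrative only).
struct Detection {
    float left, top, width, height;
    float confidence;
    unsigned int classId;
};

// Hypothetical helper: drop detections below a threshold, mimicking
// what a custom parser can do itself when cluster-mode=4 disables the
// plugin's own pre-cluster filtering.
std::vector<Detection> filterByConfidence(const std::vector<Detection> &in,
                                          float threshold) {
    std::vector<Detection> out;
    for (const Detection &d : in) {
        if (d.confidence >= threshold)
            out.push_back(d);
    }
    return out;
}
```

So in this setup the engine's exported threshold (0.4 in the example above) is the one that actually applies, unless the custom parser adds its own filter like this.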

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.