Yolov7 Post-processing - output-host-copy - Gst-nvinferserver

About this repo cuda-post-processing
The configurations in the link config_infer_primary_yoloV7.txt appear to be designed for the Gst-nvinfer plugin.

I am interested in using the Gst-nvinferserver plugin and would like to know the equivalent configuration for it. Specifically, I am looking for the equivalent parameter ‘disable-output-host-copy’ used in the Gst-nvinfer plugin.
Also, it would be very helpful if you could provide the same configuration as config_infer_primary_yoloV7.txt but for the Gst-nvinferserver (Triton-Server) plugin.
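For reference, Gst-nvinferserver does not use the nvinfer key-value .txt format; it reads a protobuf text config. A minimal sketch of what an equivalent might look like is below — the model name, repository path, dims, scale factor, and library path are assumptions for illustration, not a verified configuration:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "yolov7"            # assumed Triton model name
      version: -1
      model_repo {
        root: "./triton_model_repo"   # assumed repository path
        log_level: 2
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 1
    normalize {
      scale_factor: 0.0039215697906911373
    }
  }
  postprocess {
    labelfile_path: "labels.txt"
    detection {
      num_detected_classes: 80
      custom_parse_bbox_func: "NvDsInferParseCustomEfficientNMS"
    }
  }
  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream-6.1/lib/libnvds_infercustomparser.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
```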

Any help in providing this information would be greatly appreciated!

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type( questions, new requirements, bugs)

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

The nvinfer plugin is open source. disable-output-host-copy has no actual function because there is only a macro definition; you can check the code to verify.
Could you share your expected function?

You mentioned that I could use the function NvDsInferParseCustomYoloV7_cuda, which has CUDA acceleration for post-processing, as mentioned in this post: Deepstream / Triton Server - YOLOv7 - #7 by fanzh

However, I realized that the function NvDsInferParseCustomYoloV7_cuda is a parse function for the Gst-nvinfer plugin (and not Gst-nvinferserver, which I am using) and does not use EfficientNMS, and this caused a misunderstanding.

I am actually using the function NvDsInferParseCustomEfficientNMS, which is available in /opt/nvidia/deepstream/deepstream-6.1/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp.

During the model export to ONNX, the NMS, topk, and IoU settings are baked in directly via the EfficientNMS plugin, making it unnecessary to configure these parameters again at inference time. For this reason, I have to customize the outputLayersInfo handling in the function NvDsInferParseCustomEfficientNMS to make it work, as described in the post mentioned here: Deepstream / Triton Server - YOLOv7.

I understand that I still need to use the function NvDsInferParseCustomEfficientNMS because it best suits my needs.
It would be interesting if you could provide an official implementation of the NvDsInferParseCustomEfficientNMS parse function for YOLOv7, compatible with both the Gst-nvinferserver and Gst-nvinfer plugins, including the full proto configuration file.

This would be much simpler because the function NvDsInferParseCustomEfficientNMS only handles parsing, with no need to configure NMS or to hardcode the number of classes in the code as static const int NUM_CLASSES_YOLO = 80;. NvDsInferParseCustomEfficientNMS is much more streamlined and efficient.

Please disregard my previous post about disable-output-host-copy.
Is it possible to have an official version of NvDsInferParseCustomEfficientNMS for YOLOv7?

The parsing function depends on the model's outputs. For example, this sample config_infer_primary_yoloV7.txt has one output layer, while this sample deepstream-triton-server-yolov7 has four output layers; it is hard to make one function that works for all models.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.