Timing issue in Deepstream Infer Plugin

Please provide complete information as applicable to your setup.

• DeepStream Version 5.0
• Hardware Platform (Jetson / GPU) Jetson Nano
• JetPack Version (valid for Jetson only) 4.3
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Our C++ YOLOv5s network model, implemented with the NVIDIA TensorRT API, achieves almost the same object detection precision as the official Python YOLOv5s when we run it standalone on the NVIDIA Jetson Nano. However, the detection precision degrades when we integrate the same model into the NVIDIA DeepStream Infer plugin, following the way NVIDIA integrates the YOLOv3 model into that plugin.
I tuned some parameters in detector_config.txt, such as net-scale-factor, but saw no improvement.
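
For reference, net-scale-factor belongs to the [property] group of the nvinfer configuration file. The snippet below is only a hedged sketch of what such a detector_config.txt might look like for a YOLOv5s engine; the engine/label file names, class count, parser function name, and custom library name are assumptions for illustration, not the poster's actual configuration.

```
[property]
gpu-id=0
# 1/255.0 -- the usual normalization for YOLO-family models; this is the
# parameter the poster reports tuning
net-scale-factor=0.0039215697906911373
# assumed file names, not the poster's
model-engine-file=yolov5s.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# assumed COCO class count
num-detected-classes=80
gie-unique-id=1
# assumed names for a custom YOLOv5 output parser, analogous to the
# YOLOv3 sample the poster followed
parse-bbox-func-name=NvDsInferParseCustomYoloV5
custom-lib-path=libnvdsinfer_custom_impl_yolov5.so
```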

After further investigation, I found that the cause is a timing issue in the DeepStream Infer plugin.

If I add some code in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp and disable some code in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer.cpp, the detection precision of the YOLOv5s model running in the DeepStream Infer plugin is as good as what we get on a PC.
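
The poster does not share the actual patch, so the following is only a hedged, self-contained sketch of the kind of race a "timing issue" like this usually refers to: asynchronous pre-processing writing the network input buffer on one CUDA stream while inference reads it from another. The kernel, stream names, scale value, and the 3x640x640 input size are all assumptions for illustration; this is not DeepStream source code and not the poster's fix.

```cpp
// Hypothetical illustration only -- NOT the poster's patch and not DeepStream code.
// An asynchronous pre-processing kernel normalizes the frame into the network
// input buffer on one stream; if inference were enqueued on another stream
// without synchronization, it could read the buffer before the conversion
// finished, degrading detections non-deterministically.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void normalize(const unsigned char* src, float* dst, int n,
                          float scale)   // scale plays the role of net-scale-factor
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        dst[i] = src[i] * scale;
}

int main()
{
    const int n = 3 * 640 * 640;          // assumed YOLOv5s input size
    unsigned char* d_src;
    float* d_dst;
    cudaMalloc(&d_src, n);
    cudaMalloc(&d_dst, n * sizeof(float));
    cudaMemset(d_src, 128, n);            // dummy input data

    cudaStream_t preprocStream, inferStream;
    cudaStreamCreate(&preprocStream);
    cudaStreamCreate(&inferStream);

    // Pre-processing runs asynchronously on its own stream.
    normalize<<<(n + 255) / 256, 256, 0, preprocStream>>>(d_src, d_dst, n,
                                                          1.0f / 255.0f);

    // Without this synchronization, work enqueued on inferStream (e.g. a
    // TensorRT context->enqueueV2() reading d_dst) could start before the
    // normalize kernel has completed.
    cudaStreamSynchronize(preprocStream);

    // ... inference on inferStream would be issued here, reading d_dst ...

    cudaStreamDestroy(preprocStream);
    cudaStreamDestroy(inferStream);
    cudaFree(d_src);
    cudaFree(d_dst);
    printf("done\n");
    return 0;
}
```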

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Can you share more details?
