Memory leak

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
DeepStream 6.2 and DeepStream 6.3 (both affected)
• JetPack Version (valid for Jetson only)
5.1.1
• TensorRT Version
8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I am using DeepStream's instance segmentation feature with a yolov5-seg model, passing each object's mask into NvDsInferInstanceMaskInfo in my post-processing, and I have noticed a slow memory leak: when there are many input streams and many detected objects, the occupied memory gradually increases. This happens on both 6.2 and 6.3, and I am not sure whether it is a DeepStream bug or a problem in my own program.

After a simple trace, the leak appears to occur before the masks are released in the InstanceSegmentPostprocessor::fillDetectionOutput function, specifically inside the fillUnclusteredOutput(output) call. If I release the masks before that function, the memory growth stops, but the target boxes are no longer drawn. I hope to get a response as soon as possible, thank you.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

You can first use valgrind to pinpoint the exact locations of the memory leaks.
Could you also share how much memory is being leaked?
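For reference, a typical valgrind invocation for this kind of investigation might look like the following. The binary name and config path here are placeholders; adjust them to your actual application. `--leak-check=full` reports each definitely-lost block with an allocation stack trace, which should point at the allocation site inside the postprocessor.

```shell
# Hypothetical command line -- substitute your own app and config file.
valgrind --leak-check=full \
         --show-leak-kinds=definite \
         --num-callers=30 \
         ./deepstream-app -c source_config.txt
```

Note that valgrind slows execution considerably, so run with a reduced number of streams and a short clip when reproducing the leak.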
