Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
DeepStream 6.2 and DeepStream 6.3 (both affected)
• JetPack Version (valid for Jetson only)
5.1.1
• TensorRT Version
8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I am using DeepStream's instance-segmentation feature with a yolov5-seg model. In my custom post-processing I allocate a mask buffer for each object and attach it to the corresponding NvDsInferInstanceMaskInfo. I noticed a slow memory leak: when there are many input streams and many detected objects, the process's memory usage grows steadily. This happens on both 6.2 and 6.3, and I am not sure whether it is a DeepStream bug or a problem in my own program.

After a simple trace, the leak appears to occur before the mask is released in the InstanceSegmentPostprocessor::fillDetectionOutput function, specifically inside the fillUnclusteredOutput(output) call. If I free the mask before that call, the memory growth stops, but the target boxes are no longer drawn. I hope to get a response as soon as possible, thank you.