Hi,
I got the DeepStream 6.3 source code from the docker image nvcr.io/nvidia/deepstream:6.3-samples.
I found that gstnvinfer.cpp uses batch.release() to pass a raw GstNvInferBatch* to convert_batch_and_push_to_input_thread().
But convert_batch_and_push_to_input_thread() does not delete "batch" before returning FALSE.
Will this cause a memory leak?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
Thanks for your response.
The source code is from /opt/nvidia/deepstream/deepstream-6.3 in docker nvcr.io/nvidia/deepstream:6.3-samples.
I think the code logic itself may cause a memory leak.
It is not related to any specific hardware or software version.
After calling std::unique_ptr<T,Deleter>::release(), the caller is expected to delete the object itself.
I am okay with closing this post if this is not an issue. Thanks :)