We are using the Milestone VPS plugin with a DeepStream pipeline that includes several detection/classification models.
With the H264 codec everything works fine, but when we switch to the JPEG codec, GPU memory keeps increasing until it is exhausted and the pipeline crashes.
Can you please help with this?
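To confirm that GPU memory really grows monotonically under the JPEG source (rather than spiking once), one can poll `nvidia-smi` while the pipeline runs. A minimal sketch, assuming `nvidia-smi` is on the PATH and GPU 0 is the one used; the polling interval and sample count are placeholders:

```python
import subprocess
import time


def parse_mem_mib(csv_line: str) -> int:
    # nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits
    # emits one plain integer (MiB) per GPU line, e.g. "1234"
    return int(csv_line.strip())


def sample_gpu_memory(interval_s: float = 60.0, samples: int = 20):
    """Poll GPU 0 memory usage; steadily increasing readings suggest a leak."""
    readings = []
    for _ in range(samples):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        readings.append(parse_mem_mib(out.splitlines()[0]))
        time.sleep(interval_s)
    return readings
```

If the readings climb by a roughly constant amount per interval, that log is also useful evidence to attach to a bug report.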
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• Hardware Platform (Jetson / GPU) Nvidia T4 GPU
• DeepStream Version 5.0
• TensorRT Version 7.0
• NVIDIA GPU Driver Version (valid for GPU only) 460.56
We don’t face the issue when using the H264 codec from a live camera. When we switch to a DirectShow (video) source, the codec is JPEG, and after about 20 minutes the pipeline crashes due to memory exhaustion.
Can you try your case with the latest DeepStream version, 5.1?
You need to tell us how to reproduce the memory leak, e.g. with a simple DeepStream app that can run in our environment.
Yes, we are using DeepStream 5.1.
Our main issue is that the source of the stream uses the AVI/JPEG codec only and we can’t change it. However, in the pipeline (as demonstrated by the NVIDIA examples) we are using the H264 decoder; with H264 sources, everything works perfectly.
Is there any alternative to the H264 decoder? Is there a JPEG decoder we can use, and is any example available?
Thanks in advance.
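For reference, DeepStream’s GStreamer plugin set does include a hardware JPEG decode path: `nvv4l2decoder` accepts MJPEG input (via its `mjpeg=1` property), and `nvjpegdec` is also available. A minimal sketch of an AVI/MJPEG pipeline, assuming a local file named `sample.avi` and a discardable sink; element availability and property names should be checked against your DeepStream version with `gst-inspect-1.0`:

```shell
# Hypothetical sketch: decode an AVI/MJPEG source with the NVIDIA
# hardware JPEG decoder instead of the H264 decoder.
# sample.avi and fakesink are placeholders; replace fakesink with
# nvstreammux + the rest of your inference pipeline.
gst-launch-1.0 filesrc location=sample.avi ! avidemux ! \
    jpegparse ! nvv4l2decoder mjpeg=1 ! \
    nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! \
    fakesink
```

If the leak persists with this decoder, comparing its memory profile against the H264 path narrows the problem to the decode element rather than the rest of the pipeline.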
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.