We are using DeepStream 5.0 and ran 30 channels with ResNet-10 as the primary inference engine. At startup, CPU memory usage is around 4 GB; over time it keeps increasing until it consumes the maximum RAM and the process gets killed.
Is there any way we can fix this? Do we need to change any config?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name: which plugin or which sample application, and the function description.)
• Hardware Platform (Jetson / GPU): Jetson Xavier AGX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4.1
• TensorRT Version: NOT_INSTALLED
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue?
• Requirement details:
We have mapped 32 channels of RTSP. Initially the RAM consumption was 6.4 GB, but after several hours it filled up completely and the app got killed.
We have changed live-source to 1 in [streammux] and applied the patch suggested in the SDK FAQ.
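For reference, the streammux change described above looks like this in a deepstream-app style config file (the values other than live-source are illustrative, not our actual settings):

```ini
[streammux]
# Required for live sources such as RTSP
live-source=1
# Illustrative values; match batch-size to the number of input streams
batch-size=32
batched-push-timeout=40000
width=1920
height=1080
```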
Unfortunately, it did not solve the problem; we still see the memory leak.
Please let me know if you need more information.
This is the command we tried:
python3 deepstrea-test01.py rtsp://admin:int12345@10.15.12.63 rtsp://admin:int12345@10.15.12.125 rtsp://admin:int12345@10.15.12.126 rtsp://admin:int12345@10.15.12.65 rtsp://admin:int12345@10.15.12.136 rtsp://admin:int12345@10.15.12.119 rtsp://admin:int12345@10.15.12.115 frames
Please suggest if we need to do any changes in config files
In your tiler_sink_pad_buffer_probe, OpenCV is used. This is a time-consuming operation (OpenCV is slow) and it blocks the frame buffer, so the whole pipeline stalls while OpenCV is working and the FPS keeps dropping.
Never do any time-consuming operation in a probe function.
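A common way to follow this advice is to hand heavy work off to a background thread and return from the probe immediately. The sketch below shows that general pattern with only the Python standard library; the names (`buffer_probe`, the doubling "work") are illustrative placeholders, not DeepStream or GStreamer API:

```python
import queue
import threading

# Bounded queue: if the worker falls behind, we drop work instead of
# letting the streaming thread block on a full queue.
work_queue = queue.Queue(maxsize=64)
results = []

def worker():
    # Runs outside the streaming thread; heavy processing
    # (e.g. cv2.imwrite of a copied frame) would go here.
    while True:
        item = work_queue.get()
        if item is None:  # sentinel to stop the worker
            work_queue.task_done()
            break
        results.append(item * 2)  # placeholder for the expensive step
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def buffer_probe(frame_id):
    # In a real pad probe: copy only the data you need from the buffer,
    # enqueue it, then return Gst.PadProbeReturn.OK right away.
    try:
        work_queue.put_nowait(frame_id)
    except queue.Full:
        pass  # drop this frame's work rather than stall the pipeline
```

The key design choice is that the probe only copies and enqueues; everything slow happens on the worker thread, so the pipeline's streaming thread is never blocked.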
We tried deepstream-test3.py with RTSP streams. We have not used any OpenCV, but the frame buffer is still blocked.
Python sample code: deepstream_test_3.py (14.4 KB)
The maximum number of streams may change case by case; it depends on the total load of the pipeline. Different inference models and different pipelines will have different performance.
If one of the RTSP streams stops streaming, the others will carry on.