Memory usage increases over time with DeepStream 5.0 on AGX

Hi,

We are using DeepStream 5.0 and ran 30 channels with ResNet-10 as the primary inference engine. At start, CPU memory usage is around 4 GB; over time it increases until it consumes the maximum RAM and the process gets killed.

Is there any way we can fix this? Do we need to change any config?

Thanks
JR


Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

• Hardware Platform (Jetson / GPU): Jetson Xavier AGX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4.1
• TensorRT Version: NOT_INSTALLED
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Including which sample app is used, the configuration file content, the command line used and other details for reproducing)

deepstream-app -c source8.txt
source8.txt (13.6 KB)

Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Jetson_AGX

We have mapped 32 channels of RTSP; initially the RAM consumption was 6.4 GB, but after several hours it reached full capacity and the app got killed.

Please help us to find a solution.

Thanks
JR


Will this memory leak happen with Nvidia pre-trained model?

Since you are using rtsp sources, can you set “live-source” of “[streammux]” to 1?
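For reference, a minimal sketch of what that change could look like in a deepstream-app config; the batch size, resolution, and timeout shown here are illustrative assumptions, not values from the attached source8.txt:

```ini
# Hypothetical [streammux] group for a deepstream-app config.
# live-source=1 tells nvstreammux the inputs are live (RTSP),
# so it uses frame arrival time instead of buffer timestamps.
[streammux]
live-source=1
batch-size=30              # assumption: one slot per channel
batched-push-timeout=40000 # microseconds; assumption
width=1920                 # assumption
height=1080                # assumption
```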

There is also a memory leak fix we found: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums. Can you try it first?

Hi,

We have changed live-source to 1 in [streammux] and applied the patch as suggested in the SDK FAQ.
Unfortunately, it did not solve the problem; we still see the memory leak.
Please let me know if you need more information.
Please let me know if you need more information.

Please suggest

Thanks
JR

Will this memory leak happen with Nvidia pre-trained model?

Hi Fiona,

Yes, this is happening with the NVIDIA pre-trained model as well; we are using resnet10.caffemodel in INT8 precision mode.
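For anyone reproducing this, a hedged sketch of how INT8 mode is typically selected in the nvinfer config; the file paths below are assumptions based on the standard DeepStream sample layout, not the poster's actual config:

```ini
# Hypothetical excerpt from config_infer_primary.txt.
# network-mode: 0=FP32, 1=INT8, 2=FP16. INT8 needs a calibration file.
[property]
model-file=../../models/Primary_Detector/resnet10.caffemodel
proto-file=../../models/Primary_Detector/resnet10.prototxt
int8-calib-file=../../models/Primary_Detector/cal_trt.bin
network-mode=1
```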

Thanks
JR


Hi,

An update from my side:
the memory leak happens only with RTSP live streams. We tested with static video files, and it works fine.

Please suggest.

Thanks

After applying the patch and changing the source type to 4 (RTSP), the memory seems to be stable now.
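For others hitting the same issue, a hedged sketch of what the working source group could look like in a deepstream-app config (the URI and latency value are placeholders, not the poster's actual settings):

```ini
# Hypothetical [source0] group. type=4 selects the RTSP source path
# (uridecodebin-based), which is what resolved the leak here.
[source0]
enable=1
type=4
uri=rtsp://user:password@192.0.2.10/stream   # placeholder URI
num-sources=1
latency=200   # rtspsrc jitter-buffer latency in ms; assumption
```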

We have a new issue: when we run the Python application, we get only 8 FPS for 7 RTSP channels, and after some time it drops to 0.2 FPS.

config_infer_primary.txt (4.1 KB)
deepstream-test01.py (15.8 KB)

This is the command we tried :
python3 deepstream-test01.py rtsp://admin:int12345@10.15.12.63 rtsp://admin:int12345@10.15.12.125 rtsp://admin:int12345@10.15.12.126 rtsp://admin:int12345@10.15.12.65 rtsp://admin:int12345@10.15.12.136 rtsp://admin:int12345@10.15.12.119 rtsp://admin:int12345@10.15.12.115 frames

Please suggest if we need to make any changes in the config files.
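One way to pin down where the FPS drop happens is to count frames per stream inside a lightweight pad probe. Below is a GStreamer-free sketch of just the counting logic; in a real DeepStream probe the stream id would come from frame_meta.pad_index, and the FpsCounter class here is illustrative, not part of the sample apps:

```python
import time
from collections import defaultdict

class FpsCounter:
    """Per-stream FPS tracker. Call tick(stream_id) once per frame
    from a probe; call fps(stream_id) periodically to read the rate."""

    def __init__(self):
        self.start = defaultdict(lambda: None)  # first-tick timestamp
        self.frames = defaultdict(int)          # frames seen per stream

    def tick(self, stream_id, now=None):
        now = time.monotonic() if now is None else now
        if self.start[stream_id] is None:
            self.start[stream_id] = now
        self.frames[stream_id] += 1

    def fps(self, stream_id, now=None):
        now = time.monotonic() if now is None else now
        start = self.start[stream_id]
        if start is None or now <= start:
            return 0.0
        # Intervals between ticks: frames - 1, divided by elapsed time.
        return (self.frames[stream_id] - 1) / (now - start)
```

Logging these numbers per stream shows whether all channels slow down together (pipeline-wide stall) or one source drags the batch.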

Thanks
JR

In your tiler_sink_pad_buffer_probe, OpenCV is used. This is a time-consuming operation (OpenCV is slow) and it blocks the frame buffer, so the whole pipeline is blocked while OpenCV is working and the FPS keeps dropping.

Never do any time-consuming operation in a probe function.
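The usual pattern is to have the probe only enqueue lightweight data and return immediately, with a separate worker thread doing the slow work. A minimal, hypothetical sketch of that pattern in plain Python (no GStreamer; probe_callback stands in for the real pad-probe function, and heavy_processing stands in for the OpenCV work):

```python
import queue
import threading

work_q = queue.Queue(maxsize=100)  # bounded so memory stays flat
results = []

def heavy_processing(item):
    # Stand-in for slow OpenCV work (drawing, encoding, disk I/O).
    return item * 2

def worker():
    while True:
        item = work_q.get()
        if item is None:   # sentinel: shut down
            break
        results.append(heavy_processing(item))

threading.Thread(target=worker, daemon=True).start()

def probe_callback(frame_number):
    """What a pad probe should do: enqueue and return immediately.
    In a real probe you would then return Gst.PadProbeReturn.OK."""
    try:
        work_q.put_nowait(frame_number)
    except queue.Full:
        pass  # dropping a frame is better than stalling the pipeline
```

The key design choice is the bounded queue with put_nowait: if the worker falls behind, frames are dropped instead of back-pressuring the streaming thread.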

We tried deepstream-test3.py with RTSP streams. We have not used any OpenCV, but the frame buffer is still blocked.
Python Sample code :
deepstream_test_3.py (14.4 KB)

Config File :
dstest3_pgie_config.txt (3.3 KB)

python3 deepstream_test_3.py “rtsp://admin:uty12345@10.15.12.154” “rtsp://admin:uty12345@10.15.12.157” “rtsp://admin:blast12345@10.15.12.173” “rtsp://admin:blast12345@10.15.12.174” “rtsp://admin:blast12345@10.15.12.178” “rtsp://admin:blast12345@10.15.12.182” “rtsp://admin:blast12345@10.15.12.214” “rtsp://admin:blast12345@10.15.12.217” “rtsp://admin:blast12345@10.15.12.218” “rtsp://admin:blast12345@10.15.12.219” “rtsp://admin:anne12345@10.15.12.188” “rtsp://admin:anne12345@10.15.12.191” “rtsp://test:int12345@10.15.13.49” “rtsp://test:int12345@10.15.13.52” “rtsp://test:int12345@10.15.13.67” “rtsp://test:int12345@10.15.13.53” “rtsp://test:int12345@10.15.13.56” “rtsp://test:int12345@10.15.13.58” “rtsp://test:int12345@10.15.13.64” “rtsp://test:int12345@10.15.13.69” “rtsp://admin:ccm12345@10.15.12.209” “rtsp://admin:ppc12345@10.15.13.163” “rtsp://admin:brm12345@10.15.13.72” “rtsp://admin:brm12345@10.15.13.73” “rtsp://admin:brm12345@10.15.13.75” “rtsp://admin:brm12345@10.15.13.76” “rtsp://admin:brm12345@10.15.13.77” “rtsp://admin:brm12345@10.15.13.78” “rtsp://admin:brm12345@10.15.13.79” “rtsp://admin:brm12345@10.15.13.82” “rtsp://admin:line12345@10.15.13.179” “rtsp://admin:line12345@10.15.13.181” “rtsp://admin:line12345@10.15.13.183” “rtsp://admin:admin12345@10.15.12.3” “rtsp://admin:admin12345@10.15.12.4” “rtsp://admin:int12345@10.15.12.6” “rtsp://admin:int12345@10.15.12.7” “rtsp://admin:admin12345@10.15.12.10”

What causes the blocking of frames?

Thanks

38 RTSP streams are heavy for Xavier. Have you viewed the GPU usage while running this case?

What is the maximum number of RTSP channels recommended for a stable process on the Xavier AGX?

GPU usage is between 70 and 99 %.
If any one of the streams goes down, will the frame buffer wait for that stream and block all the other streams?

Thanks

The maximum number of streams may change case by case; it depends on the overall load of the pipeline. Different inference models and different pipelines will have different performance.
If one of the RTSP streams stops streaming, the others will go on.