DeepStream Application Hanging After Extended Runtime: Possible Memory Leakage or Other Cause?

• Hardware Platform (Jetson / GPU):
GPU (NVIDIA GeForce RTX 4090)

• DeepStream Version:
7.1

• JetPack Version (valid for Jetson only):
N/A

• TensorRT Version:
10.7.0.23

• NVIDIA GPU Driver Version (valid for GPU only):
565.57.01

• Issue Type:
Question

• How to reproduce the issue?
I’m working on a DeepStream application based on the deepstream_app.c reference implementation. My setup includes:

  1. Hardware: RTX 4090 with 6 RTSP camera streams.
  2. Models Used:
  • Primary Model: YOLO for person detection.
  • Secondary Model 1: YOLO for face detection.
  • Secondary Model 2: ArcFace for face recognition.

The application runs with 6 RTSP sources: YOLO11m as the primary object detector, followed by face detection and ArcFace face recognition as secondary inference stages. After running for 2 to 3 hours, the application hangs.
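For reference, a setup like this maps onto the deepstream-app configuration roughly as sketched below. This is only an illustrative outline, not the actual configuration: the config file names, URIs, and gie-unique-id values are assumptions.

```ini
# Sketch of a deepstream-app config for 6 RTSP sources with one primary
# GIE and two secondary GIEs. All paths and names are placeholders.

[source0]
enable=1
# type=4 selects a URI (e.g. RTSP) source in deepstream-app
type=4
uri=rtsp://camera0.local/stream
num-sources=1
# ... [source1] through [source5] defined the same way ...

[streammux]
batch-size=6
batched-push-timeout=40000
width=1920
height=1080

[primary-gie]
enable=1
# YOLO11m person detector (hypothetical config file name)
config-file=config_infer_primary_yolo11m.txt
gie-unique-id=1

[secondary-gie0]
enable=1
# YOLO face detector, operating on persons found by the primary GIE
config-file=config_infer_secondary_yolo_face.txt
gie-unique-id=2
operate-on-gie-id=1

[secondary-gie1]
enable=1
# ArcFace recognition, operating on faces found by secondary-gie0
config-file=config_infer_secondary_arcface.txt
gie-unique-id=3
operate-on-gie-id=2
```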

Could this issue be caused by memory leakage, or is there another underlying reason for the system hanging after extended usage? How can I resolve this issue and ensure continuous, stable performance?

Some suggestions:

  1. If you suspect the hang is caused by a memory leak, use `top` to check the memory usage of the process, and use `valgrind` to detect whether there is a memory leak.
  2. Use `sudo gdb -p {pid}` to attach to the stuck process and view the stacks of its threads.
  3. Run with `GST_DEBUG=3 your_app` and check the output log for warnings and errors.
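The three checks above can be sketched as a shell snippet. The process name `deepstream-app` and the log file names are assumptions; substitute your own binary and paths.

```shell
# Diagnostic sketch for the three suggestions above.
# Replace PID with your application's pid, e.g. PID=$(pidof deepstream-app);
# the current shell ($$) is used here only so the snippet runs standalone.
PID=$$

# 1. Memory: sample resident memory periodically; a VmRSS that grows
#    without bound under constant load suggests a leak. For a detailed
#    leak report, rerun the app under valgrind (much slower):
#      valgrind --leak-check=full ./your_app 2> valgrind.log
grep VmRSS "/proc/$PID/status"

# 2. Hang: attach gdb and dump all thread stacks; a deadlock shows up
#    as threads parked in lock/wait calls:
#      sudo gdb -p "$PID" -batch -ex "thread apply all bt"

# 3. Logs: rerun with GStreamer warnings enabled and save the output:
#      GST_DEBUG=3 ./your_app 2> gst.log
```

Logging VmRSS on a timer (e.g. every minute via `watch` or a loop) makes a slow leak visible as a monotonically growing series, which distinguishes it from a deadlock, where memory stays flat while the pipeline stops.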