• NVIDIA GPU Driver Version (valid for GPU only):
565.57.01
• Issue Type:
Question
• How to reproduce the issue?
I’m working on a DeepStream application based on the deepstream_app.c reference implementation. My setup includes:
Hardware: RTX 4090 with 6 RTSP camera streams.
Models Used:
Primary Model: YOLO for person detection.
Secondary Model 1: YOLO for face detection.
Secondary Model 2: ArcFace for face recognition.
Steps to Reproduce:
Configure the pipeline with the above models and streams.
Run the application with all 6 RTSP streams active.
Observe artifacts in the RTSP video streams during real-time processing when the pipeline is under load.
• Requirement Details:
The possible causes of video artifacts in this multi-camera, multi-model setup.
Recommended configurations (e.g., decoder settings, batching, or memory optimizations) to handle high workloads effectively.
Best practices for profiling and optimizing such a pipeline.
How many cameras can the RTX 4090 handle efficiently in a DeepStream pipeline when using multiple models (YOLO and ArcFace for face recognition) simultaneously?
I am working with DeepStream for a retail store setup, using multiple RTSP camera streams (1920x1080, bitrate 1280, and FPS 10) from Dahua cameras. Initially, everything was working fine, but after some time, I started observing artifacts in the video streams. The artifacts appear intermittently and seem to resolve after a short period, but then reappear. The issue persists despite normal performance at other times.
My configuration involves using streammux to handle multiple RTSP streams, YOLOv8 for object detection, and ArcFace for face recognition. The results are being streamed out using the RTSP output (sink type 4). Additionally, I’m using a GPU for inference.
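For reference, the RTSP output sink group in my config is roughly of this shape (the values here are illustrative, not my exact settings):

[sink0]
enable=1
# type 4 = RTSPStreaming
type=4
# codec 1 = H264, 2 = H265
codec=1
sync=0
bitrate=4000000
rtsp-port=8554
udp-port=5400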
I am wondering:
Could network latency or unstable bandwidth be causing these intermittent artifacts? How can I monitor or mitigate this?
How can I optimize memory usage to avoid these artifacts, given my GPU’s current memory usage of 4653 MiB?
Are there specific parameters I should adjust within the streammux configuration to prevent memory overflow or issues with multiple camera streams?
Can you suggest any profiling or debugging techniques that can help pinpoint the exact cause of these artifacts, especially considering I am using multiple sources in real-time?
This is the config file:
[application]
enable-perf-measurement=0
perf-measurement-interval-sec=10
#gie-kitti-output-dir=/home/jayadevice0001/Jayachandaran_AI/src/deepstream-app/tracker
Yes. If there is a network bandwidth issue, the lost packets will cause artifacts in the video.
There is no evidence to show that the memory usage is related to the artifacts.
Before you adjust any parameters, can you set “enable=1” in the [sink1] group to record the output video before it is sent over the network? You need to identify whether the packet loss happens on the input side or the output side.
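For reference, a file sink group in the deepstream-app config is typically of this shape (a sketch only; codec, bitrate, and output path are placeholders you should adjust):

[sink1]
enable=1
# type 3 = File
type=3
# container 1 = mp4, 2 = mkv
container=1
# codec 1 = H264, 2 = H265
codec=1
sync=0
bitrate=4000000
output-file=out.mp4
source-id=0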
Since there are 3 models in use in your pipeline, can you use the “nvidia-smi dmon” command to get a performance log while the pipeline is running?
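For example, something like the following can capture the log while the pipeline is running (these flags are one reasonable choice, not the only one):

# utilization, clocks, memory, and PCIe throughput every 5 seconds, with date/time, written to dmon.log
nvidia-smi dmon -s ucmt -d 5 -o DT -f dmon.log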
What are your camera videos' resolution, format, and FPS?
This issue is related to network packet loss. Running with gdb will not help.
Please set “enable=1” in the [sink1] group to record the output video before it is sent over the network. You need to identify whether the packet loss happens on the input side or the output side.
Thank you for the response. I understand that the issue might be related to network packet loss. Could you please guide me on how to systematically check and address this?
How to Identify Packet Loss on the Input and Output Sides:
What tools or methods can I use to monitor and verify packet loss at both the input side (RTSP streams from cameras) and the output side (RTSP output streaming)?
Are there specific logs or statistics in DeepStream I should be analyzing to determine where the loss occurs?
Suggestions for Rectifying Packet Loss:
Are there any network configurations, such as adjusting latency or drop-frame-interval in the DeepStream config, that could help mitigate packet loss?
Would increasing buffer sizes for RTSP streams help, and if so, which parameters should I modify in the DeepStream configuration?
Monitoring and Debugging Tools:
Can you recommend any tools (e.g., Wireshark, nvidia-smi, or DeepStream performance metrics) to help monitor network performance or diagnose packet loss effectively?
Is there a way to log dropped frames or measure network jitter using DeepStream?
Best Practices for Stable RTSP Streaming:
Should I adjust the bitrate or fps of the RTSP streams from the cameras to ensure stability? If yes, what values are optimal for a 1920x1080 resolution stream at 10 FPS?
Are there other network optimization practices, such as using a dedicated network switch or VLAN for the cameras and DeepStream application, that you recommend?
1280 bps is too low for 1080p video; the picture will look like a mosaic at such a bitrate. Please set a reasonable bitrate (for 1080p at 10 FPS, something on the order of a few Mbps is more typical).
Please set “enable=1” in the [sink1] group to record the output video before it is sent over the network. You need to identify whether the packet loss happens on the input side or the output side.
0:00:10.859489995 29747 0x7ce94401baa0 FIXME rtph265depay gstrtph265depay.c:1287:gst_rtp_h265_depay_process:<depay_elem3> Assuming DONL field is not present.
I am also observing artifacts in the output video stream, but I am unable to determine whether the issue is related to the stream itself, the DeepStream pipeline configuration, or the code. Could you please help me with the following?
Understanding the Message:
What does this debug message indicate?
Could this be related to missing or improperly formatted RTP headers, specifically the DONL (decoding order number) field?
Rectifying Artifacts:
What configurations or parameters should I check in the DeepStream pipeline to address this issue?
For example, should I modify enable-video-sink, adjust buffer settings, or use a specific codec-related configuration?
General Suggestions:
Are there any best practices for handling RTSP streams with H.265 encoding to avoid such artifacts in DeepStream?
Should I consider re-encoding or transcoding the stream before processing?
I would appreciate detailed guidance on how to resolve this issue effectively. Thank you!
Please set “enable=1” in the [sink1] group to record the output video before it is sent over the network. You will get an MP4 video file after you terminate the deepstream-app. You need to check the recorded video first.
I sincerely appreciate your support and the detailed steps you shared. They worked perfectly and helped me resolve the issue. Thanks so much for your time and assistance!
Hi! Sure, I’d be happy to share how I resolved the artifacts issue. Here’s what worked for me:
Camera Configuration Adjustments:
I reduced the FPS and bitrate settings for the cameras.
Changed the resolution to balance performance and quality.
Batch Size Optimization:
Since I used 6 cameras, I set the streammux batch size to 6.
For the two different secondary models, I set the batch size to double the number of cameras to ensure smooth processing (see the sketch at the end of this post).
Using the New nvstreammux Instead of the Default streammux:
I switched to the new nvstreammux and updated its properties through its config file, roughly along the lines of the sketch below.
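Here is roughly how those two pieces look, with illustrative values rather than my exact ones (assuming the deepstream-app group names [streammux], [secondary-gie0], and [secondary-gie1]; adjust to your own config):

[streammux]
# one batch slot per camera
batch-size=6

[secondary-gie0]
# double the number of streams for the secondary models
batch-size=12

[secondary-gie1]
batch-size=12

The new nvstreammux itself is enabled by exporting USE_NEW_NVSTREAMMUX=yes before launching the app, and its properties go into a separate mux config file, roughly of this shape (illustrative values, not a drop-in):

[property]
algorithm-type=1
batch-size=6
overall-max-fps-n=10
overall-max-fps-d=1
overall-min-fps-n=5
overall-min-fps-d=1
max-same-source-frames=1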