DeepStream image quality

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

The hardware is an RTX 3090, and 100 video streams are pulled simultaneously for YOLO model inference plus a two-level classification model. The raw image data obtained shows blur and ghosting. How can this problem be solved?
Hardware Platform: GPU
deepstream-app version 6.3.0
DeepStreamSDK 6.3.0
CUDA Driver Version: 12.4
CUDA Runtime Version: 12.1
TensorRT Version: 8.6
cuDNN Version: 8.9
libNVWarp360 Version: 2.0.1d3

What kind of streams? Local video files or live streams? Which video format (H264, H265, …)? What are the resolution and frame rate?

How did you view the images?

Can you share your complete pipeline and configurations with us?

Live streams
1080p
25fps
Format: H264, H265
Each frame is saved with a probe function using CuPy (a sketch of the probe is below).
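The probe looks roughly like this (a simplified sketch modeled on the deepstream-imagedata-multistream-cupy sample in deepstream_python_apps; it assumes the frames are converted to RGBA upstream of the probe, and the exact pyds signatures should be checked against the DeepStream 6.3 Python bindings):

```python
import ctypes
import cupy as cp
import pyds
from gi.repository import Gst

def save_frame_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Wrap the GPU-resident RGBA frame in a CuPy array without copying.
        data_type, shape, strides, dataptr, size = pyds.get_nvds_buf_surface_gpu(
            hash(gst_buffer), frame_meta.batch_id)
        ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
        ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
        unowned = cp.cuda.UnownedMemory(
            ctypes.pythonapi.PyCapsule_GetPointer(dataptr, None), size, None)
        frame_gpu = cp.ndarray(shape=shape, dtype=data_type, strides=strides,
                               memptr=cp.cuda.MemoryPointer(unowned, 0), order='C')

        # ... save frame_gpu here (e.g. cp.asnumpy(frame_gpu) and write to disk) ...

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```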
New question: choosing frame skipping for detection may result in missed targets, but when detecting at the full frame rate this does not happen.

According to the Video Encode and Decode GPU Support Matrix | NVIDIA Developer and Video Codec SDK | NVIDIA Developer, the GeForce RTX 3090 can only support 18 x 1080p@30fps H264 streams OR 40 x 1080p@30fps HEVC streams. 100 x 1080p@25fps H264/H265 streams overload the decoder.

With the decoder overloaded, the RTSP client is too slow to consume the received packets, so network packet loss happens, and the video is broken by the missing data.

Have you used nvtracker after nvinfer to keep the output bbox?

Thanks. If I set drop-frame-interval=5, can it meet the requirement?

Yes.

According to the Video Encode and Decode GPU Support Matrix | NVIDIA Developer and Video Codec SDK | NVIDIA Developer, how do I calculate the resolution and number of channels that can be pulled?

Did you set the drop frame interval on the video decoder?

For nvtracker, please refer to Deepstream Tracker FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums for tuning. Maybe you can try setting “probationAge” to a very small value (maybe zero).
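For example, in the NvDCF tracker configuration file (the field lives under TargetManagement, as in the DS 6.3 config_tracker_NvDCF_perf.yml; the other values here are only illustrative):

```yaml
TargetManagement:
  probationAge: 0           # report targets immediately, without a probation period
  maxShadowTrackingAge: 30  # how long a lost target is kept in shadow tracking
  earlyTerminationAge: 1
```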

There is video encoding and decoding performance data in Video Codec SDK | NVIDIA Developer, such as the NVDEC performance graphs.

Only some data center GPUs are in the graph, but the NVDEC hardware is the same as in other GPUs (e.g. GeForce GPUs).

Take the GeForce RTX 3090 as the example: you can get the NVDEC family and generation information in the Video Encode and Decode GPU Support Matrix | NVIDIA Developer. The RTX 3090 has one Ampere 5th-generation NVDEC core, while the A10 GPU has two Ampere 5th-generation NVDEC cores. So the RTX 3090's NVDEC capability is half that of the A10 GPU.

Back in Video Codec SDK | NVIDIA Developer, you can find that the A10 supports 37 x 1080p@30fps H264 streams OR 81 x 1080p@30fps HEVC streams. So the RTX 3090 can support 37/2 ≈ 18 x 1080p@30fps H264 streams OR 81/2 ≈ 40 x 1080p@30fps HEVC streams.
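Written out as a quick calculation (illustrative only; the stream counts come from the Video Codec SDK page):

```python
# Scale the published A10 decode numbers down to the RTX 3090, which has
# one NVDEC core of the same generation instead of two.
A10_NVDEC_CORES = 2
RTX3090_NVDEC_CORES = 1

a10_h264_streams = 37   # 1080p@30fps H264 streams on the A10
a10_hevc_streams = 81   # 1080p@30fps HEVC streams on the A10

scale = RTX3090_NVDEC_CORES / A10_NVDEC_CORES
print(int(a10_h264_streams * scale))   # 18 H264 streams
print(int(a10_hevc_streams * scale))   # 40 HEVC streams
```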


RTX 3090: 18 × 30 = 540 decoded frames per second.
With drop-frame-interval=5 and 100 live streams: 100 × (25/5) = 500 frames per second.

Thank you very much

With it set to 0, the target is no longer lost.

The “nvurisrcbin” drop-frame-interval property will not reduce the NVDEC loading; the H264/HEVC compression algorithm requires all frames to be decoded in that case. “dec-skip-frames” may help a little: if your video has a proper key frame interval, you can try the “2 (decode_key): decode key frames” value. The actual performance is decided by your input streams’ key frame intervals.
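A minimal sketch of setting the property from Python (assuming the sources are created manually with nvurisrcbin rather than through deepstream-app; the URI is a placeholder):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Ask the decoder to decode only key frames (2 = decode_key, as described above).
src = Gst.ElementFactory.make("nvurisrcbin", "source-0")
src.set_property("uri", "rtsp://camera-host/stream")  # placeholder URI
src.set_property("dec-skip-frames", 2)
```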

The target is not lost. I will set dec-skip-frames=2 to reduce the NVDEC loading. Thank you!!!
If I come to Hangzhou, I will take you to eat West Lake Fish in Vinegar Sauce
