Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 5.1 and 6.2
• TensorRT Version - 7.2 and 8.1
• NVIDIA GPU Driver Version (valid for GPU only) - 530.30.02
I used the basic deepstream-app. All my RTSP streams have the following configuration:
RTSP camera configuration:
→ Video compression = H.264
→ Resolution = 4 MP / 2 MP
→ Frame rate = 25 FPS
→ Bit rate = 3072 kbps
→ Bit rate type = CBR
→ I-frame interval = 40
Server configuration:
→ CPU = AMD EPYC 7543 32-core processor
→ GPU = A5000, 24 GB RAM
→ RAM = 64 GB DDR4
When I add 6 RTSP streams with PGIE disabled, I get almost 24 FPS. But as soon as I start adding more RTSP streams, the FPS starts dropping. Here are some test cases:
- 6 RTSP streams we get 24 FPS.
- 8 RTSP streams we get 18 FPS.
- 10 RTSP streams we get 17 FPS.
- 15 RTSP streams we get 9 FPS.
- 20 RTSP streams we get 6 FPS.
- 30 RTSP streams we get 3 FPS.
When I enable the PGIE, the performance drops by about 2 FPS. For example, with 8 RTSP streams we get 16 FPS, and with 30 RTSP streams we get 1 FPS.
As the FPS drops, we start to observe glitched or delayed RTSP streams.
Why are we getting the drop in FPS even when PGIE is disabled?
How can I solve this problem? We have invested a lot in acquiring this system, and we are getting very low performance.
Have you set up a suitable batch-size for nvstreammux? You can refer to the link below first: https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/34
I have set the batch size as follows, for both streammux and primary-gie:
batch-size = 8
batch-size = 8
Here I have set the batch size to 8 because I'm working with 8 streams.
Yes. But you then add more than 8 streams. You can also try to set the batched-push-timeout for nvstreammux.
All the RTSP feeds I'm working with are set to 25 FPS, so the calculation for batched-push-timeout would be 1,000,000 / 25 = 40,000 (µs).
As I increase the streams, I'm making sure that the batch-size for streammux and primary-gie is the same as the number of streams.
The FPS in the app drops as I increase the streams, as mentioned above.
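For reference, the relevant sections of my config look like this (a sketch for 8 sources, matching the settings described above; live-source=1 is assumed here since these are live RTSP feeds):

```ini
[streammux]
# one batched-frame slot per source
batch-size=8
# assumed: sources are live RTSP feeds
live-source=1
# 1,000,000 us / 25 FPS = 40,000 us
batched-push-timeout=40000

[primary-gie]
# kept in step with the number of sources
batch-size=8
```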
OK. Could you attach your pipeline by referring to https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/10?
You can also observe the load situation of your device. Perhaps the device is overloaded, causing slow processing speed.
OK. I’ll check and update.
Do you still need support for this topic? Or should we close it? Thanks.
I still need support for this topic.
Overloading may be an issue, so I disabled all the detection models, but I still observed the same results.
So after disabling all the detection models, do you observe the load situation of your device again? Could you attach your deepstream-app config file?
And what is the specific model of your graphics card?
I'm using the default deepstream-app and the deepstream-test5 app for this testing.
In the config file I have disabled [message-converter] and [message-consumer0].
I have made sure that the batch-size of streammux is the same as the number of sources, with live-source= and batched-push-timeout=40000, as I am working with 25 FPS RTSP feeds.
I have an A5000 GPU.
I observed the following in some of the tests I have done.
I created multiple config files, limiting each config file to only 4 streams, because in the scenario observed above I was getting 24 FPS for 6 streams.
With two config files of 4 streams each, I got an average of 20 to 21 FPS for both configs when running them simultaneously.
Similarly, with 3 config files of 4 streams each, I get an average of 16 to 17 FPS.
With 4 config files of 4 streams each, I get an average of 15 to 16 FPS.
But I observed that even when the FPS is high, I am getting glitches in the deepstream app. I verified the streams side by side in VLC media player as well; there appear to be no glitches in VLC.
When I created one single config for all 16 streams, the average FPS I got was 8 to 9.
I am using nvidia-smi to verify the GPU RAM consumption and usage.
Is this the correct way to identify the GPU load situation?
Could you attach your config file to us? What is the resolution and bitrate of your camera video?
You can use nvidia-smi dmon to identify the GPU load situation and htop to identify the CPU load situation.
Could you attach your config file to us?
→ It is the same config file which is provided with the deepstream.
config.txt (3.5 KB)
What is the resolution and bitrate of your camera video?
→ Resolution: 1920x1080
bitrate: 4096 Kbps.
Output of nvidia-smi dmon while running on 12 streams.
OK. Could you also check the CPU loading with htop?
Could you change the type of sink0 to FakeSink and check whether the FPS still changes as the number of sources increases?
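For example, in the deepstream-app config the sink group can be switched like this (a sketch; in the sink group, type=1 selects FakeSink):

```ini
[sink0]
enable=1
# 1 = FakeSink (no rendering, isolates the rest of the pipeline)
type=1
sync=0
```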
You can also open the latency log of the deepstream-app by referring to Enable Latency measurement for deepstream sample apps.
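The latency log for the sample apps is enabled via environment variables before launching (a sketch; the config path is a placeholder for your own file):

```shell
# Enable frame-level and per-component latency logging
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
# then launch the app as usual, e.g.:
#   deepstream-app -c <your_config>.txt
echo "$NVDS_ENABLE_LATENCY_MEASUREMENT"
```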
Below are the images of htop.
Before restart of the server.
After restart of the server.
When sink0 is set to FakeSink, the FPS improved by 1 to 2 FPS. But I want the tiled display to be enabled.
Latency ranges from 150 to 200 ms.
What did you do before the restart of the server, and what did you do after? Just from the first image, your CPU load average exceeds 90%. The CPU may be your performance bottleneck.
And you can check which plugin has a higher latency.
I have done nothing different before and after the restart of the server; the same application is running.
I have created 3 configs with 4 streams each, i.e. 12 cameras in total, and I am getting an average of 16 to 17 FPS, but I'm observing glitches as well. CPU load is 70% and the nvidia-smi dmon output is similar to the above.
These are the issues I am facing:
- There is a drop in FPS and also heavy glitching in the output feed when all the streams are in a single config file.
- I do get some improvement in FPS when I split the streams across configs, but the glitching remains.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
I have tried this in my environment, and there is no FPS drop issue. Could you try it the way I did?
- Refer to Build rtsp server to build an RTSP server and generate 30 RTSP streams.
- Use the source30_1080p_dec_infer-resnet_tiled_display_int8.txt file in the samples/configs/deepstream-app dir, and modify it with the 30 RTSP sources.
- Run deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt
source30_1080p_dec_infer-resnet_tiled_display_int8.txt (13.5 KB)
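For the source groups, the modification amounts to something like this (a sketch; the URI is a placeholder for your RTSP server address, and in the deepstream-app source group type=4 selects RTSP):

```ini
# repeat for [source0] ... [source29]
[source0]
enable=1
# 4 = RTSP
type=4
uri=rtsp://<server-ip>:8554/test
```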