Feasibility of running 12 video streams on an RTX 3050

Hello,
our company is building a video analytics solution that performs object detection and classification using DeepStream. We need to support streams from 12 cameras (3072×1728, 5 MP, H.264, 10 FPS). Our server specs are:

* CPU: Intel Core i3-12100 (H610 chipset)
* RAM: 16 GB DDR4
* GPU: RTX 3050 8 GB
* SSD: 256 GB

With our current DeepStream configuration we can run one camera at the required settings; adding more streams causes dropped FPS and video fragmentation.
Is it possible to fully satisfy the requirements with this hardware? We plan to tune the DeepStream configuration parameters, and we are also open to using a lower FPS and perhaps a lower resolution.


It depends on how long your model inference takes. What does your pipeline look like?
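As a rough budget: 12 streams at 10 FPS is 120 frames per second, so the detection and classification stages together have to sustain roughly 1000 / 120 ≈ 8.3 ms per frame on average to keep up.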


We are using YOLOv4 for object detection (primary-gie) and EfficientNet-B0 for classification (secondary-gie). We also use a custom plugin that pushes DeepStream detections to a Redis queue. So the pipeline is:
Source → YOLOv4 inference → EfficientNet inference → push detections to Redis queue
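For context, here is a simplified sketch of the relevant sections of our deepstream-app style configuration (the RTSP URI and config-file paths are placeholders; the custom Redis plugin is attached separately, so it is not shown):

    # one [sourceN] section per camera; the URI is a placeholder
    [source0]
    enable=1
    type=4
    uri=rtsp://<camera-0-address>

    # YOLOv4 detector (placeholder config path)
    [primary-gie]
    enable=1
    config-file=config_infer_primary_yolov4.txt
    gie-unique-id=1

    # EfficientNet-B0 classifier, runs on the primary detections
    [secondary-gie0]
    enable=1
    config-file=config_infer_secondary_efficientnet.txt
    gie-unique-id=2
    operate-on-gie-id=1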

You should add streammux before the inference.
1. First make sure that your hardware transmission bandwidth can handle the incoming streams.
2. You can try using nvvideoconvert to scale the picture down first, and set the streammux batch-size to the number of your sources; see the sketch after this list.
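For example, a single-source gst-launch sketch of that idea (the URI, the 1280×720 target size, and the config paths are placeholders; repeat the decode-and-scale branch once per camera, each feeding its own mux.sink_N pad):

    gst-launch-1.0 \
      rtspsrc location=rtsp://<camera-0> ! rtph264depay ! h264parse ! nvv4l2decoder ! \
      nvvideoconvert ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! mux.sink_0 \
      nvstreammux name=mux batch-size=12 width=1280 height=720 batched-push-timeout=40000 ! \
      nvinfer config-file-path=config_infer_primary_yolov4.txt ! \
      nvinfer config-file-path=config_infer_secondary_efficientnet.txt ! \
      fakesink

Scaling before the mux means each decoded stream is downsized once, instead of feeding full 5 MP frames into batching and inference.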

Thanks for the reply.
We are using streammux before the inference; currently the [streammux] width and height properties are set to the incoming source resolution, 3072×1728, and we have also set the [streammux] batch-size to the number of sources, as shown in the sketch below.
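Concretely, our [streammux] section currently looks like this (a sketch; live-source is an assumption on our part, since the inputs are live cameras):

    [streammux]
    # matches the incoming 5MP source resolution
    width=3072
    height=1728
    # one batched frame per camera
    batch-size=12
    # assumed setting, since the inputs are live RTSP cameras
    live-source=1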

  1. Could you provide more details on the bandwidth point? Which specs should we look at?
  2. What is the performance difference between scaling the picture with streammux versus with nvvideoconvert? Are there examples of integrating this plugin into our current pipeline? We did not find any on the plugin documentation page (Gst-nvvideoconvert — DeepStream 6.2 Release documentation).

Did you write the app code yourself, or do you just use deepstream-app to run your pipeline?
If you wrote the app yourself, you can refer to the repo below; it integrates many models and includes source code.
https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps
If you just use deepstream-app, you can refer to the sample configurations shipped with the DeepStream SDK:

samples\configs\deepstream-app
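For example, you can copy one of the shipped sample configs and run it directly (the file name here is one of the samples in that directory):

    deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt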

Thank you very much!
