• Hardware Platform (GPU)
• DeepStream Version 6.1
• Issue Type (questions)
Hi, my question is: what is the best way to handle multiple pipelines that are essentially identical? For example, each camera (src) needs a dedicated sink, but all the intermediate nodes are exactly the same: camera A needs to sink to rtsp:/A/, camera B to rtsp:/B/, and so on, and the cameras need dynamic management.
A. Multiple cameras could be aggregated through nvstreammux and then split apart again through nvstreamdemux after nvinfer. This requires dynamically adding/removing sources and sinks, together with the corresponding request pads on the mux and demux.
B. Whenever a new camera appears, directly start a new process to run a new pipeline.
Which is the best option, and if I choose A, how should I achieve it?
It depends on your scenario. If there is an upper limit on the number of streams, you can use A. If the stream count is arbitrary, B may be the better choice.
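For approach A, here is a minimal Python/GStreamer sketch, not a complete application. The camera URIs, element names, and the "pgie_config.txt" path are placeholders, and the fakesink stands in for the per-stream RTSP-out branch. Note that some DeepStream releases restrict requesting nvstreamdemux src pads after the pipeline is PLAYING, so a fully dynamic setup may need to pre-allocate branches up to the stream limit; check the release notes for your version.

```python
#!/usr/bin/env python3
# Sketch of approach A: sources -> nvstreammux -> nvinfer -> nvstreamdemux ->
# one dedicated sink branch per camera.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("multi-cam-pipeline")

streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("batch-size", 4)            # upper limit on streams
streammux.set_property("width", 1280)
streammux.set_property("height", 720)
streammux.set_property("batched-push-timeout", 40000)

pgie = Gst.ElementFactory.make("nvinfer", "pgie")
pgie.set_property("config-file-path", "pgie_config.txt")  # placeholder config

demux = Gst.ElementFactory.make("nvstreamdemux", "demux")

for element in (streammux, pgie, demux):
    pipeline.add(element)
streammux.link(pgie)
pgie.link(demux)

def add_camera(index, uri):
    """Attach camera `index` and its dedicated sink branch."""
    src = Gst.ElementFactory.make("uridecodebin", f"src-{index}")
    src.set_property("uri", uri)
    # Stand-in for the real per-stream branch (e.g. nvvideoconvert -> encoder
    # -> rtppay -> RTSP out, as described in the question).
    sink = Gst.ElementFactory.make("fakesink", f"sink-{index}")
    pipeline.add(src)
    pipeline.add(sink)

    # Request a sink pad on nvstreammux for this stream.
    mux_sink = streammux.get_request_pad(f"sink_{index}")

    def on_pad_added(_, pad):
        caps = pad.get_current_caps() or pad.query_caps(None)
        if caps.to_string().startswith("video"):
            pad.link(mux_sink)
    src.connect("pad-added", on_pad_added)

    # Request the matching src pad on nvstreamdemux and link the sink branch.
    demux_src = demux.get_request_pad(f"src_{index}")
    demux_src.link(sink.get_static_pad("sink"))

    # If the pipeline is already PLAYING, the new elements must catch up.
    src.sync_state_with_parent()
    sink.sync_state_with_parent()

add_camera(0, "rtsp://camera-a/stream")  # placeholder URIs
add_camera(1, "rtsp://camera-b/stream")
pipeline.set_state(Gst.State.PLAYING)
```

When removing a camera at runtime, the usual GStreamer pattern applies: send EOS into that branch, set its elements to NULL, remove them from the pipeline, and release the request pads on the mux and demux.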
Thanks for your reply. In my scenario, there may be no cameras at all, or there may be an arbitrary number of streams (on the premise that GPU resources are sufficient). By the way, how should solution A manage GPU allocation in an industrial scenario? Do I need a service like k8s to manage the remaining resources?
For example, the video memory of GPU 0 can only accommodate N pipelines, so we need to specify gpu-id as 1 when starting the (N+1)th pipeline, and our code needs to pick that id automatically.
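One way to do that selection without an external scheduler is to query free GPU memory through NVML before launching each pipeline. A minimal sketch, assuming the pynvml package is installed and using a placeholder per-pipeline memory threshold:

```python
import pynvml

def pick_gpu_id(min_free_bytes=2 << 30):
    """Return the index of the GPU with the most free memory, or None if
    no GPU has at least `min_free_bytes` available (threshold is a placeholder)."""
    pynvml.nvmlInit()
    try:
        best_id, best_free = None, 0
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            free = pynvml.nvmlDeviceGetMemoryInfo(handle).free
            if free >= min_free_bytes and free > best_free:
                best_id, best_free = i, free
        return best_id
    finally:
        pynvml.nvmlShutdown()

gpu_id = pick_gpu_id()
if gpu_id is None:
    raise RuntimeError("No GPU has enough free memory for another pipeline")
# Pass gpu_id to the new pipeline, e.g. streammux.set_property("gpu-id", gpu_id)
```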
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
With DeepStream itself, you can assign a task to a specified GPU by setting the configuration parameter "gpu-id". All DeepStream plugins support this parameter except the few plugins that only work on the CPU. See GStreamer Plugin Overview — DeepStream 6.1.1 Release documentation.
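As a rough illustration, the sketch below pins the DeepStream elements of one pipeline to a single GPU via the gpu-id property; the element names and "pgie_config.txt" path are placeholders, and CPU-only plugins are handled by simply checking whether the property exists before setting it.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
GPU_ID = 1  # e.g. the index chosen by the free-memory check above

streammux = Gst.ElementFactory.make("nvstreammux", "mux")
pgie = Gst.ElementFactory.make("nvinfer", "pgie")
conv = Gst.ElementFactory.make("nvvideoconvert", "conv")
osd = Gst.ElementFactory.make("nvdsosd", "osd")
pgie.set_property("config-file-path", "pgie_config.txt")  # placeholder config

# Only GPU-capable plugins expose gpu-id; CPU-only plugins don't have the
# property, so check before setting it.
for element in (streammux, pgie, conv, osd):
    if element.find_property("gpu-id") is not None:
        element.set_property("gpu-id", GPU_ID)
```

Note that nvinfer also reads a gpu-id field from the [property] group of its configuration file, so it is worth keeping that value consistent with the property set on the element.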