Is there a way to add or remove a stream to the pipeline during runtime?
Example: Running DeepStream with inferencing on four unique inputs, then adding a fifth input several minutes after the program has started on the original four. A few minutes after that, removing two of the original four streams so that only three remain.
I would imagine this should be possible, as it is mentioned in the DeepStream SDK PDF, but I haven’t seen any examples of it and haven’t found a way to do it either.
Does anyone have any examples? Preferably one that can work with the deepstream-app sample as I’m currently most familiar with that workflow.
Is there an example on Dynamic stream management?
Anomaly Detection Reference App (<a target='_blank' rel='noopener noreferrer' href='https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/anomaly'>https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/anomaly</a>)
Dynamically add or delete channels when the pipeline is running.
I can’t imagine this is a limitation. We know a T4 can handle multiple streams with inferencing concurrently. If this isn’t possible, then whenever a camera goes down and gets fixed, the entire pipeline would need to be restarted just to bring the fixed camera back into the mix. That would interrupt all the other cameras as well and would never be considered acceptable by commercial/enterprise clients.
I’m certain that we’re just misunderstanding each other. Please have a look at the links I posted.
Different from the example https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/runtime_source_add_delete: besides uridecodebin, we also dynamically add/delete GStreamer elements after nvstreamdemux in each stream’s branch of the pipeline. Our problem is:
Removal of a stream works fine, but after adding a stream, the new stream only plays for a while (several seconds) and then stops. Once the next stream is added (it also plays for a while and then stops), the previously stopped stream recovers. The problem only occurs with RTMP live sources; RTSP is fine.
Could you give me any advice? Thanks!
Any chance of getting a python example for this please?
Our use case is that we have k8s pods with one GPU assigned to each; when a user streams video for object detection from a smart device, the instance connects to the stream and communicates classifications via WebSockets.
I suppose we could just launch one pod per user, with the stream ID as a parameter but it’s a bit inefficient. :-)
The behavior I’ve noticed is that there can be problems if the number of connected sources doesn’t match the batch-size on the stream muxer; the nvinfer element, by contrast, just treats its batch-size as a maximum.
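A minimal sketch of that constraint, assuming the usual workaround of fixing the streammux batch-size at the maximum number of sources you ever expect (the helper name and signature are hypothetical, not DeepStream API):

```python
def check_batch_config(streammux_batch_size, nvinfer_batch_size,
                       n_active_sources):
    """Sanity-check batch-size settings for dynamically added sources.

    streammux batch-size must cover every connected source, since it is
    fixed while the pipeline runs; nvinfer's batch-size only acts as an
    upper bound on how many frames are inferred together, so it is safe
    for fewer sources to be active than either setting allows.
    Returns a list of human-readable problems (empty if the config is OK).
    """
    problems = []
    if n_active_sources > streammux_batch_size:
        problems.append("more connected sources than the streammux "
                        "batch-size allows")
    if nvinfer_batch_size < streammux_batch_size:
        problems.append("nvinfer batch-size is below the streammux "
                        "batch-size, so full batches exceed its maximum")
    return problems
```

So with a streammux batch-size of 4, running only three active sources is fine, but connecting a fifth source flags a problem.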
Frankly, I am still very confused, but the post linked above cleared up a lot. I am hoping that in DeepStream 5 the handling of this parameter, as well as the engine file name (which depends on batch-size), is handled automatically by all NVIDIA elements.