I’m wondering whether it is possible to create just one nvinfer component and use it in multiple pipelines. I would like to have multiple pipelines, one for each RTSP stream that I add to the solution (dynamic pipelines).
I’m looking for this solution because I tried to use a single pipeline to manage multiple streams dynamically, and an EOS on one of the streams was transmitted to the other streams. My next logical step is to create multiple pipelines, but creating multiple nvinfer components is not reasonable, because I would have to duplicate the models and the nvinfer component.
If you have any idea of how this can be accomplished, please feel free to share it with me.
Regards.
Can you elaborate on how you “use only one pipeline to manage dynamically multiple streams”? What do you mean by “one EOS in one of the streams was transmitted to the different streams”?
The sample app deepstream-app already supports multiple streams and can ignore a single “EOS” from one of the streams. Can you refer to that sample?
I mean multiple pipelines, for example having a list of ten pipelines, each taking an RTSP stream as input and outputting RTSP: uridecodebin…xxx…nvinfer…rtsp sink. But if I use a different nvinfer for each pipeline, with a YOLO model, I think I will have ten YOLO models in memory; I want to have only one and use it in all the pipelines.
We made some changes and added a demuxer to output an RTSP stream for each of the cameras.
The pipeline structure (seen from a high level) should be something like:
Multiple RTSP sources → streammuxer → Several primary and secondary inference engines → Demuxer → OSD → Multiple rtspclientsinks.
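A minimal sketch of that topology as a gst-launch-1.0 style description, built in Python so the per-camera pieces are explicit. The element names (uridecodebin, nvstreammux, nvinfer, nvstreamdemux, nvdsosd, rtspclientsink) come from the structure above; the encoder chain, the muxer properties, and the config file name are illustrative assumptions:

```python
def build_pipeline_desc(rtsp_in, rtsp_out, infer_cfg="config_infer_primary.txt"):
    """Describe n RTSP sources sharing one nvinfer, demuxed to n RTSP sinks."""
    n = len(rtsp_in)
    # One uridecodebin per camera, each linked to its own muxer sink pad.
    srcs = " ".join(
        f"uridecodebin uri={uri} ! mux.sink_{i}" for i, uri in enumerate(rtsp_in)
    )
    # Single shared batch path: muxer -> inference -> demuxer.
    trunk = (
        f"nvstreammux name=mux batch-size={n} ! "
        f"nvinfer config-file-path={infer_cfg} ! "
        f"nvstreamdemux name=demux"
    )
    # One output branch per camera after the demuxer.
    branches = " ".join(
        f"demux.src_{i} ! queue ! nvvideoconvert ! nvdsosd ! "
        f"nvv4l2h264enc ! h264parse ! rtph264pay ! "
        f"rtspclientsink location={out}"
        for i, out in enumerate(rtsp_out)
    )
    return f"{srcs} {trunk} {branches}"
```

The resulting string can be handed to Gst.parse_launch() (or gst-launch-1.0 for a quick test); note that only one nvinfer instance, and hence one copy of the model, appears regardless of the number of cameras.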
We found these issues:
nvstreammux failed to adjust its operation when cameras were added or removed. We were able to sort this out by using the “new nvstreammux”.
An EOS from a single camera would propagate through the pipeline, stopping plugins that should keep running for the rest of the cameras.
uridecodebin would sometimes fail to be created properly after several add/delete cycles, preventing the camera images from reaching the inference part. We have a temporary workaround, a custom source bin (RTP depayloader + decoder + …), but it does not seem to be as reliable as uridecodebin; for example, it does not work with low-bandwidth cameras on unstable connections.
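The second issue above (one camera’s EOS stopping shared elements) is commonly handled with a pad probe on each source bin’s src pad that swallows the per-stream EOS before it reaches nvstreammux. A sketch of the probe logic in plain Python — the GStreamer enums are modelled by stand-in constants here so the decision is explicit; in real code they would be Gst.EventType.EOS, Gst.PadProbeReturn.DROP and Gst.PadProbeReturn.OK:

```python
# Stand-ins for the GStreamer enums (labelled assumptions, not real values):
EVENT_EOS = "eos"     # stands in for Gst.EventType.EOS
PROBE_DROP = "drop"   # stands in for Gst.PadProbeReturn.DROP
PROBE_OK = "ok"       # stands in for Gst.PadProbeReturn.OK

def stream_eos_probe(pad, info):
    """Drop a single stream's EOS so it never reaches the shared muxer.

    In real code this would be attached with something like:
        srcpad.add_probe(Gst.PadProbeType.EVENT_DOWNSTREAM, stream_eos_probe)
    """
    event = info.get_event()
    if event is not None and event.type == EVENT_EOS:
        return PROBE_DROP  # swallow the EOS; the other cameras keep flowing
    return PROBE_OK
```

With the propagated EOS suppressed, the application can tear down a removed camera’s elements explicitly instead of relying on the EOS travelling downstream.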
I don’t have a problem using a single pipeline with multiple sources, but I have not been able to find a Python sample with a demuxer, and the tests we tried didn’t work. Can you point me to some code?
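The demuxer pattern from deepstream-app translates to Python fairly directly: nvstreamdemux exposes request pads named src_%u, and the index must match the sink_%u pad of nvstreammux that the same source was linked to. A small sketch of that naming convention, with the corresponding (assumed) PyGObject calls kept in comments:

```python
def stream_pad_names(stream_id):
    """Matching pad names for one camera on the muxer and the demuxer.

    In PyGObject, the request calls would look something like:
        sinkpad = streammux.get_request_pad(f"sink_{stream_id}")
        srcpad  = streamdemux.get_request_pad(f"src_{stream_id}")
    followed by linking srcpad to that camera's queue/OSD/encoder branch.
    """
    return f"sink_{stream_id}", f"src_{stream_id}"
```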
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Please refer to the deepstream-app source code.
There is no such Python sample. You can refer to the deepstream-app C/C++ code for the usage.