Share nvinfer between multiple pipelines (dynamic pipelines)

Dear all,

I’m wondering whether it is possible to create just one nvinfer component and use it in multiple pipelines. I would like to have multiple pipelines, one for each RTSP stream that I add to the solution (dynamic pipelines).
I’m looking into this because I tried using a single pipeline to manage multiple streams dynamically, and an EOS in one of the streams was propagated to the other streams. My next logical step is to create multiple pipelines, but creating multiple nvinfer components is not reasonable, because I would have to duplicate the models and the nvinfer component. A sketch of the per-stream layout I mean is below.
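Roughly, in Python (a minimal sketch only: the URIs, resolution, and yolo_config.txt path are placeholders, and a fakesink stands in for the RTSP output branch):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def make_stream_pipeline(uri, idx):
    # One self-contained pipeline per RTSP camera; every pipeline gets
    # its own nvinfer, and therefore its own copy of the YOLO model.
    return Gst.parse_launch(
        f"nvstreammux name=mux{idx} batch-size=1 width=1280 height=720 "
        f"! nvinfer config-file-path=yolo_config.txt "
        f"! fakesink "
        f"uridecodebin uri={uri} ! mux{idx}.sink_0"
    )

uris = ["rtsp://cam0/stream", "rtsp://cam1/stream"]  # placeholders
pipelines = [make_stream_pipeline(u, i) for i, u in enumerate(uris)]
for p in pipelines:
    p.set_state(Gst.State.PLAYING)
```

With ten streams this loads the YOLO engine ten times, which is exactly the duplication I want to avoid.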
If you have any idea of how this can be accomplished, please feel free to share it with me.
Regards.

Can you elaborate on how you “use only one pipeline to manage dynamically multiple streams”? What do you mean by “one EOS in one of the streams was transmitted to the different streams”?

The sample app deepstream-app already supports multiple streams and can ignore a single “EOS” from one of the streams. Can you refer to that sample?

What does this mean? Do you mean multiple pipelines or multiple streams? GStreamer does not support sharing elements between pipelines.
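For what it’s worth, this is easy to verify from Python: a GstElement can have at most one parent, so adding it to a second pipeline simply fails. A tiny illustration with a stock identity element:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

shared = Gst.ElementFactory.make("identity", "shared")
p1 = Gst.Pipeline.new("p1")
p2 = Gst.Pipeline.new("p2")

print(p1.add(shared))  # True: 'shared' is now parented to p1
print(p2.add(shared))  # False (plus a warning): one parent at most
```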

I mean multiple pipelines, for example having a list of ten pipelines, each taking an RTSP stream in and producing RTSP out: uridecodebin…xxx…nvinfer…rtsp sink. But if I use a different nvinfer for each pipeline, with a YOLO model, I think I will have ten YOLO models in memory; I want to have only one and use it in all the pipelines.
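One partial mitigation, regardless of the layout: point every nvinfer at the same prebuilt engine through model-engine-file, so the TensorRT engine is built once and each instance only deserializes it at start-up. That saves build time, not GPU memory; each nvinfer still holds its own copy of the engine. A hypothetical config fragment (file names and values are placeholders):

```
[property]
gpu-id=0
# Shared serialized engine: built once, deserialized by every instance.
model-engine-file=yolo_b1_gpu0_fp16.engine
batch-size=1
network-mode=2   # 2 = FP16
gie-unique-id=1
```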

DeepStream supports multiple input sources and multiple outputs in a single pipeline. Why do you need multiple pipelines?

GitHub - sherlockchou86/video_pipe_c: a plugin-oriented framework for video structured applications. This SDK supports multiple pipelines working together.

Dear @Fiona.Chen ,

We started using your example for adding/removing sources at runtime, deepstream_python_apps/apps/runtime_source_add_delete at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub, in Python.

We made some changes and added a demuxer to output an RTSP stream for each of the cameras.

The pipeline structure, seen from a high level, is something like:
Multiple RTSP sources → nvstreammux → several primary and secondary inference engines → nvstreamdemux → OSD → multiple rtspclientsinks.

We found these issues:

  • nvstreammux failed to adjust its operation when adding/removing cameras. We were able to sort this out by switching to the “new nvstreammux”.
  • An EOS from a single camera would propagate through the pipeline, stopping plugins that should keep running for the rest of the cameras (see the probe sketch after this list).
  • uridecodebin would sometimes fail to be created properly after several add/delete cycles, so the camera images never reached the inference part. Our temporary workaround is a custom source bin (RTP depay + decoder + …), but it does not seem to be as reliable as uridecodebin; for example, it does not work with cameras that have low bandwidth and an unstable connection.
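For anyone hitting the same problems, two of the workarounds fit in a few lines. This is a sketch under assumptions: source_bin stands for whatever per-camera bin the app builds, and USE_NEW_NVSTREAMMUX is the documented switch for the new mux.

```python
import os
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Enable the "new nvstreammux" (set before the pipeline is created).
os.environ["USE_NEW_NVSTREAMMUX"] = "yes"

def drop_source_eos(pad, info):
    # Swallow EOS coming from this one camera so it does not propagate
    # downstream and stop elements shared by the other cameras.
    event = info.get_event()
    if event is not None and event.type == Gst.EventType.EOS:
        return Gst.PadProbeReturn.DROP
    return Gst.PadProbeReturn.OK

# source_bin is the per-camera bin (uridecodebin wrapper or custom bin);
# attach the probe on its src pad, before nvstreammux.
srcpad = source_bin.get_static_pad("src")
srcpad.add_probe(Gst.PadProbeType.EVENT_DOWNSTREAM, drop_source_eos)
```

When deleting a source, the runtime_source_add_delete sample also releases the matching nvstreammux sink pad, which is worth keeping alongside a probe like this.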

I don’t have a problem using a single pipeline with multiple sources, but I have not been able to find a sample with a demuxer in Python, and the tests we tried didn’t work. Can you point me to some code?
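For the record, a minimal Python sketch of the demux side, assuming pipeline, the last inference element infer, and num_sources already exist (fakesink stands in for each rtspclientsink branch):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

demux = Gst.ElementFactory.make("nvstreamdemux", "demux")
pipeline.add(demux)
infer.link(demux)  # 'infer' is the last upstream inference element

for i in range(num_sources):
    # nvstreamdemux exposes one request pad per stream: src_0, src_1, ...
    # (get_request_pad is deprecated; newer GStreamer has request_pad_simple)
    demux_src = demux.get_request_pad(f"src_{i}")

    # Per-stream tail: convert -> OSD -> sink.
    conv = Gst.ElementFactory.make("nvvideoconvert", f"conv_{i}")
    osd = Gst.ElementFactory.make("nvdsosd", f"osd_{i}")
    sink = Gst.ElementFactory.make("fakesink", f"sink_{i}")
    for e in (conv, osd, sink):
        pipeline.add(e)

    demux_src.link(conv.get_static_pad("sink"))
    conv.link(osd)
    osd.link(sink)
```

As far as I know, the nvstreamdemux request pads should be requested and linked before the pipeline goes to PLAYING.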

Thanks @zhouzhi123.8, does it work with NVIDIA Triton?

Thanks @zhouzhi123.8. I want to focus this thread on the DeepStream framework; we can have a separate discussion about video_pipe.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please refer to the deepstream-app source code.

There is no such Python sample. You can refer to the deepstream-app C/C++ code for the usage.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.