So in my current application I am manually building my inference pipeline using just GStreamer and DeepStream elements. The main problems I am facing are that many pipeline elements are repeated, like camera and video setup, and that I need the pipeline to be dynamic at runtime. For example, I want to start streaming from my camera sources to the user before starting inference, since the user needs to choose some inference settings or perhaps modify the camera settings first.
My current solution for video sources/sinks and file sources/sinks is to encapsulate them into bins, but while looking into how to make my pipeline more dynamic I found GstInterpipe, which does seem like a nice solution: it allows me to basically replace my bins with interpipes, with each interpipe functioning as a tee that can be dynamically connected to others. So before I start tearing into my code, I wanted to ask whether anyone has suggestions for other approaches, or notes about GstInterpipe that I might have missed.
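To make the idea concrete, here is a rough sketch of what I have in mind, written as plain pipeline description strings (the names `cam0`, the device path, and the inference chain are just illustrative, not my actual pipeline):

```python
# Sketch: pipeline descriptions for a dynamic interpipe setup.
# Names (cam0, /dev/video0, config.txt) are illustrative only.

def camera_pipeline(name: str, device: str) -> str:
    # Each camera publishes its frames on a named interpipesink,
    # replacing what used to be a source bin.
    return (
        f"v4l2src device={device} ! videoconvert ! "
        f"interpipesink name={name} sync=false"
    )

def preview_pipeline(source: str) -> str:
    # Preview branch: started first so the user sees the camera
    # while choosing settings.
    return (
        f"interpipesrc listen-to={source} is-live=true ! "
        f"videoconvert ! autovideosink"
    )

def inference_pipeline(source: str) -> str:
    # Inference branch: created later and attached to the same
    # interpipesink without rebuilding the camera pipeline.
    return (
        f"interpipesrc listen-to={source} is-live=true ! "
        f"nvvideoconvert ! nvinfer config-file-path=config.txt ! fakesink"
    )

print(camera_pipeline("cam0", "/dev/video0"))
print(preview_pipeline("cam0"))
```

The point is that the camera pipeline keeps running while consumer branches are created and destroyed against its `interpipesink` name.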
It is a RidgeRun plugin. @miguel.taylor Can you help this user?
We actually have a demo app that uses GstInterpipes with DeepStream:
It seems that would be a good starting point for your use case. We have been working on DeepStream applications for several years and almost always rely on GstD and GstInterpipes as our primary tools for development. Feel free to share any specific issues you encounter, and we’ll likely be able to provide assistance.
Some general considerations when using GstInterpipes:
To prevent buffer congestion, we always incorporate a dropping queue before the interpipesink:
queue leaky=2 max-size-buffers=10
Sometimes introducing interpipes causes caps negotiation to fail. In these cases you can set the caps explicitly with the interpipesrc caps property and disable renegotiation:
interpipesrc caps="..." allow-renegotiation=false
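Putting those two tips together, a publisher/consumer pair might look like this (the caps string is only an example; use whatever your pipeline actually negotiates):

```python
# Sketch: applying the leaky-queue and fixed-caps tips from above.
# The caps string is an example; substitute your real negotiated caps.

caps = "video/x-raw,format=NV12,width=1280,height=720,framerate=30/1"

publisher = (
    "v4l2src ! videoconvert ! "
    "queue leaky=2 max-size-buffers=10 ! "  # drop old buffers under load
    "interpipesink name=cam0 sync=false"
)

consumer = (
    f"interpipesrc listen-to=cam0 is-live=true "
    f'caps="{caps}" allow-renegotiation=false ! '
    "videoconvert ! autovideosink"
)

print(publisher)
print(consumer)
```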
Most of the common issues can be fixed by using the correct stream-sync property value:
0: Restart timestamp, which means the original timestamp is removed, and interpipesrc adds its own subpipeline timestamp to the buffer. This is the recommended value for streaming or recordings, as some elements use the timestamp for video metadata or seek operations.
1: Passthrough, where the original timestamp is preserved. This is the recommended value when compensation doesn’t work.
2: Compensation, where the original timestamp is kept but an offset is added or subtracted based on the difference between the original and the new pipeline. This is the overall recommended value, but some elements like video sinks won’t work with compensated buffers.
I hope this helps.
I first found GstInterpipe through this NVIDIA technical blog, which referred to the demo you linked. I am considering whether I should use GstD as well, but I am uncertain about how to get inference information back out of the pipeline. Currently, I am using a pad probe to walk the NvDsBatchMeta data and extract bounding boxes and so on. From what I understand, this would not be possible with GstD. The technical blog mentions using event signals to get information from the pipeline, but I don’t think the nvinfer element or DeepStream supports this. As far as I know, I would probably need to use a message broker to get the data back from the pipeline if I were to use GstD. Is this the case?
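For context, the traversal my probe does is roughly the following. I've sketched it here against plain dicts standing in for the real pyds bindings (NvDsBatchMeta → NvDsFrameMeta → NvDsObjectMeta), so the structure is illustrative rather than the actual API:

```python
# Sketch of my probe-side traversal, with dicts standing in for the
# pyds NvDsBatchMeta / NvDsFrameMeta / NvDsObjectMeta linked lists.

def extract_boxes(batch_meta: dict) -> list[dict]:
    boxes = []
    for frame in batch_meta["frames"]:      # ~ batch_meta.frame_meta_list
        for obj in frame["objects"]:        # ~ frame_meta.obj_meta_list
            boxes.append({
                "frame": frame["frame_num"],
                "class_id": obj["class_id"],
                "bbox": (obj["left"], obj["top"],
                         obj["width"], obj["height"]),
            })
    return boxes

# Toy batch with one frame containing one detection.
batch = {
    "frames": [
        {"frame_num": 0,
         "objects": [{"class_id": 1, "left": 10, "top": 20,
                      "width": 50, "height": 80}]},
    ],
}
print(extract_boxes(batch))
```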
To address this, we developed an element that serializes the DeepStream metadata into JSON and exposes it via a GStreamer signal. GstD connects to this signal, enabling the Python application to perform supplementary metadata processing. The metadata is returned to the pipeline through a property on the same element, and the element then updates the DeepStream metadata.
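In outline, the round trip looks like this (a toy sketch: the handler name and JSON layout are made up here, and the real signal and property names are whatever the element exposes):

```python
import json

# Toy model of the signal/property round trip: the element emits
# serialized metadata, the app processes it, and the result is written
# back through a property on the same element.

def on_new_metadata(serialized: str) -> str:
    """App-side handler connected to the element's metadata signal."""
    meta = json.loads(serialized)
    # Supplementary processing, e.g. flag objects for overlay changes.
    for obj in meta["objects"]:
        obj["highlight"] = obj["class_id"] == 0  # class 0 = person, assumed
    return json.dumps(meta)  # handed back via the element property

emitted = json.dumps({"objects": [{"class_id": 0, "bbox": [0, 0, 10, 10]}]})
returned = json.loads(on_new_metadata(emitted))
print(returned["objects"][0]["highlight"])  # True
```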
The primary advantage of this design is that the Python application can play or stop pipelines as needed based on metadata information. For instance, it can start the WebRTC streaming pipeline when a person is detected within an ROI. The Python application can also provide visual feedback by editing the overlay metadata. You could also use a message broker, but it may not offer the same level of flexibility and feedback to the pipeline.
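The trigger logic for that WebRTC example is plain application code. Something like this sketch, where the ROI, class id, and function names are all hypothetical:

```python
# Sketch: "start streaming when a person enters the ROI" decision logic.
# ROI, class ids, and names are illustrative, not from a real app.

ROI = (100, 100, 400, 300)  # x, y, width, height

def center_in_roi(bbox, roi=ROI) -> bool:
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    rx, ry, rw, rh = roi
    return rx <= cx <= rx + rw and ry <= cy <= ry + rh

def should_start_stream(detections) -> bool:
    # detections: list of (class_id, bbox); class 0 = person (assumed).
    # When this returns True, the app would ask GstD to play the
    # WebRTC pipeline.
    return any(cls == 0 and center_in_roi(box) for cls, box in detections)

print(should_start_stream([(0, (150, 150, 40, 60))]))  # True: person in ROI
```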
Is the same signal available for C/C++ applications, and do you have the name or a link to the documentation for the element? Do you also happen to have any demos or samples that show how to connect the signals?
I went and looked at the RidgeRun developer wiki and found the GstMetadata and GstInference elements. Was it one of these elements that you were referring to?
Also, in my application I need to process the bounding boxes as quickly as possible, or at least before the next image arrives. I imagine switching from probes to GstD would cause some slowdown, since I would need to serialize and deserialize the JSON data. I would need to implement and test this to know whether I would lose a noticeable amount of performance, but I was wondering whether you have any knowledge or previous experience to share?
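In the meantime I put together a rough micro-benchmark of just the JSON round trip, to compare against my frame budget (about 33 ms at 30 fps). The numbers are obviously machine-dependent and the payload shape is a guess, so this only gives an order of magnitude:

```python
import json
import time

# Rough micro-benchmark: cost of one JSON serialize + deserialize of a
# detection batch, the extra per-frame work the GstD approach would add.
# Payload shape (100 objects per batch) is an assumption.

detections = [
    {"class_id": i % 3, "bbox": [i, i, 50, 80], "confidence": 0.9}
    for i in range(100)
]

iterations = 1000
start = time.perf_counter()
for _ in range(iterations):
    payload = json.dumps(detections)
    restored = json.loads(payload)
elapsed = time.perf_counter() - start
per_batch_ms = elapsed / iterations * 1000.0
print(f"JSON round trip: {per_batch_ms:.3f} ms per batch")
```

On my machine this comes out well under a millisecond per batch, but the signal emission and IPC hop through GstD would add their own overhead on top, which is the part I can't estimate without testing.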
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.