Synchronization between pgie probe and sgie probe

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 495.46
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) See attached example code
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,

I have a DeepStream pipeline with 2 nvinfer elements, each with a probe attached for parsing inference results. I created a minimal code example for demonstration, attached below. My goal is to pass data from the first probe to the second, and I want to make sure that they are properly synchronized.

Questions:

  1. Is nvinfer1_probe() a blocking probe? That is, does it block further execution of the nvinfer1 element?
    Can two invocations of this probe run concurrently (if the probe is slower than the element)?
    If it is not executed sequentially, the order of results in the global queue may be violated.
  2. How can I ensure that nvinfer1_probe() has finished executing before nvinfer2_probe() is invoked?
    In other words, I want to make sure that the global queue is not empty when nvinfer2_probe() is invoked.
  3. What happens if nvinfer1_probe() is very slow and becomes the bottleneck of the entire pipeline?

Code:

"""
    Pipeline structure (simplified):
    rtsp_src_bin -> streamMux -> nvinfer1 -> tracker -> nvinfer2 -> message broker

    Explanations:
    nvinfer1 - Executes a deep learning model
    nvinfer2 - Executes another deep learning model
"""

# Attach probes to nvinfer1 (pgie) and nvinfer2 (sgie) to access inference results
nvinfer1_src_pad.add_probe(Gst.PadProbeType.BUFFER, self.nvinfer1_probe, 0)
nvinfer2_src_pad.add_probe(Gst.PadProbeType.BUFFER, self.nvinfer2_probe, 0)

# Create a global queue to pass data from nvinfer1_probe() to nvinfer2_probe()
import queue  # placed here for the snippet; normally at the top of the module

global_queue = queue.Queue()


def nvinfer1_probe(self, pad, info, u_data):
    value_to_pass_as_metadata = "value_parsed_from_nvinfer1_inference_results"
    global_queue.put(value_to_pass_as_metadata)
    return Gst.PadProbeReturn.OK

def nvinfer2_probe(self, pad, info, u_data):
    try:
        value_from_nvinfer1 = global_queue.get_nowait()
    except queue.Empty:
        value_from_nvinfer1 = None  # queue may be empty if the probes are not synchronized
    logger.debug(value_from_nvinfer1)
    return Gst.PadProbeReturn.OK

Thank you

Hi,
It looks like your use case is to run two primary GIEs. You may refer to this topic:
Adding a ghost pad after splitting a pipeline using Tee? - #11 by DaneLLL
You can get the results separately by setting unique-id. Please check whether this can be applied to your use case.
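For reference, unique-id is assigned in each nvinfer config file's [property] group, and the metadata produced downstream carries the matching component ID so the results of the two GIEs can be told apart. A minimal sketch (file names and values are illustrative, not from the original post):

```
# pgie1_config.txt (hypothetical file name)
[property]
gie-unique-id=1

# pgie2_config.txt (hypothetical file name)
[property]
gie-unique-id=2
```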

Hi, the mentioned use case is just an example. My goal is to get answers to the three questions mentioned above.

Yes, this is just a callback. The buffer cannot be released until the callback finishes. Please refer to GstPad (gstreamer.freedesktop.org) and Pipeline manipulation (gstreamer.freedesktop.org). The data probe holds the buffer, so you need to copy the data you need out of the buffer and handle it in another thread, to make sure the buffer can be passed to downstream plugins (including nvinfer2) as soon as possible.
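The copy-out-and-process-elsewhere pattern described above can be sketched as follows. This is a minimal, GStreamer-free sketch: nvinfer1_probe_body() stands in for the real probe body, and the appended strings are placeholders for whatever you parse out of the buffer.

```python
import queue
import threading

work_queue = queue.Queue()
processed = []  # results collected by the worker, for demonstration

def worker():
    # Runs in its own thread so the probe never blocks the streaming thread.
    while True:
        item = work_queue.get()
        if item is None:          # sentinel: shut the worker down
            work_queue.task_done()
            break
        processed.append(item)    # stand-in for the slow per-buffer work
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def nvinfer1_probe_body(value):
    # Inside the real probe: copy the needed fields out of the buffer,
    # enqueue them, and return Gst.PadProbeReturn.OK immediately.
    work_queue.put(value)

for i in range(3):
    nvinfer1_probe_body(f"result-{i}")
work_queue.put(None)   # stop the worker
work_queue.join()      # wait until everything has been processed
print(processed)       # ['result-0', 'result-1', 'result-2']
```

The key point is that the probe only enqueues and returns; the slow work happens on the worker thread, so the buffer is released to downstream elements right away.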

From the GstBuffer point of view, nvinfer1_probe() is always called before nvinfer2_probe(). When nvinfer2_probe() is processing the nth buffer, nvinfer1_probe() may already be working on the (n+3)th buffer, but for the same buffer, nvinfer1_probe() always runs before nvinfer2_probe(). Please read the introduction carefully: Overview (gstreamer.freedesktop.org)
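Since the ordering guarantee is per buffer but the two probes can be several buffers apart, one defensive option is to tag each queued entry with the buffer's timestamp and check it on the consumer side. A minimal GStreamer-free sketch (the PTS values and helper names are made up for illustration):

```python
import queue

global_queue = queue.Queue()

def producer_side(pts, value):
    # In nvinfer1_probe(): tag the parsed value with the buffer timestamp.
    global_queue.put((pts, value))

def consumer_side(pts):
    # In nvinfer2_probe(): the head of the FIFO should carry the same PTS,
    # because both probes see buffers in the same order.
    queued_pts, value = global_queue.get_nowait()
    assert queued_pts == pts, "probe/queue ordering violated"
    return value

# nvinfer1_probe() may run several buffers ahead of nvinfer2_probe():
for pts in (0, 33, 66):
    producer_side(pts, f"meta@{pts}")
print(consumer_side(0))   # meta@0
print(consumer_side(33))  # meta@33
```

In a real probe the PTS is available from the buffer itself, so both sides can derive the tag independently.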

nvinfer1_probe() holds the buffer until it finishes, so the dataflow is blocked there and the probe becomes the bottleneck of the whole pipeline. Overview (gstreamer.freedesktop.org)

DeepStream is based on GStreamer (GStreamer: open source multimedia framework). Please make sure you are familiar with basic GStreamer knowledge and coding skills before you start with DeepStream.

