• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 495.46
• Issue Type (questions, new requirements, bugs) Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.) See attached example code
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or sample application, and the function description.)
Hi,
I have a DeepStream pipeline with two nvinfer elements, each with an attached probe for parsing inference results. I created a minimal code example for demonstration, attached below. My goal is to pass data from the first probe to the second, and I want to make sure that they are properly synchronized.
Questions:
- Is nvinfer1_probe() a blocking probe? That is, does it block further execution of the nvinfer1 element? Is it possible for two invocations of this probe to run concurrently (if the probe is slower than the element)? If it is not executed sequentially, the order of results in the global queue may be violated.
- How can I ensure that nvinfer1_probe() has finished executing before nvinfer2_probe() is invoked? In other words, I want to make sure that the global queue is not empty when nvinfer2_probe() is invoked.
- What happens if nvinfer1_probe() is very slow and becomes the bottleneck of the entire pipeline?
Code:
"""
Pipeline structure (simplified):
rtsp_src_bin -> streamMux -> nvinfer1 -> tracker -> nvinfer2 -> message broker
Explanations:
nvinfer1 - Executes a deep learning model
nvinfer2 - Executes another deep learning model
"""
# Attach probes to nvinfer1 (pgie) and nvinfer2 (sgie) to access inference results
nvinfer1_src_pad.add_probe(Gst.PadProbeType.BUFFER, self.nvinfer1_probe, 0)
nvinfer2_src_pad.add_probe(Gst.PadProbeType.BUFFER, self.nvinfer2_probe, 0)
# Create a global queue to pass data from nvinfer1_probe() to nvinfer2_probe()
global_queue = queue.Queue()
def nvinfer1_probe(self, pad, info, u_data):
    # Parse the inference results and push a value for the downstream probe
    value_to_pass_as_metadata = "value_parsed_from_nvinfer1_inference_results"
    global_queue.put(value_to_pass_as_metadata)
    return Gst.PadProbeReturn.OK
def nvinfer2_probe(self, pad, info, u_data):
    # get_nowait() raises queue.Empty if nvinfer1_probe has not run yet
    value_from_nvinfer1 = global_queue.get_nowait()
    logger.debug(value_from_nvinfer1)
    return Gst.PadProbeReturn.OK
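To make the pairing between the two probes explicit regardless of execution order, I also considered keying the shared data by frame number instead of relying on queue order. Below is a minimal, self-contained sketch of that idea (plain Python, no GStreamer); in the real probes the key would presumably come from the frame metadata (e.g. frame_meta.frame_num via pyds), and the class name and timeout are illustrative:

```python
import threading

class FrameKeyedStore:
    """Pass per-frame values from one probe to another.

    get() blocks until put() has stored a value for the same frame
    number, so the consumer never reads an empty or mismatched entry.
    """

    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def put(self, frame_num, value):
        # Called from the nvinfer1 probe after parsing results
        with self._cond:
            self._data[frame_num] = value
            self._cond.notify_all()

    def get(self, frame_num, timeout=1.0):
        # Called from the nvinfer2 probe; waits for the matching frame
        with self._cond:
            if not self._cond.wait_for(lambda: frame_num in self._data,
                                       timeout=timeout):
                raise TimeoutError(f"no value for frame {frame_num}")
            return self._data.pop(frame_num)
```

With this, even if the second probe fires before the first has finished for a given frame, it waits (up to the timeout) rather than reading a wrong or missing entry; the dict also keeps values matched per frame if probe invocations overlap.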
Thank you