Please provide complete information as applicable to your setup.
• Hardware Platform (GPU) T4 GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
Hey, I noticed that the probe function currently used in most applications is blocking: if I run time.sleep(1) inside the probe, the entire pipeline stalls for 1 s.
Can the probe function in a DeepStream pipeline be run in a non-blocking way?
I ask because I currently plan on using two pipelines: 1) DeepStream for metadata generation, and 2) another one for my customized application. I need to send the metadata to the second pipeline and process it there.
So far I have tried zmq, multiprocessing queues, and manager objects. All of them throttle the FPS, because converting a frame to a numpy array and sending it to the second pipeline happens inside the probe function; since the probe is blocking, the DeepStream FPS also drops while it waits for these steps to complete.
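To make the behaviour I'm after concrete, here is a minimal sketch of the hand-off I would like: the probe only drops metadata into a 1-slot queue (replacing whatever is stale) and never blocks, while a worker thread consumes at its own pace. This is plain Python threading, not DeepStream API; the names `publish` and `consumer` are mine:

```python
import queue
import threading

# 1-slot queue: only the most recent metadata survives, so the
# producer (the probe) never backs up waiting on the consumer.
latest = queue.Queue(maxsize=1)

def publish(meta):
    """Called from the probe: replace any stale item, never block."""
    try:
        latest.put_nowait(meta)
    except queue.Full:
        try:
            latest.get_nowait()   # drop the stale item
        except queue.Empty:
            pass
        try:
            latest.put_nowait(meta)
        except queue.Full:
            pass                  # consumer raced us; it already has a fresh item

def consumer(results, stop):
    """Second 'pipeline': processes whatever was published last."""
    while not stop.is_set():
        try:
            meta = latest.get(timeout=0.1)
        except queue.Empty:
            continue
        results.append(meta)      # stand-in for the real processing
```

With this shape, the probe's cost is one non-blocking queue operation regardless of how slow the consumer is, and the consumer simply misses the frames it was too slow for.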
Ideally, DeepStream would run at its own pace and the other pipeline would just take the metadata of whatever was processed last (it does not need to receive every frame). Is it possible to have a probe function work asynchronously like this? Are there ways I could modify the architecture (using tees and queues) to introduce fake asynchrony?
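On the tee/queue idea, my understanding is that a leaky queue on the branch feeding my custom consumer would drop buffers instead of back-pressuring the inference branch. A rough sketch of what I mean (the elements after each `t.` are assumptions about my setup, not a tested pipeline):

```shell
gst-launch-1.0 ... ! nvstreammux ... ! nvinfer ... ! tee name=t \
  t. ! queue ! nvmultistreamtiler ! nvegltransform ! nveglglessink \
  t. ! queue leaky=downstream max-size-buffers=1 ! appsink max-buffers=1 drop=true
```

Here `leaky=downstream` plus `max-size-buffers=1` on the second branch should keep only the newest buffer, so the slow consumer never stalls the main branch. Is this a reasonable way to get the "fake asynchrony" I described?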