Does the probe function in DeepStream have to be blocking?

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU) T4 GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Hey, I noticed that the probe function currently used in most applications is blocking: i.e., if I run a time.sleep(1) in the probe, the entire pipeline stops for 1 s.

Can the probe function in the DeepStream pipeline be run in a non-blocking way?
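Roughly, this is the behaviour I mean (a minimal sketch with placeholder element names, not my actual pipeline): the probe callback runs on the element's streaming thread, so nothing downstream of that pad moves until the callback returns.

```python
import time

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def blocking_probe(pad, info, u_data):
    # Anything done here delays the buffer currently flowing through the pad.
    time.sleep(1)                     # this branch now runs at <= 1 fps
    return Gst.PadProbeReturn.OK      # the buffer only continues after we return

# "identity" stands in for any element in the real pipeline; names are placeholders.
pipeline = Gst.parse_launch("videotestsrc ! identity name=osd ! fakesink")
osd_sink_pad = pipeline.get_by_name("osd").get_static_pad("sink")
osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER, blocking_probe, 0)
```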

I ask this because I currently plan on using two pipelines: 1) DeepStream for metadata generation, and 2) another one for my customized application. I need to send the metadata to the second pipeline and process it there.

For now I have tried using zmq, multiprocessing queues, and manager objects. All of them throttle the FPS because of the work of converting a frame to a numpy array and then sending it to the second pipeline. Because this happens inside the probe function, the DeepStream FPS also gets throttled while it waits for these steps to complete (since the probe is blocking).
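Roughly the pattern being described (a minimal sketch, placeholder names; it also assumes the RGBA/nvvideoconvert setup that pyds.get_nvds_buf_surface() requires): both the numpy copy and the queue put run on the streaming thread, which is where the throttling comes from.

```python
import multiprocessing

import numpy as np
import pyds
from gi.repository import Gst

frame_queue = multiprocessing.Queue(maxsize=4)

def sender_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # The copy to numpy and the queue.put() both run synchronously on the
        # streaming thread, so the whole pipeline waits for them.
        frame = np.array(
            pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id),
            copy=True,
        )
        frame_queue.put(frame)        # blocks even longer if the consumer is slow
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```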

I would ideally like a situation where DeepStream could run at its own pace and the other pipeline just took the metadata of whatever it processed last (it does not need to receive all frames). Is it possible to have a probe function work asynchronously like this? Are there any ways I could modify the architecture (using tees and queues) to introduce fake asynchronicity?

Can you try the pipeline below:

src -> nvstreammux -> ..... -> tee -> queue -> ds pipeline
                                  |-> queue (leaky=2) -> appsink

And do the heavy operations in the new-sample callback of the appsink.
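Something like this rough sketch (the DeepStream elements are stubbed out with test elements, so treat the chain and names as placeholders): the leaky queue drops stale buffers when the appsink branch falls behind, so the main branch keeps its own pace, and the slow work moves into the new-sample callback instead of a probe.

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "videotestsrc ! tee name=t "
    "t. ! queue ! fakesink "                  # stands in for the DS branch
    "t. ! queue leaky=2 max-size-buffers=1 ! "
    "appsink name=sink emit-signals=true max-buffers=1 drop=true"
)

def on_new_sample(appsink):
    sample = appsink.emit("pull-sample")      # newest sample; older ones were dropped
    if sample is not None:
        buf = sample.get_buffer()
        # Do the heavy/slow work here; only this branch is delayed, and the
        # leaky queue discards frames the consumer cannot keep up with.
    return Gst.FlowReturn.OK

appsink = pipeline.get_by_name("sink")
appsink.connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
```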


Hey, thanks for replying

Just to confirm

I am trying out the following pipeline

src -> nvstreammux -> nvinfer -> tracker -> vidconv -> osd -> tee -> queue -> probe fn1 (just measures FPS) -> sink
                                                                 |-> queue (leaky=2) -> probe fn2 (sends info to the other app) -> sink

(the sink is common to both branches)

Should this pipeline allow DeepStream to move at its own pace without being affected by the sending probe?

I am worried that, since probes are blocking, the sending probe may still hold up the pipeline while it processes.
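(For reference, the second branch above can be wired roughly like this; element names are placeholders and the surrounding pipeline is assumed to exist already. leaky=2 makes the queue drop old buffers instead of backpressuring the tee.)

```python
# Hypothetical wiring of the second tee branch; "pipeline", "tee" and "sink2"
# are assumed to already exist elsewhere in the application.
queue2 = Gst.ElementFactory.make("queue", "queue_branch2")
queue2.set_property("leaky", 2)              # 2 = leak downstream (drop old buffers)
queue2.set_property("max-size-buffers", 1)   # keep at most one pending buffer

pipeline.add(queue2)
tee_pad = tee.get_request_pad("src_%u")      # request a second output pad on the tee
tee_pad.link(queue2.get_static_pad("sink"))
queue2.link(sink2)                           # e.g. a fakesink whose pad carries probe fn2
```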

Check appsink to see if it can help; we also have a sample, deepstream_appsrc_test_app.c.

Hey, I'm sorry, I forgot to mention that I'm using the Python API.

I've tried the following things:

- Both branches of the tee have a fakesink; the probe function in branch 1 has no delay, and the probe function in branch 2 has a 2 s delay after info.get_buffer().

The result I got was that the time.sleep() in the probe function of branch 2 slowed the whole pipeline down to processing a frame every 2 s.

- Branch 2 of the tee has an appsink; the probe function in branch 1 has no delay, and the probe function in branch 2 has a 2 s delay after info.get_buffer().

The result I got was that the behaviour was the same, but this time the pipeline gets stuck at some point and stops working (requiring me to stop the container).

Please take a look at deepstream_appsrc_test_app.c, or you can search online for how to use appsink.
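A rough sketch of the idea (names assumed, not taken from the sample): pull the sample inside the new-sample callback and hand it straight to a worker thread so the callback returns immediately; with max-buffers=1 and drop=true set on the appsink, that branch cannot back up and stall the rest of the pipeline.

```python
import queue
import threading

from gi.repository import Gst

latest = queue.Queue(maxsize=1)

def worker():
    while True:
        sample = latest.get()
        # ... slow custom processing / sending to the second application ...

threading.Thread(target=worker, daemon=True).start()

def on_new_sample(appsink):
    sample = appsink.emit("pull-sample")   # must pull, or the appsink backs up
    try:
        latest.get_nowait()                # discard a stale, unconsumed sample
    except queue.Empty:
        pass
    latest.put_nowait(sample)              # hand over only the newest one
    return Gst.FlowReturn.OK

# The appsink itself should be created with emit-signals=true, max-buffers=1,
# drop=true and connected via appsink.connect("new-sample", on_new_sample).
```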
