Please provide complete information as applicable to your setup.
• Hardware Platform == GPU
• DeepStream Version == 6.2, Python
• TensorRT Version == 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version == 525.105.17
• Issue Type == Question
I have a pipeline that connects to several RTSP streams, decodes them on the GPU, and then batches the frames. I’d like to hand this batch of frames off to some other Python code for inference etc.; I’m not looking to use pgie for this use case.
rtspsrc_0 → depay → h264parse → nvv4l2decoder → nvvideoconvert → capsfilter \
rtspsrc_1 → depay → h264parse → nvv4l2decoder → nvvideoconvert → capsfilter \
rtspsrc_2 → depay → h264parse → nvv4l2decoder → nvvideoconvert → capsfilter → nvstreammux → appsink
rtspsrc_3 → depay → h264parse → nvv4l2decoder → nvvideoconvert → capsfilter /
rtspsrc_n → depay → h264parse → nvv4l2decoder → nvvideoconvert → capsfilter /
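For context, this is roughly how I’m linking each branch into nvstreammux (a sketch only: `capsfilter` and `streammux` stand for already-created elements, and `get_request_pad` is the request-pad API in the GStreamer version that ships with DeepStream 6.2):

```python
def link_branch_to_mux(capsfilter, streammux, index):
    """Link one decoded branch's capsfilter src pad to a requested
    nvstreammux sink pad (sink_0, sink_1, ...). Sketch only."""
    from gi.repository import Gst  # imported here so the sketch stands alone

    sinkpad = streammux.get_request_pad(f"sink_{index}")
    srcpad = capsfilter.get_static_pad("src")
    return srcpad.link(sinkpad) == Gst.PadLinkReturn.OK
```

Each rtspsrc branch gets its own `sink_%u` pad on the muxer this way.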
appsink = Gst.ElementFactory.make('appsink', 'appsink')
appsink.set_property("emit-signals", True)
sink_handler_id = appsink.connect("new-sample", on_new_batch)
Would this work to extract the decoded frames from the pipeline? (From the on_new_batch callback I’d need to push the batched numpy frames onto a queue for processing elsewhere.)
How should the signal handler (the on_new_batch callback) parse the nvstreammux batch to produce, say, a list of numpy arrays?
def on_new_batch(app_sink):
    sample = app_sink.emit("pull-sample")  # signal emit works without the GstApp bindings
    gst_buffer = sample.get_buffer()
    # Parse the NvDsBatchMeta here somehow?
    return Gst.FlowReturn.OK
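Here is my current sketch of that callback, modeled on the deepstream-imagedata-multistream sample. It assumes the capsfilter before nvstreammux forces `video/x-raw(memory:NVMM), format=RGBA` (which `pyds.get_nvds_buf_surface` requires), and `frame_queue` is a hypothetical queue consumed by the inference code elsewhere:

```python
import queue

import numpy as np

frame_queue = queue.Queue()  # hypothetical: consumed by inference code elsewhere

def on_new_batch(app_sink):
    # PyGObject and the DeepStream Python bindings are imported here so the
    # sketch stands alone; normally these would live at module top.
    from gi.repository import Gst
    import pyds

    sample = app_sink.emit("pull-sample")
    if sample is None:
        return Gst.FlowReturn.ERROR
    gst_buffer = sample.get_buffer()

    # nvstreammux attaches NvDsBatchMeta to the buffer; walk its frame list.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    frames = []
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map this frame's surface as a numpy array; needs RGBA caps upstream.
        surface = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frames.append(np.array(surface, copy=True, order="C"))  # copy out of NVMM
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    frame_queue.put(frames)
    return Gst.FlowReturn.OK
```

Is this the right pattern, or is there a cheaper way to get the batch out without the per-frame copy?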