Passing NvStreamMux (new) batch to AppSink for Post Processing

Please provide complete information as applicable to your setup.
• Hardware Platform == GPU
• DeepStream Version == 6.2, Python
• TensorRT Version == 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version == 525.105.17
• Issue Type == Question

I have a pipeline where I access several RTSP streams, perform GPU decoding, and then batch the frames. I’d like to output this batch of frames to some other Python code for inference, etc.; I’m not looking to use pgie for this use case.

rtspsrc_0 → depay → h264parse → nvvideoconvert → capsfilter \
rtspsrc_1 → depay → h264parse → nvvideoconvert → capsfilter \
rtspsrc_2 → depay → h264parse → nvvideoconvert → capsfilter → nvstreammux → appsink
rtspsrc_3 → depay → h264parse → nvvideoconvert → capsfilter /
rtspsrc_n → depay → h264parse → nvvideoconvert → capsfilter /
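Roughly, one source branch spelled out as a Gst.parse_launch string (the nvv4l2decoder, the H.264 depay/parse pair, the RTSP URL and the RGBA caps are my guesses at what the decode and capsfilter stages would be; RGBA is what I’d need later to map frames to numpy):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# One RTSP branch feeding a shared nvstreammux; repeat the branch per camera,
# bumping the mux.sink_%u pad index. The URL is a placeholder.
# nvstreammux properties (batch-size, or width/height on the legacy mux) are omitted here.
pipeline = Gst.parse_launch(
    "nvstreammux name=mux ! appsink name=batch_sink emit-signals=true "
    "rtspsrc location=rtsp://camera-0/stream ! rtph264depay ! h264parse ! nvv4l2decoder "
    "! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! mux.sink_0"
)
appsink = pipeline.get_by_name("batch_sink")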

    appsink = Gst.ElementFactory.make('appsink', 'appsink')
    appsink.set_property("emit-signals", True)
    sink_handler_id = appsink.connect("new-sample", on_new_batch)
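I’m also thinking of capping the appsink queue so a slow consumer doesn’t stall the rest of the pipeline (not sure it’s strictly needed; max-buffers and drop are standard appsink properties):

    appsink.set_property("max-buffers", 4)  # keep at most 4 batched buffers queued in the appsink
    appsink.set_property("drop", True)      # drop the oldest batch instead of blocking upstream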

Would this work to extract the decoded frames from the pipeline? (From the on_new_batch callback I’d need to pass the batched numpy frames to some queue for processing elsewhere.)
How should the signal handler (the on_new_batch callback) parse the nvstreammux batch to provide, say, a list of numpy arrays?

def on_new_batch(app_sink):
    sample = app_sink.pull_sample()  # needs `from gi.repository import GstApp` for this method
    # Parse the NvDsBatchMeta here somehow?
    return Gst.FlowReturn.OK
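Here’s a rough sketch of what I’m imagining for the callback, adapted from the pad-probe examples in the Python bindings (pyds). frame_queue is just a placeholder for whatever queue I’d hand the frames to, and I’m assuming the frames reach nvstreammux as RGBA (with CUDA unified memory on dGPU) so get_nvds_buf_surface can map them:

import numpy as np
import pyds
from gi.repository import Gst

def on_new_batch(app_sink):
    # Pull the batched buffer that nvstreammux pushed into the appsink
    # (emit("pull-sample") is equivalent to pull_sample() when GstApp is imported).
    sample = app_sink.emit("pull-sample")
    if sample is None:
        return Gst.FlowReturn.EOS
    gst_buffer = sample.get_buffer()

    # NvDsBatchMeta hangs off the Gst.Buffer, exactly as in the pad-probe examples.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

    frames = []
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Maps one frame of the batch into a numpy array; requires RGBA caps and,
        # on dGPU, CUDA unified memory upstream of nvstreammux.
        frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frames.append(np.array(frame, copy=True))  # copy before the buffer is released
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    frame_queue.put(frames)  # hypothetical queue.Queue shared with the consumer thread
    return Gst.FlowReturn.OK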

Please refer to the appsink DeepStream sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-appsrc-test/deepstream_appsrc_test_app.c.

Thanks, I know there is that appsink C sample. I’m wondering how it works in Python: if I call appsink.pull_sample(), will the returned object be an NvDsBatchMeta that I can parse the way it’s done in some of the other probe examples?

nvstreammux will output a batched hardware buffer. What will you do with it in the appsink?

Really I’m just looking to decode the RTSP feeds with GPU-accelerated decode, use nvstreammux to batch the frames and attach timestamps, and then extract the batched frames so that I can test different inference models/trackers. I don’t want to commit the time to develop the custom code required to implement those models and trackers as pgie or nvtracker.

There is no Python sample for appsink; please refer to the new_sample callback in deepstream_appsrc_test_app.c.
Please refer to deepstream-test1 for nvstreammux and nvinfer usage.
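For reference, the streammux linking pattern in the Python version of that sample (deepstream_test_1.py) looks roughly like this; the element variable names are placeholders and error checks are omitted:

sinkpad = streammux.get_request_pad("sink_0")  # request one sink_%u pad per source
srcpad = decoder.get_static_pad("src")         # last element of the source branch
srcpad.link(sinkpad)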
