Extracting frames and saving them to files from nvinfer

• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version 6.2

I am trying to save frames from multiple rtsp streams to files. This seems to be similar to the example in deepstream_imagedata-multistream, but I keep getting the following errors:

If I try to attach a probe to the source pad of the source bin like so:

    pad = analytics.get_static_pad("src")
    pad.add_probe(
        Gst.PadProbeType.BUFFER, save_image_probe, 0
    )

Then I am able to read the metadata (using gst_buffer_get_nvds_batch_meta), but get_nvds_buf_surface fails with the error “Currently we only support RGBA color Format”.

If I try to attach the probe to the tiler, as in the example:

    tiler_sink_pad = tiler.get_static_pad("sink")
    tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, save_image_probe, 0)

Then I am no longer able to read the metadata, and get the error AttributeError: 'NoneType' object has no attribute 'frame_meta_list' (the result of gst_buffer_get_nvds_batch_meta is None). Could you tell me the correct approach for what I am trying to achieve here? I would like to add a probe that saves the image from each source (not the tiled output) to a different file.
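For reference, this is the shape of callback I am trying to write, modelled on deepstream_imagedata-multistream (frame_filename is my own helper, and I am assuming the buffers reaching this pad are already RGBA):

```python
def frame_filename(out_dir, stream_id, frame_num):
    # One file per source and frame, e.g. frames/stream_2/frame_000123.jpg
    return f"{out_dir}/stream_{stream_id}/frame_{frame_num:06d}.jpg"

def save_image_probe(pad, info, u_data):
    # Heavy imports kept inside the callback so the helpers above can be
    # loaded without a DeepStream install (pyds = DeepStream Python bindings).
    import os
    import cv2
    import numpy as np
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Only valid when the buffer is RGBA, hence the nvvideoconvert
        # + capsfilter upstream of this pad.
        rgba = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        bgr = cv2.cvtColor(np.array(rgba, copy=True, order="C"),
                           cv2.COLOR_RGBA2BGR)
        path = frame_filename("frames", frame_meta.pad_index,
                              frame_meta.frame_num)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        cv2.imwrite(path, bgr)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

pad_index distinguishes the streams, so each source ends up in its own directory rather than in the tiled output.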

Note: the sources/source bin pads are created like this:

    Gst.Bin.add(new_bin, uri_decode_bin)
    bin_pad = new_bin.add_pad(
        Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC)
    )

Could you try adding an nvvideoconvert to convert the format to RGBA, as in the demo deepstream_imagedata-multistream.py?

I am already using the nvvideoconvert element. Here are the linked stages of my pipeline:

    streammux.link(streammux_pgie_queue)
    streammux_pgie_queue.link(pgie)
    pgie.link(pgie_tracker_queue)
    pgie_tracker_queue.link(tracker)
    tracker.link(tracker_analytics_queue)
    tracker_analytics_queue.link(analytics)
    analytics.link(analytics_tiler_queue)
    analytics_tiler_queue.link(nvdslogger)
    nvdslogger.link(tiler)
    tiler.link(tiler_conv_queue)
    tiler_conv_queue.link(nvvidconv)
    nvvidconv.link(conv_osd_queue)
    conv_osd_queue.link(nvosd)
    nvosd.link(osd_tee_queue)
    osd_tee_queue.link(output_tee)

As far as I can see the nvvideoconvert is configured the same as in the example. Could it be a problem with the ordering of my pipeline elements? If I add the probe to the tiler (which is after the nvvideoconvert), why am I no longer able to access the frame metadata?

1. You didn’t add the capsfilter in your code:

        caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
        filter1 = Gst.ElementFactory.make("capsfilter", "filter1")

2. The tiler will destroy the batch.
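For example (a minimal sketch; element names are placeholders): the Gst.Caps object is set as a property on the capsfilter element, and it is the element, not the caps, that gets added to the pipeline and linked.

```python
def make_rgba_capsfilter(name="filter1"):
    # GStreamer imported inside the helper so the sketch is self-contained.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
    filter1 = Gst.ElementFactory.make("capsfilter", name)
    filter1.set_property("caps", caps1)  # caps are a property, never linked
    return filter1
```

Then pipeline.add(filter1) and link it between two elements, e.g. nvvidconv.link(filter1) followed by filter1.link(tiler). Passing the Gst.Caps object itself to link() raises a TypeError, because link() expects a Gst.Element.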

Where should I link the capsfilter? I tried it in a few places (after analytics, before the tiler, after nvvidconv) but I always get the error TypeError: argument element: Expected Gst.Element, but got gi.repository.Gst.Caps.

Given that I want the probe callback to run for every stream individually, I guess the conversion needs to go before the tiler (since the probe itself must also be attached before the tiler)?
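Concretely, I am considering restructuring the pre-tiler stage like this, with the probe attached to the capsfilter's src pad (nvvidconv2 and filter1 would be new elements; the other names match my pipeline listing above). Does this ordering look right?

```python
def link_conversion_before_tiler(analytics, analytics_tiler_queue,
                                 nvvidconv2, filter1, nvdslogger, tiler):
    # Convert to RGBA per stream *before* the tiler collapses the batch,
    # so a probe on filter1's src pad still sees per-source frames.
    analytics.link(analytics_tiler_queue)
    analytics_tiler_queue.link(nvvidconv2)  # nvvideoconvert -> RGBA
    nvvidconv2.link(filter1)                # capsfilter pins format=RGBA
    filter1.link(nvdslogger)
    nvdslogger.link(tiler)
```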

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

It can resolve the “Currently we only support RGBA color Format” issue you quoted.

Yes, you are right.
