Feed appsrc with numpy arrays of different resolution

I have a deepstream pipeline that works correctly on multiple video streams.
I would like to use the same pipeline to create APIs. In brief, instead of supplying video streams to the pipeline using uridecodebin, I would like to supply numpy arrays using appsrc.
I am using Python3, Deepstream 6.0.1 and the official Nvidia deepstream develop container. I am using a Tesla T4.
I found an example of how to push buffers to appsrc here: https://github.com/jackersson/gst-python-tutorials/blob/master/launch_pipeline/run_appsrc.py .
This seems to work fine if all the numpy arrays have the same resolution.
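For reference, the core of that fixed-resolution pattern looks roughly like this (a sketch reconstructed from the tutorial, not quoted verbatim; appsrc and img are assumed to exist in the surrounding pipeline code):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import numpy as np

def ndarray_to_gst_buffer(array: np.ndarray) -> Gst.Buffer:
    # Wrap the array bytes in a Gst.Buffer; the frame must already match
    # the resolution/format fixed on the appsrc caps at startup.
    return Gst.Buffer.new_wrapped(array.tobytes())

appsrc.emit("push-buffer", ndarray_to_gst_buffer(img))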
However, in my case I would like to push images of different resolutions to the pipeline.
From the official GStreamer documentation for appsrc (https://gstreamer.freedesktop.org/documentation/app/appsrc.html) it seems that I should push a GStreamer sample and signal push-sample, since the documentation for push-sample states:

This function set the appsrc caps based on the caps in the sample and reset the caps if they change.

However, when doing this, the pipeline works fine for a few images (usually 2 to 5) and then throws the following error:

0:00:14.356063457 17474      0x2b76ca0 ERROR                nvinfer gstnvinfer.cpp:1150:get_converted_buffer:<primary-inference> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:00:14.356103181 17474      0x2b76ca0 WARN                 nvinfer gstnvinfer.cpp:1472:gst_nvinfer_process_full_frame:<primary-inference> error: Buffer conversion failed
MainThread 2022-09-15 12:12:40,388 - pipeline.bus_call - ERROR - Bus call: Error: gst-stream-error-quark: Buffer conversion failed (1): gstnvinfer.cpp(1472): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline0/GstNvInfer:primary-inference

Unable to release device memory. 
Unable to release host memory. 
Cuda failure: status=700
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=700
Error(-1) in buffer allocation

** (python3.6:17474): CRITICAL **: 12:12:41.256: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
MainThread 2022-09-15 12:12:41,256 - pipeline.bus_call - ERROR - Bus call: Error: gst-resource-error-quark: failed to activate bufferpool (13): gstbasetransform.c(1670): default_prepare_output_buffer (): /GstPipeline:pipeline0/GstBin:source-bin-00/Gstnvvideoconvert:nvvideoconvert_0:
failed to activate bufferpool

MainThread 2022-09-15 12:12:41,256 - pipeline.bus_call - ERROR - Bus call: Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstAppSrc:appsrc_0:
streaming stopped, reason error (-5)

My pipeline is huge and I can’t post the entire source code, but here’s the part of the code that creates the first elements of the pipeline, including the appsrc:

        # INTERNAL BIN ELEMENTS
        # Appsrc to feed numpy arrays to the pipeline
        appsrc = create_gst_elemement("appsrc", f"appsrc_{index}")
        caps_in = Gst.Caps.from_string("video/x-raw,format=RGBA,width=640,height=640,framerate=30/1")
        appsrc.set_property('caps', caps_in)

        # Videoconverter
        nvvideoconvert = create_gst_elemement("nvvideoconvert", f"nvvideoconvert_{index}")
        nvvideoconvert.set_property("nvbuf-memory-type", int(pyds.NVBUF_MEM_CUDA_UNIFIED))
        nvvideoconvert.set_property("output-buffers", settings.DEEPSTREAM_NVVIDEOCONVERT_OUTPUT_BUFFERS)

        # Caps filter
        caps_filter = create_gst_elemement("capsfilter", f"filter_numpy_frame_{index}")
        caps_filter.set_property("caps", Gst.Caps.from_string(f"video/x-raw(memory:NVMM), format={format}"))

        # LINK ELEMENTS
        appsrc.link(nvvideoconvert)
        nvvideoconvert.link(caps_filter)
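For completeness, appsrc also exposes the standard GstAppSrc properties that are commonly configured when frames are pushed on demand. A minimal sketch (the property names are from the GstAppSrc API; the chosen values are assumptions on my part, not a verified DeepStream configuration):

# Sketch of additional appsrc configuration; the values are assumptions.
appsrc.set_property("format", Gst.Format.TIME)  # buffer timestamps interpreted as time
appsrc.set_property("is-live", True)            # frames arrive on demand (API calls)
appsrc.set_property("do-timestamp", True)       # let appsrc stamp PTS on push
appsrc.set_property("block", True)              # apply backpressure instead of growing the internal queue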

Then I add numpy arrays to the pipeline in a loop, using the following code:

                    img = cv2.imread("/opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.jpg")
                    width, height = random.randint(400, 1280), random.randint(400, 1280)
                    img = cv2.resize(img, (width, height))  # cv2.resize expects (width, height)
                    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGBA)
                    # buffer = ndarray_to_gst_buffer(img)
                    # self.source_bin_appsrc.emit("push-buffer", buffer)
                    sample = ndarray_to_gst_sample(img)
                    self.source_bin_appsrc.emit("push-sample", sample)

where ndarray_to_gst_sample is defined as:

def ndarray_to_gst_sample(array: np.ndarray) -> Gst.Sample:
    # Convert array to buffer
    buffer = Gst.Buffer.new_wrapped(array.tobytes())
    # Convert buffer to sample
    height, width, channels = array.shape
    caps_str = f"video/x-raw,format=RGBA,width={width},height={height},framerate=30/1"
    caps = Gst.Caps.from_string(caps_str)
    sample = Gst.Sample.new(buffer, caps)
    return sample
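For completeness, here is a sketch of how the buffers could be stamped explicitly instead of relying on appsrc's do-timestamp (illustrative only; the 30 fps duration is an arbitrary assumption):

def ndarray_to_gst_sample_timestamped(array: np.ndarray, pts_ns: int) -> Gst.Sample:
    # Same as above, but with an explicit PTS and duration on the buffer.
    buffer = Gst.Buffer.new_wrapped(array.tobytes())
    buffer.pts = pts_ns
    buffer.duration = Gst.util_uint64_scale_int(1, Gst.SECOND, 30)  # assumes ~30 fps
    height, width, _ = array.shape
    caps = Gst.Caps.from_string(
        f"video/x-raw,format=RGBA,width={width},height={height},framerate=30/1"
    )
    return Gst.Sample.new(buffer, caps, None, None)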

If I use push-buffer with numpy arrays of the same resolution, everything works fine; but when I use push-sample with numpy arrays of different resolutions (as in the script above), the error above is thrown at random.
I also struggle to initialize appsrc without a predefined resolution or frame rate. I don't know what the purpose of the frame rate would be in my situation: I will be feeding images to the pipeline whenever I receive an API call, so I won't have a predefined frame rate. The same goes for the resolution: the resolution of each image will depend on the API call.
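On the frame-rate question, GStreamer caps accept framerate=0/1 to mean variable frame rate, so the per-image caps could in principle be built without inventing a rate (whether the downstream DeepStream elements accept this is a separate question):

# framerate=0/1 is GStreamer's convention for variable frame rate;
# width/height come from the individual image rather than a fixed config.
height, width = img.shape[:2]
caps = Gst.Caps.from_string(
    f"video/x-raw,format=RGBA,width={width},height={height},framerate=0/1"
)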

To sum up, how should I edit my code so that I can feed numpy images of different resolutions to appsrc?

Thank you


I've attached the pipeline schema as a PDF.
pipeline.pdf (32.6 KB)

Currently nvstreammux does not support dynamic buffer management. You cannot change the src caps while the pipeline is in the PLAYING state.

Hi @Fiona.Chen, thank you for your reply. In that case I will simply add logic to resize the images to a common resolution before feeding them to the pipeline. It's not a big deal.
Thank you!
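For anyone landing here later, a minimal sketch of that workaround, i.e. letterboxing every image to one fixed resolution before pushing it (the 1280x720 target and the black padding are arbitrary choices):

import cv2
import numpy as np

TARGET_W, TARGET_H = 1280, 720  # fixed resolution matching the appsrc caps

def letterbox(img: np.ndarray) -> np.ndarray:
    # Scale to fit inside the target while preserving aspect ratio,
    # then pad with black so every frame has identical dimensions.
    h, w = img.shape[:2]
    scale = min(TARGET_W / w, TARGET_H / h)
    new_w, new_h = int(w * scale), int(h * scale)
    resized = cv2.resize(img, (new_w, new_h))
    canvas = np.zeros((TARGET_H, TARGET_W, img.shape[2]), dtype=img.dtype)
    x, y = (TARGET_W - new_w) // 2, (TARGET_H - new_h) // 2
    canvas[y:y + new_h, x:x + new_w] = resized
    return canvas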
