Get GPU memory buffer from GStreamer without copying to CPU

We are having trouble figuring out how to get GPU buffers from GStreamer using a callback on an appsink. Below is our current pipeline:

“filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! appsink sync=False emit-signals=True name=sink”
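
For reference, we create and start this pipeline from C roughly as follows (a trimmed sketch; gst_init(), the main loop, and most error handling are omitted):

    GError *err = NULL;

    // Build the pipeline from the launch string above
    GstElement *pipeline = gst_parse_launch (
        "filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! "
        "nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! "
        "appsink sync=False emit-signals=True name=sink", &err);

    if (!pipeline) {
        g_printerr ("Failed to create pipeline: %s\n", err->message);
        return -1;
    }

    gst_element_set_state (pipeline, GST_STATE_PLAYING);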

I’ve seen a number of similar questions (e.g. “How can I get gpu memory buffer from gstreamer?”) with potential answers, but they are either Jetson- or DeepStream-specific. Neither case applies to us: we aren’t using a Jetson or DeepStream; we are building our own specific GStreamer pipeline. We’re stuck on how to get the actual GPU buffers without them being copied to the CPU.

Any help on how to actually get the GPU buffer would be greatly appreciated!

This is already a DeepStream pipeline. Why do you say “Neither case applies to us: we aren’t using a Jetson or DeepStream; we are building our own specific GStreamer pipeline”?

How will you handle the GPU buffer with appsink if you have a method to get it?

Hi @Fiona.Chen, thanks so much for the response!

So we will be applying our own custom transformations to the buffer, transformations we cannot do in DeepStream. The issue is simply that we haven’t been able to actually grab the buffer. For context, we have defined a callback on the appsink using the following two lines of code:

    sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_signal_connect (sink, "new-sample", G_CALLBACK (on_buffer), &call_back_data);

where in the function “on_buffer” we want to grab the actual buffer. The “on_buffer” function is shown below (with our custom transformations omitted, of course). In short, if the buffer is on the CPU, I am able to access it without a problem simply by reading “info.data”. But with the pipeline defined above, my understanding is that the buffer will be in NVMM memory, so I am stuck on how to access it (we want to access it while it is still on the GPU).

Any help would be greatly appreciated!

GstFlowReturn on_buffer(GstAppSink * sink, Callback_Data* callback_data) {
    // Pointer to the sample pulled from the appsink
    GstSample *sample;

    // Retrieve the sample holding the buffer (the "new-sample" handler
    // must return a GstFlowReturn, hence the signature above)
    g_signal_emit_by_name(sink, "pull-sample", &sample);

    // If the sample is valid
    if (sample) {
        // Getting the buffer
        GstBuffer *buffer = gst_sample_get_buffer(sample);

        // Mapping the buffer so its data is reachable through info.data
        GstMapInfo info;
        if (gst_buffer_map(buffer, &info, GST_MAP_READ)) {

            // ... our custom transformations (omitted) ...

            gst_buffer_unmap(buffer, &info);
        }

        // Unreferencing the sample (only when one was actually received)
        gst_sample_unref(sample);
    }

    return GST_FLOW_OK;
}

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

What will you do if you can access it? Run some algorithm with CUDA?
Have you read the DeepStream documentation?
Why did you choose DeepStream plugins without any intention of doing inference?

Hi @Fiona.Chen,

  1. “What will you do if you can access it? Run some algorithm with CUDA?”

Yes, we will be applying our own custom algorithms with CUDA.

  2. “Have you read the DeepStream documentation?”

Yes, we have investigated and used DeepStream extensively. Unfortunately, it does not meet our requirements, as it is not flexible enough. Specifically, the issue is attaching the results of our algorithms to the raw frames inside DeepStream so that we can subsequently apply further algorithms to them. NVIDIA has not released the source code for “NvBufSurfTransformAsync”, declared in the header file “deepstream/deepstream-6.1/sources/includes/nvbufsurftransform.h” in the DeepStream codebase. As a result, we cannot modify “NvBufSurfTransformAsync” to meet our needs and therefore can’t use DeepStream. (A sketch of the transform call pattern we would need to modify follows this list.)

  3. “Why did you choose DeepStream plugins without any intention of doing inference?”

See number 2. We are not intentionally using DeepStream plugins; rather, we are using NVIDIA’s GStreamer plugins, which of course happen to be tightly integrated with DeepStream, since DeepStream sits on top of GStreamer.
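
For concreteness, the synchronous call pattern we would have liked to adapt looks roughly like this (a sketch based on the public header; “src” and “dst” are assumed pre-allocated NvBufSurface batches, and the flag/filter choices are illustrative):

    NvBufSurfTransformParams params = {0};

    // Scale with the default interpolation filter; crop/flip flags are
    // left out for brevity.
    params.transform_flag = NVBUFSURF_TRANSFORM_FILTER;
    params.transform_filter = NvBufSurfTransformInter_Default;

    // The blocking variant; NvBufSurfTransformAsync() adds a sync object
    // but ships without source, which is the limitation described above.
    if (NvBufSurfTransform (src, dst, &params) != NvBufSurfTransformError_Success)
        g_printerr ("NvBufSurfTransform failed\n");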

If you could provide guidance on how to access the frame buffer while it is still on the GPU, as discussed in my original question, that would be greatly appreciated!

Thanks so much!

Please tell us the following information since the solution may be different for different platforms.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: Not applicable, but we use 6.1
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 8+
• NVIDIA GPU Driver Version (valid for GPU only): 510.85

Please refer to the code here to get NvBufSurface from GstBuffer: Deepstream sample code snippet - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

The NvBufSurface API definition: NVIDIA DeepStream SDK API Reference: NvBufSurface Struct Reference
NVIDIA DeepStream SDK API Reference: NvBufSurfaceParams Struct Reference
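
The core pattern in that snippet is roughly as follows (a sketch, not the verbatim sample; in the linked code the probe is attached to a PGIE element, but any pad carrying NVMM buffers works, and the function name here is illustrative):

    // Pad probe in the style of the linked sample (requires gst/gst.h
    // and nvbufsurface.h). With memory:NVMM caps, the mapped data is an
    // NvBufSurface descriptor rather than raw pixels.
    static GstPadProbeReturn
    pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
    {
        GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
        GstMapInfo map;

        if (!gst_buffer_map (buf, &map, GST_MAP_READ))
            return GST_PAD_PROBE_OK;

        NvBufSurface *surf = (NvBufSurface *) map.data;
        g_print ("frames in batch: %u, memType: %d\n",
                 surf->numFilled, surf->memType);

        gst_buffer_unmap (buf, &map);
        return GST_PAD_PROBE_OK;
    }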

Hi @Fiona.Chen,

Apologies for the late reply; I was only able to test out your solution now. Thank you very much for it.

Unfortunately, the code you provided uses a PGIE (I assume it is an “nvinfer” element, as its type is not stated in the code), but as discussed above we are not using DeepStream, so getting the “sink” from the PGIE is not applicable in our case.

Is there a possible substitute for the PGIE (“nvinfer” plugin)?

Thank you,

Aidan

You can also get NvBufSurface from GstBuffer after nvvideoconvert.

    GstMapInfo inmap;

    if (!gst_buffer_map (inbuf, &inmap, GST_MAP_READ))
        goto error;

    NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;

Hi @Fiona.Chen,

Thank you very much for the response! My only question is: do you have an example somewhere that shows how to implement your solution, i.e. how to define the callback so that we can get the NvBufSurface from the GstBuffer?

My understanding is that we cannot do this inside the callback we currently have defined on the appsink, as the frame would already be on the host by then. So I’m guessing we may need to create a custom plugin to achieve this? Is there already an example of that?

Thank you,

Aidan

@Fiona.Chen

Never mind! Figured it out! Thanks so much for your help!

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Since you are using nvvideoconvert, the APIs can be used anywhere a GstBuffer is available. So you can get the NvBufSurface in the appsink.
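
Putting the pieces together, a minimal sketch of an appsink “new-sample” callback along these lines (struct and field names follow the NvBufSurface API linked above; the CUDA processing step is a hypothetical placeholder for your own kernel):

    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>
    #include "nvbufsurface.h"

    static GstFlowReturn
    on_new_sample (GstAppSink *sink, gpointer user_data)
    {
        GstSample *sample = gst_app_sink_pull_sample (sink);
        if (!sample)
            return GST_FLOW_ERROR;

        GstBuffer *buffer = gst_sample_get_buffer (sample);
        GstMapInfo map;

        if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
            // With memory:NVMM caps the mapped data is the NvBufSurface
            // descriptor, not the pixels themselves.
            NvBufSurface *surf = (NvBufSurface *) map.data;
            NvBufSurfaceParams *frame = &surf->surfaceList[0];

            // On dGPU, nvvideoconvert typically outputs
            // NVBUF_MEM_CUDA_DEVICE, so dataPtr is a device pointer that
            // can be handed to CUDA without any copy to the CPU.
            if (surf->memType == NVBUF_MEM_CUDA_DEVICE) {
                // run_my_cuda_kernel() is a hypothetical placeholder:
                // run_my_cuda_kernel (frame->dataPtr, frame->width,
                //                     frame->height, frame->pitch);
            }

            gst_buffer_unmap (buffer, &map);
        }

        gst_sample_unref (sample);
        return GST_FLOW_OK;
    }

The same mapping works from a pad probe or from the appsink callback, since the NvBufSurface travels with the GstBuffer until something downstream forces a copy to system memory.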

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.