Get GPU memory buffer from GStreamer without copying to CPU

We are having trouble figuring out how to get the GPU buffers from GStreamer using an appsink callback. Below is our current pipeline:

“filesrc location=test.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! appsink sync=false emit-signals=true name=sink”
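For reference, this is roughly how we build that pipeline in code (a minimal sketch using gst_parse_launch, with only basic error handling; the file name test.mp4 is just a placeholder):

    #include <gst/gst.h>

    // Build the pipeline from the same launch string as above
    static GstElement *build_pipeline(void) {
        GError *error = NULL;
        GstElement *pipeline = gst_parse_launch(
            "filesrc location=test.mp4 ! qtdemux ! h264parse ! "
            "nvv4l2decoder ! nvvideoconvert ! "
            "video/x-raw(memory:NVMM),format=RGBA ! "
            "appsink sync=false emit-signals=true name=sink",
            &error);
        if (!pipeline) {
            g_printerr("Failed to build pipeline: %s\n",
                       error ? error->message : "unknown error");
            g_clear_error(&error);
        }
        return pipeline;
    }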

I’ve seen a number of similar questions (ex: How can I get gpu memory buffer from gstreamer?) that have potential answers, but they are either Jetson- or DeepStream-specific. Neither case applies to us, as we aren’t using a Jetson or DeepStream; we are building our own GStreamer pipeline. But we’re stuck on how to get the actual GPU buffers without them being copied to the CPU.

Any help on how to actually get the GPU buffer would be greatly appreciated!

This is already a DeepStream pipeline. Why do you say “Neither case applies to us, as we aren’t using a Jetson or DeepStream; we are building our own GStreamer pipeline.”?

How will you handle the GPU buffer with appsink once you have a method to get it?

Hi @Fiona.Chen thanks so much for the response!

So we will be applying our own custom transformations to the buffer, transformations that we cannot do in DeepStream. The issue is simply that we haven’t been able to actually grab the buffer. For context, we have defined a callback on the appsink using the following two lines of code:

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_signal_connect(sink, "new-sample", G_CALLBACK(on_buffer), &call_back_data);

where in the function “on_buffer” we want to grab the actual buffer. The “on_buffer” function is shown below (with our custom transformations omitted, of course). In short, if the buffer is on the CPU, I can access it without a problem simply by reading “info.data”. But with the pipeline defined above, my understanding is that the buffer will be in NVMM memory, so I am a bit stuck on how to access it (we want to access it while it is still on the GPU).

Any help would be greatly appreciated!

    GstFlowReturn on_buffer(GstAppSink *sink, Callback_Data *callback_data) {
        GstSample *sample = NULL;

        // Pull the pending sample from the appsink
        g_signal_emit_by_name(sink, "pull-sample", &sample);
        if (!sample)
            return GST_FLOW_ERROR;

        // The sample owns the buffer; no extra ref is needed
        GstBuffer *buffer = gst_sample_get_buffer(sample);

        // Map the buffer for reading; for system-memory buffers,
        // info.data is the raw frame data
        GstMapInfo info;
        if (gst_buffer_map(buffer, &info, GST_MAP_READ)) {

            // ... our custom transformations (omitted) ...

            gst_buffer_unmap(buffer, &info);
        }

        // Release the sample (and the buffer with it) back to the pipeline
        gst_sample_unref(sample);
        return GST_FLOW_OK;
    }
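From reading DeepStream’s nvbufsurface.h, our understanding is that for memory:NVMM buffers, info.data points at an NvBufSurface descriptor rather than at raw pixels. Something like the sketch below is what we are after (this assumes the surface memory type is NVBUF_MEM_CUDA_DEVICE on a dGPU, and run_custom_cuda_kernel is a hypothetical wrapper of ours; we have not verified that this works outside of a full DeepStream application):

    #include <gst/gst.h>
    #include "nvbufsurface.h"  // DeepStream header declaring NvBufSurface

    // Sketch: read the CUDA device pointer out of an NVMM buffer so a
    // custom kernel can run on it without any copy to system memory
    static void process_nvmm_buffer(GstBuffer *buffer) {
        GstMapInfo info;
        if (!gst_buffer_map(buffer, &info, GST_MAP_READ))
            return;

        // For memory:NVMM caps, info.data is an NvBufSurface descriptor,
        // not raw pixel data
        NvBufSurface *surface = (NvBufSurface *) info.data;
        NvBufSurfaceParams *frame = &surface->surfaceList[0];

        void *gpu_ptr = frame->dataPtr;  // device pointer to the RGBA frame
        guint pitch   = frame->pitch;    // row stride in bytes
        guint width   = frame->width;
        guint height  = frame->height;

        // run_custom_cuda_kernel(gpu_ptr, pitch, width, height);  // hypothetical
        (void) gpu_ptr; (void) pitch; (void) width; (void) height;

        gst_buffer_unmap(buffer, &info);
    }

Is this the right approach when the pipeline is not running inside a full DeepStream application?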

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

What will you do once you can access it? Run some algorithm with CUDA?
Have you read the DeepStream documentation?
Why did you choose DeepStream plugins without any intention to do inference?

Hi @Fiona.Chen ,

  1. “What will you do once you can access it? Run some algorithm with CUDA?”

Yes, we will be applying our own custom algorithms with CUDA.

  2. “Have you read the DeepStream documentation?”

Yes, we have investigated and used DeepStream extensively. Unfortunately, it does not meet our requirements, as it is not flexible enough. Specifically, the issue is attaching the results of our algorithms to the raw frames inside DeepStream so that we can subsequently apply more algorithms. Unfortunately, NVIDIA has not released the source code for “NvBufSurfTransformAsync”, declared in the header file “deepstream/deepstream-6.1/sources/includes/nvbufsurftransform.h” in the DeepStream codebase. As a result, we cannot modify “NvBufSurfTransformAsync” to meet our needs and thus can’t use DeepStream.

  3. “Why did you choose DeepStream plugins without any intention to do inference?”

See number 2. We are not intentionally using DeepStream plugins; rather, we are using NVIDIA’s GStreamer plugins, which naturally happen to be tightly integrated with DeepStream, since DeepStream sits on top of GStreamer.

If you could provide guidance on how to access the frame buffer while it is still on the GPU, as discussed in my original question, that would be greatly appreciated!

Thanks so much!