NvBufSurfaceSyncForDevice returns -1, and how to replace a frame in the pipeline?

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version (valid for GPU only): 455.45.01
• Issue Type (questions, new requirements, bugs): questions, bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

Hi, I'm using my custom plugin based on dsexample and I want to replace the frame image with my own picture. Here are my steps:

  1. gst_buffer_map (inbuf, &in_map_info, GST_MAP_READWRITE)

  2. in_surf = (NvBufSurface*) in_map_info.data;

  3. NvBufSurfaceMap (in_surf, -1, -1, NVBUF_MAP_READ_WRITE)

  4. NvBufSurfaceSyncForCpu (in_surf, 0, 0);

  5. NvBufSurfaceParams frameHandle = in_surf->surfaceList[frame_meta->batch_id];

  6. case NvBufSurfaceColorFormat::NVBUF_COLOR_FORMAT_NV12:
       tmp = cv::Mat(height * 3 / 2, width, CV_8UC1, frameHandle.mappedAddr.addr[0], frameHandle.pitch);

     Mat test = imread("timg1280_720.jpg");
     cv::cvtColor(test, tmp, CV_BGR2YUV);
     int ret = NvBufSurfaceSyncForDevice(in_surf, -1, -1);
     g_print("NvBufSurfaceSyncForDevice returns: %d", ret);

  7. NvBufSurfaceUnMap(in_surf, -1, -1);

  8. gst_buffer_unmap (inbuf, &in_map_info)

I find that the pictures saved in the osd sink pad probe are still the original video frames, and ret in step 6 is -1. I don't know why.
How can I replace the video frame? Any help would be appreciated.

The dsexample plugin works in 'in-place' mode (GstBaseTransform). The input buffer caps must be exactly the same as the output buffer caps, so you cannot use a buffer (data) of a different format to replace the original buffer (data).
We have a sample of replacing the buffer with one of the same format.
Deepstream sample code snippet - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Thank you.
I read the code you mentioned on the page "Deepstream sample code snippet - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums" and changed my code. It doesn't work out; the function NvBufSurfaceSyncForDevice still returns -1.
I noticed that the surface in your demo code was mapped with NVBUF_MAP_READ. Does this need to be writable?

  •  NvBufSurfaceMap (surface, -1, -1, NVBUF_MAP_READ);

Is there anything else that needs to be changed?

Any update?
How would a plugin work in 'normal' mode, i.e. so that I can change the memory of the output buffer?

Can you upload your source code?

Hi, sorry for the late reply.
I'm able to change the frame by applying the demo you offered earlier, but the change is only visible inside the custom plugin. The next element downstream still gets the original frame, not the changed one.

The function below does the job of changing the frame; it is called in gst_capture_submit_input_buffer(), which is similar to gst_dsexample_submit_input_buffer().

static GstFlowReturn
form_batch_and_push (NvDsBatchMeta* batch_meta, NvBufSurface* in_surf,
                     GstBuffer* inbuf, GstCapture* capture)
{
  guint num_filled = batch_meta->num_frames_in_batch;
  auto it = capture->capture_nw->begin ();
  gdouble scale_ratio = 1.0;
  GstFlowReturn ret = GST_FLOW_OK;
  std::unique_ptr<GstCaptureBatch> batch = nullptr;
  int sync_device_result = 0;
  cv::Mat tmp, nv12_after;

  for (guint i = 0; i < num_filled && it != capture->capture_nw->end (); i++, it++)
  {
    g_print ("    batch push: %u\n", i);
    if (batch == nullptr)
    {
      batch.reset (new GstCaptureBatch);
      batch->push_buffer = FALSE;
      batch->inbuf = inbuf;
      batch->inbuf_batch_num = CAPTURE_BATCH_NUM;
    }

    guint height = in_surf->surfaceList[i].height;
    guint width = in_surf->surfaceList[i].width;

    /* Adding a frame to the current batch. Set the frame's members. */
    GstCaptureFrame frame;
    frame.scale_ratio_x = scale_ratio;
    frame.scale_ratio_y = scale_ratio;
    frame.obj_meta = nullptr;
    frame.frame_meta = nvds_get_nth_frame_meta (batch_meta->frame_meta_list, i);
    g_print ("    form frameNum: %d\n", it->first);

    auto saved_mat = capture->track_frame_meta.get (it->first);
    if (saved_mat == nullptr)
    {
      g_print ("data is null\n");
      ret = GST_FLOW_ERROR;
      return ret;
    }

    // todo: set saved_mat to input_surf_params, do this in modify_output_...
    cv::Mat frame_image (height, width, CV_8UC3, saved_mat->data);
    std::string save_name = "output/save_image_" + std::to_string (it->first) + ".jpg";
    imwrite (save_name, frame_image);

    // do transform: bgr -> nv12, and cpu_data -> device_data
    NvBufSurface* inter_buf = nullptr;
    NvBufSurfaceCreateParams create_params;
    create_params.gpuId  = in_surf->gpuId;
    create_params.width  = width;
    create_params.height = height;
    create_params.size = 0;
    create_params.colorFormat = NVBUF_COLOR_FORMAT_BGRA;
    create_params.layout = NVBUF_LAYOUT_PITCH;
#ifdef __aarch64__
    create_params.memType = NVBUF_MEM_DEFAULT;
#else
    create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
#endif

    /* Create another scratch BGRA NvBufSurface */
    if (NvBufSurfaceCreate (&inter_buf, 1, &create_params) != 0)
    {
      GST_ERROR ("Error: Could not allocate internal buffer");
      return GST_FLOW_OK;
    }
    if (NvBufSurfaceMap (inter_buf, 0, -1, NVBUF_MAP_READ_WRITE) != 0)
      g_print ("map error\n");
    NvBufSurfaceSyncForCpu (inter_buf, 0, 0);

    Mat inter_buf_mat = Mat (height, width, CV_8UC4,
        inter_buf->surfaceList[0].mappedAddr.addr[0],
        inter_buf->surfaceList[0].pitch);

    // Apply your algo, which works on an OpenCV Mat; here we only draw a rectangle for the demo
    // rotate (rgba_mat, rotate_mat, ROTATE_180);
    rectangle (frame_image, Point (100, 300), Point (700, 500), Scalar (0, 255, 255), 2, 8, 0);
    cvtColor (frame_image, inter_buf_mat, COLOR_BGR2BGRA);
    sync_device_result = NvBufSurfaceSyncForDevice (inter_buf, 0, 0);
    g_print ("sync_device_result inter_buf: %d\n", sync_device_result);
    inter_buf->numFilled = 1;

    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransformParams transform_params;
    NvBufSurfTransformRect src_rect;
    NvBufSurfTransformRect dst_rect;
    cudaStream_t cuda_stream;
    CHECK_CUDA_STATUS (cudaStreamCreate (&cuda_stream),
        "Could not create cuda stream");
    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id = in_surf->gpuId;
    transform_config_params.cuda_stream = cuda_stream;

    /* Set the transform session parameters for the conversions executed in this
     * thread. */
    NvBufSurfTransform_Error err = NvBufSurfTransformSetSessionParams (&transform_config_params);
    if (err != NvBufSurfTransformError_Success)
    {
      g_print ("NvBufSurfTransformSetSessionParams failed with error %d\n", err);
      return GST_FLOW_OK;
    }

    /* Set the transform ROIs for source and destination; only do the color format conversion */
    src_rect = {0, 0, width, height};
    dst_rect = {0, 0, width, height};
    /* Set the transform parameters */
    transform_params.src_rect = &src_rect;
    transform_params.dst_rect = &dst_rect;
    transform_params.transform_flag = NVBUFSURF_TRANSFORM_FILTER |
        NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
    transform_params.transform_filter = NvBufSurfTransformInter_Default;

    /* Format conversion: transform the BGRA mat to NV12 memory in the original input surface */
    err = NvBufSurfTransform (inter_buf, in_surf, &transform_params);
    if (err != NvBufSurfTransformError_Success)
    {
      g_print ("NvBufSurfTransform failed with error %d while converting buffer %u\n", err, i);
      return GST_FLOW_OK;
    }
    sync_device_result = NvBufSurfaceSyncForDevice (in_surf, 0, 0);
    g_print ("sync_device_result in_buf: %d\n", sync_device_result);
    NvBufSurfaceUnMap (inter_buf, 0, 0);
    NvBufSurfaceDestroy (inter_buf);
    cudaStreamDestroy (cuda_stream);

    tmp = cv::Mat (height * 3 / 2, width, CV_8UC1,
        in_surf->surfaceList[i].mappedAddr.addr[0],
        in_surf->surfaceList[i].pitch);
    cv::cvtColor (tmp, nv12_after, CV_YUV2BGRA_NV12);
    std::string file_name = "output/nv12_after_" + std::to_string (it->first) + ".jpg";
    imwrite (file_name, nv12_after);
    // endtodo: set saved_mat to input_surf_params

    frame.frame_num = it->first;
    frame.batch_index = i;
    frame.input_surf_params = in_surf->surfaceList + i;
    batch->frames.push_back (frame);

    if (batch->frames.size () == capture->max_batch_size
        || i == batch_meta->num_frames_in_batch - 1)
    {
      g_mutex_lock (&capture->process_lock);
      g_queue_push_tail (capture->process_queue, batch.get ());
      g_cond_broadcast (&capture->process_cond);
      g_mutex_unlock (&capture->process_lock);
      /* Batch submitted. Release batch so that a new GstCaptureBatch
       * structure can be allocated if required. */
      batch.release ();
    }
    if (ret == GST_FLOW_ERROR)
      g_print ("error occur 2\n");
  }
  return ret;
}


Do you know how to change the frame in all elements, permanently?
Also, the function NvBufSurfaceSyncForDevice() still returns -1 in both calls in form_batch_and_push().

Will you do all transformations with OpenCV?

Can OpenCV change the format from BGRA to NV12? How?

I think the problem is that the frame data in the GstBuffer cannot be changed between the custom element and the other elements.

OpenCV can convert BGRA to NV12, but it is not the NV12 that DeepStream needs. All DeepStream elements work on GPU memory, and the video format is different from a normal video format. The NV12 format DeepStream needs can only be generated by NVIDIA hardware (GPU, VIC, …); you cannot use OpenCV to do the work. If you want to do DeepStream video format transformation, please use the nvvideoconvert plugin.
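For illustration of what a "normal" CPU-side conversion produces (system-memory NV12, not the NVMM NV12 DeepStream consumes), here is a minimal BT.601 BGRA-to-NV12 sketch in plain C++, with no NVIDIA types; the exact coefficients the NVIDIA hardware path uses may differ:

```cpp
#include <cstdint>
#include <vector>

// Round and clamp a double to the 0..255 byte range.
static uint8_t clamp_u8 (double v)
{
  v += 0.5;                      // round to nearest
  if (v < 0.0) v = 0.0;
  if (v > 255.0) v = 255.0;
  return (uint8_t) v;
}

// Convert a BGRA image (4 bytes/pixel) to tightly-packed NV12:
// a full-resolution Y plane followed by an interleaved UV plane
// subsampled 2x2 (BT.601 full-range). Width and height assumed even.
std::vector<uint8_t> bgra_to_nv12 (const uint8_t* bgra, int width, int height)
{
  std::vector<uint8_t> nv12 (width * height * 3 / 2);
  uint8_t* y_plane  = nv12.data ();
  uint8_t* uv_plane = nv12.data () + width * height;
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      const uint8_t* p = bgra + (y * width + x) * 4;   // B, G, R, A
      double b = p[0], g = p[1], r = p[2];
      y_plane[y * width + x] = clamp_u8 (0.299 * r + 0.587 * g + 0.114 * b);
      if (y % 2 == 0 && x % 2 == 0) {                  // one UV pair per 2x2 block
        uint8_t* uv = uv_plane + (y / 2) * width + x;
        uv[0] = clamp_u8 (-0.169 * r - 0.331 * g + 0.5   * b + 128.0); // U
        uv[1] = clamp_u8 ( 0.5   * r - 0.419 * g - 0.081 * b + 128.0); // V
      }
    }
  }
  return nv12;
}
```

As the reply above explains, the result lives in CPU memory; getting it into an NVMM surface still requires the NvBufSurface/NvBufSurfTransform path or nvvideoconvert.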

I don't want to change the video format. My custom plugin has NV12 input and NV12 output. What I want to do is change the frame image to some other image.

Do you have another demo about changing the frame image between two elements?

Which NV12 format? You can check the caps: if it is 'video/x-raw,format=NV12', it is the normal NV12 format and your plugin cannot be used with DeepStream. If it is 'video/x-raw(memory:NVMM),format=NV12', it is the NVIDIA NV12 format. I don't think there is a demo or sample code for handling such a proprietary format. Please use nvvideoconvert for transformations (scaling, cropping, …).
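A quick way to tell the two caps apart from code is to look for the memory feature in the caps string. A minimal sketch using plain C++ string matching, standing in for the real GstCaps feature API:

```cpp
#include <string>

// Returns true if a caps string advertises NVIDIA device memory
// (the NVMM feature), i.e. buffers carry NvBufSurface, not raw video.
bool caps_is_nvmm (const std::string& caps)
{
  return caps.find ("memory:NVMM") != std::string::npos;
}
```

In a real plugin one would query the negotiated GstCaps features instead of matching substrings, but the distinction being checked is the same.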

Deepstream sample code snippet - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums is the only sample available.

Hi, I just checked gstdsexample_optimized.cpp. The pad definitions appear to use the NVIDIA NV12 format, and my custom plugin's are the same.


static GstStaticPadTemplate gst_dsexample_sink_template =
    GST_STATIC_PAD_TEMPLATE ("sink",
        GST_PAD_SINK,
        GST_PAD_ALWAYS,
        GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES
            (GST_CAPS_FEATURE_MEMORY_NVMM,
                "{ NV12, RGBA, I420 }")));

static GstStaticPadTemplate gst_dsexample_src_template =
    GST_STATIC_PAD_TEMPLATE ("src",
        GST_PAD_SRC,
        GST_PAD_ALWAYS,
        GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE_WITH_FEATURES
            (GST_CAPS_FEATURE_MEMORY_NVMM,
                "{ NV12, RGBA, I420 }")));

GST_CAPS_FEATURE_MEMORY_NVMM means it is the NVIDIA format.

Do you have any idea how to solve this problem? Or, why is the change made by NvBufSurfTransform() not effective outside this custom plugin?

Please provide your application code (not the plugin code) so that we can see where your plugin sits in the pipeline.

I've sent you a message with my code that adds the element.