Full frame captured to RAM in dsexample is too small

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) 1080 Ti
• DeepStream Version 6.0.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.2.0.6
• NVIDIA GPU Driver Version (valid for GPU only) 470.82.01
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello everyone!

I implemented full-frame capture in DeepStream 4.1 inside the get_converted_mat function. Below is the relevant part of that function:

static GstFlowReturn get_converted_mat (GstDsExample * dsexample, NvBufSurface *input_buf, gint idx, NvOSD_RectParams * total_rect_params,
                                        NvOSD_RectParams * window_rect_params,gchar obj_label[],
                                        gdouble & ratio, gint input_width,gint input_height, int ImageType, guint64 track_id)
{
    NvBufSurfTransform_Error err;
    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransformParams transform_params;
    NvBufSurfTransformRect src_rect;
    NvBufSurfTransformRect dst_rect;
    NvBufSurface ip_surf;
    cv::Mat in_mat, out_mat;
    ip_surf = *input_buf;
    gint src_left,src_top,src_width,src_height;

    ip_surf.numFilled = ip_surf.batchSize = 1;
//    g_print(" ******************************** Left , Top , Width , Height = %d , %d , %d , %d\n",
//            crop_rect_params->left,crop_rect_params->top,crop_rect_params->width , crop_rect_params->height);
    ip_surf.surfaceList = &(input_buf->surfaceList[idx]);
    if(ImageType!=0)
    {
        src_left = GST_ROUND_UP_2(total_rect_params->left-15);
        src_left = src_left < 0 ? 0 : src_left;
        src_top = GST_ROUND_UP_2(total_rect_params->top-5);
        src_top = src_top < 0 ? 0 : src_top;
        src_width = GST_ROUND_DOWN_2(total_rect_params->width + 30);
        src_width = src_width + src_left  > input_width ? input_width - src_left : src_width;
        src_height = GST_ROUND_DOWN_2(total_rect_params->height+10);
        src_height = src_height + src_top  > input_height ? input_height - src_top : src_height;
//        g_print("total_rect_params: ltwh = %d %d %d %d \n", src_left, src_top, src_width, src_height);
    }
    else
    {
        src_left = GST_ROUND_UP_2(window_rect_params->left-15);//15
        src_left = src_left < 0 ? 0 : src_left;

        src_top = GST_ROUND_UP_2(window_rect_params->top-5);
        src_top = src_top < 0 ? 0 : src_top;

        src_width = GST_ROUND_DOWN_2(window_rect_params->width + 30);//30
        src_width = src_width + src_left  > input_width ? input_width - src_left : src_width;

        src_height = GST_ROUND_DOWN_2(window_rect_params->height+10);
        src_height = src_height + src_top  > input_height ? input_height - src_top : src_height;
//        g_print("window_rect_params: ltwh = %d %d %d %d \n", src_left, src_top, src_width, src_height);
    }

    guint dest_width , dest_height;
    dest_width = src_width;
    dest_height = src_height;

    NvBufSurface *nvbuf;
    NvBufSurfaceCreateParams create_params;
    create_params.gpuId  = dsexample->gpu_id;
    create_params.width  = dest_width;
    create_params.height = dest_height;
    create_params.size = 0;
    create_params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
    create_params.layout = NVBUF_LAYOUT_PITCH;
#ifdef __aarch64__
    create_params.memType = NVBUF_MEM_DEFAULT;
#else
    create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
#endif
    NvBufSurfaceCreate (&nvbuf, 1, &create_params);

    // Configure transform session parameters for the transformation
    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id = dsexample->gpu_id;
    transform_config_params.cuda_stream = dsexample->cuda_stream;

    // Set the transform session parameters for the conversions executed in this
    // thread.
    err = NvBufSurfTransformSetSessionParams (&transform_config_params);
    if (err != NvBufSurfTransformError_Success)
    {
        GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,("NvBufSurfTransformSetSessionParams failed with error %d", err), (NULL));
        goto error;
    }

    // Calculate scaling ratio while maintaining aspect ratio
    ratio = MIN (1.0 * dest_width/ src_width, 1.0 * dest_height / src_height);

    if ((total_rect_params->width == 0) || (total_rect_params->height == 0))
    {
        GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,("%s:total_rect_params dimensions are zero",__func__), (NULL));
        goto error;
    }

#ifdef __aarch64__
    if (ratio <= 1.0 / 16 || ratio >= 16.0)
    {
        // Currently cannot scale by ratio > 16 or < 1/16 for Jetson
        goto error;
    }
#endif
    // Set the transform ROIs for source and destination
    src_rect = {(guint)src_top, (guint)src_left, (guint)src_width, (guint)src_height};
    dst_rect = {0, 0, (guint)dest_width, (guint)dest_height};

    // Set the transform parameters
    transform_params.src_rect = &src_rect;
    transform_params.dst_rect = &dst_rect;
    transform_params.transform_flag = NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
    transform_params.transform_filter = NvBufSurfTransformInter_Default;

    //Memset the memory
    NvBufSurfaceMemSet (nvbuf, 0, 0, 0);

    GST_DEBUG_OBJECT (dsexample, "Scaling and converting input buffer\n");

    // Transformation scaling+format conversion if any.
    err = NvBufSurfTransform (&ip_surf, nvbuf, &transform_params);
    if (err != NvBufSurfTransformError_Success)
    {
        GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,("NvBufSurfTransform failed with error %d while converting buffer", err),(NULL));
        goto error;
    }
    // Map the buffer so that it can be accessed by CPU
    if (NvBufSurfaceMap (nvbuf, 0, 0, NVBUF_MAP_READ) != 0)
    {
        goto error;
    }

    // Cache the mapped data for CPU access
    NvBufSurfaceSyncForCpu (nvbuf, 0, 0);

    // Use openCV to remove padding and convert RGBA to BGR. Can be skipped if
    // algorithm can handle padded RGBA data.
    try
    {
        in_mat  = cv::Mat (dest_height, dest_width,CV_8UC4, nvbuf->surfaceList[0].mappedAddr.addr[0],nvbuf->surfaceList[0].pitch);
        out_mat = cv::Mat (cv::Size(dest_width, dest_height), CV_8UC3);
        cv::cvtColor (in_mat, out_mat, cv::COLOR_RGBA2BGR);
    }
    catch (...)
    {
        std::cout << "error in reading Mat in get_converted_mat" << std::endl;
    }

    if (NvBufSurfaceUnMap (nvbuf, 0, 0))
    {
        goto error;
    }
    NvBufSurfaceDestroy(nvbuf);

#ifdef __aarch64__
    // To use the converted buffer in CUDA, create an EGLImage and then use
    // CUDA-EGL interop APIs
    if (USE_EGLIMAGE)
    {
        if (NvBufSurfaceMapEglImage (dsexample->inter_buf, 0) !=0 )
        {
            goto error;
        }

        // dsexample->inter_buf->surfaceList[0].mappedAddr.eglImage
        // Use interop APIs cuGraphicsEGLRegisterImage and
        // cuGraphicsResourceGetMappedEglFrame to access the buffer in CUDA

        // Destroy the EGLImage
        NvBufSurfaceUnMapEglImage (dsexample->inter_buf, 0);
    }
#endif

    /* We will first convert only the Region of Interest (the entire frame or the
   * object bounding box) to RGB and then scale the converted RGB frame to
   * processing resolution. */
    return GST_FLOW_OK;

error:
    return GST_FLOW_ERROR;
}
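The bounding-box handling above grows the detected box by a fixed margin, rounds to even coordinates (the transform expects even offsets/sizes), and clamps against the frame. That arithmetic can be exercised on its own; here is a minimal standalone sketch using the same margins as the snippet (the `Rect` type and function name are illustrative, not DeepStream API):

```cpp
#include <cassert>

// Same arithmetic as the GST_ROUND_UP_2 / GST_ROUND_DOWN_2 clamping above:
// grow the detected box (15 px left, 5 px top, +30 width, +10 height),
// round to even coordinates, and clamp against the frame bounds.
#define ROUND_UP_2(n)   (((n) + 1) & ~1)
#define ROUND_DOWN_2(n) ((n) & ~1)

struct Rect { int left, top, width, height; };

Rect expand_and_clamp (Rect r, int frame_w, int frame_h)
{
    Rect out;
    out.left = ROUND_UP_2 (r.left - 15);
    if (out.left < 0) out.left = 0;
    out.top = ROUND_UP_2 (r.top - 5);
    if (out.top < 0) out.top = 0;
    out.width = ROUND_DOWN_2 (r.width + 30);
    if (out.left + out.width > frame_w) out.width = frame_w - out.left;
    out.height = ROUND_DOWN_2 (r.height + 10);
    if (out.top + out.height > frame_h) out.height = frame_h - out.top;
    return out;
}
```

A box near the frame border gets clipped rather than producing an out-of-range ROI, which is exactly why the clamps matter before handing the rectangle to NvBufSurfTransform.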

Currently, I’m working with DeepStream 6 and I want to copy the full frame into memory as a cv::Mat object.

At first I enabled full-frame mode for the gstdsexample plugin when building the pipeline:


    custom_plugin = gst_element_factory_make ("dsexample", "nvdsgst_dsexample");
    g_object_set (G_OBJECT (custom_plugin), "full-frame", 1, NULL);

Then I tested multiple solutions:

  1. Using the nvds_obj_enc_process function from
    /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-image-meta-test. But it saves frames directly to disk, so I could not access them afterwards. In addition, it is called in pgie_src_pad_buffer_probe, whereas I want the full frames inside get_converted_mat or gst_dsexample_transform_ip.

  2. Creating an extra surface in get_converted_mat. But this roughly halves the pipeline’s frame rate (and doubles CPU usage), so it is not a viable solution:

static GstFlowReturn get_converted_mat (GstDsExample *dsexample, NvBufSurface *input_buf,
                                        NvOSD_RectParams *Total_Rect, gint idx,
                                        guint source_id, gint frame_num, NvDsObjectMeta *obj_meta,
                                        guint64 obj_id,
                                        gdouble & ratio, gint input_width, gint input_height)
{
    NvBufSurfTransform_Error err;
    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransformParams transform_params;
    NvBufSurfTransformParams transform_params_frame;

    NvBufSurfTransformRect src_rect;
    NvBufSurfTransformRect dst_rect;

    NvBufSurfTransformRect src_rect_frame;
    NvBufSurfTransformRect dst_rect_frame;

    NvBufSurface ip_surf;

    NvOSD_RectParams* crop_rect_params = &obj_meta->rect_params;

    cv::Mat in_mat, in_mat1, out_mat, in_mat_frame, out_mat_frame, croppedFace;
    cv::Rect faceRegion;
    ip_surf = *input_buf;
    ip_surf.numFilled = ip_surf.batchSize = 1;
    ip_surf.surfaceList = &(input_buf->surfaceList[idx]);
//    ip_surf.surfaceList->pitch = input_buf->surfaceList[idx].pitch;

    NvDsMetaList *l_user_meta = NULL;
    NvDsUserMeta *user_meta = NULL;
    float *user_meta_data = NULL;

    cv::Point2f pt;
    /*static*/ cv::Point2f pt_gt;
    /*static*/ Person person;
    /*static*/ bool is_in_list;
    /*static*/ bool has_user_metadata;
    /*static*/ bool new_person;
    cv::Mat warp_mat;
    std::vector<cv::Point2f> trans_pt_vec;
    std::vector<cv::Point2f> pt_vec;
    std::vector<cv::Point2f> pt_gt_vec;
    //float trans_error;
    float area_conf;


    // these lines could be put outside this function for faster speed (but it might not get considerably faster)
    pt_gt_vec.push_back(cv::Point2f(38.2946, 51.6963)); // left eye ground truth keypoint location
    pt_gt_vec.push_back(cv::Point2f(73.5318, 51.5014)); // right eye ground truth keypoint location
    pt_gt_vec.push_back(cv::Point2f(56.0252, 71.7366)); // nose ground truth keypoint location
    pt_gt_vec.push_back(cv::Point2f(41.5493, 92.3655)); // left of lip ground truth keypoint location
    pt_gt_vec.push_back(cv::Point2f(70.7299, 92.2041)); // right of lip ground truth keypoint location


    gint src_left = GST_ROUND_UP_2((unsigned int)crop_rect_params->left);
    gint src_top = GST_ROUND_UP_2((unsigned int)crop_rect_params->top);
    gint src_width = GST_ROUND_DOWN_2((unsigned int)crop_rect_params->width);
    gint src_height = GST_ROUND_DOWN_2((unsigned int)crop_rect_params->height);


    gint src_left_frame = (unsigned int)Total_Rect->left;
    gint src_top_frame = (unsigned int)Total_Rect->top;
    gint src_width_frame = (unsigned int)Total_Rect->width;
    gint src_height_frame = (unsigned int)Total_Rect->height;

    /* Maintain aspect ratio */
    double hdest = dsexample->processing_width * src_height / (double) src_width;
    double wdest = dsexample->processing_height * src_width / (double) src_height;
    guint dest_width, dest_height;

    guint dest_width_frame, dest_height_frame;
    dest_width_frame = src_width_frame;
    dest_height_frame = src_height_frame;

    NvBufSurface *nvbuf;
    NvBufSurfaceCreateParams create_params;
    create_params.gpuId  = dsexample->gpu_id;
    create_params.width  = dest_width_frame;
    create_params.height = dest_height_frame;
    create_params.size = 0;
    create_params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
    create_params.layout = NVBUF_LAYOUT_PITCH;
#ifdef __aarch64__
    create_params.memType = NVBUF_MEM_DEFAULT;
#else
    create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
#endif
    NvBufSurfaceCreate (&nvbuf, 1, &create_params);

    // convert the maximum of (h,w) to 112 and the other (min(h,w)) such that it preserves aspect ratio
    if (hdest <= dsexample->processing_height)
    {
        dest_width = dsexample->processing_width;
        dest_height = hdest;
    }
    else
    {
        dest_width = wdest;
        dest_height = dsexample->processing_height;
    }

    /* Configure transform session parameters for the transformation */
    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id = dsexample->gpu_id;
    transform_config_params.cuda_stream = dsexample->cuda_stream;

    /* Set the transform session parameters for the conversions executed in this thread. */
    err = NvBufSurfTransformSetSessionParams (&transform_config_params);
    if (err != NvBufSurfTransformError_Success)
    {
        GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
                           ("\033[1;31m NvBufSurfTransformSetSessionParams failed with error %d \033[0m", err), (NULL));
        goto error;
    }

    /* Calculate scaling ratio while maintaining aspect ratio */
    ratio = MIN (1.0 * dest_width/ src_width, 1.0 * dest_height / src_height);

    if ((crop_rect_params->width == 0) || (crop_rect_params->height == 0))
    {
        GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
                           ("\033[1;31m %s:crop_rect_params dimensions are zero \033[0m",__func__), (NULL));
        goto error;
    }

    if ((Total_Rect->width == 0) || (Total_Rect->height == 0))
    {
        GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
                           ("\033[1;31m %s:Total_Rect dimensions are zero \033[0m",__func__), (NULL));
        goto error;
    }

#ifdef __aarch64__
    if (ratio <= 1.0 / 16 || ratio >= 16.0) {
        /* Currently cannot scale by ratio > 16 or < 1/16 for Jetson */
        goto error;
    }
#endif
    /* Set the transform ROIs for source and destination */
    src_rect = {(guint)src_top, (guint)src_left, (guint)src_width, (guint)src_height};
    dst_rect = {0, 0, (guint)dest_width, (guint)dest_height};

    src_rect_frame = {(guint)src_top_frame, (guint)src_left_frame,
                      (guint)src_width_frame, (guint)src_height_frame};
    dst_rect_frame = {0, 0, (guint)dest_width_frame, (guint)dest_height_frame};

    /* Set the transform parameters */
    transform_params.src_rect = &src_rect;
    transform_params.dst_rect = &dst_rect;
    // resize image by cuda
    transform_params.transform_flag =
            NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
    transform_params.transform_filter = NvBufSurfTransformInter_Default;


    transform_params_frame.src_rect = &src_rect_frame;
    transform_params_frame.dst_rect = &dst_rect_frame;
    transform_params_frame.transform_flag =
            NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
    transform_params_frame.transform_filter = NvBufSurfTransformInter_Default;



    /* Memset the memory */
//    NvBufSurfaceMemSet (dsexample->inter_buf, 0, 0, 0);
    NvBufSurfaceMemSet (nvbuf, 0, 0, 0);

    GST_DEBUG_OBJECT (dsexample, "\033[1;36m Scaling and converting input buffer \033[0m\n");

    /* Transformation scaling+format conversion if any. */
//    err = NvBufSurfTransform (&ip_surf, dsexample->inter_buf, &transform_params);
//    if (err != NvBufSurfTransformError_Success)
//    {
//        GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
//                           ("\033[1;31m NvBufSurfTransform failed with error %d while converting buffer \033[0m",
//                            err), (NULL));
//        goto error;
//    }

    err = NvBufSurfTransform (&ip_surf, nvbuf, &transform_params_frame);
    if (err != NvBufSurfTransformError_Success)
    {
        GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
                           ("\033[1;31m NvBufSurfTransform failed with error %d while converting buffer \033[0m", err), (NULL));
        goto error;
    }


    /* Map the buffer so that it can be accessed by CPU */
//    if (NvBufSurfaceMap (dsexample->inter_buf, 0, 0, NVBUF_MAP_READ) != 0)
//    {
//        goto error;
//    }

    if (NvBufSurfaceMap (nvbuf, 0, 0, NVBUF_MAP_READ) != 0)
    {
        goto error;
    }

//    if(dsexample->inter_buf->memType == NVBUF_MEM_SURFACE_ARRAY)
//    {
//        /* Cache the mapped data for CPU access */
//        NvBufSurfaceSyncForCpu (dsexample->inter_buf, 0, 0);
//    }
    if(nvbuf->memType == NVBUF_MEM_SURFACE_ARRAY)
    {
        /* Cache the mapped data for CPU access */
        NvBufSurfaceSyncForCpu (nvbuf, 0, 0);
    }

    // Use openCV to remove padding and convert RGBA to BGR. Can be skipped if
    // algorithm can handle padded RGBA data.
    in_mat = cv::Mat (src_height_frame, src_width_frame,
                      CV_8UC4, nvbuf->surfaceList[0].mappedAddr.addr[0],
            nvbuf->surfaceList[0].pitch);
//    cv::imwrite("in_mat.jpg", in_mat);
    out_mat_frame = cv::Mat (cv::Size(src_width_frame, src_height_frame), CV_8UC3);

#if (CV_MAJOR_VERSION >= 4)
    cv::cvtColor (in_mat, out_mat_frame, cv::COLOR_RGBA2BGR);
#else
    cv::cvtColor (in_mat, out_mat_frame, CV_RGBA2BGR);
#endif
//    cv::imwrite("filename.jpg", out_mat_frame);

    faceRegion.x = obj_meta->rect_params.left;
    faceRegion.y = obj_meta->rect_params.top;
    faceRegion.width = obj_meta->rect_params.width;
    faceRegion.height = obj_meta->rect_params.height;

//    cv::Rect myROI( obj_meta->rect_params.left,  obj_meta->rect_params.top
//                    ,  obj_meta->rect_params.width,  obj_meta->rect_params.height);
    croppedFace = in_mat(faceRegion);
//    cv::imwrite("croppedFace.jpg", croppedFace);


#if (CV_MAJOR_VERSION >= 4)
    cv::cvtColor (croppedFace, *dsexample->cvmat, cv::COLOR_RGBA2BGR);
#else
    cv::cvtColor (croppedFace, *dsexample->cvmat, CV_RGBA2BGR);
#endif
//    cv::imwrite("dsexample.cvmat.jpg", dsexample->cvmat->clone());
//    static gint dump = 0;
//    if (dump < 10)
//    {
//        char filename[64];
//        snprintf(filename, 64, "image%03d.jpg", dump);
//        cv::imwrite(filename, out_mat);
//        dump++;
//    }


    /* Use openCV to remove padding and convert RGBA to BGR. Can be skipped if
   * algorithm can handle padded RGBA data. */

//    in_mat_frame  = cv::Mat (/*dest_height*/1080, /*dest_width*/1920,
//                                 CV_8UC4,
//                                 //                                 ip_surf.surfaceList[0].mappedAddr.addr[0],
//                                 nvbuf->surfaceList[0].mappedAddr.addr[0],
//            //            ip_surf.surfaceList[0].pitch
//            nvbuf->surfaceList[0].pitch
//            );
//    out_mat_frame = cv::Mat (cv::Size(/*dest_width*/in_mat_frame.cols,
//                                          /*dest_height*/in_mat_frame.rows),
//                                 CV_8UC3);
//    cv::cvtColor (in_mat_frame, out_mat_frame, cv::COLOR_RGBA2BGR);
////        cv::imwrite("out_mat_X.jpg", out_mat_X);

//    if (NvBufSurfaceUnMap (nvbuf, 0, 0))
//    {
//        goto error;
//    }
//    std::cout<<"ip_surf.surfaceList[0].planeParams.pitch = "<<
//               ip_surf.surfaceList[0].planeParams.pitch<<
//               " ip_surf.surfaceList[0].pitch = "<<
//               ip_surf.surfaceList[0].pitch<<
//             " dsexample->inter_buf->surfaceList[0].planeParams.pitch[0] = "<<
//               dsexample->inter_buf->surfaceList[0].planeParams.pitch[0]<<
//               " dsexample->inter_buf->surfaceList[0].pitch = "<<
//               dsexample->inter_buf->surfaceList[0].pitch<<std::endl;


//    in_mat1 = cv::Mat (dsexample->processing_height, dsexample->processing_width,
//                      CV_8UC4, dsexample->inter_buf->surfaceList[0].mappedAddr.addr[0],
//            dsexample->inter_buf->surfaceList[0].pitch);

//    cv::imwrite("in_mat1.jpg", in_mat1);
//    faceRegion.x = obj_meta->rect_params.left;
//    faceRegion.y = obj_meta->rect_params.top;
//    faceRegion.width = obj_meta->rect_params.width;
//    faceRegion.height = obj_meta->rect_params.height;

//    croppedFace = out_mat_frame(faceRegion);

//#if (CV_MAJOR_VERSION >= 4)
//    cv::cvtColor (in_mat, *dsexample->cvmat, cv::COLOR_RGBA2BGR);
//#else
//    cv::cvtColor (croppedFace, *dsexample->cvmat, CV_RGBA2BGR);
//#endif

    if (has_user_metadata == false)
    {
        /* Unmap and free the scratch surface before the early return,
         * otherwise it leaks on every frame. */
        if (NvBufSurfaceUnMap (nvbuf, 0, 0))
            goto error;
        NvBufSurfaceDestroy (nvbuf);
        return GST_FLOW_OK;
    }



//    if (NvBufSurfaceUnMap (dsexample->inter_buf, 0, 0))
//    {
//        goto error;
//    }

    if (NvBufSurfaceUnMap (nvbuf, 0, 0))
    {
        goto error;
    }

//    NvBufSurfaceDestroy(dsexample->inter_buf);
    NvBufSurfaceDestroy(nvbuf);

    if(dsexample->is_integrated)
    {
#ifdef __aarch64__
        /* To use the converted buffer in CUDA, create an EGLImage and then use
    * CUDA-EGL interop APIs */
        if (USE_EGLIMAGE) {
            if (NvBufSurfaceMapEglImage (dsexample->inter_buf, 0) !=0 ) {
                goto error;
            }

            /* dsexample->inter_buf->surfaceList[0].mappedAddr.eglImage
      * Use interop APIs cuGraphicsEGLRegisterImage and
      * cuGraphicsResourceGetMappedEglFrame to access the buffer in CUDA */

            /* Destroy the EGLImage */
            NvBufSurfaceUnMapEglImage (dsexample->inter_buf, 0);
        }
#endif
    }

    /* We will first convert only the Region of Interest (the entire frame or the
   * object bounding box) to RGB and then scale the converted RGB frame to
   * processing resolution. */
    return GST_FLOW_OK;

error:
    return GST_FLOW_ERROR;
}
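As an aside, the hdest/wdest computation in the function above fits the crop into the 112x112 processing window while preserving aspect ratio. The same arithmetic in isolation (the function name is illustrative):

```cpp
#include <cassert>
#include <utility>

// Fit a src_w x src_h box into processing_w x processing_h while keeping
// the source aspect ratio -- the same hdest/wdest branch as above.
std::pair<int, int> fit_preserving_aspect (int src_w, int src_h,
                                           int processing_w, int processing_h)
{
    double hdest = processing_w * src_h / static_cast<double> (src_w);
    double wdest = processing_h * src_w / static_cast<double> (src_h);
    if (hdest <= processing_h)
        return { processing_w, static_cast<int> (hdest) };  // width-limited
    return { static_cast<int> (wdest), processing_h };      // height-limited
}
```

For a 1920x1080 frame with a 112x112 processing size this yields 112x63, which is also why a full frame pushed through the 112x112 inter_buf path cannot come back at 1920x1080.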

So I abandoned that solution and tried another one:

First:

g_object_set (G_OBJECT (custom_plugin), "full-frame", 0, NULL);

Now if I set


    /* Transformation scaling+format conversion if any. */
    err = NvBufSurfTransform (&ip_surf, dsexample->inter_buf, &transform_params);

I captured these images; only the cropped face is saved, while I’m looking for the full frame:

dsexample_cvmat

Then I changed the buffer surface transformation:

    err = NvBufSurfTransform (&ip_surf, dsexample->inter_buf, &transform_params_frame);

And I captured these images:

dsexample_cvmat

Then I set full-frame to 1:

g_object_set (G_OBJECT (custom_plugin), "full-frame", 1, NULL);

and


    err = NvBufSurfTransform (&ip_surf, dsexample->inter_buf, &transform_params);

results:

dsexample_cvmat

By changing

err = NvBufSurfTransform (&ip_surf, dsexample->inter_buf, &transform_params_frame);

I still have the same problem:

dsexample_cvmat

Also, pgie_src_pad_buffer_probe in /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-image-meta-test does not fit my use case.

What’s your pipeline like? What are the width and height values in the following API?

get_converted_mat (dsexample,
              surface, frame_meta->batch_id, &obj_meta->rect_params,
              scale_ratio, dsexample->video_info.width,
              dsexample->video_info.height)

The pipeline structure:

custom_plugin = gst_element_factory_make ("dsexample", "nvdsgst_dsexample");
g_object_set (G_OBJECT (custom_plugin), "full-frame", 1, NULL);

gst_bin_add_many (GST_BIN (pipeline), pgie, tracker, custom_plugin, tiler, queue3,
                              nvvidconv, nvosd, sink, NULL);
gst_element_link_many (streammux, nvvidconv, pgie, tracker, custom_plugin,
                                        tiler, nvosd, sink, NULL);

And for the second part of the question:

The height and width in get_converted_mat or gst_dsexample_transform_ip:

dsexample->video_info.width = 1920
dsexample->video_info.height = 1080

Note:
The face size is 112*112, so when I save the full-frame image, it comes out as 112*112 rather than 1920*1080.

OK. Could you show us your code, or run our demo to duplicate this issue? We can analyze it more conveniently if it runs in our environment. Thanks.

Your code doesn’t work in my environment because it contains your own customizations. So you can try the following approaches:
1. Refer to the FAQ about dsexample:
Sample of customizing gst-dsexample
2. Add a probe function to read the image data from the NvBufSurface before your custom_plugin, to verify whether the problem is introduced by this plugin.
3. Run dsexample with one of our demo apps. If the issue reproduces in our demo code, we can debug it more conveniently. Thanks.

Did you mean the pgie_pad_buffer_probe function? And if I create a NvBufSurface *nvbufsurface there, will it reduce speed or not? Because I would have to create another buffer surface as follows:

NvBufSurface *nvbuf;
NvBufSurfaceCreate (&nvbuf, 1, &create_params);
NvBufSurfTransformSetSessionParams (&transform_config);
NvBufSurfaceMemSet (nvbuf, 0, 0, 0);
NvBufSurfTransform (&ip_surf, nvbuf, &transform_params);
NvBufSurfaceMap (nvbuf, 0, 0, NVBUF_MAP_READ);
if(nvbuf->memType == NVBUF_MEM_SURFACE_ARRAY)
{
       /* Cache the mapped data for CPU access */
       NvBufSurfaceSyncForCpu (nvbuf, 0, 0);
}
if (NvBufSurfaceUnMap (nvbuf, 0, 0))
{
      goto error;
}
NvBufSurfaceDestroy(nvbuf);

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

No, you don’t need to create a new buffer; you can refer to the FAQ below to get the NV12 data from the NvBufSurface:
How to get original NV12 frame buffer
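Following that FAQ, you read the mapped NV12 planes directly instead of transforming into a new RGBA surface: plane 0 is the Y plane (height rows), plane 1 is the interleaved UV plane (height/2 rows), and each row is `pitch` bytes wide with only `width` bytes of payload. A small sketch of the row-unpadding step involved (the helper name is illustrative, not a DeepStream API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a pitch-padded plane (as exposed after NvBufSurfaceMap via
// mappedAddr.addr[plane]) into a tightly packed buffer, row by row.
std::vector<uint8_t> unpad_plane (const uint8_t *src, size_t pitch,
                                  size_t width, size_t rows)
{
    std::vector<uint8_t> dst (width * rows);
    for (size_t r = 0; r < rows; ++r)
        std::memcpy (dst.data () + r * width, src + r * pitch, width);
    return dst;
}
```

Applied to both planes this gives you a contiguous NV12 frame in RAM with no extra NvBufSurface allocation, which you can then hand to OpenCV (e.g. cv::COLOR_YUV2BGR_NV12) if a BGR Mat is needed.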

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.