How to crop the image and save

Thanks for your gift

@DaneLLL
Thank you for your response!
I am able to run the default deepstream-image-meta-test app, so it saves images using the default configuration:

model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
force-implicit-batch-dim=1
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin

However, I want to get the same app working with the models that I use when running the pipelines below.

deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_facedetectir.txt

Do I just update the engine paths in the meta-test config to load these models, or will it also require incorporating most of the settings from the sample configs above, i.e. a hybrid config file? Or will nvds_obj_encode.h also need to be customized?

So I will try to merge the meta-test config file, adding:

[primary-gie]
enable=1
gpu-id=0
# Modify as necessary
model-engine-file=../../models/tlt_pretrained_models/facedetectir/resnet18_facedetectir_pruned.etlt_b1_gpu0_int8.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
config-file=config_infer_primary_facedetectir.txt

Is that right? Or does it not work like this?
Thank you very much!

Hi,
The content of config_infer_primary_facedetectir.txt should be similar to ds_image_meta_pgie_config.txt. If the settings are correct, you can modify deepstream-image-meta-test to load the config file:

  /* Configure the nvinfer element using the nvinfer config file. */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "config_infer_primary_facedetectir.txt", NULL);

Hi,

I’m trying to get similar image-save behavior in the deepstream_test1 application, inside the osd_sink_pad_buffer_probe() method. I’m only getting black jpg files, and buf->surfaceList has a length of 0.

Can I do such a thing? I’m trying to capture the images into a cv::Mat object…

Hello.
Basically, you need some parts of deepstream_test3 or 4 to get an image frame from the stream. Then you put it into another image frame, unpause the main stream, and do anything you need with that copy.

static GstFlowReturn
Convert_Buf_Into_Image (NvBufSurface * input_buf, gint idx,
    NvOSD_RectParams * crop_rect_params, gdouble Make_Ratio, gint input_width,
    gint input_height, char *filename_Adder, int Cam_ID) //NvDsObjectMeta *object_meta)
{
  NvBufSurfTransform_Error err;
  NvBufSurfTransformConfigParams transform_config_params;
  NvBufSurfTransformParams transform_params;
  NvBufSurfTransformRect src_rect;
  NvBufSurfTransformRect dst_rect;
  NvBufSurface ip_surf;
  cv::Mat in_mat, buf_mat;
  int adder;
  ip_surf = *input_buf;

  /* Work on a single surface of the batch: the one at index idx. */
  ip_surf.numFilled = ip_surf.batchSize = 1;
  ip_surf.surfaceList = &(input_buf->surfaceList[idx]);

  /* Extend the crop up to 200 px to the left of the detected box. */
  gint src_left = (int) crop_rect_params->left;
  if (crop_rect_params->left > 200) {
    src_left = (int) crop_rect_params->left - 200;
    adder = 200;
  } else {
    src_left = 0;
    adder = (int) crop_rect_params->left;
  }
  gint src_top = (int) crop_rect_params->top;
  gint src_width = (int) crop_rect_params->width + adder;
  gint src_height = (int) crop_rect_params->height;
  //g_print ("ltwh = %d %d %d %d \n", src_left, src_top, src_width, src_height);

  guint dest_width, dest_height;
  gdouble Ratio = 1.0;
  dest_width = src_width;
  dest_height = src_height;

  NvBufSurface *nvbuf = NULL;
  NvBufSurfaceCreateParams create_params;

  char *p_str;

  create_params.gpuId = DEFAULT_GPU_ID;
  create_params.width = dest_width;
  create_params.height = dest_height;
  create_params.size = 0;
  create_params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
  create_params.layout = NVBUF_LAYOUT_PITCH;
#ifdef aarch64
  create_params.memType = NVBUF_MEM_DEFAULT;
#else
  create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
#endif
  NvBufSurfaceCreate (&nvbuf, 1, &create_params);

  // Configure transform session parameters for the transformation
  transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
  transform_config_params.gpu_id = DEFAULT_GPU_ID;

  // Set the transform session parameters for the conversions executed in this
  // thread.
  err = NvBufSurfTransformSetSessionParams (&transform_config_params);
  if (err != NvBufSurfTransformError_Success) {
    goto error;
  }

  // Calculate scaling ratio while maintaining aspect ratio
  Ratio = MIN (Make_Ratio * dest_width / src_width, Make_Ratio * dest_height / src_height);

  if ((crop_rect_params->width == 0) || (crop_rect_params->height == 0)) {
    goto error;
  }

  // Set the transform ROIs for source and destination (top, left, width, height)
  src_rect = { (guint) src_top, (guint) src_left, (guint) src_width, (guint) src_height };
  dst_rect = { 0, 0, (guint) dest_width, (guint) dest_height };

  // Set the transform parameters
  transform_params.src_rect = &src_rect;
  transform_params.dst_rect = &dst_rect;
  transform_params.transform_flag =
      NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC |
      NVBUFSURF_TRANSFORM_CROP_DST;
  transform_params.transform_filter = NvBufSurfTransformInter_Default;

  // Memset the memory
  NvBufSurfaceMemSet (nvbuf, 0, 0, 0);

  // Transformation: scaling + format conversion, if any.
  err = NvBufSurfTransform (&ip_surf, nvbuf, &transform_params);
  if (err != NvBufSurfTransformError_Success) {
    goto error;
  }
  // Map the buffer so that it can be accessed by CPU
  if (NvBufSurfaceMap (nvbuf, 0, 0, NVBUF_MAP_READ) != 0) {
    goto error;
  }

  // Cache the mapped data for CPU access
  NvBufSurfaceSyncForCpu (nvbuf, 0, 0);

  // Use OpenCV to remove padding and convert RGBA to BGR. Can be skipped if
  // the algorithm can handle padded RGBA data.
  in_mat = cv::Mat (dest_height, dest_width, CV_8UC4,
      nvbuf->surfaceList[0].mappedAddr.addr[0],
      nvbuf->surfaceList[0].pitch);
  buf_mat = cv::Mat (cv::Size (dest_width * Make_Ratio, dest_height * Make_Ratio), CV_8UC3);
  // Detected_Vehicle_Image is assumed to be a cv::Mat defined elsewhere (e.g. a global).
  Detected_Vehicle_Image = cv::Mat (cv::Size (dest_width * Make_Ratio, dest_height * Make_Ratio), CV_8UC3);
  cv::cvtColor (in_mat, buf_mat, cv::COLOR_RGBA2BGR);
  //cv::resize (in_mat, Detected_Vehicle_Image, cv::Size (), Make_Ratio, Make_Ratio, cv::INTER_LINEAR);
  cv::resize (buf_mat, Detected_Vehicle_Image, cv::Size (), Make_Ratio, Make_Ratio, cv::INTER_LINEAR);

  in_mat.release ();
  buf_mat.release ();

  if (NvBufSurfaceUnMap (nvbuf, 0, 0)) {
    goto error;
  }
  // Destroy the intermediate surface so it is not leaked on every call.
  NvBufSurfaceDestroy (nvbuf);

  return GST_FLOW_OK;

error:
  if (nvbuf)
    NvBufSurfaceDestroy (nvbuf);
  return GST_FLOW_ERROR;
}
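
A rough usage sketch for calling the function above per detected object. It follows the buffer-mapping and metadata-iteration pattern of deepstream-image-meta-test / gst-dsexample (gst_buffer_map to reach the NvBufSurface, then walking the batch meta); the probe placement, the Make_Ratio of 1.0 and the NULL/Cam_ID arguments are assumptions to adapt:

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo in_map_info;
  NvDsMetaList *l_frame, *l_obj;

  /* Map the GstBuffer to get at the underlying batched NvBufSurface. */
  if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ))
    return GST_PAD_PROBE_OK;
  NvBufSurface *surface = (NvBufSurface *) in_map_info.data;

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (batch_meta) {
    for (l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
      NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
      for (l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
        NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
        /* batch_id selects the right surface within the batched NvBufSurface. */
        Convert_Buf_Into_Image (surface, frame_meta->batch_id,
            &obj_meta->rect_params, 1.0,
            frame_meta->source_frame_width, frame_meta->source_frame_height,
            NULL, frame_meta->source_id);
      }
    }
  }
  gst_buffer_unmap (buf, &in_map_info);
  return GST_PAD_PROBE_OK;
}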


Can I add this function to deepstream-app? I can now convert to a cv::Mat, but I can’t distinguish between the input streams.
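
If you call it from a probe like the sketch above, each NvDsFrameMeta carries source_id (and pad_index), which identifies the nvstreammux sink pad the frame came from, so you can key the result on the stream. A minimal sketch (the file-name pattern is illustrative):

/* Inside the frame_meta loop of the probe sketched above: */
guint stream_id = frame_meta->source_id;   /* index of the source / mux sink pad */
gchar *out_name = g_strdup_printf ("stream_%u_frame_%d.jpg",
    stream_id, frame_meta->frame_num);
/* e.g. cv::imwrite (out_name, Detected_Vehicle_Image); */
g_free (out_name);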


Hello.
I want to set an ROI in a frame that was placed, for example, in surface->surfaceList[0], and then create an EGLImage from this ROI. Is it possible?
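
On Jetson this should be possible: crop the ROI into a separate NvBufSurface with NvBufSurfTransform, as in Convert_Buf_Into_Image above, but create that surface with memType NVBUF_MEM_SURFACE_ARRAY, and then map it as an EGLImage. A minimal sketch of that last step, assuming nvbuf already holds the cropped ROI and the EGL headers are included:

  /* Map the cropped surface as an EGLImage (Jetson only; the surface must
   * have been created with memType = NVBUF_MEM_SURFACE_ARRAY). */
  if (NvBufSurfaceMapEglImage (nvbuf, 0) != 0) {
    g_printerr ("Failed to map EGLImage\n");
    return GST_FLOW_ERROR;
  }
  EGLImageKHR egl_image = (EGLImageKHR) nvbuf->surfaceList[0].mappedAddr.eglImage;

  /* ... use egl_image with CUDA-EGL interop or GLES ... */

  NvBufSurfaceUnMapEglImage (nvbuf, 0);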