Hi,
I am developing a DeepStream application based on the DS 4.0 SDK on Xavier.
I am currently referring to the dsexample plugin source and the test2 program.
How do I crop the image from a bbox, and then how do I save this image?
Hi,
Please refer to the following sample:
static GstFlowReturn
get_converted_mat (GstDsExample * dsexample, NvBufSurface *input_buf, gint idx,
    NvOSD_RectParams * crop_rect_params, gdouble & ratio, gint input_width,
    gint input_height)
{
  NvBufSurfTransform_Error err;
  NvBufSurfTransformConfigParams transform_config_params;
  NvBufSurfTransformParams transform_params;
  NvBufSurfTransformRect src_rect;
  NvBufSurfTransformRect dst_rect;
  NvBufSurface ip_surf;
  cv::Mat in_mat, out_mat;

  ip_surf = *input_buf;
  ip_surf.numFilled = ip_surf.batchSize = 1;
  ip_surf.surfaceList = &(input_buf->surfaceList[idx]);

  gint src_left = GST_ROUND_UP_2(crop_rect_params->left);
  gint src_top = GST_ROUND_UP_2(crop_rect_params->top);
  gint src_width = GST_ROUND_DOWN_2(crop_rect_params->width);
  gint src_height = GST_ROUND_DOWN_2(crop_rect_params->height);
  //g_print("ltwh = %d %d %d %d \n", src_left, src_top, src_width, src_height);

  guint dest_width, dest_height;
  dest_width = src_width;
  dest_height = src_height;

  NvBufSurface *nvbuf;
  NvBufSurfaceCreateParams create_params;
  create_params.gpuId = dsexample->gpu_id;
  create_params.width = dest_width;
  create_params.height = dest_height;
  create_params.size = 0;
  create_params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
  create_params.layout = NVBUF_LAYOUT_PITCH;
#ifdef __aarch64__
  create_params.memType = NVBUF_MEM_DEFAULT;
#else
  create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
#endif
  NvBufSurfaceCreate (&nvbuf, 1, &create_params);

  // Configure transform session parameters for the transformation
  transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
  transform_config_params.gpu_id = dsexample->gpu_id;
  transform_config_params.cuda_stream = dsexample->cuda_stream;

  // Set the transform session parameters for the conversions executed in this
  // thread.
  err = NvBufSurfTransformSetSessionParams (&transform_config_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("NvBufSurfTransformSetSessionParams failed with error %d", err), (NULL));
    goto error;
  }

  // Calculate scaling ratio while maintaining aspect ratio
  ratio = MIN (1.0 * dest_width / src_width, 1.0 * dest_height / src_height);
  if ((crop_rect_params->width == 0) || (crop_rect_params->height == 0)) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("%s: crop_rect_params dimensions are zero", __func__), (NULL));
    goto error;
  }

#ifdef __aarch64__
  if (ratio <= 1.0 / 16 || ratio >= 16.0) {
    // Currently cannot scale by ratio > 16 or < 1/16 for Jetson
    goto error;
  }
#endif

  // Set the transform ROIs for source and destination
  src_rect = {(guint)src_top, (guint)src_left, (guint)src_width, (guint)src_height};
  dst_rect = {0, 0, (guint)dest_width, (guint)dest_height};

  // Set the transform parameters
  transform_params.src_rect = &src_rect;
  transform_params.dst_rect = &dst_rect;
  transform_params.transform_flag =
      NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC |
      NVBUFSURF_TRANSFORM_CROP_DST;
  transform_params.transform_filter = NvBufSurfTransformInter_Default;

  // Memset the memory
  NvBufSurfaceMemSet (nvbuf, 0, 0, 0);

  GST_DEBUG_OBJECT (dsexample, "Scaling and converting input buffer\n");

  // Transformation: scaling + format conversion, if any.
  err = NvBufSurfTransform (&ip_surf, nvbuf, &transform_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("NvBufSurfTransform failed with error %d while converting buffer", err),
        (NULL));
    goto error;
  }

  // Map the buffer so that it can be accessed by the CPU
  if (NvBufSurfaceMap (nvbuf, 0, 0, NVBUF_MAP_READ) != 0) {
    goto error;
  }

  // Cache the mapped data for CPU access
  NvBufSurfaceSyncForCpu (nvbuf, 0, 0);

  // Use OpenCV to remove padding and convert RGBA to BGR. Can be skipped if
  // the algorithm can handle padded RGBA data.
  in_mat =
      cv::Mat (dest_height, dest_width,
      CV_8UC4, nvbuf->surfaceList[0].mappedAddr.addr[0],
      nvbuf->surfaceList[0].pitch);
  out_mat =
      cv::Mat (cv::Size(dest_width, dest_height), CV_8UC3);
  cv::cvtColor (in_mat, out_mat, CV_RGBA2BGR);

  // Save the first 150 cropped objects as JPEG files
  static gint dump = 0;
  if (dump < 150) {
    char filename[64];
    snprintf(filename, 64, "/home/nvidia/image%03d.jpg", dump);
    cv::imwrite(filename, out_mat);
    dump++;
  }

  if (NvBufSurfaceUnMap (nvbuf, 0, 0)) {
    goto error;
  }
  NvBufSurfaceDestroy(nvbuf);

#ifdef __aarch64__
  // To use the converted buffer in CUDA, create an EGLImage and then use
  // CUDA-EGL interop APIs
  if (USE_EGLIMAGE) {
    if (NvBufSurfaceMapEglImage (dsexample->inter_buf, 0) != 0) {
      goto error;
    }
    // dsexample->inter_buf->surfaceList[0].mappedAddr.eglImage
    // Use interop APIs cuGraphicsEGLRegisterImage and
    // cuGraphicsResourceGetMappedEglFrame to access the buffer in CUDA.
    // Destroy the EGLImage
    NvBufSurfaceUnMapEglImage (dsexample->inter_buf, 0);
  }
#endif

  /* We first convert only the Region of Interest (the entire frame or the
   * object bounding box) to RGB and then scale the converted RGB frame to
   * the processing resolution. */
  return GST_FLOW_OK;

error:
  return GST_FLOW_ERROR;
}
It creates an NvBufSurface for each object and saves it to a JPEG file.
Hi,
Thank you for your reply.
I have already implemented cropping the image from the surface using NvBufSurface and NvBufSurfTransform, called from the osd_pad callback.
To be exact, my source works: it crops and pastes images within the surface. But I want to save this surface to JPEG or PNG.
Your source (the dsexample source) uses two surfaces, so how do you save the destination surface?
static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  .....
  // Get NvBufSurface from GstBuffer
  memset (&in_map_info, 0, sizeof (in_map_info));
  if (!gst_buffer_map (buf, &in_map_info, (GST_MAP_READ | GST_MAP_WRITE))) {
    g_print ("Error: Failed to map gst buffer\n");
  }
  surface = (NvBufSurface *) in_map_info.data;

  if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE && vehicle_count <= 3) {
    crop_rect_params = &obj_meta->rect_params;
    gint src_left = GST_ROUND_UP_2(crop_rect_params->left);
    gint src_top = GST_ROUND_UP_2(crop_rect_params->top);
    gint src_width = GST_ROUND_DOWN_2(crop_rect_params->width);
    gint src_height = GST_ROUND_DOWN_2(crop_rect_params->height);

    // Set the transform ROIs for source and destination
    src_rect.top = src_top;
    src_rect.left = src_left;
    src_rect.width = src_width;
    src_rect.height = src_height;
    dst_rect.top = 0 + (vehicle_count - 1) * 240;
    dst_rect.left = MUXER_OUTPUT_WIDTH - 320;
    dst_rect.width = 320;
    dst_rect.height = 240;

    // Set the transform parameters
    transform_params.src_rect = &src_rect;
    transform_params.dst_rect = &dst_rect;
    transform_params.transform_flag = NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
    transform_params.transform_filter = NvBufSurfTransformInter_Default;

    // Crop and paste
    err = NvBufSurfTransform (surface, surface, &transform_params);
    if (err != NvBufSurfTransformError_Success) {
      g_print ("NvBufSurfTransform failed with error %d while converting buffer\n", err);
    }
Hi,
It is not correct to use the same surface as both source and destination. You should create a 320x240 RGBA surface as the destination.
Hi,
Thank you for your fast reply.
I used this source in the osd sink pad callback, and it works: the bbox-cropped images are moved to the right position, up to 3 images.
The purpose of the current source is to show these images (fixed scale, 320x240) in the right position whenever objects are found. (my test source)
But now I want to crop the images and then save them as JPEG (I don't care if many surfaces are used),
so I am curious how to save images using the surface directly.
Do you have any idea? Is the only way to make my own plugin?
Hi,
One more question about dsexample:
currently I see the obj0 and obj1 rectangles when using dsexample (full-frame=1).
How do I view the cropped images using dsexample?
Hi,
In the sample we use the JPEG encoder in OpenCV. If you would like to save the surface directly, it will be raw RGBA data.
For watching the cropped images, that cannot be done in dsexample but in the sink (nveglglessink or nvoverlaysink). If your object is a fixed 320x240, it should be possible to initialize the sink at 320x240 and show only the object.
I tried to customize dsexample and I'm having an issue with it, as mentioned in the following post. Please reply: How to add new parameters to deepstream_app_config_yoloV2_tiny.txt file? - DeepStream SDK - NVIDIA Developer Forums
Hi DaneLLL,
I implemented your example but I cannot see any saved images.
Can you please provide more details on the implementation?
Hi,
It's very simple!
It is because you don't have write permission in the "/home/nvidia/" directory!
Run your command in sudo mode, or, in the code, change "/home/nvidia/" to a directory where you have write access. Also try using the .bmp format instead of .jpg.
Thanks barzanhayati. Why the .bmp format?
Hi,
Maybe you can save to “/tmp/image%03d.jpg”
I solved this issue. Now I want to get the whole frame when an object is detected, instead of cropping it.
Do I have to make changes in gstdsexample or …?
Hi,
You can set ‘full-frame=1’ to get whole frames in dsexample:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide%2Fdeepstream_custom_plugin.html%23wwpID0E4HA
I did so; however, it saves frames that do not contain any moving object!!
Image: Screenshot-from-2019-11-05-12-48-52 — ImgBB
Maybe it is because of these boxes that I have on my screen, which I don't know the reason for!!!
Hi,
Please study gstdsexample.cpp to gain more understanding and customize it for your use case.
Hi, I use this sample, but I have some confusion about the aspect ratio.
The example seems to scale every image to the same processing resolution.
For example, my ROI is (top, left, width, height) = (20, 20, 64, 64),
so the ROI region is square.
But if my processing width and height are set to (1920, 1080),
the cropped image will be resized to 1920x1080 as a (1080, 1080) crop with zero padding (seen as a black block).
I just want to save the cropped image at (64, 64); I don't want any resize or padding. How can I do this?
Hi,
In the sample, it sets
dest_width = src_width;
dest_height = src_height;
In your use case, you should set it to
dest_width = 64;
dest_height = 64;
That’s very helpful.