How to switch between different video sources and zoom in to full screen on Sink

Hardware Platform: Jetson AGX Orin (64 GB)
DeepStream 6.3

In the deepstream pipeline I created:
1. There are two video sources as inputs;
2. The configuration of tiler is as follows:
tiler:
rows: 2
columns: 2
width: 1920
height: 1080
Sink uses nvdrmvideosink

The function I want to achieve:
I want to press a number key corresponding to one of the four tiler regions to select that region and enlarge it to full screen, and then return to the previous tiled display with another key.

Encountered problems:
I analyzed the source code of deepstream-app; the core operation is setting the tiler's show-source property to select the source to enlarge:

g_object_set (G_OBJECT (tiler), "show-source", source_id, NULL);

However, I only have two video sources as inputs while the tiler is configured as 2 * 2, and the lower-left and lower-right tiles are used for some image-processing output. With this method, the enlarged view of a single video source therefore only fills the upper half of the screen.

Request for help:
Is there any underlying processing program for implementing video source switching and display in the deepstream app, or are there any other methods to achieve my functionality?

I am looking forward to and grateful for your reply!

1. Did you try the deepstream-test5 sample? Left-click on a source to switch it to full screen.

./deepstream-test5-app -c configs/test5_config_file_src_infer.txt -p 1

I’m not sure if your problem is related to nvdrmvideosink. What is the resolution of your monitor, and what is the resolution of the test stream?

Can you share sample code that reproduces the problem?

Thank you very much for your reply!
I have already used deepstream-app to zoom in on and display a specified video source, and I analyzed its source code. The zoom-in display of deepstream-test5 is programmed the same way as deepstream-app, but this method does not seem to suit my case.

Because I only have two video sources (640 * 512) while my tiler is 2 * 2 (tiler output: 1920 * 1080), the positions of the two video sources in the tiler display are (0, 0, 960, 540) and (960, 0, 960, 540) respectively, as shown in the following figure:

If I choose to zoom in on video source 1, it is enlarged to the (0, 0, 1920, 540) area instead of the full-screen area (0, 0, 1920, 1080).

The core code I am using is:

  g_object_set (G_OBJECT (tiler), "show-source", source_id, NULL);


I tried another method: adding a probe function on the src pad of the OSD in the DeepStream pipeline. In the probe I obtain the whole frame, crop the region of video source 1 and enlarge it, then write the enlarged image back into the buffer of the current frame.
The core code I am using is:

static GstPadProbeReturn
nvosd_queue_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  static gint dump_count = 0;
  static gint frame_number = 0;
  cv::Mat region_image;
  cv::Mat image_clone;
  GstBuffer *original_gstbuf  = (GstBuffer *) info->data;

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (original_gstbuf);

  // Get original raw data
  GstMapInfo in_map_info;
  if (!gst_buffer_map (original_gstbuf, &in_map_info, GST_MAP_READ)) {
      g_print ("Error: Failed to map gst buffer\n");
      // Nothing was mapped, so there is nothing to unmap here
      return GST_PAD_PROBE_OK;
  }

  NvBufSurface *surface = (NvBufSurface *) in_map_info.data;
  NvBufSurfaceMap(surface, -1, -1, NVBUF_MAP_READ_WRITE);
  NvBufSurfaceSyncForCpu(surface, 0, 0);
  guint surface_height = surface->surfaceList[0].height;
  guint surface_width = surface->surfaceList[0].width;

  cv::Mat original_image = cv::Mat(surface_height * 3 / 2, surface_width, CV_8UC1,
      surface->surfaceList[0].mappedAddr.addr[0], surface->surfaceList[0].pitch);
  if (original_image.empty()) {
    NvBufSurfaceUnMap(surface, -1, -1);
    gst_buffer_unmap(original_gstbuf, &in_map_info);
    return GST_PAD_PROBE_OK;
  }

  // Deep-copy the frame so it can be restored later; a plain assignment
  // would only copy the Mat header and share the same pixel data
  cv::Mat original_image_temp = original_image.clone();
  
  // Press the button for the first time to switch to interface 1 in the upper left corner
  if(show_source==1)
  {
    g_print("in show_source 1\n");
    
    // choose the region
    cv::Rect selectedRegion(0, 0, surface_width / 2, surface_height / 2);
    region_image = original_image(selectedRegion);
    // NV12 to BGR
    cv::Mat bgr_region_image;
    cv::cvtColor(region_image, bgr_region_image, cv::COLOR_YUV2BGR_NV12);
    // expand to 1920*1080
    cv::Mat resized_bgr_region_image;
    cv::resize(bgr_region_image, resized_bgr_region_image, cv::Size(surface_width, surface_height)); 
    //BGR TO I420
    cv::Mat I420_original_image = cv::Mat(resized_bgr_region_image.rows * 3 / 2, resized_bgr_region_image.cols, CV_8UC1);   
    cv::cvtColor(resized_bgr_region_image, I420_original_image, cv::COLOR_BGR2YUV_I420);
    
    //I420 TO NV12
    int I420_width= I420_original_image.cols;
    int I420_height= I420_original_image.rows *2/3;
    int yuvNV12_size = I420_height * I420_width * 3 / 2;
    unsigned char *nv12_buffer = (unsigned char *)malloc(yuvNV12_size*sizeof(uchar));
    ConvertYUV_I420_to_NV12(I420_original_image,nv12_buffer);
    cv::Mat NV12_original_image(cv::Size(I420_width, I420_height * 3 / 2), CV_8UC1, nv12_buffer);

    NV12_original_image.copyTo(original_image);
    free(nv12_buffer);
    // if((!NV12_original_image.empty()))
    //  memcpy(original_image.data, NV12_original_image.data, NV12_original_image.total());

  }
    // When the button is pressed for the second time, the original image is displayed
  if(show_source==-1)
  {
    // Restore the unmodified frame
    original_image_temp.copyTo(original_image);
  }
  frame_number++;
  NvBufSurfaceUnMap(surface, -1, -1);
  gst_buffer_unmap(original_gstbuf, &in_map_info);
  return GST_PAD_PROBE_OK;
}

Is there any problem with writing the program this way?

This pipeline works fine on AGX Orin (DS 7.0). I think there are some issues in your code.

ffmpeg -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 -s 640x512 -y 640x512.h264

# disable x windows
sudo systemctl stop gdm
sudo loginctl terminate-seat seat0
sudo modprobe nvidia-drm modeset=1

# set show-source property of nvmultistreamtiler to 1.
gst-launch-1.0 uridecodebin uri=file://xxxx/640x512.h264 ! m.sink_0 uridecodebin uri=file://xxxx/640x512.h264 ! m.sink_1 nvstreammux name=m batch-size=4 width=1920 height=1080 ! nvmultistreamtiler columns=2 rows=2  width=1920 height=1080 show-source=1 ! nvdrmvideosink

Since you use OpenCV for scaling, you need to copy data from the GPU to the CPU, so there may be some impact on performance.

Thank you very much for your reply!
Copying data from the GPU to the CPU does indeed have an impact on performance. But I encountered another problem: I cannot correctly convert the image data at the surface address.

So I checked colorFormat, which is NVBUF_COLOR_FORMAT_NV12_709 (BT.709):

colorFormat = surface->surfaceList[0].colorFormat;
// colorFormat == NVBUF_COLOR_FORMAT_NV12_709

What is the storage format of NVBUF_COLOR_FORMAT_NV12_709 image data in the surface? I am unable to retrieve the image data correctly by assuming the NV12 storage format.

How can I correctly read image data from the surface in NVBUF_COLOR_FORMAT_NV12_709 format and convert it to NVBUF_COLOR_FORMAT_RGB format? I have also seen similar issues in other posts:
https://forums.developer.nvidia.com/t/how-to-convert-nv12-709-nv12-709-er-frame-to-cv-mat-on-jetson-nx/187895/3?u=1478261730

You can refer to this code snippet. NvBufSurfTransform will do all of this work for you.

nvmultistreamtiler and nvdrmvideosink can meet your needs. If it is not necessary, avoid copying data, in order to improve performance.

Thank you for your answer!
My new program is as follows:

static GstPadProbeReturn
nvosd_src_resized_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsMetaList * l_frame = NULL;
  char file_name[128];
  static gint frame_number = 0;

  // Get original raw data
  GstMapInfo in_map_info;
  if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
      g_print ("Error: Failed to map gst buffer\n");
      gst_buffer_unmap (buf, &in_map_info);
      return GST_PAD_PROBE_OK;
  }
  NvBufSurface *surface = (NvBufSurface *)in_map_info.data;

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
    l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    //TODO for cuda device memory we need to use cudamemcpy
    NvBufSurfaceMap (surface, -1, -1, NVBUF_MAP_READ);
    /* Cache the mapped data for CPU access */
    NvBufSurfaceSyncForCpu (surface, 0, 0); //will do nothing for unified memory type on dGPU
    guint height = surface->surfaceList[frame_meta->batch_id].height;
    guint width = surface->surfaceList[frame_meta->batch_id].width;   

    //Create Mat from NvMM memory, refer opencv API for how to create a Mat
    cv::Mat nv12_mat = cv::Mat(height*3/2, width, CV_8UC1, surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
    surface->surfaceList[frame_meta->batch_id].pitch);

    //Convert nv12 to RGBA to apply algo based on RGBA
    cv::Mat rgba_mat;
    cv::cvtColor(nv12_mat, rgba_mat, cv::COLOR_YUV2BGRA_NV12);

    // change the frames
    NvBufSurface *inter_buf = nullptr;
    NvBufSurfaceCreateParams create_params;
    create_params.gpuId  = surface->gpuId;
    create_params.width  = width;
    create_params.height = height;
    create_params.size = 0;
    create_params.colorFormat = NVBUF_COLOR_FORMAT_RGBA; //Holds the color format of the buffer.
    create_params.layout = NVBUF_LAYOUT_PITCH;  

    #ifdef __aarch64__
      create_params.memType = NVBUF_MEM_DEFAULT;
    #else
      create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
    #endif

    //Create another scratch RGBA NvBufSurface
    if (NvBufSurfaceCreate (&inter_buf, 1,
      &create_params) != 0) {
      GST_ERROR ("Error: Could not allocate internal buffer ");
      return GST_PAD_PROBE_OK;
    }
    // inter_buf->numFilled = 1;
    if(NvBufSurfaceMap (inter_buf, 0, -1, NVBUF_MAP_READ_WRITE) != 0)      
      g_print("map error");
    NvBufSurfaceSyncForCpu (inter_buf, -1, -1);
    cv::Mat choosed_resized_mat = cv::Mat(height, width, CV_8UC4, inter_buf->surfaceList[0].mappedAddr.addr[0],inter_buf->surfaceList[0].pitch);
    
    // Apply your algo which works with an OpenCV Mat; here we crop a region and resize it
    // rotate(rgba_mat, choosed_resized_mat, ROTATE_180);
    int x = 0;
    int y = 0;
    int width_selected = width / 2;
    int height_selected = height / 2;
    // selected_region
    cv::Rect selected_region(x, y, width_selected, height_selected);
    cv::Mat selected_rgba = rgba_mat(selected_region);
    // region resized (width, height)
    cv::resize(selected_rgba, choosed_resized_mat, cv::Size(width, height));

    NvBufSurfaceSyncForDevice(inter_buf, -1, -1);//Syncs the hardware memory cache for the device.
    inter_buf->numFilled = 1;

    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransformParams transform_params;
    NvBufSurfTransformRect src_rect;
    NvBufSurfTransformRect dst_rect;
    
    cudaStream_t cuda_stream;
    CHECK_CUDA_STATUS (cudaStreamCreate (&cuda_stream),
          "Could not create cuda stream");

    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id = surface->gpuId;
    transform_config_params.cuda_stream = cuda_stream;

    /* Set the transform session parameters for the conversions executed in this thread. */
    NvBufSurfTransform_Error err = NvBufSurfTransformSetSessionParams (&transform_config_params);
    if (err != NvBufSurfTransformError_Success) {
      g_print("NvBufSurfTransformSetSessionParams failed with error\n");
      return GST_PAD_PROBE_OK;
    }
    /* Set the transform ROIs for source and destination, only do the color format conversion*/
    src_rect = {0, 0, width, height};
    dst_rect = {0, 0, width, height};
    
    /* Set the transform parameters */
    transform_params.src_rect = &src_rect;
    transform_params.dst_rect = &dst_rect;
    transform_params.transform_flag =
      NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC |
        NVBUFSURF_TRANSFORM_CROP_DST;
    transform_params.transform_filter = NvBufSurfTransformInter_Default;

    /* Transformation format conversion, Transform rotated RGBA mat to NV12 memory in original input surface*/
    // src:A pointer to input batched buffers to be transformed.
    // dst:A pointer to a caller-allocated location where transformed output is to be stored.
    err = NvBufSurfTransform (inter_buf, surface, &transform_params);
    
    if (err != NvBufSurfTransformError_Success) {
      std::cout << "NvBufSurfTransform failed with error " << err
                << " while converting buffer" << std::endl;
      return GST_PAD_PROBE_OK;
    }
    
    // // access the surface modified by opencv
    cv::cvtColor(nv12_mat, rgba_mat, cv::COLOR_YUV2BGRA_NV12);
    // //dump the original NvbufSurface
     sprintf(file_name, "nvosd_probe_choose_resized_stream%2d_%03d.jpg", frame_meta->source_id, frame_number);
     cv::imwrite(file_name, rgba_mat);
     
    // cudaStreamDestroy(cuda_stream);
    NvBufSurfaceUnMap(inter_buf, -1, -1);
    // NvBufSurfaceDestroy(inter_buf);
    NvBufSurfaceUnMap(surface, -1, -1);
     frame_number++;
  }
  // gst_buffer_unmap(buf, &in_map_info);
  return GST_PAD_PROBE_OK;
}

But when I dump the original NvBufSurface, rgba_mat does not display the image correctly. rgba_mat is shown as:

[image: the dumped frame renders as a green screen]

Why does this happen?
I am very much looking forward to receiving your reply!

Hi, I have another idea: is it possible to insert an image stream as a third video source before the tiler plugin?
That way I could use g_object_set (G_OBJECT (tiler), "show-source", source_id, NULL);

It is OK to do this.

But I don’t think this is necessary. The command line I provided above shows that, after setting show-source, the tiler works properly.

Can you provide a simple sample code to reproduce your problem?
I will try to reproduce it. It is not recommended to use OpenCV to scale an NvBufSurface; I am not sure whether there will be unknown problems.

Thank you for your reply. I have resolved the issue of switching and zooming in after selecting the video source.
But the color-space issue I mentioned earlier still exists. I don’t have a good solution or idea right now. Looking forward to your reply!

https://forums.developer.nvidia.com/t/how-to-switch-between-different-video-sources-and-zoom-in-to-full-screen-on-sink/309618/8?u=1478261730

Can you share a runnable sample, or reproduce the problem on test1?
I can’t figure out the issue from just this code snippet.

I’m sorry for taking so long to reply to you!
This is an example program based on deepstream-test3, which you can run to reproduce the green-screen issue.
You can place it in the
/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps directory.
First execute make debug
Then execute bash run.sh
deepstream-test3.zip (614.6 KB)

// err = NvBufSurfTransform (inter_buf, surface, &transform_params);
err = NvBufSurfTransform (surface, inter_buf, &transform_params);

The surface variable represents the GstBuffer in the GStreamer pipeline. You cannot modify its color format.

