Xavier Orin AGX: get images from RTSP stream

• Hardware Platform (GPU): JetPack 5.1.1
• DeepStream Version: 6.3
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): CUDA 11.4
• Issue Type: questions

The program captures streaming images with the following code.

int write_frame(GstBuffer *buf, NvDsFrameMeta *frame_meta, cv::String event_name)
{
  GstMapInfo in_map_info;
  NvBufSurface *surface = NULL;
  memset(&in_map_info, 0, sizeof(in_map_info));
  if (!gst_buffer_map(buf, &in_map_info, GST_MAP_READ))
  {
    g_print("Error: Failed to map gst buffer\n");
    return -1;
  }

  surface = (NvBufSurface *)in_map_info.data;
  char *src_data = (char *)malloc(surface->surfaceList[frame_meta->batch_id].dataSize);
  if (src_data == NULL)
  {
    g_print("Error: failed to malloc src_data\n");
    gst_buffer_unmap(buf, &in_map_info);
    return -1;
  }

  NvBufSurfaceMap(surface, -1, -1, NVBUF_MAP_READ);
  NvBufSurfacePlaneParams *pParams = &surface->surfaceList[frame_meta->batch_id].planeParams;
  // Copy each plane row by row, stripping the per-row pitch padding.
  unsigned int offset = 0;
  for (unsigned int plane = 0; plane < pParams->num_planes; plane++)
  {
    if (plane > 0)
      offset += pParams->height[plane - 1] * (pParams->bytesPerPix[plane - 1] * pParams->width[plane - 1]);
    for (unsigned int h = 0; h < pParams->height[plane]; h++)
    {
      memcpy((void *)(src_data + offset + h * pParams->bytesPerPix[plane] * pParams->width[plane]),
             (void *)((char *)surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[plane] + h * pParams->pitch[plane]),
             pParams->bytesPerPix[plane] * pParams->width[plane]);
    }
  }
  NvBufSurfaceSyncForDevice(surface, -1, -1);
  NvBufSurfaceUnMap(surface, -1, -1);

  gint frame_width = (gint)surface->surfaceList[frame_meta->batch_id].width;
  gint frame_height = (gint)surface->surfaceList[frame_meta->batch_id].height;
  gint frame_step = surface->surfaceList[frame_meta->batch_id].pitch;
  cv::Mat frame = cv::Mat(frame_height * 3 / 2, frame_width, CV_8UC1, src_data, frame_step);

  gst_buffer_unmap(buf, &in_map_info);
  free(src_data);
  return 0;
}

Why does setting

[streammux]
width=1280
height=720

allow for normal image capture, but when set to

[streammux]
width=1920
height=1080

the captured images are abnormal?

This program runs on Xavier Orin AGX.

Could you attach your whole pipeline and the location where you added the probe function? Did you use our demo or your own demo?

My program is as follows. I modified it based on the official example.

static void
all_bbox_generated(AppCtx *appCtx, GstBuffer *buf,
                   NvDsBatchMeta *batch_meta, guint index)
{
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
    write_frame(buf, frame_meta, event_name);
  }
}

I’ve encountered the same issue but haven’t found a solution.

I found that setting the following also works for capturing images normally.

[streammux]
width=1280
height=1080

It could be a problem with the image format CV_8UC1. Can you print your image's colorFormat and set the right format for OpenCV?

Also, we have a new way to save pictures: you can refer to nvds_obj_enc_process in the open-source deepstream-image-meta-test sample. It uses the hardware JPEG encoder.

surface->surfaceList[frame_meta->batch_id].colorFormat=6

But why is it able to capture images correctly in both 1280 * 720 and 1280 * 1080 situations?

Which of these two resolutions is problematic? Could you attach the abnormal image?

Sorry, I misspoke. It’s 1280 * 720 and 1280 * 1080 that can be successfully captured.

Successful as follows (1280 * 720 and 1280 * 1080):
(screenshot attached)

Failed as follows (1920 * 1080):
(screenshot attached)

It seems that there’s something wrong with the frame_step parameter. OpenCV may have different requirements for the step parameter than NvBufSurface. Could you print that value and try changing it to fit OpenCV?

I have noticed that in an x86 environment, for a 1920*1080 stream, the retrieved
surface->surfaceList[frame_meta->batch_id].pitch = 2048,
but on the Orin AGX, it is surface->surfaceList[frame_meta->batch_id].pitch=1920.

On the Orin AGX, when I set

[streammux]
width=1920
height=1280

then surface->surfaceList[frame_meta->batch_id].pitch = 2048 and I can retrieve the images normally.
Why is the pitch different for a 1920*1080 stream, and why does it cause issues with image retrieval?
Do you have any information related to this issue?

The pitch is automatically configured depending on the type of memory. The types of memory used on dgpu and jetson are different.

I found that on the Orin AGX, when the pitch is 1920 I cannot obtain a proper image. However, when I configure the settings as follows:

[streammux]
width=1925
height=1280

At this point, surface->surfaceList[frame_meta->batch_id].pitch = 2048 and I am able to capture images correctly. Regardless of any configuration I attempt, I cannot obtain a proper image at 1920x1280 resolution, even if I forcibly set the pitch to 2048.

OpenCV’s step and the NvBufSurface pitch may not be compatible. To save images more efficiently, you can use the hardware encoding approach I mentioned earlier: nvds_obj_enc_process in the deepstream-image-meta-test sample.