Can I create a depth image from NvBufSurface?

This is how I create the RGBA image:

GstBuffer *buf = (GstBuffer *) info->data;
NvDsInstanceBin *bin = (NvDsInstanceBin *) u_data;
guint index = bin->index;
AppCtx *appCtx = bin->appCtx;
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

// Map the GstBuffer to get the raw NvBufSurface
GstMapInfo in_map_info;
g_mutex_lock (&buffer_lock);
gst_buffer_map (buf, &in_map_info, GST_MAP_READ);
NvBufSurface *surface = (NvBufSurface *) in_map_info.data;

for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
    l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
  // TODO: for CUDA device memory we need to use cudaMemcpy
  NvBufSurfaceMap (surface, -1, -1, NVBUF_MAP_READ);
  /* Cache the mapped data for CPU access (does nothing for unified memory on dGPU) */
  NvBufSurfaceSyncForCpu (surface, 0, 0);
  guint height = surface->surfaceList[frame_meta->batch_id].height;
  guint width = surface->surfaceList[frame_meta->batch_id].width;
  // Wrap the mapped RGBA plane in a cv::Mat (no copy), honoring the row pitch
  Mat RGBA_mat = Mat (height, width, CV_8UC4,
      surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
      surface->surfaceList[frame_meta->batch_id].pitch);

about this part:

Mat RGBA_mat = Mat (height, width, CV_8UC4,
    surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
    surface->surfaceList[frame_meta->batch_id].pitch);

Can I also create a depth image this way if I am using a USB camera?

Are you asking about support for depth cameras?

Yes, I may need the depth image from the camera.
So can I create depth images in the same way I create the RGBA images?

So your camera itself can output a depth image even without DeepStream, right?

Yes, it has a depth camera integrated. I am wondering whether, without using the camera's own API, I can still get the depth images just by using DeepStream code.

No. The depth camera can't output standard video formats, so you must use the camera vendor's API.

We have a sample in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-depth-camera (DeepStream 3D Depth Camera App — DeepStream documentation) that demonstrates how to integrate a depth camera with DeepStream.

I checked the sources reported by my camera earlier, and for the depth camera:


So, if my input source is an RGBA camera, then depth images can't be created by DeepStream code, right?

You are right. DeepStream is based on GStreamer, and there is no depth color format in GStreamer. So DeepStream uses another way to introduce a depth format: DeepStream 3D Depth Camera App — DeepStream documentation (nvidia.com)

I have checked the link and found that the sample is based on the RealSense D435, right?

I am using exactly this camera, so should I refer to this sample to learn how to get depth images?

But it seems that the D435 camera SDK has been wrapped into a DeepStream style, I mean the ds3d library, and the code looks a little complicated.

So if I use the SDK directly, namely the rs2 library, would the process be easier?

Yes.

Yes.

The DeepStream SDK is an inferencing framework. If you don't need to do inferencing, you don't need to choose DeepStream.

I do need to rely on DeepStream for the inferencing tasks, but for the camera I just need the depth image and the intrinsic parameters such as fx, fy, cx, cy, so that I can compute the final 3D coordinate of the detected object.

Maybe a few lines of code based on the D435 SDK can satisfy this, roughly like the sketch below.
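
For example (a rough sketch assuming the librealsense2 C++ API; the pixel coordinates are placeholders, and I have not verified this against my pipeline):

#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>   // rs2_deproject_pixel_to_point

rs2::pipeline p;
rs2::pipeline_profile profile = p.start();

// Intrinsics of the depth stream: fx, fy, ppx (cx), ppy (cy)
rs2::video_stream_profile depth_profile =
    profile.get_stream(RS2_STREAM_DEPTH).as<rs2::video_stream_profile>();
rs2_intrinsics intrin = depth_profile.get_intrinsics();

rs2::frameset frames = p.wait_for_frames();
rs2::depth_frame depth = frames.get_depth_frame();

// Back-project an example pixel (u, v) to a 3D point in the camera frame
float pixel[2] = { 320.0f, 240.0f };        // placeholder pixel
float dist = depth.get_distance(320, 240);  // depth in meters at that pixel
float point[3];
rs2_deproject_pixel_to_point(point, &intrin, pixel, dist);
// point[0..2] = X, Y, Z in meters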

So should I use the SDK directly, or must I refer to the sample?

If you don't need to do inferencing, you don't need to choose DeepStream. It is up to you to decide whether to use the RealSense SDK.

So it is OK for me to use the RealSense SDK directly inside DeepStream, right?

But I would need to install some libraries and also make changes in the Makefile, right?

Please consult your camera vendor.

Assuming that I am using the SDK, about the code below:

#include <librealsense2/rs.hpp>

rs2::pipeline p;
p.start();

while (true) {
    // Block until a new set of frames arrives
    rs2::frameset frames = p.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();
    // Raw 16-bit depth values (Z16 format), one per pixel
    const uint16_t *depth_data = (const uint16_t *) depth.get_data();
    // ... use depth_data ...
}
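
If that works, I suppose I could wrap the raw pointer in an OpenCV Mat just like the RGBA plane above. A rough sketch, assuming the default Z16 16-bit depth format (untested):

// depth is the rs2::depth_frame from the loop above
int w = depth.get_width();
int h = depth.get_height();
// One 16-bit depth value per pixel; row stride is w * 2 bytes
cv::Mat depth_mat(h, w, CV_16UC1, (void *) depth.get_data(), w * sizeof(uint16_t));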

I wonder: if I add such a code snippet to the code I mentioned at first so that I can get the depth images, will there be a conflict over the camera itself, since it is also the input source of DeepStream?

The NvBufSurface struct does not support depth data. It will be of no use.

Why do you try to use DeepStream without inferencing?

The correct way to introduce depth camera data with DeepStream has been recommended to you.

No, I am doing inferencing indeed. The sample I am using is the deepstream-app sample:


And I added the code I mentioned at first to this function:

static GstPadProbeReturn
gie_processing_done_buf_prob (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)

I use the RGBA images created from the NvBufSurface to apply edge detection and get the angle of the detected object; a rough sketch of that step is below.
After that, I want to obtain the coordinate of the detected object, so, just like the RGBA images, I also need the depth images and the intrinsic parameters of the camera to complete the transformation.
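
The edge-detection step looks roughly like this with OpenCV (illustrative only; the thresholds and exact steps in my real code differ):

// RGBA_mat is the per-frame cv::Mat created from the NvBufSurface above
cv::Mat gray, edges;
cv::cvtColor(RGBA_mat, gray, cv::COLOR_RGBA2GRAY);
cv::Canny(gray, edges, 50, 150);

std::vector<std::vector<cv::Point>> contours;
cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto &c : contours) {
  // Angle of the minimum-area rectangle around each contour
  cv::RotatedRect rect = cv::minAreaRect(c);
  float angle = rect.angle;
  // ... keep the angle of the contour that belongs to the detected object ...
}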

Is the "edge detection to get the angle of the detected object" done with a neural network? If so, it is inferencing. If not, DeepStream is not suitable for your case.

The correct way to introduce depth camera data with DeepStream has been recommended to you.