Extract raw images from the pipeline

• Hardware Platform (Jetson / GPU) GeForce RTX 4080
• DeepStream Version 6.3

My application is as follows.

I need to get RGB images to call a face recognition server before image preprocessing. Face recognition needs higher resolution, so I have to grab the high-resolution images before rescaling. How can I extract them? May I have a sample or any hints?

Hi @edit_or

The more elegant solution (which requires more code) is to implement a GStreamer element based on GstVideoFilter, where you implement the virtual method transform_frame_ip. In that method, you receive a GstVideoFrame with the raw data.
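
For reference, a minimal sketch of that approach (class boilerplate omitted; the my_filter names are placeholders, not part of DeepStream):

static GstFlowReturn
my_filter_transform_frame_ip (GstVideoFilter * filter, GstVideoFrame * frame)
{
  /* Pointer to the first plane of the raw video data, plus its layout */
  guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (frame, 0);
  gint stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0);
  gint width = GST_VIDEO_FRAME_WIDTH (frame);
  gint height = GST_VIDEO_FRAME_HEIGHT (frame);

  /* read or modify the pixels here */

  return GST_FLOW_OK;
}

/* In class_init:
 * GST_VIDEO_FILTER_CLASS (klass)->transform_frame_ip = my_filter_transform_frame_ip; */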

The easy solution is to add a probe with gst_pad_add_probe. Inside the probe callback, you can obtain the buffer with gst_pad_probe_info_get_buffer, map it for reading, and retrieve the raw data.
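
A minimal sketch of the probe approach, assuming a pad you already hold a reference to (the callback name is a placeholder; with DeepStream elements the mapped data is an NvBufSurface rather than packed pixels):

static GstPadProbeReturn
frame_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = gst_pad_probe_info_get_buffer (info);
  GstMapInfo map;

  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    /* map.data / map.size hold the raw buffer contents.
     * In a DeepStream pipeline this is an NvBufSurface *, not packed pixels. */
    gst_buffer_unmap (buf, &map);
  }
  return GST_PAD_PROBE_OK;
}

/* Attach it, e.g. on the element's source pad: */
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, frame_probe_cb, NULL, NULL);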

Can I use the gstdsexample plugin?
gstdsexample has gst_dsexample_transform_ip (GstBaseTransform * btrans, GstBuffer * inbuf).

I can extract the full frame at 4K resolution in the following place. Is that correct?

if (dsexample->process_full_frame) {
    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next)
    {
      frame_meta = (NvDsFrameMeta *) (l_frame->data);
      NvOSD_RectParams rect_params;

      /* Scale the entire frame to processing resolution */
      rect_params.left = 0;
      rect_params.top = 0;
      rect_params.width = dsexample->video_info.width;
      rect_params.height = dsexample->video_info.height;

      /* Scale and convert the frame */
      if (get_converted_mat (dsexample, surface, i, &rect_params,
            scale_ratio, dsexample->video_info.width,
            dsexample->video_info.height) != GST_FLOW_OK) {
        goto error;
      }

      /* Process to get the output */
#ifdef WITH_OPENCV
      output =
          DsExampleProcess (dsexample->dsexamplelib_ctx,
          dsexample->cvmat->data);
#else
      output =
          DsExampleProcess (dsexample->dsexamplelib_ctx,
          (unsigned char *)dsexample->inter_buf->surfaceList[0].mappedAddr.addr[0]);
#endif
      /* Attach the metadata for the full frame */
      attach_metadata_full_frame (dsexample, frame_meta, scale_ratio, output, i);
      i++;
      free (output);
    }

  }

What is the “Face recognition Server”?

Luxand_FaceSDK_Documentation.pdf (1.6 MB)

I use this Luxand face recognition library, and it provides an API that takes an image. So I need to extract images from the DeepStream pipeline.

What kind of image does it need? If it is JPEG, there is a sample showing how to use the hardware-accelerated API to encode an NvBufSurface into JPEG images: /opt/nvidia/sources/apps/sample_apps/deepstream-image-meta-test

Let me check the details. According to the API, it needs an RGB image. So can I use gstdsexample to extract RGB from the NvBufSurface?

If you know how to extract the video data from NvBufSurface, you can extract the data in any place where the NvBufSurface is available.
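
For reference, a rough sketch of CPU access to an NvBufSurface pulled out of a GstBuffer (error handling trimmed; this assumes batch index 0 and a memory type that is CPU-mappable):

GstMapInfo in_map;
if (gst_buffer_map (inbuf, &in_map, GST_MAP_READ)) {
  NvBufSurface *surface = (NvBufSurface *) in_map.data;

  /* Map batch index 0, all planes, for CPU reads */
  if (NvBufSurfaceMap (surface, 0, -1, NVBUF_MAP_READ) == 0) {
    NvBufSurfaceSyncForCpu (surface, 0, -1);

    unsigned char *data =
        (unsigned char *) surface->surfaceList[0].mappedAddr.addr[0];
    guint width  = surface->surfaceList[0].width;
    guint height = surface->surfaceList[0].height;
    guint pitch  = surface->surfaceList[0].pitch;
    /* hand 'data' (pitch-aligned rows) to the face recognition call here */

    NvBufSurfaceUnMap (surface, 0, -1);
  }
  gst_buffer_unmap (inbuf, &in_map);
}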

Thanks, I’ll explore and get back to you.

According to this C++ interface from FaceSDK, I need to pass an unsigned char* buffer to FaceSDK.
I can call this C++ FaceSDK API using the buffer (unsigned char *)dsexample->inter_buf->surfaceList[0].mappedAddr.addr[0] from the gst_dsexample_transform_ip function.

static GstFlowReturn
gst_dsexample_transform_ip (GstBaseTransform * btrans, GstBuffer * inbuf)
{
  GstDsExample *dsexample = GST_DSEXAMPLE (btrans);

  /* ... after get_converted_mat () has filled inter_buf with the converted frame ... */
  unsigned char *frame_data =
      (unsigned char *) dsexample->inter_buf->surfaceList[0].mappedAddr.addr[0];
  /* pass frame_data to the FaceSDK API here */
  return GST_FLOW_OK;
}

Do you think that is OK?

If it is in RGBA format, it can work.
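
If the SDK needs tightly packed RGB rather than RGBA, a plain conversion loop like the hypothetical helper below (not part of DeepStream or FaceSDK) is enough; it assumes the RGBA rows are pitch-aligned, as in the dsexample inter_buf:

/* Hypothetical helper: strip the alpha channel and the row padding */
static void
rgba_to_rgb (const unsigned char *rgba, unsigned char *rgb,
    unsigned int width, unsigned int height, unsigned int pitch)
{
  for (unsigned int y = 0; y < height; y++) {
    const unsigned char *src = rgba + y * pitch;
    unsigned char *dst = rgb + y * width * 3;
    for (unsigned int x = 0; x < width; x++) {
      dst[3 * x + 0] = src[4 * x + 0];  /* R */
      dst[3 * x + 1] = src[4 * x + 1];  /* G */
      dst[3 * x + 2] = src[4 * x + 2];  /* B */
    }
  }
}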

Thank you.
