Access frame pointer in deepstream-app

I’ve been looking for this information also. So far I haven’t been able to find out how to do this.

Refer to sources/gst-plugins/gst-dsexample to see how to get the frame buffer and metadata,
and refer to sources/gst-plugins/gst-nvmsgconv and sources/gst-plugins/gst-nvmsgbroker to see how messages are sent to the server.


@ChrisDing, thanks for replying. I’ve spent the better part of the day trying to extract the data for the frame and the dsexample files are no help. All I’m looking to do is get the raw byte data for each frame so that I can process it independently. I’m trying to avoid the complexity of working with the dsexample plugin also. I currently have the object detection data as I need and I’m able to pass that data to where I need it to go, but not the frame data.

You can also get frame data in a probe callback. Please refer to deepstream-test1 -> osd_sink_pad_buffer_probe().
The frame data is in “GstBuffer *buf = (GstBuffer *) info->data;”.
You can refer to dsexample for how to get the frame data from a GstBuffer.

Hi Chris,

I am trying to access the frame pointer in “tracking_done_buf_prob” using the method you suggested. However, I get a segmentation fault when I just try to access the Y channel. Also, I am unable to understand the dumps I get by printing the parameters. Could you please help me out?

Code:

void parse_ychannel_row_major(uint32_t width, uint32_t height,const uint8_t *Y,uint32_t Y_stride)
{
	int x,y;
	for(y=0; y< height-1; ++y)
	{
		const uint8_t *y_row_ptr=Y+y*Y_stride;

		for(x=0;x<width;++x)
		{
			uint8_t data = *(y_row_ptr+x);
			printf("Y Pixel Value : %d\n",data);
			printf("x : %d , y : %d \n",x,y);
		}

	}

}

static GstPadProbeReturn
tracking_done_buf_prob (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  NvDsInstanceBin *bin = (NvDsInstanceBin *) u_data;
  guint index = bin->index;
  AppCtx *appCtx = bin->appCtx;
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta) {
    NVGSTDS_WARN_MSG_V ("Batch meta not found for buffer %p", buf);
    return GST_PAD_PROBE_OK;
  }

  GstMapInfo in_map_info;
  NvBufSurface *surface = NULL;

  memset (&in_map_info, 0, sizeof (in_map_info));
  if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
    g_print ("Error: Failed to map gst buffer\n");
    return GST_PAD_PROBE_OK;
  }

#if 1

  surface = (NvBufSurface *) in_map_info.data;  

  int batch_size= surface->batchSize;

  printf("Batch Size : %d",batch_size );


  for (int i = 0; i < batch_size; ++i)
  {
    uint32_t data_size = surface->surfaceList[i].dataSize;
    uint32_t pitch = surface->surfaceList[i].pitch;
    uint32_t width = surface->surfaceList[i].width;
    uint32_t height = surface->surfaceList[i].height;
    NvBufSurfaceLayout layout = surface->surfaceList[i].layout;
    NvBufSurfacePlaneParams plane_params = surface->surfaceList[i].planeParams;
    void *dataPtr = surface->surfaceList[i].dataPtr;
    uint8_t *data = dataPtr;

    //printf("\nData at first index buffer %d : %d", i, *data);

    uint32_t num_planes = plane_params.num_planes;

    printf("\nNumber of Planes : %d\n\n", num_planes);

    for (uint32_t p = 0; p < num_planes; ++p)
    {
      uint32_t plane_width = plane_params.width[p];
      uint32_t plane_height = plane_params.height[p];
      uint32_t plane_pitch = plane_params.pitch[p];
      uint32_t offset = plane_params.offset[p];
      uint32_t psize = plane_params.psize[p];
      uint32_t bytes_per_pix = plane_params.bytesPerPix[p];

      printf("width of the plane %d : %d\n\n", p, plane_width);
      printf("height of the plane %d : %d\n\n", p, plane_height);
      printf("pitch of the plane  %d : %d\n\n", p, plane_pitch);
      printf("offset of the plane  %d : %d\n\n", p, offset);
      printf("psize of the plane  %d : %d\n\n", p, psize);
      printf("bytes_per_pix of the plane  %d : %d\n\n\n\n", p, bytes_per_pix);
    }

    printf("Size of the frame buffer : %d\n\n", data_size);
    printf("Pitch of the frame buffer : %d\n\n", pitch);
    printf("width of the frame buffer : %d\n\n", width);
    printf("height of the frame buffer : %d\n\n", height);

    NvBufSurfaceColorFormat color_format = surface->surfaceList[i].colorFormat;

    if (color_format == NVBUF_COLOR_FORMAT_NV12)
      printf("color_format: NVBUF_COLOR_FORMAT_NV12 \n");
    else if (color_format == NVBUF_COLOR_FORMAT_NV12_ER)
      printf("color_format: NVBUF_COLOR_FORMAT_NV12_ER \n");
    else if (color_format == NVBUF_COLOR_FORMAT_NV12_709)
      printf("color_format: NVBUF_COLOR_FORMAT_NV12_709 \n");
    else if (color_format == NVBUF_COLOR_FORMAT_NV12_709_ER)
      printf("color_format: NVBUF_COLOR_FORMAT_NV12_709_ER \n");
  }


   uint32_t frame_width=surface->surfaceList[0].planeParams.width[0];
   uint32_t frame_height=surface->surfaceList[0].planeParams.height[0];
   uint32_t Y_stride=surface->surfaceList[0].planeParams.pitch[0];
   uint32_t UV_stride=surface->surfaceList[0].planeParams.pitch[1];

   printf("\n\nframe_width : %d\n\n", frame_width);
   printf("\n\nframe_height : %d\n\n", frame_height);
   printf("\n\nY_stride : %d\n\n", Y_stride);
   printf("\n\nUV_stride : %d\n\n", UV_stride);


   int offset_calc=frame_width*frame_height;

   //Method 1
   const uint8_t *Y=surface->surfaceList[0].dataPtr;

   // Method 2 
   //const uint8_t *Y=surface->surfaceList[0].dataPtr + (surface->surfaceList[0].planeParams.psize[0] - offset_calc);
   
   parse_ychannel_row_major(frame_width, frame_height,Y,Y_stride);

  exit(0); 

#endif

  gst_buffer_unmap (buf, &in_map_info);

  /*
   * Output KITTI labels with tracking ID if configured to do so.
   */
  write_kitti_track_output(appCtx, batch_meta);

  if (appCtx->primary_bbox_generated_cb)
    appCtx->primary_bbox_generated_cb (appCtx, buf, batch_meta, index);
  return GST_PAD_PROBE_OK;
}

Beginning of the log with the parameters:

Creating LL OSD context new
KLT Tracker Init
Batch Size : 1
Number of Planes : 2

width of the plane 0 : 1280
height of the plane 0 : 720
pitch of the plane  0 : 1280
offset of the plane  0 : 0
psize of the plane  0 : 1048576
bytes_per_pix of the plane  0 : 1

width of the plane 1 : 640
height of the plane 1 : 360
pitch of the plane  1 : 1280
offset of the plane  1 : 1048576
psize of the plane  1 : 524288
bytes_per_pix of the plane  1 : 2

Size of the frame buffer : 1572864
Pitch of the frame buffer : 1280
width of the frame buffer : 1280
height of the frame buffer : 720

color_format: NVBUF_COLOR_FORMAT_NV12 

frame_width : 1280
frame_height : 720
Y_stride : 1280
UV_stride : 1280

The end of the log looks like the following before it crashes:

Y Pixel Value : 0
x : 615 , y : 286 
Y Pixel Value : 0
x : 616 , y : 286 
Y Pixel Value : 0
x : 617 , y : 286 
Y Pixel Value : 0
x : 618 , y : 286 
Y Pixel Value : 0
x : 619 , y : 286 
Y Pixel Value : 0
x : 620 , y : 286 
Y Pixel Value : 0
x : 621 , y : 286 
Y Pixel Value : 0
x : 622 , y : 286 
Y Pixel Value : 0
x : 623 , y : 286 
Y Pixel Value : 0
x : 624 , y : 286 
Y Pixel Value : 0
x : 625 , y : 286 
Y Pixel Value : 0
x : 626 , y : 286 
Y Pixel Value : 0
x : 627 , y : 286 
Y Pixel Value : 0
x : 628 , y : 286 
Y Pixel Value : 0
x : 629 , y : 286 
Y Pixel Value : 0
x Segmentation fault (core dumped)

The psize of the plane is greater than the product of width and height. Is there anything wrong with the way I am accessing the frame?

Kindly help me out.

Thanks.

Please refer to post #24.

Can you refer to this code:

static GstPadProbeReturn
tiler_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
#ifdef DUMP_JPG
    GstBuffer *buf = (GstBuffer *) info->data;
    NvDsMetaList * l_frame = NULL;
    NvDsMetaList * l_user_meta = NULL;
    NvDsUserMeta *user_meta = NULL;
    NvDsInferSegmentationMeta* seg_meta_data = NULL;
    // Get original raw data
    GstMapInfo in_map_info;
    char* src_data = NULL;
    if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
        g_print ("Error: Failed to map gst buffer\n");
        return GST_PAD_PROBE_OK;
    }
    NvBufSurface *surface = (NvBufSurface *)in_map_info.data;

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
        /* Validate user meta */
        for (l_user_meta = frame_meta->frame_user_meta_list; l_user_meta != NULL;
            l_user_meta = l_user_meta->next) {
            user_meta = (NvDsUserMeta *) (l_user_meta->data);
            if (user_meta && user_meta->base_meta.meta_type == NVDSINFER_SEGMENTATION_META) {
                seg_meta_data = (NvDsInferSegmentationMeta*)user_meta->user_meta_data;
            }
        }

        src_data = (char*) malloc(surface->surfaceList[frame_meta->batch_id].dataSize);
        if(src_data == NULL) {
            g_print("Error: failed to malloc src_data \n");
            continue;
        }
        cudaMemcpy((void*)src_data,
                   (void*)surface->surfaceList[frame_meta->batch_id].dataPtr,
                   surface->surfaceList[frame_meta->batch_id].dataSize,
                   cudaMemcpyDeviceToHost);
        dump_jpg(src_data,
                 surface->surfaceList[frame_meta->batch_id].width,
                 surface->surfaceList[frame_meta->batch_id].height,
                 seg_meta_data, frame_meta->source_id, frame_meta->frame_num);

        if(src_data != NULL) {
            free(src_data);
            src_data = NULL;
        }
    }
    gst_buffer_unmap (buf, &in_map_info);
#endif
    return GST_PAD_PROBE_OK;
}

Hi ChrisDing,

I am working on a Jetson Nano. I tried what you suggested, but cudaMemcpy does not seem to work. Maybe this is because the documentation states it is “Not valid for NVBUF_MEM_SURFACE_ARRAY or NVBUF_MEM_HANDLE”? When I checked the memory type I got “NVBUF_MEM_SURFACE_ARRAY”.

Thanks
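For NVBUF_MEM_SURFACE_ARRAY memory, one commonly suggested route is to map the surface for CPU access instead of using cudaMemcpy, via the mapping functions in nvbufsurface.h. The sketch below is an untested illustration of that approach, not a verified fix; availability of these calls depends on the DeepStream version:

```c
/* Sketch only: on Jetson, NVBUF_MEM_SURFACE_ARRAY memory cannot be read
 * through dataPtr or cudaMemcpy directly. nvbufsurface.h provides
 * NvBufSurfaceMap()/NvBufSurfaceSyncForCpu() to obtain a CPU pointer. */
#include "nvbufsurface.h"
#include <stdint.h>
#include <stdio.h>

static void read_y_plane (NvBufSurface *surface, int batch_id)
{
  /* Map plane 0 (the Y plane for NV12) of one surface for reading. */
  if (NvBufSurfaceMap (surface, batch_id, 0, NVBUF_MAP_READ) != 0) {
    printf ("NvBufSurfaceMap failed\n");
    return;
  }
  /* Make pending device writes visible to the CPU before reading. */
  NvBufSurfaceSyncForCpu (surface, batch_id, 0);

  NvBufSurfaceParams *params = &surface->surfaceList[batch_id];
  uint8_t *y_plane = (uint8_t *) params->mappedAddr.addr[0];
  uint32_t pitch   = params->planeParams.pitch[0];
  uint32_t width   = params->planeParams.width[0];
  uint32_t height  = params->planeParams.height[0];

  for (uint32_t y = 0; y < height; ++y) {
    const uint8_t *row = y_plane + (size_t) y * pitch; /* pitch, not width */
    for (uint32_t x = 0; x < width; ++x) {
      uint8_t pixel = row[x];
      (void) pixel; /* process the luma sample here */
    }
  }

  NvBufSurfaceUnMap (surface, batch_id, 0);
}
```

This cannot be exercised off-device, so treat the exact field names and return-code handling as assumptions to be checked against your DeepStream headers.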

What error do you get when you run the code snippet?

Hi cshah,

When I try to print the error returned by cudaMemcpy I get code 11. Please find the code I used below. Additionally, I am attaching the detailed logs and the output JPEG image for your reference.

int write_jpeg_file( char *filename, unsigned char* rgb_image , int width, int height, int bytes_per_pixel, J_COLOR_SPACE color_space )
{

	struct jpeg_compress_struct cinfo;
	struct jpeg_error_mgr jerr;
	
	JSAMPROW row_pointer[1];
	FILE *outfile = fopen( filename, "wb" );
	
	if ( !outfile )
	{
		printf("Error opening output jpeg file %s\n!", filename );
		return -1;
	}
	cinfo.err = jpeg_std_error( &jerr );
	jpeg_create_compress(&cinfo);
	jpeg_stdio_dest(&cinfo, outfile);

	cinfo.image_width = width;	
	cinfo.image_height = height;
	cinfo.input_components = bytes_per_pixel;
	cinfo.in_color_space = color_space; //JCS_RGB

	jpeg_set_defaults( &cinfo );

	jpeg_start_compress( &cinfo, TRUE );

	while( cinfo.next_scanline < cinfo.image_height )
	{
		row_pointer[0] = &rgb_image[ cinfo.next_scanline * cinfo.image_width *  cinfo.input_components];
		jpeg_write_scanlines( &cinfo, row_pointer, 1 );
	}

	jpeg_finish_compress( &cinfo );
	jpeg_destroy_compress( &cinfo );
	fclose( outfile );

	return 1;
}

/**
 * Buffer probe function after tracker.
 */
static GstPadProbeReturn
tracking_done_buf_prob (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  NvDsInstanceBin *bin = (NvDsInstanceBin *) u_data;
  guint index = bin->index;
  AppCtx *appCtx = bin->appCtx;
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta) {
    NVGSTDS_WARN_MSG_V ("Batch meta not found for buffer %p", buf);
    return GST_PAD_PROBE_OK;
  }

  GstMapInfo in_map_info;
  NvBufSurface *surface = NULL;

  memset (&in_map_info, 0, sizeof (in_map_info));
  if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
    g_print ("Error: Failed to map gst buffer\n");
    return GST_PAD_PROBE_OK;
  }

  surface = (NvBufSurface *) in_map_info.data;  
  NvDsMetaList * l_frame = NULL;
  NvDsMetaList * l_user_meta = NULL;
  NvDsUserMeta *user_meta = NULL;

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);

    uint32_t frame_width  = surface->surfaceList[frame_meta->batch_id].width;
    uint32_t frame_height = surface->surfaceList[frame_meta->batch_id].height;
    uint32_t Y_stride     = surface->surfaceList[frame_meta->batch_id].pitch;
    uint32_t buffer_size  = surface->surfaceList[frame_meta->batch_id].dataSize;
    uint32_t est_size     = frame_width * frame_height;

    void *src_data = malloc(surface->surfaceList[frame_meta->batch_id].dataSize);
    if (src_data == NULL) {
      g_print("Error: failed to malloc src_data \n");
      continue;
    }

    printf("Buffer size : %d\n", buffer_size);
    printf("Estimated size : %d\n", est_size);
    printf("frame_width : %d\n", frame_width);
    printf("frame_height : %d\n", frame_height);

    cudaError_t err = cudaMemcpy((void *)src_data,
        (void *)surface->surfaceList[frame_meta->batch_id].dataPtr,
        surface->surfaceList[frame_meta->batch_id].dataSize,
        cudaMemcpyDeviceToHost);

    printf("\nError returned by cudaMemcpy : %d\n", err);

    char filename[200];
    sprintf(filename, "file_y_%dx%d.jpg", frame_width, frame_height);

    write_jpeg_file(filename, src_data, frame_width, frame_height, 1, JCS_GRAYSCALE);

    if (src_data != NULL) {
      free(src_data);
      src_data = NULL;
    }
  }
  gst_buffer_unmap (buf, &in_map_info);

  /*
   * Output KITTI labels with tracking ID if configured to do so.
   */
  write_kitti_track_output(appCtx, batch_meta);

  if (appCtx->primary_bbox_generated_cb)
    appCtx->primary_bbox_generated_cb (appCtx, buf, batch_meta, index);
  return GST_PAD_PROBE_OK;
}

The log file has been generated using the argument --gst-debug=6

Thanks

log.zip (3.79 MB)

Can you print the error string, like this: cudaGetErrorString(cudaError_t num)?

Hi ChrisDing,

It prints “invalid argument”.

Thanks

Can you check dataPtr and size?

Hi ChrisDing,

I used the code below to print the values:

...
	printf("\ndataPtr : %p\n", surface->surfaceList[frame_meta->batch_id].dataPtr);
	printf("\nsize : %d\n", surface->surfaceList[frame_meta->batch_id].dataSize);
...

Output:

dataPtr : 0x7f18074590
size : 1572864

Thanks

Hi neophyte1, did you ever manage to figure this out? I’m running into the same issue as you.

Hi
dataPtr and size look good.
Can you check further which argument has the problem?

I also encountered this problem and also want to get the decoded frame data. Can you share the solution? Thank you.

Hi ChrisDing,

I’m managing to save the raw data into a file to analyze later. However, when I open the file it looks like it’s just an indexed greyscale image; there doesn’t seem to be any RGB data. I’m using the code you shared in post #7. Any suggestions on how to get the RGB data?

Hi Cbstryker
What’s your platform? What’s your input stream type?

Hi ChrisDing,

The platform is Ubuntu 18.04. The input stream type is h264/mp4 (1080p 30fps).

I’ve actually managed to get it working by using your example from one of the other posts asking a similar question. To be honest, I’m not sure why one gives only a single channel and the other gives all three. In either case, I have what I need now, thank you.

“I’m not sure why one gives only a single channel and the other gives all three.”

It depends on the input stream type.