How to get person images by extracting the output of DeepStream/TensorRT?

Hi,

deepstream_sdk_on_jetson/sources/apps/sample_apps/deepstream-app/deepstream-app runs well on my Xavier. If I want to get person images by extracting the output of DeepStream/TensorRT, how can I do it?
If that is possible, can the person images be forwarded to another server via Kafka? Can any expert tell me how to do it?

My environment: JetPack 4.1.1, DeepStream 3.0, and deepstream-plugins.

Hi,

Suppose that the ‘person image’ is the ROI image from the person bounding box.
You can extract it with the input frame and bounding box.
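
For example, if the decoded frame is already available as an OpenCV cv::Mat and you have the box coordinates, the crop itself is only a few lines. A minimal sketch; ‘frame’ and the x/y/w/h values stand in for whatever your pipeline provides:

// Sketch: crop the person ROI out of a decoded frame.
// Needs #include <opencv2/opencv.hpp>; 'frame', x, y, w, h are
// placeholders for your pipeline's data.
cv::Rect roi (x, y, w, h);
roi &= cv::Rect (0, 0, frame.cols, frame.rows);  // clamp to frame bounds
cv::Mat person = frame (roi).clone ();           // deep copy of the ROI
cv::imwrite ("person.jpg", person);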

For Kafka, there is no prebuilt package for ARM systems.
You will need to build it from source on the Xavier.
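
Once librdkafka is built and installed, producing a message from C/C++ code is short. A hedged sketch only: the broker address and topic name below are placeholders, and in a real plugin you would create the producer once at start-up rather than per image:

#include <librdkafka/rdkafka.h>

/* Sketch: send one encoded image buffer to a Kafka topic.
 * "server-ip:9092" and "person-images" are placeholders. */
static void send_image (const void *img_buf, size_t img_len)
{
  char errstr[512];
  rd_kafka_conf_t *conf = rd_kafka_conf_new ();
  rd_kafka_conf_set (conf, "bootstrap.servers", "server-ip:9092",
      errstr, sizeof (errstr));

  rd_kafka_t *rk = rd_kafka_new (RD_KAFKA_PRODUCER, conf,
      errstr, sizeof (errstr));

  /* F_COPY: librdkafka copies the payload, so the caller may free it. */
  rd_kafka_producev (rk,
      RD_KAFKA_V_TOPIC ("person-images"),
      RD_KAFKA_V_VALUE ((void *) img_buf, img_len),
      RD_KAFKA_V_MSGFLAGS (RD_KAFKA_MSG_F_COPY),
      RD_KAFKA_V_END);

  rd_kafka_flush (rk, 10 * 1000);  /* wait up to 10 s for delivery */
  rd_kafka_destroy (rk);
}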

Thanks.

@AastaLLL
Yes, the ‘person image’ is the ROI image from the person bounding box, and the stream comes from a webcam.

Where is the ‘input frame’ stored?
And where is the person bounding box produced in deepstream_sdk_on_jetson?
Once I know where the frame is stored and where the bounding box comes from, how do I extract the person image?

Hi,

The input frame and the bbox are intermediate outputs of the pipeline components.
You will need to write some code to get the data.

It’s recommended to start from the dsexample plugin.

frame: get the data from the input buffer

static GstFlowReturn
gst_dsexample_transform_ip (GstBaseTransform * btrans, GstBuffer * inbuf)
{
  ...
  int in_dmabuf_fd = 0;
  Mat in_mat;

  /* Get the dmabuf fd of the mapped input buffer, then convert the
   * region described by rect_params into the plugin's cv::Mat. */
  ExtractFdFromNvBuffer (in_map_info.data, &in_dmabuf_fd);
  get_converted_mat (dsexample, in_dmabuf_fd, &rect_params,
      *dsexample->cvmat, scale_ratio);
  ...

bbox: the bbox information is stored in the buffer metadata

GstMeta *gst_meta;
gpointer state = NULL;
BBOX_Params *bbparams = NULL;

/* Walk the buffer's metadata and pick out the bbox entry. */
while ((gst_meta = gst_buffer_iterate_meta (inbuf, &state)) != NULL) {
  IvaMeta *ivameta = (IvaMeta *) gst_meta;
  if (ivameta->meta_type == NV_BBOX_INFO)
    bbparams = (BBOX_Params *) ivameta->meta_data;
}
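
Putting frame and bbox together inside gst_dsexample_transform_ip, each detected object can then be cropped and written to disk. A sketch under assumptions: the field names num_rects and rect_params on BBOX_Params should be verified against the meta header in your SDK version, and in_mat is assumed to hold the full converted frame:

/* Sketch: crop every detected object out of 'in_mat' and save it.
 * num_rects / rect_params are assumed field names; check the
 * BBOX_Params definition shipped with your SDK version. */
for (guint i = 0; i < bbparams->num_rects; i++) {
  NvOSD_RectParams *rp = &bbparams->rect_params[i];
  cv::Rect roi (rp->left, rp->top, rp->width, rp->height);
  roi &= cv::Rect (0, 0, in_mat.cols, in_mat.rows);  /* clamp */
  char name[64];
  snprintf (name, sizeof (name), "person_%u.jpg", i);
  cv::imwrite (name, in_mat (roi));
}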

Thanks.

@AastaLLL

Is there a timing issue? That is, if I get the bbox and the frame at the same time but from different elements in the pipeline, the bbox may not match that frame.

Can the person image be cropped in the object-tracking stage of DeepStream? For that stage to work, DeepStream must have the frame, the bbox, and the object ID, so if that is the case I can crop the person image there, am I right? And can you tell me where that processing code is?

Hi,

Is *dsexample->cvmat in get_converted_mat() the frame data? Could you give an example of getting the frame data and cropping the object within the bbox to an image file?

Hi,

You can find the source code in our sample code.
Location should be ${ds root}/sources/gst-plugins/gst-dsexample/gstdsexample.cpp.
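
For the *dsexample->cvmat question: in that source, get_converted_mat() fills the plugin’s cvmat with the converted copy of the region described by rect_params, so writing it out is enough to check the crop. A minimal sketch (the file name is arbitrary):

/* Right after the get_converted_mat() call in the transform function: */
cv::imwrite ("converted_roi.png", *dsexample->cvmat);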

Thanks.

@AastaLLL

I know gstdsexample.cpp. Do you suggest we create a plugin like gst-dsexample?
Or should we write code using ExtractFdFromNvBuffer(), get_converted_mat(), etc., for example by adding these functions to deepstream_test1_app.c?
If the latter, the “dsexample” parameter of get_converted_mat() is a plugin instance; how can I obtain or use it there?
I’m in a rush, so could you give me a complete example that extracts person objects from the input frame and transfers them to another server?

Hi,

YES.

You will need to create a GStreamer plugin to extract the ROI.
There is no available component for this.

Hi,
Is this IvaMeta specific to gst-Tegra?
We are unable to initialize it on the Tesla architecture.
The error we get is: ‘IvaMeta’ does not name a type.

Have you resolved your problem? I already get the frame with this code:

GstMapInfo in_map_info;
NvBufSurface *surface = NULL;
void *rgb_buf_gpu = NULL;

/* Map the GstBuffer to get at the NvBufSurface it carries. */
memset (&in_map_info, 0, sizeof (in_map_info));
if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
  g_print ("Error: Failed to map gst buffer\n");
}
surface = (NvBufSurface *) in_map_info.data;

/* Allocate a device buffer for the packed RGB output. */
CHECK_CUDA_STATUS (cudaMalloc ((void **) (&rgb_buf_gpu),
    DEFAULT_PROCESSING_WIDTH * DEFAULT_PROCESSING_HEIGHT *
    RGB_BYTES_PER_PIXEL), "Could not allocate cuda device buffer");

/* Drop the alpha channel: NPP converts the RGBA surface of this
 * batch entry into packed RGB on the GPU. */
CHECK_NPP_STATUS (nppiSwapChannels_8u_C4C3R (
    (const Npp8u *) surface->buf_data[eventMsg->batch_id],
    DEFAULT_PROCESSING_WIDTH * RGBA_BYTES_PER_PIXEL,
    (Npp8u *) rgb_buf_gpu,
    DEFAULT_PROCESSING_WIDTH * RGB_BYTES_PER_PIXEL,
    oSrcSize, aDstOrder),
    "Failed to convert RGBA to RGB");

In my case the frame is in device memory, so I have to use cudaMalloc. You also have to define the batch_id; in my case it can be read from the frame meta, and I save it to eventMsg->batch_id.
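
To get that RGB device buffer into a file (or to crop it), copy it back to host memory and wrap it in a cv::Mat. A sketch continuing the snippet above; note that cv::imwrite expects BGR, so depending on the aDstOrder used above a cv::cvtColor may be needed first:

/* Sketch: copy the packed RGB frame to the host and save it. */
size_t frame_size = DEFAULT_PROCESSING_WIDTH * DEFAULT_PROCESSING_HEIGHT *
    RGB_BYTES_PER_PIXEL;
unsigned char *rgb_host = (unsigned char *) g_malloc (frame_size);
cudaMemcpy (rgb_host, rgb_buf_gpu, frame_size, cudaMemcpyDeviceToHost);

cv::Mat frame (DEFAULT_PROCESSING_HEIGHT, DEFAULT_PROCESSING_WIDTH,
    CV_8UC3, rgb_host);
cv::imwrite ("frame.jpg", frame);  /* crop first with frame(cv::Rect(...))
                                      if only the person ROI is wanted */
g_free (rgb_host);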