Adding Preprocessing to Frames RTSP

Hi,

Currently using DeepStream to run a NASNet-based model, but I was hoping for some guidance on how to add extra preprocessing (in place of what the pipeline does) to the frames (sent over RTSP) before they are sent to nvinfer for inferencing.

Right now, since the model was created in TensorFlow, I have Python code that performs the preprocessing on images (.jpg) before they are sent for inferencing. Can someone point me to the file and/or function where I can add similar C++ code?

Thanks!

Hi,

I am also looking for the same possibility.
I would like to pre-process the RTSP stream by changing the colours with a colour effect (I am dealing with a greyscale camera).
I understand that this can be achieved through videobalance
https://gstreamer.freedesktop.org/documentation/videofilter/videobalance.html?gi-language=c
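
For reference, plain GStreamer usage of videobalance looks something like this (an untested sketch with arbitrary property values):

gst-launch-1.0 videotestsrc ! videoconvert ! videobalance brightness=0.1 saturation=1.5 ! videoconvert ! autovideosink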

How can I add that to the DeepStream pipeline?

I hope that this simple example might help #qsu as well.

Thank you very much in advance

Hi,

I have an update.
I was not able to integrate the preprocessing inside the DeepStream pipeline, but I was able to test the filter I need with GStreamer, save the processed stream as an MP4, and run the DeepStream pipeline on that file.

Here is the GStreamer pipeline:

gst-launch-1.0 -e rtspsrc location=rtsp://<your-url> ! rtph264depay ! decodebin ! videoconvert ! coloreffects preset=sepia ! videoconvert ! video/x-raw,width=1280,height=720,format=NV12 ! x264enc ! mp4mux ! filesink location=camera.mp4

In particular, I need this preprocessing for a camera that outputs greyscale video.
By applying a simple “sepia” filter I have found an increase in object-detection performance with respect to the original greyscale images.

I hope that you can help me integrate the coloreffects element as preprocessing inside the DeepStream pipeline.
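
Something along these lines is what I have in mind (an untested sketch; coloreffects works on system-memory RGB caps, so I am assuming converters are needed around it, and the decoder/pipeline variable names are taken from the sample apps):

  GstElement *conv1 = gst_element_factory_make ("nvvideoconvert", "conv-to-rgb");
  GstElement *sepia = gst_element_factory_make ("coloreffects", "sepia");
  GstElement *conv2 = gst_element_factory_make ("nvvideoconvert", "conv-back");

  /* preset 2 should correspond to "sepia" per the coloreffects documentation */
  g_object_set (G_OBJECT (sepia), "preset", 2, NULL);

  gst_bin_add_many (GST_BIN (pipeline), conv1, sepia, conv2, NULL);
  /* decoder -> conv1 -> sepia -> conv2; conv2's src pad then goes to the
   * nvstreammux request sink pad in the usual way */
  if (!gst_element_link_many (decoder, conv1, sepia, conv2, NULL)) {
    g_printerr ("Could not link the colour-filter sub-pipeline.\n");
  }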

Thank you again

Please check this section in the development guide.

After installation through SDK Manager, you may follow the README to enable it:
deepstream_sdk_v4.0.2_jetson\sources\gst-plugins\gst-dsexample\README
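
From memory, building and installing it looks like this (an untested sketch; the exact CUDA_VER depends on your JetPack release, so please check the README):

cd deepstream_sdk_v4.0.2_jetson/sources/gst-plugins/gst-dsexample
export CUDA_VER=10.0    # value depends on the JetPack release
make
sudo make install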

Hi DaneLLL,

Thanks for getting back to me!

I have to admit that the source code in gst-dsexample is not very clear to me, since it comes without comments.
Could you kindly point me towards some documentation or examples that show how to integrate GStreamer elements into the DeepStream pipeline?

Alternatively, do you suggest any other method to apply a filter to the frames before they are processed by nvinfer?

Thanks again!!

I would like to add a clearer explanation of my intentions.

I aim to process inputs coming from RTSP streams (e.g. IP cameras).
Those inputs are black and white.
Pretrained models do not perform well on those inputs; I guess this is because the models were trained on colour image datasets.

I have run some tests and observed an improvement in performance if I simply apply a colour filter (e.g. a sepia effect) to the images.
Thus, I aim to insert that filtering step into the pipeline.
Please see the following diagram for a clearer explanation:

[diagram: greyscale RTSP input → colour filter (e.g. sepia) → existing DeepStream pipeline]

I have the whole pipeline already working; what is missing is the pre-processing.
I hope you can give me some advice, for I am stuck at a dead end.
(I am also open to learn alternative solutions.)

Thanks again!

Could you use gst-dsexample and place it just before the pgie element?
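
In gst-launch terms the placement would be something like (just a sketch of the ordering):

... ! nvstreammux ! dsexample ! nvinfer ! nvmultistreamtiler ! ...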

Hi Jason,

Thanks for your suggestion!

Yes, I am able to insert the “dsexample” element into the pipeline.

preprocessing = gst_element_factory_make ("dsexample", "pre-processing");

...
 
#ifdef PLATFORM_TEGRA
  gst_bin_add_many (GST_BIN (pipeline), preprocessing, pgie, tiler, nvvidconv,
      nvosd, transform, sink, NULL);
  /* we link the elements together:
   * nvstreammux -> dsexample -> nvinfer -> nvtiler -> nvvidconv -> nvosd -> video-renderer */
  if (!gst_element_link_many (streammux, preprocessing, pgie, tiler, nvvidconv,
          nvosd, transform, sink, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }
#else

What I do not understand is how to customize the gst-dsexample element to perform colour filtering.
Would you be so kind as to give me some suggestions on that?

Thank you very much again!

Have you had a look at the code provided in /opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-dsexample? In this code you can see how to access the frame with OpenCV. OpenCV has such a huge user base that it's pretty easy to search your way from there and work out how to implement your colour filtering.
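
For example, a standalone sketch of the sepia idea (hypothetical file names; plain OpenCV, no DeepStream involved) would be:

#include <opencv2/opencv.hpp>

int main ()
{
  /* Load a test frame from disk (hypothetical file name) */
  cv::Mat bgr = cv::imread ("frame.jpg");
  if (bgr.empty ())
    return 1;

  /* Sepia mixing matrix; cv::transform expects the kernel to have as
   * many columns as the input has channels (3 for BGR). */
  cv::Mat kernel = (cv::Mat_<float> (3, 3) <<
      0.272f, 0.534f, 0.131f,
      0.349f, 0.686f, 0.168f,
      0.393f, 0.769f, 0.189f);

  cv::Mat sepia;
  cv::transform (bgr, sepia, kernel);
  cv::imwrite ("frame_sepia.jpg", sepia);
  return 0;
}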

Dear Jason,

Thanks for your suggestion.
You are right, we can find many OpenCV examples of how to achieve sepia filtering.
However, I do not understand how to modify gstdsexample.cpp.

I have added the element to the pipeline as described above, and the pipeline is generated correctly.

I believe that cv::transform would do the job of filtering:

cv::Mat kernel =
    (cv::Mat_<float>(3, 3) <<
    0.272, 0.534, 0.131,
    0.349, 0.686, 0.168,
    0.393, 0.769, 0.189);
[...]
cv::transform(input_img, output_img, kernel);

I tried to apply cv::transform inside the function “get_converted_mat”, just after the colour conversion:

  in_mat =
      cv::Mat (dsexample->processing_height, dsexample->processing_width,
      CV_8UC4, dsexample->inter_buf->surfaceList[0].mappedAddr.addr[0],
      dsexample->inter_buf->surfaceList[0].pitch);

  out_mat =
      cv::Mat (dsexample->processing_height, dsexample->processing_width,
      CV_8UC4);

#if (CV_MAJOR_VERSION >= 4)
  cv::cvtColor (in_mat, out_mat, cv::COLOR_RGBA2BGR);
#else
  cv::cvtColor (in_mat, out_mat, CV_RGBA2BGR);
#endif

  cv::transform (out_mat, *dsexample->cvmat, kernel);

The function get_converted_mat is called by gst_dsexample_transform_ip, so I forced the latter to process the full frame. Below you can see how get_converted_mat is called:

if (true) {
    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next)
    {
      frame_meta = (NvDsFrameMeta *) (l_frame->data);
      NvOSD_RectParams rect_params;

      /* Scale the entire frame to processing resolution */
      rect_params.left = 0;
      rect_params.top = 0;
      rect_params.width = dsexample->video_info.width;
      rect_params.height = dsexample->video_info.height;

      /* Scale and convert the frame */
      if (get_converted_mat (dsexample, surface, i, &rect_params,
            scale_ratio, dsexample->video_info.width,
            dsexample->video_info.height) != GST_FLOW_OK) {
        goto error;
      }

      /* Process to get the output */
      output =
          DsExampleProcess (dsexample->dsexamplelib_ctx,
          dsexample->cvmat->data);
      /* Attach the metadata for the full frame */
      //attach_metadata_full_frame (dsexample, frame_meta, scale_ratio, output, i);
      i++;
      free (output);
    }
  }

Unfortunately, the only result I obtain on the console when I run the pipeline described in my previous post is the following message after each frame is processed:
nvbufsurface: Wrong buffer index (0)

I hope you can help me understand where to edit gstdsexample.cpp to correctly apply cv::transform and output the processed frames to the next element of the DeepStream pipeline.

Thank you very much again!!

Good morning,

Unfortunately I am still stuck on the same point.
Is there anyone willing to give me a hint about how to integrate some OpenCV code into gst-dsexample?
@jasonpgf2a @DaneLLL

Thank you very much again!!

Hi,
You may start by enabling dsexample in deepstream-app. Add the following to the config file:

[ds-example]
enable=1
processing-width=640
processing-height=480
full-frame=0
#batch-size for batch supported optimized plugin
batch-size=1
unique-id=15
gpu-id=0

You can check and modify get_converted_mat() to understand how it works.
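
For instance (a sketch, not something we have verified): in get_converted_mat(), right after the existing cvtColor() call that fills dsexample->cvmat, you could add your own mixing step on the 3-channel copy:

  /* existing colour conversion in gstdsexample.cpp */
  cv::cvtColor (in_mat, *dsexample->cvmat, cv::COLOR_RGBA2BGR);
  /* hypothetical addition: "kernel" is your sepia matrix; cvmat is CV_8UC3
   * here, so a 3x3 kernel matches the channel count */
  cv::transform (*dsexample->cvmat, *dsexample->cvmat, kernel);

Note that this only modifies the internal copy that dsexample passes to DsExampleProcess(); it does not change the buffer that flows downstream to nvinfer.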

Hi @DaneLLL,

Thanks again for getting back to me.
I was indeed trying to modify get_converted_mat
(please see my previous message).
However, I am not able to understand how to insert the new OpenCV code for the sepia filtering.

I have the OpenCV code for the sepia filtering.
I am able to run dsexample.
But I am not able to understand how to modify get_converted_mat.

Could you please help me, or provide me with an example in which someone edited get_converted_mat with custom OpenCV code?

Thank you very much!

Hi,
We don’t have experience using this filter. Let’s see whether other users can share their experience.

Thanks anyway @DaneLLL.
I hope I can receive some hints; I have not resolved this point yet.
Maybe @jasonpgf2a, do you have some idea about how to integrate that OpenCV script into dsexample?

Thank you very much in advance.

Good morning,

I still have this point open.
I am still not able to integrate a sepia filter within dsexample.
As you can see above (Adding Preprocessing to Frames RTSP - #12 by borelli.g92),
I have tried some implementations, but with no positive results.
Do you have any suggestions about filtering the frames with a very simple OpenCV filter before the PGIE?

Thank you again!!

Good morning again,

One update.
I am able to place a dsexample instance after the PGIE, filter the frames with the OpenCV sepia filter, and save the image to disk:

  // Use OpenCV to remove padding and convert RGBA to BGR. Can be skipped if
  // the algorithm can handle padded RGBA data.
  in_mat =
      cv::Mat (dest_height, dest_width,
      CV_8UC4, nvbuf->surfaceList[0].mappedAddr.addr[0],
      nvbuf->surfaceList[0].pitch);
  out_mat =
      cv::Mat (cv::Size(dest_width, dest_height), CV_8UC3);
  saved_mat =
      cv::Mat (cv::Size(dest_width, dest_height), CV_8UC3);

  cv::cvtColor (in_mat, out_mat, cv::COLOR_RGBA2BGR);
  cv::transform (out_mat, saved_mat, kernel);

  time (&rawtime);
  info = localtime (&rawtime);

  char filename[64];
  /* tm_mon is 0-based so it needs +1, but tm_mday is already 1-based */
  snprintf (filename, 64, "/home/jnano/jnanoImages/%04d_%02d_%02d_%02d_%02d_%02d.jpg",
      info->tm_year + 1900, info->tm_mon + 1, info->tm_mday,
      info->tm_hour, info->tm_min, info->tm_sec);
  cv::imwrite (filename, saved_mat);

This is a step forward, but I am not yet able to apply this filtering to all the frames before the PGIE.

I hope that you could give me some advice.
Thanks!!

UPDATE:

Hello,

I have tried a number of different approaches during the weekend.
I have tried to modify gst-dsexample following the same approach that is used for blurring objects, but I obtain a segmentation fault:

/* Cache the mapped data for CPU access */
NvBufSurfaceSyncForCpu (surface, frame_meta->batch_id, 0);
in_mat =
    cv::Mat (surface->surfaceList[frame_meta->batch_id].planeParams.height[0],
    surface->surfaceList[frame_meta->batch_id].planeParams.width[0], CV_8UC4,
    surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
    surface->surfaceList[frame_meta->batch_id].planeParams.pitch[0]);

cv::transform (in_mat, in_mat, kernel);

in_mat.convertTo (in_mat, CV_8UC4);
/* Cache the mapped data for device access */
NvBufSurfaceSyncForDevice (surface, frame_meta->batch_id, 0);

However, during the execution of cv::transform(in_mat, in_mat, kernel); I get a segmentation fault :(

Moreover, I get
5168 Segmentation fault (core dumped)
even if I just copy the buffer instead of doing a transform:
cv::Mat image_copy = in_mat.clone();
However, there is no segmentation fault at all if I use another OpenCV function such as:
cv::filter2D(in_mat, in_mat, -1, kernel);
The problem is that, of course, this function is not doing what I need…
But the PGIE is correctly receiving the filtered images:
[image: frame after filter2D, as received by the PGIE]
This is the result of filter2D, which of course performs a convolution over the pixels rather than the mixing of channels that I need.

I think this gives an interesting piece of information: the problem might come from the specific operation that cv::transform performs on my buffer.
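
If I read the OpenCV documentation correctly, cv::transform requires the kernel to have as many columns as the source has channels (or channels + 1), while the mapped surface here is CV_8UC4 (RGBA), so a 3x3 kernel would be rejected. A possible fix (an untested sketch) is a 4x4 kernel that mixes R, G and B and passes alpha through; note that the surface is RGBA rather than BGR, so the rows are in R, G, B order:

cv::Mat kernel4 = (cv::Mat_<float> (4, 4) <<
    0.393f, 0.769f, 0.189f, 0.0f,   // R' = 0.393 R + 0.769 G + 0.189 B
    0.349f, 0.686f, 0.168f, 0.0f,   // G'
    0.272f, 0.534f, 0.131f, 0.0f,   // B'
    0.0f,   0.0f,   0.0f,   1.0f);  // A' = A (alpha unchanged)

cv::transform (in_mat, in_mat, kernel4);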

Finally, I have also tried to edit the function get_converted_mat (renamed filter_frame below) as follows:

static GstFlowReturn
filter_frame (GstDsExample * dsexample, NvBufSurface *input_buf, gint idx,
    NvOSD_RectParams * crop_rect_params, gdouble & ratio, gint input_width,
    gint input_height)
{
  NvBufSurfTransform_Error err;
  NvBufSurfTransformConfigParams transform_config_params;
  NvBufSurfTransformParams transform_params;
  NvBufSurfTransformRect src_rect;
  NvBufSurfTransformRect dst_rect;
  NvBufSurface ip_surf;
  cv::Mat in_mat, out_mat, filtered_mat;
  ip_surf = *input_buf;
  
  time_t rawtime;
  struct tm *info;

  ip_surf.numFilled = ip_surf.batchSize = 1;
  ip_surf.surfaceList = &(input_buf->surfaceList[idx]);

  /*
  gint src_left = GST_ROUND_UP_2(crop_rect_params->left);
  gint src_top = GST_ROUND_UP_2(crop_rect_params->top);
  gint src_width = GST_ROUND_DOWN_2(crop_rect_params->width);
  gint src_height = GST_ROUND_DOWN_2(crop_rect_params->height);
  */
  gint src_left = crop_rect_params->left;
  gint src_top = crop_rect_params->top;
  gint src_width = crop_rect_params->width;
  gint src_height = crop_rect_params->height;
  //g_print("ltwh = %d %d %d %d \n", src_left, src_top, src_width, src_height);

  guint dest_width, dest_height;
  dest_width = src_width;
  dest_height = src_height;

  NvBufSurface *nvbuf;
  NvBufSurfaceCreateParams create_params;
  create_params.gpuId  = dsexample->gpu_id;
  create_params.width  = dest_width;
  create_params.height = dest_height;
  create_params.size = 0;
  create_params.colorFormat = NVBUF_COLOR_FORMAT_RGBA;
  create_params.layout = NVBUF_LAYOUT_PITCH;
#ifdef __aarch64__
  create_params.memType = NVBUF_MEM_DEFAULT;
#else
  create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
#endif
  NvBufSurfaceCreate (&nvbuf, 1, &create_params);

  // Configure transform session parameters for the transformation
  transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
  transform_config_params.gpu_id = dsexample->gpu_id;
  transform_config_params.cuda_stream = dsexample->cuda_stream;  
  
  // Set the transform session parameters for the conversions executed in this
  // thread.
  
  
  err = NvBufSurfTransformSetSessionParams (&transform_config_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("NvBufSurfTransformSetSessionParams failed with error %d", err), (NULL));
    goto error;
  }
  

  // Calculate scaling ratio while maintaining aspect ratio
  ratio = MIN (1.0 * dest_width/ src_width, 1.0 * dest_height / src_height);

  
  if ((crop_rect_params->width == 0) || (crop_rect_params->height == 0)) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("%s:crop_rect_params dimensions are zero",__func__), (NULL));
    goto error;
  }
  

#ifdef __aarch64__
  if (ratio <= 1.0 / 16 || ratio >= 16.0) {
    // Currently cannot scale by ratio > 16 or < 1/16 for Jetson
    goto error;
  }
#endif
  // Set the transform ROIs for source and destination
  src_rect = {(guint)src_top, (guint)src_left, (guint)src_width, (guint)src_height};
  dst_rect = {0, 0, (guint)dest_width, (guint)dest_height};

  // Set the transform parameters
  transform_params.src_rect = &src_rect;
  transform_params.dst_rect = &dst_rect;
  transform_params.transform_flag =
    NVBUFSURF_TRANSFORM_FILTER | NVBUFSURF_TRANSFORM_CROP_SRC |
      NVBUFSURF_TRANSFORM_CROP_DST;
  transform_params.transform_filter = NvBufSurfTransformInter_Default;

  //Memset the memory
  NvBufSurfaceMemSet (nvbuf, 0, 0, 0);

  GST_DEBUG_OBJECT (dsexample, "Scaling and converting input buffer\n");

  // Transformation scaling+format conversion if any.
  
  
  err = NvBufSurfTransform (&ip_surf, nvbuf, &transform_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("NvBufSurfTransform failed with error %d while converting buffer", err),
        (NULL));
    goto error;
  }
  
  // Map the buffer so that it can be accessed by CPU
  //if (NvBufSurfaceMap (nvbuf, 0, 0, NVBUF_MAP_READ) != 0){
  if (NvBufSurfaceMap (nvbuf, 0, 0, NVBUF_MAP_READ_WRITE) != 0){
    goto error;
  }

  // Cache the mapped data for CPU access
  NvBufSurfaceSyncForCpu (nvbuf, 0, 0);

  // Use openCV to remove padding and convert RGBA to BGR. Can be skipped if
  // algorithm can handle padded RGBA data.
  in_mat =
      cv::Mat (dest_height, dest_width,
      CV_8UC4, nvbuf->surfaceList[0].mappedAddr.addr[0],
      nvbuf->surfaceList[0].pitch);
  out_mat =
      cv::Mat (cv::Size(dest_width, dest_height), CV_8UC3);
  filtered_mat =
      cv::Mat (cv::Size(dest_width, dest_height), CV_8UC3);

  //cv::cvtColor (in_mat, out_mat, cv::COLOR_RGBA2BGR);
  //cv::transform(out_mat, filtered_mat, kernel);
  
  //cv::cvtColor (in_mat, in_mat, cv::COLOR_RGBA2BGR);
  cv::transform(in_mat, in_mat, kernel);
  //cv::cvtColor (in_mat, in_mat, cv::COLOR_BGR2RGBA);
  
  /* Cache the mapped data for device access */
  NvBufSurfaceSyncForDevice (nvbuf, 0, 0);
/*
  time( &rawtime );
  info = localtime( &rawtime );
  
  static gint dump = 0;
  char filename[64];
  snprintf(filename, 64, "/home/jnano/jnanoImages/%04d_%02d_%02d_%02d_%02d_%02d.jpg", info->tm_year+1900, info->tm_mon+1, info->tm_mday+1, info->tm_hour, info->tm_min, info->tm_sec);
  cv::imwrite(filename, saved_mat);
*/
  if (NvBufSurfaceUnMap (nvbuf, 0, 0)){
    goto error;
  }
  NvBufSurfaceDestroy(nvbuf);

#ifdef __aarch64__
  // To use the converted buffer in CUDA, create an EGLImage and then use
  // CUDA-EGL interop APIs
  if (USE_EGLIMAGE) {
    if (NvBufSurfaceMapEglImage (dsexample->inter_buf, 0) !=0 ) {
      goto error;
    }

    // dsexample->inter_buf->surfaceList[0].mappedAddr.eglImage
    // Use interop APIs cuGraphicsEGLRegisterImage and
    // cuGraphicsResourceGetMappedEglFrame to access the buffer in CUDA

    // Destroy the EGLImage
    NvBufSurfaceUnMapEglImage (dsexample->inter_buf, 0);
  }
#endif

  /* We will first convert only the Region of Interest (the entire frame or the
   * object bounding box) to RGB and then scale the converted RGB frame to
   * processing resolution. */
  return GST_FLOW_OK;

error:
  return GST_FLOW_ERROR;
}

With this approach I do not get any segmentation fault, but the PGIE is NOT receiving the filtered image; instead, the PGIE receives the original image. It seems that I am not writing to the buffer as was happening in the previous example.
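
Looking at the code again, I wonder whether the missing piece is copying the filtered nvbuf back into the input surface: the code syncs nvbuf for device access but then destroys it without writing the result anywhere. An untested sketch of what I mean (placed after the cv::transform call and before NvBufSurfaceUnMap/NvBufSurfaceDestroy), assuming NvBufSurfTransform can write into the mapped input surface:

  /* Push the CPU-side edits back to the device copy of nvbuf */
  NvBufSurfaceSyncForDevice (nvbuf, 0, 0);

  /* Reverse transform: the filtered full buffer (dst_rect) goes back into
   * the original ROI (src_rect) of the input surface */
  NvBufSurfTransformParams back_params = transform_params;
  back_params.src_rect = &dst_rect;
  back_params.dst_rect = &src_rect;
  err = NvBufSurfTransform (nvbuf, &ip_surf, &back_params);
  if (err != NvBufSurfTransformError_Success) {
    goto error;
  }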

I hope that you can give me some suggestions.

Thank you very much!!

Hi borelli.g92,

Please open a new topic for your issue. Thanks