Adding Preprocessing to Frames RTSP

Hi,

I'm currently using DeepStream to run a NASNet-based model, but I was hoping for some guidance on how to add extra preprocessing (beyond what the pipeline already does) to the frames (received over RTSP) before they are sent to nvinfer for inferencing.

Right now, since the model was created in TensorFlow, I have Python code that performs preprocessing on images (.jpg) before they are sent for inferencing. Can someone point me to the file and/or function where I can add similar C++ code?

Thanks!

Hi,

I am also looking for the same possibility.
I would like to pre-process the RTSP stream by changing the colours with some colour effect (I am dealing with a greyscale camera).
I understand that this can be achieved through videobalance
https://gstreamer.freedesktop.org/documentation/videofilter/videobalance.html?gi-language=c

How can I add that to the pipeline of deepstream?

I hope that this simple example might help #qsu as well.

Thank you very much in advance

Hi,

I have some update.
I was not able to integrate the preprocessing inside the DeepStream pipeline, but I was able to test the filter I need with GStreamer, save the processed stream as MP4, and run the DeepStream pipeline on that file.

Here is the gstreamer pipeline:

gst-launch-1.0 -e rtspsrc location=rtsp:<your-url> ! rtph264depay ! decodebin ! videoconvert ! coloreffects preset=sepia ! videoconvert ! video/x-raw,width=1280,height=720,format=NV12 ! x264enc ! mp4mux ! filesink location=camera.mp4

In particular, I need this preprocessing on a camera that outputs greyscale video.
By applying a simple “sepia” filter I have found an increase in object-detection performance with respect to the original greyscale images.
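The effect on a greyscale pixel is easy to compute by hand: with the standard sepia weights, each grey level g maps to roughly (1.351g, 1.203g, 0.937g), clamped to 255, so every grey value becomes a distinct warm colour instead of a flat R = G = B triple. Here is a minimal C++ sketch of just this mapping (the names `sepia_of_grey` and `clamp255` are mine, purely for illustration):

```cpp
#include <algorithm>
#include <cmath>

// Map one grey level (0-255) to a warm RGB triple using the standard
// sepia weights. For a greyscale pixel R = G = B = g, so each output
// channel is simply the row-sum of the sepia kernel times g.
struct Rgb { int r, g, b; };

static int clamp255(double v) {
    long x = std::lround(v);
    return static_cast<int>(std::max(0L, std::min(255L, x)));
}

Rgb sepia_of_grey(int g) {
    // Row sums: 0.393+0.769+0.189, 0.349+0.686+0.168, 0.272+0.534+0.131
    return { clamp255(1.351 * g), clamp255(1.203 * g), clamp255(0.937 * g) };
}
```

This is only the arithmetic, of course, not the pipeline integration.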

I hope that you can help me integrate the coloreffects filter as preprocessing inside the DeepStream pipeline.

Thank you again

Please check this section in the development guide.

After installation through SDKManager, you may follow the README to enable it:
deepstream_sdk_v4.0.2_jetson\sources\gst-plugins\gst-dsexample\README

Hi DaneLLL,

Thanks for getting back to me!

I have to admit that the source code in gst-dsexample is not very clear to me, for it comes without comments.
Could you kindly point me towards some documentation or examples that show how to integrate GStreamer elements into the DeepStream pipeline?

Alternatively, do you suggest any other method to apply some filter to frames before they are processed by nvinfer?

Thanks again!!

I would like to add a clearer explanation of my intentions.

I aim to process inputs coming from RTSP streams (e.g. IP cameras).
Those inputs are black and white.
Pretrained models do not have good performance on those inputs. I guess that this happens because the models have been trained on a color image dataset.

I have done some tests and observed an increase in performance if I simply apply a colour filter (e.g. a sepia effect) to the images.
Thus, I aim to insert in the pipeline that filtering effect on the inputs.
Please see the following diagram for a clearer explanation:


I have all the pipeline already working. What is missing is the pre-processing.
I hope you can give me some advice, as I am stuck at a dead end.
(I am also open to learn alternative solutions.)

Thanks again!

Could you use gst-dsexample and place it just before the pgie element?

Hi Jason,

Thanks for your suggestion!

Yes, I am able to insert in the pipeline the “dsexample” element.

preprocessing = gst_element_factory_make ("dsexample", "pre-processing");

...
 
#ifdef PLATFORM_TEGRA
  gst_bin_add_many (GST_BIN (pipeline), preprocessing, pgie, tiler, nvvidconv,
      nvosd, transform, sink, NULL);
  /* we link the elements together:
   * nvstreammux -> dsexample -> nvinfer -> nvtiler -> nvvidconv -> nvosd ->
   * video-renderer */
  if (!gst_element_link_many (streammux, preprocessing, pgie, tiler, nvvidconv,
          nvosd, transform, sink, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }
#else

What I do not understand is how to customize the gst-dsexample element to perform the colour filtering.
Would you be so kind as to give me some suggestions on that?

Thank you very much again!

Have you had a look at the code provided in /opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-dsexample? In this code you can see how to access the frame with OpenCV. OpenCV has such a huge user base that it's pretty easy to google your way from there and work out how to implement your colour filtering.
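For example, once you can reach the mapped pixels, the sepia arithmetic itself doesn't even need OpenCV. Here is a rough stdlib sketch, assuming an interleaved RGBA layout with a row pitch given in bytes (the function name and the layout are assumptions for illustration; match them to what get_converted_mat actually maps):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: in-place sepia over an interleaved RGBA buffer
// whose row pitch (in bytes) may be larger than width * 4, as with a
// mapped surface row. The alpha channel is left untouched.
void sepia_rgba(std::uint8_t *data, int width, int height, std::size_t pitch) {
    const double k[3][3] = {
        {0.393, 0.769, 0.189},  // R' from (R, G, B)
        {0.349, 0.686, 0.168},  // G'
        {0.272, 0.534, 0.131},  // B'
    };
    for (int y = 0; y < height; ++y) {
        std::uint8_t *row = data + y * pitch;
        for (int x = 0; x < width; ++x) {
            std::uint8_t *px = row + x * 4;  // R, G, B, A
            const double in[3] = { double(px[0]), double(px[1]), double(px[2]) };
            for (int c = 0; c < 3; ++c) {
                double v = k[c][0] * in[0] + k[c][1] * in[1] + k[c][2] * in[2];
                px[c] = static_cast<std::uint8_t>(
                    std::max(0.0, std::min(255.0, std::round(v))));
            }
        }
    }
}
```

The same loop translates directly into a cv::Mat version once you know the surface format.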

Dear Jason,

Thanks for your suggestion.
You are right, we can find many OpenCV examples of how to achieve the sepia filtering.
However, I do not understand how to modify the gstdsexample.cpp.

I have added the element in the pipeline as described above. The pipeline is generated correctly.

I believe that cv::transform would do the job of filtering:

cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
    0.272, 0.534, 0.131,
    0.349, 0.686, 0.168,
    0.393, 0.769, 0.189);
[...]
cv::transform(input_img, output_img, kernel);
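To double-check the arithmetic, here is a small stdlib replica of what cv::transform should do for one 3-channel pixel (dst = kernel · src, rounded and saturated as saturate_cast does); `apply_kernel` is just an illustrative name, not an OpenCV call:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Stdlib replica of the per-pixel arithmetic cv::transform performs:
// dst = kernel * src, rounded and saturated to [0, 255].
std::array<int, 3> apply_kernel(const double k[3][3],
                                const std::array<int, 3> &src) {
    std::array<int, 3> dst{};
    for (int i = 0; i < 3; ++i) {
        double v = k[i][0] * src[0] + k[i][1] * src[1] + k[i][2] * src[2];
        long r = std::lround(v);
        dst[i] = static_cast<int>(std::max(0L, std::min(255L, r)));
    }
    return dst;
}
```

Note that the row order of the kernel has to match the channel order of the Mat (BGR vs RGB); with a greyscale source all three channels are equal, so any order gives the same result.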

I tried to do the cv::transform inside the function “get_converted_mat”, just after the color conversion:

  in_mat =
      cv::Mat (dsexample->processing_height, dsexample->processing_width,
      CV_8UC4, dsexample->inter_buf->surfaceList[0].mappedAddr.addr[0],
      dsexample->inter_buf->surfaceList[0].pitch);

  out_mat =
      cv::Mat (dsexample->processing_height, dsexample->processing_width,
      CV_8UC4);

#if (CV_MAJOR_VERSION >= 4)
  cv::cvtColor (in_mat, out_mat, cv::COLOR_RGBA2BGR);
#else
  cv::cvtColor (in_mat, out_mat, CV_RGBA2BGR);
#endif

  cv::transform (out_mat, *dsexample->cvmat, kernel);

The function get_converted_mat is called by the function gst_dsexample_transform_ip, thus I forced the latter to process the full frame. Below you can see how get_converted_mat is called:

if (true) {
    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next)
    {
      frame_meta = (NvDsFrameMeta *) (l_frame->data);
      NvOSD_RectParams rect_params;

      /* Scale the entire frame to processing resolution */
      rect_params.left = 0;
      rect_params.top = 0;
      rect_params.width = dsexample->video_info.width;
      rect_params.height = dsexample->video_info.height;

      /* Scale and convert the frame */
      if (get_converted_mat (dsexample, surface, i, &rect_params,
            scale_ratio, dsexample->video_info.width,
            dsexample->video_info.height) != GST_FLOW_OK) {
        goto error;
      }

      /* Process to get the output */
      output =
          DsExampleProcess (dsexample->dsexamplelib_ctx,
          dsexample->cvmat->data);
      /* Attach the metadata for the full frame */
      //attach_metadata_full_frame (dsexample, frame_meta, scale_ratio, output, i);
      i++;
      free (output);
    }
  }

Unfortunately, the only result that I obtain from the console when I run the pipeline described in my previous post is the following message after each frame processing:
nvbufsurface: Wrong buffer index (0)

I hope you can help me understand where to edit gstdsexample.cpp to correctly apply the cv::transform function and output the processed frames to the next element of the DeepStream pipeline.

Thank you very much again!!

Good morning,

Unfortunately, I am still stuck on the same point.
Is there anyone willing to give me some hints about how to integrate some OpenCV code into gst-dsexample?
@jasonpgf2a @DaneLLL

Thank you very much again!!

Hi,
You may start with enabling dsexample in deepstream-app. Add the following in config file:

[ds-example]
enable=1
processing-width=640
processing-height=480
full-frame=0
#batch-size for batch supported optimized plugin
batch-size=1
unique-id=15
gpu-id=0

You can check and modify get_converted_mat() to understand how it works.

Hi @DaneLLL,

Thanks again for getting back to me.
I was indeed trying to modify get_converted_mat.
(Please see my previous message)
However, I am not able to understand how to insert the new OpenCV code for the sepia filtering.

I have the OpenCV code for the sepia filtering.
I am able to run dsexample.
But I am not able to understand how to modify get_converted_mat.

Could you please help me or provide me with an example in which someone edited get_converted_mat with custom opencv code?

Thank you very much!

Hi,
We don’t have experience with this filter. Let's see if other users can share their experience.

Thanks anyway @DaneLLL.
I hope I can receive some hints, as I have not yet resolved this point.
@jasonpgf2a, maybe you have some idea about how to integrate that OpenCV script into dsexample?

Thank you very much in advance.