Data preprocessing before sgie

Please provide complete information as applicable to your setup.

• Hardware Platform: GTX 1080
• DeepStream Version: 5.0
• NVIDIA GPU Driver Version (valid for GPU only): 440.33.01
I am working on a pipeline with face detection as the primary inference (PGIE) and face feature vector extraction as the secondary inference (SGIE). Before the secondary inference, I run a face landmark detector whose output is attached as user meta data. I want to align the faces using these landmark points before feeding them to the SGIE.
Presently I achieve this by changing the blur-objects function of the custom plugin dsexample into an align-objects function, where I replace the face ROIs in in_mat with the aligned faces. But in this case the original frames are distorted.
Is there a way to define a function which preprocesses in_mat for the SGIE before feeding it?
Any help will be appreciated.


I have the same problem. Has anyone solved it yet? Please help us.

Currently we don’t support customizing the preprocessing; we will discuss internally whether to support it later.
If you use nvinferserver, here is a workaround (WAR): DeepStream 5.0 nvinferserver how to use upstream tensor meta as a model input

hi @bcao,
Customizing the preprocessing would make DeepStream very flexible and would be useful for many more use cases if it is supported later.
The reference provided seems to be different from my use case. It shows how to provide one model's output as input to another in a Triton ensemble model (in that post, face detection and landmarks are fed to an align model).
In my case I do not use a model for alignment but an algorithmic approach that creates a new Mat for the aligned objects. Is there some way I could attach these new aligned Mats (containing the aligned faces) to the buffers and run the secondary inference directly on those buffers, without changing the original frame?
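For illustration, here is a minimal sketch of the kind of algorithmic alignment described above, assuming the landmarks include the two eye centers. The canonical eye positions (38, 51) / (74, 51) in a 112x112 crop and the function name similarityFromEyes are my own assumptions, not from the thread. The resulting 2x3 matrix could then be wrapped in a cv::Mat and handed to cv::warpAffine to write the aligned face into a new Mat, leaving the original frame untouched.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: compute a 2x3 similarity transform that maps the
// detected eye centers onto canonical positions in a 112x112 aligned crop.
// The canonical coordinates below are illustrative, not from the thread.
struct Affine2x3 { double m[2][3]; };

Affine2x3 similarityFromEyes(double lx, double ly, double rx, double ry) {
    // Canonical eye positions in the output crop (assumed values).
    const double dstLx = 38.0, dstLy = 51.0, dstRx = 74.0, dstRy = 51.0;
    double dx = rx - lx, dy = ry - ly;
    double srcLen = std::hypot(dx, dy);      // distance between the eyes
    double srcAng = std::atan2(dy, dx);      // angle of the eye line
    double dstLen = dstRx - dstLx;           // canonical eyes are level
    double s = dstLen / srcLen;              // uniform scale factor
    double c  = s * std::cos(-srcAng);       // rotate eye line to horizontal
    double si = s * std::sin(-srcAng);
    Affine2x3 a;
    a.m[0][0] = c;   a.m[0][1] = -si;
    a.m[1][0] = si;  a.m[1][1] = c;
    // Translate so the left eye lands exactly on its canonical position.
    a.m[0][2] = dstLx - (c * lx - si * ly);
    a.m[1][2] = dstLy - (si * lx + c * ly);
    return a;
}
```

The key point is that the matrix is applied by warpAffine into a freshly allocated destination Mat, so nothing is ever written back into the decoded frame.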

You can also refer to DeepStream 5.0 nvinferserver how to use upstream tensor meta as a model input if you are using nvinferserver.
For nvinfer, we will support the feature in a later release.

Hello @duttaneil16 ,

Have you found any solution about how to include preprocessing within deepstream?
I have been stuck on this point since last May …

Thank you and have a nice day!

hi @borelli.g92
The solution @bcao suggested is not what I was looking for. The only way I found was to do the preprocessing on the original image itself in a custom plugin and pass it downstream. That is not a good approach for most use cases, since it modifies the original frame.

hi @duttaneil16,

thanks for your reply!
I am also trying to customize one element of DeepStream: gst-dsexample.
Unfortunately I get a segmentation fault when I try to apply the OpenCV function.
I thought the problem was related to OpenCV, so I posted a question here:

Did something similar happen to you?
Thanks again!

Hey @borelli.g92
The buffer is on the GPU, so applying OpenCV filters directly may be an issue. Either use DeepStream's buffer transform API (NvBufSurfTransform), or look at the blur-objects option in dsexample, where the objects are cropped and then blurred.
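One frequent cause of crashes when wrapping a mapped buffer in a cv::Mat is passing the image width instead of the line pitch as the step: NvBufSurface rows are pitch-aligned, so the pitch is typically larger than width * 4. As an illustration only (plain C++, layout assumed to be pitch-linear RGBA, helper name cropRGBA is mine), this is roughly what copying an object ROI out of a pitched frame into a tightly packed crop looks like, similar to what dsexample's blur-objects path does with in_mat(rect).clone():

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Copy a rectangle out of a pitch-linear RGBA frame into a packed buffer,
// so any processing on the crop never writes back into the original frame.
// 'pitch' is the distance in bytes between the starts of consecutive rows;
// on pitch-aligned surfaces it is typically larger than width * 4.
std::vector<uint8_t> cropRGBA(const uint8_t *frame, int pitch,
                              int left, int top, int width, int height) {
    std::vector<uint8_t> crop(static_cast<size_t>(width) * height * 4);
    for (int row = 0; row < height; ++row) {
        const uint8_t *src = frame + static_cast<size_t>(top + row) * pitch
                                   + static_cast<size_t>(left) * 4;
        std::copy(src, src + static_cast<size_t>(width) * 4,
                  crop.begin() + static_cast<size_t>(row) * width * 4);
    }
    return crop;
}
```

With OpenCV, the equivalent is constructing the Mat with both the mapped data pointer and the pitch as the step argument, then cloning the ROI before processing it.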

Hi @duttaneil16 ,

Yes indeed, I have tried to look at that example.

/* Cache the mapped data for CPU access */
    NvBufSurfaceSyncForCpu (surface, frame_meta->batch_id, 0);
    in_mat =
        cv::Mat (surface->surfaceList[frame_meta->batch_id].planeParams.height[0],
            surface->surfaceList[frame_meta->batch_id].planeParams.width[0], CV_8UC4,
            surface->surfaceList[frame_meta->batch_id].mappedAddr.addr[0],
            surface->surfaceList[frame_meta->batch_id].planeParams.pitch[0]);
    cv::transform (in_mat, in_mat, kernel);
    /* Cache the mapped data for device access */
    NvBufSurfaceSyncForDevice (surface, frame_meta->batch_id, 0);

However, in the execution of cv::transform(in_mat, in_mat, kernel); I get a segmentation fault :(

Hi again, one quick update.

I have just realized that I get
5168 Segmentation fault (core dumped)
even if I just try to copy the buffer instead of doing a transform:
cv::Mat image_copy = in_mat.clone();
However, there is no segmentation fault at all if I use another OpenCV function such as:
cv::filter2D(in_mat, in_mat,-1, kernel);
The problem is that, of course, this function does not do what I need…
But the PGIE correctly receives the filtered images:

This is the result of filter2D, which of course performs a spatial convolution of the pixels, and not what I need, which is a mixing of the different channels.

Finally, I think this gives an interesting piece of information: the problem might come from the specific operation that the OpenCV function cv::transform performs on my buffer.
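For context on the distinction: cv::filter2D is a spatial convolution over neighbouring pixels, while cv::transform applies a matrix to each pixel's channel vector independently (it requires the kernel's column count to equal the number of channels, or channels + 1). I can't tell from the thread alone what causes the segfault, though an invalid data pointer or wrong step in the wrapping cv::Mat is a common culprit. The channel mixing that cv::transform performs amounts to the following, sketched in plain C++ on a single RGBA pixel with an assumed R/B-swap kernel:

```cpp
#include <array>

// Per-pixel channel mixing, as performed by cv::transform: each output
// channel is a linear combination of the input channels. No neighbouring
// pixels are involved, unlike filter2D's spatial convolution.
std::array<double, 4> channelMix(const std::array<double, 4> &px,
                                 const double k[4][4]) {
    std::array<double, 4> out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += k[r][c] * px[c];
    return out;
}
```

So a kernel of the wrong shape, or a Mat that does not really point at valid CPU-visible memory, will fail here even though the math itself is trivial.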

Do you have any suggestion?

Hi @borelli.g92,
I don't know the answer to your question for sure, but I have also tried a few affine transformations (cv::warpAffine) on a crop of the original frame, and it worked for me. I can't tell whether there is an issue with some specific transformation, and I won't be able to share any code as it is proprietary.

Thanks anyways.

Hi @duttaneil16 .

For future reference, I was able to apply the filtering.
I have quickly described what I did here:

