Add preprocessing for Engine

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version: 5.0
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version: 460
• Issue Type: questions
• How to reproduce the issue? (Including which sample app is used, the configuration file content, the command line used, and other details for reproducing)
• Requirement details (This is for a new requirement.)

Dear friends,
I hope to add some preprocessing, such as enhancing the image, for tracking with YOLOv4. My YOLOv4 is the PyTorch version. I checked the issues in this forum and found a method that adds the preprocessing in "nvdsinfer_context_impl.cpp", but that may be a little difficult for me.
Instead, I would like to add the preprocessing in the Python code of YOLOv4, and then convert it to ONNX and then to an engine file. Does this method work? Thank you very much. If this method is invalid, I must modify "nvdsinfer_context_impl.cpp" in "/opt/nvidia/deepstream/sources/libs/nvdsinfer". In that case, how do I add the preprocessing via "deepstream-app.txt"?
Thank you very much.
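If the goal is to bake the enhancement into the exported network, one common approach is to wrap the model so the preprocessing runs inside forward(); anything expressed as tensor operations is then traced into the ONNX graph. A minimal sketch, assuming a placeholder Detector and a trivial /255 "enhancement" (both are my stand-ins, not the actual YOLOv4):

```python
# Sketch only: wrap a model so preprocessing is part of the traced graph.
# `Detector` and the rescaling step are placeholders, not the real YOLOv4.
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Stand-in for the real YOLOv4 network."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

class PreprocessedDetector(nn.Module):
    """Runs a tensor-only preprocessing step, then the detector."""
    def __init__(self, detector):
        super().__init__()
        self.detector = detector

    def forward(self, x):
        # Example "enhancement": rescale from [0, 255] to [0, 1].
        # Any step written as tensor ops is traced into the exported graph.
        x = x / 255.0
        return self.detector(x)

model = PreprocessedDetector(Detector()).eval()
dummy = torch.randn(1, 3, 416, 416)
out = model(dummy)  # preprocessing + detection in one forward pass
# The wrapped module would then be exported as a whole, e.g.:
# torch.onnx.export(model, dummy, "yolov4_with_preprocess.onnx", opset_version=11)
```

Note the caveat: only operations on tensors survive the export; OpenCV or PIL calls inside forward() would not.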

Hi @yangyi,
Sorry! I'm confused. Why is pre-processing related to transforming the model to ONNX?
Transforming the model to ONNX is done offline, before deploying the model into the DeepStream application.

If the pre-processing is "enhance the image", then besides modifying "nvdsinfer_context_impl.cpp", you can add a GStreamer plugin between the decoder and nvstreammux. The GStreamer plugin sample is gst-dsexample.
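For illustration, here is what a per-frame "enhance the image" step in such a plugin might do; a minimal NumPy sketch (the contrast-stretch choice is my assumption — the thread never specifies which enhancement is meant):

```python
import numpy as np

def stretch_contrast(img: np.ndarray) -> np.ndarray:
    """Linearly stretch pixel intensities to the full [0, 255] range.

    A stand-in for whatever "enhance the image" means in practice;
    a dsexample-style plugin would apply such a step to each frame.
    """
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.astype(np.uint8)
    out = (img - lo) * (255.0 / (hi - lo))
    return out.astype(np.uint8)

frame = np.array([[50, 100], [150, 200]], dtype=np.uint8)
enhanced = stretch_contrast(frame)    # intensities now span 0..255
```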

Thank you very much for your response.
I have the YOLOv4 PyTorch code. I want to enhance the input image. The easiest method for me is to modify the PyTorch code, but I found it difficult to change the ONNX file. I don't know whether this method is right or wrong.
I will try to use [gst-dsexample]. It is C++ code, which is difficult for me, but if there is no other way, I will do my best to overcome the difficulty.
Thank you very much.

Thank you. I am trying to use dsexample for the preprocessing.
However, I don't know which one is the input. I checked many answers in this forum, and maybe I can get the input in "get_converted_mat()".

So I tried "in_mat" and "dsexample->inter_buf", and used cv::imwrite. But in_mat is only the segmented crop of the object, and I get an error with "dsexample->inter_buf".

Please kindly tell me which one is the input video, so that I can do some preprocessing on it. Thank you very much.

I don't understand how modifying the model can enhance the input image.

Please share the diff to the native ds-example code.

Thank you mchi.

I just want to verify a simple algorithm, and I haven't written the code yet. It is similar to the following:
new_image = torch.ones(image.shape)   # a tensor of ones with the same shape as the input image
new_image = image + new_image

The new_image is the result of my processing, and I can add other processing such as enhancement.

I checked the official PyTorch forum, where there is a similar question, and a gentleman gave an example code. But I found that PyTorch could not add preprocessing for a model running in DeepStream. Maybe I am wrong. So I chose your advice and will modify "gst_dsexample_start()".

Yesterday, I studied this code and read the documents from NVIDIA, but I find it a little difficult for me. I have the following questions; please help me.
I want to process the input image with OpenCV, then send the result to DeepStream.

(1) Which one is the stream that I want to preprocess? Is it "in_mat", "*dsexample->cvmat", or another one?

(2) When I finish the processing, how do I send the result back to DeepStream?

Thank you very much.

Please share the diff/patch. I have no idea where they are added.

Frankly, I would recommend you take one or two days to read some GStreamer introduction docs so that you can better understand ds-example and how DeepStream works. It's hard to explain if you don't have basic GStreamer knowledge.

Thank you for your reply. The following sample is what I found:

import torch
import torch.nn as nn
from torchvision import transforms

class MyModel(nn.Module):
    def __init__(self, transform):
        super(MyModel, self).__init__()
        self.conv = nn.Conv2d(3, 1, 3, 1, 1)
        self.transform = transform

    def forward(self, x):
        xs = []
        for x_ in x:
            xs.append(self.transform(x_))
        xs = torch.stack(xs)
        x = self.conv(xs)
        return x

transform = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
model = MyModel(transform)
x = torch.randn(1, 3, 24, 24)
output = model(x)

But I think this method cannot solve my problem. I want to add noise to the picture, yet here the Normalize transform is defined outside the model, so I believe the model cannot include the preprocessing in the ONNX file.

Thank you for your help. I found the place to add my preprocessing in "get_converted_mat". The code is as follows:


#if (CV_MAJOR_VERSION >= 4)   // the CV version
  cv::cvtColor (in_mat, *dsexample->cvmat, cv::COLOR_RGBA2BGR);
#else
  cv::cvtColor (in_mat, *dsexample->cvmat, CV_RGBA2BGR);   // change the color type
#endif

  /* my preprocessing: add salt noise to the converted frame */
  *dsexample->cvmat = addSaltNoise (*dsexample->cvmat, 3000);
  //cv::line (*dsexample->cvmat, cv::Point(100,100), cv::Point(200,100), cv::Scalar(155, 140, 100), 3);

#if (CV_MAJOR_VERSION >= 4)
  cv::cvtColor (*dsexample->cvmat, in_mat, cv::COLOR_BGR2RGBA);
#else
  cv::cvtColor (*dsexample->cvmat, in_mat, CV_BGR2RGBA);   // change the color type back
#endif

  err = NvBufSurfTransform (dsexample->inter_buf, &ip_surf, &transform_params);
  if (err != NvBufSurfTransformError_Success) {
    GST_ELEMENT_ERROR (dsexample, STREAM, FAILED,
        ("NvBufSurfTransform failed with error %d while converting buffer", err), (NULL));
    goto error;
  }
  GST_DEBUG_OBJECT (dsexample, "Scaling and converting input buffer\n");

  if (NvBufSurfaceUnMap (dsexample->inter_buf, 0, 0)) {
    goto error;
  }
The code above achieves my aim, but I still doubt whether it is right or not. Is the noise added before the inference, as a preprocess, or only as a postprocess?
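The addSaltNoise helper used above is never shown in the thread; for illustration only, here is a sketch in Python/NumPy of what such a helper typically does (my assumption, not the poster's code — the second argument is taken to be the number of noisy pixels):

```python
import numpy as np

def add_salt_noise(img: np.ndarray, n: int, seed: int = 0) -> np.ndarray:
    """Set n randomly chosen pixels to white (255), i.e. "salt" noise.

    Sketch of an addSaltNoise(mat, 3000)-style helper; the real helper
    in the thread is C++/OpenCV and its implementation is not shown.
    """
    out = img.copy()
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, img.shape[0], size=n)
    cols = rng.integers(0, img.shape[1], size=n)
    out[rows, cols] = 255            # original array is left untouched
    return out

frame = np.zeros((120, 160), dtype=np.uint8)
noisy = add_salt_noise(frame, 300)
```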

Thank you very much

Unfortunately, in my test, if I add the noise in "get_converted_mat", it looks like it is added as a postprocess. I will keep trying.
Please help me. Thank you very much.

I want to add a GStreamer plugin like gst-dsexample between the decoder and nvstreammux. However, I don't know where I need to add it.

Hi @yangyi ,
Sorry for the delay! Have you got it solved?


Thank you for your reply. I am trying to add my own plugin based on gst-dsexample.
When I encounter trouble, I will ask for your help.
Thank you again.
