Classifier result on onnx doesn't match Deepstream result

Thanks, we were able to download the model file.
We will share more information with you later.


Thanks for your sample and data.
We can reproduce this problem internally and have passed it to our internal team for investigation.

We will share more information here once we find anything.


Hello, has this issue been resolved yet? Thanks.

Not yet.


Sorry, we are still checking this issue internally.
We will share more information with you later.


I have an update on this issue. I managed to deserialize the engine generated by DeepStream in TensorRT, and I verified with trt_sample.cpp (8.5 KB) that the result is correct and matches the original model. I am now sure the issue comes from DeepStream's preprocessing. To understand what DeepStream's preprocessing actually does, I removed all preprocessing steps in both DeepStream and TensorRT and am trying to get the same result.
Attached are the TensorRT code and the DeepStream config file.

The pre-processing function that I read in the NVIDIA documentation is y = net-scale-factor * (x - mean).
Where is the preprocessing code in DeepStream, so I can update it?

config_seat_belt_violation.txt (721 Bytes)


Thanks for sharing this information.
You can find the pre-processing in the nvdsinfer component:


NvDsInferStatus InferPreprocessor::transform(
    NvDsInferContextBatchInput& batchInput, void* devBuf,
    CudaStream& mainStream, CudaEvent* waitingEvent)
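The normalization inside that function follows the documented formula y = net-scale-factor * (x - offset), applied per channel after scaling and color conversion. Below is a minimal Python sketch of that formula under that assumption; the function and parameter names here are illustrative, not DeepStream API symbols:

```python
# Sketch of the nvinfer normalization step: y = net-scale-factor * (x - offset),
# applied per channel after resizing and color conversion.
# Names are illustrative only, not DeepStream symbols.

def normalize_pixel(pixel, net_scale_factor=1.0, offsets=(0.0, 0.0, 0.0)):
    """pixel: (c0, c1, c2) in the model's color order (see model-color-format)."""
    return tuple(net_scale_factor * (c - o) for c, o in zip(pixel, offsets))

# Example: a model trained on inputs scaled to [0, 1] corresponds to
# net-scale-factor = 1/255 with zero offsets.
out = normalize_pixel((0, 128, 255), net_scale_factor=1.0 / 255.0)
# channel 0 maps to 0.0 and channel 2 maps to 1.0
```

If your TensorFlow test code divides by 255 (or subtracts per-channel means), the config's net-scale-factor and offsets must encode exactly the same arithmetic, or the confidence values will diverge.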


It seems the difference comes from the way DeepStream reads the images. When we compare the pixel values in DeepStream before preprocessing, they are not the same as the values OpenCV reads. Why is there a difference, and how can we get the exact same confidence values as the original model?


Are you running into an issue with the image color format?

By default, OpenCV uses the BGR color format.
In DeepStream, the data color format is determined by the configuration file.

To match OpenCV, please set model-color-format to 1 (BGR):
Below is our configuration documentation for your reference:
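For reference, this is a single key in the [property] group of the nvinfer config file (a fragment only; the rest of your config_seat_belt_violation.txt stays unchanged):

```
[property]
# 0 = RGB, 1 = BGR, 2 = GRAY
model-color-format=1
```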


Hi, there is no difference when I change it.

In the link you will find the TensorFlow model, the ONNX model, the TensorFlow test code, the ONNX test code, the images, the DeepStream test code, and the config file, in addition to an Excel sheet with the test results.
After running these tests, I found that when the TensorFlow model's result is larger than 0.9999, I get the same result in DeepStream. For this reason, I added some random, unrelated images to the test set to get lower confidence values, and when I test those in DeepStream I get different results.

I hope it is helpful to reproduce the issue.


Thanks for sharing the detailed data with us.

In testing_result.xlsx, we don't find the TensorRT confidence results.
Could you run that test and add the results to the Excel file?

Also, do you always get a similar confidence value with the same input?



Thanks for your patience.
We are looking into this issue and will try to provide a solution in one of our upcoming releases.

For now, you can change the scaling hardware from the default VIC to the GPU.
This helps produce confidence values closer to those from TensorFlow + OpenCV.
dstest_image_decode_pgie_config.txt (1.9 KB)
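That switch is a single key in the nvinfer [property] group (fragment only; 0 = platform default, 1 = GPU, 2 = VIC, the latter on Jetson only), as used in the attached config:

```
[property]
scaling-compute-hw=1
```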


Also, we would like to know more about your use case.
Could you share how, and how much, the accuracy issue impacts your use case?


Hello, any update on this issue? It seems the preprocessing still has this problem in DS 5.1.


Please check the suggestion shared on 24 Jun first.
Using the GPU for scaling can mitigate the accuracy issue.

If you are feeding images into the pipeline, there is also a bug fix in the JPEG decoder.
The fix will be available in our upcoming release.



This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.