Classifier result on onnx doesn't match Deepstream result

Thanks, we were able to download the model file.
We will share more information with you later.

Hi,

Thanks for your sample and data.
We can reproduce this problem internally and pass it to our internal team for checking.

We will share more information here once we find anything.

Thanks.

Hello, has this issue been resolved yet? Thanks.

Not yet.

Hi,

Sorry, we are still checking this issue internally.
We will share more information with you later.

Thanks.

Hi,
I have some updates regarding this issue. I managed to deserialize the engine generated by DeepStream in TensorRT, and I verified with trt_sample.cpp (8.5 KB) that the result is correct and matches the actual model. I am now sure the issue comes from DeepStream preprocessing. To reproduce the issue and understand what DeepStream's preprocessing actually does, I removed all preprocessing steps in both DeepStream and TensorRT and am trying to get the same result.
Attached are the TensorRT code and the DeepStream config file.

The pre-processing function I found in the NVIDIA documentation is described here:
https://docs.nvidia.com/metropolis/deepstream/5.0DP/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.3.01.html
Where is the preprocessing code in DeepStream, so I can update it?

config_seat_belt_violation.txt (721 Bytes)
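For reference, the plugin manual linked above describes nvinfer's per-pixel normalization as y = net-scale-factor * (x - mean), driven by the `net-scale-factor` and `offsets` config keys. A minimal standalone sketch of that formula (the function name and single scalar mean are my own simplification; the real plugin supports per-channel offsets and does this on the GPU):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Per-pixel normalization as described in the nvinfer plugin manual:
//   y = net-scale-factor * (x - mean)
// Resizing, color-format conversion, and planar reordering are separate
// steps handled elsewhere in the plugin.
std::vector<float> normalize(const std::vector<float>& pixels,
                             float netScaleFactor, float mean)
{
    std::vector<float> out;
    out.reserve(pixels.size());
    for (float x : pixels)
        out.push_back(netScaleFactor * (x - mean));
    return out;
}
```

For example, with the common `net-scale-factor=0.0039215697906911373` (i.e. 1/255) and a zero mean, an input pixel of 255 maps to roughly 1.0; reproducing exactly these values in the standalone TensorRT code is what makes the two pipelines comparable.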

Hi,

Thanks for sharing this information.
You can find the pre-processing in the nvdsinfer component:

/opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp

NvDsInferStatus InferPreprocessor::transform(
    NvDsInferContextBatchInput& batchInput, void* devBuf,
    CudaStream& mainStream, CudaEvent* waitingEvent)
{
    ...
}
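Besides scaling and mean subtraction, this transform also converts interleaved frames to the planar layout the network expects, and a channel-order mismatch here is a classic cause of DeepStream-vs-standalone differences. A hedged CPU sketch of that HWC-to-CHW reordering (the function name is mine; the actual plugin does this with CUDA kernels on the device buffer):

```cpp
#include <cassert>
#include <vector>

// Reorder an interleaved HWC frame (as decoded video arrives) into the
// planar CHW layout most TensorRT classifier engines expect.
std::vector<float> hwcToChw(const std::vector<float>& hwc,
                            int height, int width, int channels)
{
    std::vector<float> chw(hwc.size());
    for (int c = 0; c < channels; ++c)
        for (int h = 0; h < height; ++h)
            for (int w = 0; w < width; ++w)
                chw[c * height * width + h * width + w] =
                    hwc[(h * width + w) * channels + c];
    return chw;
}
```

When comparing against a standalone TensorRT run, it is worth checking that both pipelines agree on this layout and on the RGB/BGR channel order (`model-color-format` in the config) before suspecting the normalization itself.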

Thanks.