Output of TensorFlow model in DeepStream does not exactly match Python inference

Please provide complete information as applicable to your setup.

• Hardware Platform: RTX 2080
• DeepStream Version: 5.0
• NVIDIA GPU Driver Version (valid for GPU only): 440.33.01
• Issue Type (questions, new requirements, bugs): question

Hi,
I am testing a few of my models, but there seems to be a difference of a few pixels in the output, which can negatively impact the accuracy of the models; secondary inference models are also very sensitive to the output of the primary inference.

I have tried the same with the ssd_inception_v2 model provided in the DeepStream models repo. Even there I find the raw output (before postprocessing) to be slightly different from the Python inference. I used sample_720p.jpg to compare the results.
Here is the Python script:
infer.txt (5.9 KB)
I use config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt for the DeepStream inference and print the raw inference output in nvdsinfer_custombboxparser.cpp for comparison.
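To quantify how far apart the two raw outputs are, rather than eyeballing printed values, the dumped tensors can be compared numerically. A small sketch, with made-up values standing in for the actual dumped tensors:

```python
import numpy as np

# Stand-in values: in practice, load the raw network output saved from the
# Python script and the values printed from nvdsinfer_custombboxparser.cpp.
py_out = np.array([0.91, 0.12, 0.34, 0.56], dtype=np.float32)
ds_out = np.array([0.90, 0.13, 0.34, 0.55], dtype=np.float32)

# Maximum elementwise deviation between the two pipelines.
abs_diff = np.abs(py_out - ds_out)
print("max abs diff:", abs_diff.max())

# A tolerance check makes "slightly different" concrete and reproducible.
print("match within 0.02:", bool(np.allclose(py_out, ds_out, atol=0.02)))
```

This turns "slightly different" into a measurable bound that can be tracked while changing preprocessing settings.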

Thanks.

Hi,

A common source of such differences is the image pre-processing stage.
Have you applied the same scaling and normalization in DeepStream?
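For context, nvinfer normalizes each input as y = net-scale-factor * (x - offsets) after resizing to the network resolution. Below is a minimal NumPy sketch of the equivalent Python-side step; the nearest-neighbor resize is only a dependency-free stand-in, and interpolation differences between nvinfer's GPU scaler and a CPU library alone can produce small output shifts:

```python
import numpy as np

def nvinfer_style_preprocess(frame, net_w, net_h, net_scale_factor=1.0,
                             offsets=(0.0, 0.0, 0.0)):
    """Approximate nvinfer preprocessing: resize to the network input size,
    then apply y = net_scale_factor * (x - offsets) per channel.

    Nearest-neighbor resize keeps this sketch dependency-free; nvinfer's
    GPU scaler interpolates differently, which by itself can shift pixel
    values slightly between the two pipelines.
    """
    h, w = frame.shape[:2]
    ys = np.arange(net_h) * h // net_h
    xs = np.arange(net_w) * w // net_w
    resized = frame[ys[:, None], xs].astype(np.float32)
    return net_scale_factor * (resized - np.asarray(offsets, dtype=np.float32))

# Example: a 4x4 gray image resized to 2x2 with identity scale/offsets.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
out = nvinfer_style_preprocess(img, 2, 2)
print(out.shape)
```

With net-scale-factor=1 and offsets of zero, any remaining difference comes from the resize/color-conversion path rather than normalization.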

Thanks.

Hi,
I have set scale_factor=1 and channel_offsets=[0,0,0], and maintain_aspect_ratio is set to 0. I am including the config file so it can be compared with the Python script; I see no obvious difference.
config_infer_primary_detecter_ssd_inception_v2_coco_2018_01_28.txt (1.3 KB)
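For reference, the nvinfer [property] keys that control this preprocessing step look like the excerpt below (an illustrative fragment matching the values described above, not the full attached file):

```
[property]
net-scale-factor=1
offsets=0;0;0
maintain-aspect-ratio=0
```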

Hi,

Sorry for the late reply.
Could you check whether this issue can be reproduced with our standard example below:

Thanks.