Discrepancy in Results between DeepStream and TensorRT Inference

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): V100
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): 530.30.02
• Issue Type (questions, new requirements, bugs): bugs

Description:

I have converted a CRNN model for license plate recognition and encountered the following issues:

  1. The model performs well through the TensorRT API once context.set_input_shape is called to pin the dynamic input shape (see the sketch after this list).
  2. However, deploying the same engine in DeepStream produces results that differ significantly from the TensorRT ones, despite identical preprocessing steps.
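
For context, a minimal sketch of the TensorRT-side inference, assuming a dynamic-shape engine. The engine path, the tensor names "input"/"output", the 32×100 input size, and the binding order are placeholders for the real values:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- initializes a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("lp_recognition.engine", "rb") as f:  # placeholder path
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# An engine built with a dynamic-shape profile has -1 dims until the
# runtime shape is pinned; inference fails without this call.
context.set_input_shape("input", (1, 3, 32, 100))

inp = np.random.rand(1, 3, 32, 100).astype(np.float32)
out_shape = context.get_tensor_shape("output")  # resolved after set_input_shape
out = np.empty(trt.volume(out_shape), dtype=np.float32)

d_inp = cuda.mem_alloc(inp.nbytes)
d_out = cuda.mem_alloc(out.nbytes)
cuda.memcpy_htod(d_inp, inp)
context.execute_v2([int(d_inp), int(d_out)])  # assumes input is binding 0
cuda.memcpy_dtoh(out, d_out)
```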

Comparison:

Preprocessed image dumped from the DeepStream pipeline:
(image attachment: lp_0_0)
Preprocessed image saved from Python inference (TensorRT API):
(image attachment: lp_1_0)

First 10 elements from the DeepStream output:
0.954593 0.000150864 0.0187389 0.000560478 0.000220662 0.000276796 0.000556525 2.14888e-05 0.000744259 9.76586e-05

First 10 elements from the Python output (TensorRT API):
9.7854286e-01 8.3139632e-05 8.5415142e-03 3.5654227e-04 1.2933527e-04 2.0175985e-04 3.0744128e-04 1.3458267e-05 2.8787853e-04 4.1447085e-05
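
To quantify the gap beyond eyeballing, a quick NumPy sketch over the ten pasted values (in practice the full tensors would be loaded, e.g. with np.fromfile):

```python
import numpy as np

ds = np.array([0.954593, 0.000150864, 0.0187389, 0.000560478, 0.000220662,
               0.000276796, 0.000556525, 2.14888e-05, 0.000744259, 9.76586e-05])
trt = np.array([9.7854286e-01, 8.3139632e-05, 8.5415142e-03, 3.5654227e-04,
                1.2933527e-04, 2.0175985e-04, 3.0744128e-04, 1.3458267e-05,
                2.8787853e-04, 4.1447085e-05])

print("max abs diff:", np.abs(ds - trt).max())
print("cosine sim  :", ds @ trt / (np.linalg.norm(ds) * np.linalg.norm(trt)))
print("same argmax :", ds.argmax() == trt.argmax())
```

If the per-timestep argmax (and hence the CTC-decoded plate) matches across the whole output, the residual differences are likely preprocessing-level noise; if it flips anywhere, the inputs to the two engines genuinely differ.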

Thank you in advance.

Can you provide a complete DeepStream app and TensorRT app for us to reproduce the comparison?

Sorry, I can't share those.
Do you have any suggestions for what I could experiment with?
I tried dumping both the preprocessing and postprocessing outputs but haven't found the problem; they are identical to the outputs of the Python app.

How did you dump the preprocess output?

Is your input a JPEG picture?

I used the instructions here: DeepStream SDK FAQ - #9 by mchi

I created a video from a single image for debugging.
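
A minimal OpenCV sketch for producing such a clip (file names, frame rate, and duration are placeholders):

```python
import cv2

img = cv2.imread("plate.jpg")  # placeholder: the single debug image
h, w = img.shape[:2]
writer = cv2.VideoWriter("plate.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
for _ in range(90):  # 3 s of identical frames at 30 fps
    writer.write(img)
writer.release()
```

Note that video encoding is lossy, so the decoded frames will not byte-match the source image; both apps should read from the same video file for a fair comparison.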

The TensorRT and DeepStream apps both use the video as input, right?

Yes, they do. And the image contains only one object, to make debugging easier.

You need to check the whole processing chain before inference. We don't know what you did in your TensorRT app: how you decoded the video, how you obtained the frame data, and how you did the preprocessing. Some DeepStream plugins perform extra conversions; for example, nvstreammux will scale the video if your input resolution differs from the "width" and "height" properties of nvstreammux.
You need to make sure all the processing before inference is identical in your DeepStream app and your TensorRT app, as sketched below.
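
As a reference, a minimal Python sketch of the chain to mirror. All values here are placeholders; take the real ones from your nvstreammux properties and your nvinfer config (net-scale-factor, offsets, model-color-format):

```python
import cv2
import numpy as np

NET_SCALE_FACTOR = 1.0 / 255.0           # nvinfer net-scale-factor
OFFSETS = np.zeros(3, dtype=np.float32)  # nvinfer offsets (per-channel mean)
MUX_W, MUX_H = 1920, 1080                # nvstreammux width/height
NET_W, NET_H = 100, 32                   # model input resolution

frame = cv2.imread("frame.png")          # placeholder: one decoded frame

# 1. nvstreammux scales the frame if its resolution differs from the mux size.
if (frame.shape[1], frame.shape[0]) != (MUX_W, MUX_H):
    frame = cv2.resize(frame, (MUX_W, MUX_H))

# 2. nvinfer scales the object to the network resolution, converts color,
#    and applies y = net-scale-factor * (x - offsets).
obj = cv2.resize(frame, (NET_W, NET_H))
obj = cv2.cvtColor(obj, cv2.COLOR_BGR2RGB)      # if model-color-format is RGB
tensor = (obj.astype(np.float32) - OFFSETS) * NET_SCALE_FACTOR
tensor = np.transpose(tensor, (2, 0, 1))[None]  # HWC -> NCHW
```

Even with matching steps, DeepStream's scaler may use a different interpolation method than OpenCV, so small numeric differences are expected; large ones usually indicate a missing or mismatched step.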

Thank you very much. I’ll check on that.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks
