Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): V100
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only): 530.30.02
• Issue Type (questions, new requirements, bugs): bugs
Title: Discrepancy in Results between DeepStream and TensorRT Inference
Description:
I converted a CRNN model for license plate recognition to a TensorRT engine and ran into the following issue:
The model performs well with the TensorRT Python API once I set the input shape via context.set_input_shape.
However, when I deploy the same engine in DeepStream, the results differ significantly from those obtained with TensorRT, even though the preprocessing steps are identical.
Comparison:
Preprocessed image dumped from DeepStream pipeline:
Preprocessed image saved from Python inference (TensorRT API):
First 10 elements from DeepStream output:
0.954593 0.000150864 0.0187389 0.000560478 0.000220662 0.000276796 0.000556525 2.14888e-05 0.000744259 9.76586e-05
First 10 elements from Python output (TensorRT API):
9.7854286e-01 8.3139632e-05 8.5415142e-03 3.5654227e-04 1.2933527e-04 2.0175985e-04 3.0744128e-04 1.3458267e-05 2.8787853e-04 4.1447085e-05
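To quantify the gap: the difference between the two outputs is well above FP32 rounding noise, although the argmax (the predicted class for this timestep) happens to agree. A quick NumPy check on the numbers posted above:

```python
import numpy as np

# First 10 elements posted from each pipeline.
deepstream = np.array([0.954593, 0.000150864, 0.0187389, 0.000560478,
                       0.000220662, 0.000276796, 0.000556525, 2.14888e-05,
                       0.000744259, 9.76586e-05])
tensorrt = np.array([9.7854286e-01, 8.3139632e-05, 8.5415142e-03, 3.5654227e-04,
                     1.2933527e-04, 2.0175985e-04, 3.0744128e-04, 1.3458267e-05,
                     2.8787853e-04, 4.1447085e-05])

diff = np.abs(deepstream - tensorrt)
print(f"max abs diff = {diff.max():.4f}")   # → max abs diff = 0.0239
print(f"argmax agrees: {deepstream.argmax() == tensorrt.argmax()}")  # → True
```

A discrepancy of ~2e-2 is far too large for precision effects alone and usually points to a preprocessing or color-format mismatch.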
Do you have any suggestions for me to experiment with?
I dumped both the preprocessing and postprocessing outputs but haven't found the problem; they appear identical to the outputs from the Python app.
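When comparing dumps, an eyeball check of the images can miss small numeric differences. A minimal sketch for comparing two dumps element-wise, assuming both apps write the planar float32 input tensor to raw binary files (the file names are hypothetical):

```python
import numpy as np

def compare_raw_dumps(path_a, path_b, dtype=np.float32, atol=1e-5):
    """Load two raw binary tensor dumps and report the max absolute difference."""
    a = np.fromfile(path_a, dtype=dtype)
    b = np.fromfile(path_b, dtype=dtype)
    assert a.size == b.size, f"size mismatch: {a.size} vs {b.size}"
    diff = np.abs(a - b)
    print(f"max |a-b| = {diff.max():.6g} at flat index {diff.argmax()}")
    return bool(np.all(diff <= atol))

# Example (hypothetical paths):
# compare_raw_dumps("ds_input_dump.bin", "trt_input_dump.bin")
```

If the inputs really match to within tolerance but the outputs differ, the problem is in the engine invocation (input shape, binding order, precision mode) rather than preprocessing.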
You need to check the whole processing pipeline before inference. We don't know what you have done in your TensorRT app: how you decode the video, how you obtain the frame data, and how you do the preprocessing. Some DeepStream plugins involve extra conversions, e.g. nvstreammux can scale the video if your input resolution differs from the "width" and "height" properties of nvstreammux.
You need to make sure all the processing before inference is the same in your DeepStream app and your TensorRT app.
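The preprocessing DeepStream applies is mostly driven by configuration. A sketch of the settings to audit against the Python code, with illustrative values (not your model's; the [streammux] group lives in the deepstream-app config, the [property] group in the nvinfer config file):

```ini
[streammux]
# If these differ from the source resolution, nvstreammux scales the frame first.
width=1920
height=1080

[property]
# nvinfer computes: y = net-scale-factor * (x - offsets)
# These must match the Python preprocessing exactly.
net-scale-factor=0.0078431372549   # e.g. 1/127.5 if Python does (x - 127.5) / 127.5
offsets=127.5;127.5;127.5
model-color-format=0               # 0=RGB, 1=BGR, 2=GRAY; must match training
maintain-aspect-ratio=0            # by default nvinfer stretches to the network dims
scaling-filter=0                   # interpolation used for the resize
network-mode=0                     # 0=FP32; FP16/INT8 also change the numerics
```

A mismatch in any one of these (color order and scale/offset are the usual culprits) is enough to shift the output scores by the amount you are seeing.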
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks