Questions/observations re: Gst-nvdspreprocess and degradation of inference confidence

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Both
• DeepStream Version
6.0.1
• JetPack Version (valid for Jetson only)
4.6.1
• TensorRT Version
8.2.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
Question/observation
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application, and the function description.)

We’ve been trying to use the Gst-nvdspreprocess plugin for a few scenarios, but we’ve been observing some unexpected results… or perhaps we don’t quite understand how the preproc plugin works.

So just as a simple test…

Using a sample set of 1920x1080 images as a single standard, we ran inference on the images using three different approaches, all with the same TensorRT model, with the results shown below. Average inference confidence is shown in the far-right column.

  • the first table is produced using PyTorch.
  • the second table is produced with a DeepStream pipeline.
  • the third table is produced with the same DeepStream pipeline, but with the preproc plugin and a single ROI of 1920x1080 (see the config sketch below).
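
For context, the single full-frame ROI in the third test was defined along the lines of the nvdspreprocess group below. This is only a minimal sketch based on the standard config_preprocess.txt sample shipped with DeepStream; the source ID and custom transformation function shown are illustrative, not a copy of our actual file.

    [group-0]
    src-ids=0
    # process the configured ROI rather than the whole scaled frame
    process-on-roi=1
    # left;top;width;height — a single ROI covering the full 1920x1080 frame
    roi-params-src-0=0;0;1920;1080
    custom-input-transformation-function=CustomAsyncTransformation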

The difference between the first and second tables is understandable, but we’re trying to understand why adding the preproc plugin would have such an effect on the results in the third table. One would think they would be nearly identical to the second table. Can anyone explain these results?

Thanks,
Robert.

Hi @prominence_ai ,
Got it. This may be caused by a different pre-processing algorithm; we will look into it and get back to you.

Thanks for reporting this!

@mchi just FYI, the above results were produced with YOLOv5. We also tried with YOLOv3 just to make sure it wasn’t model-related… and saw similar results.


Hi @prominence_ai,
Could you share the DS config files of all these tests?

Thanks!

@mchi for YOLOv3
config_infer_primary_yoloV3.txt (837 Bytes)
config_preprocess_v3.txt (2.9 KB)
yolov3.cfg (8.1 KB)
labels.txt (621 Bytes)

The weights file for YOLOv3 exceeds the upload limit. Let me know if you need it.

@mchi any update on this issue?

Hi @prominence_ai ,
We are still working on this, need more time.

You said this is reproduced on both platforms but only provided the Jetson versions, so we would like to confirm: can you also reproduce this on the dGPU platform?

Also, have you tried “scaling-pool-compute-hw=1” to use the GPU for scaling?
And did you try different “scaling-filter=” values?
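
For reference, both of those keys live in the [property] section of the nvdspreprocess config file. A minimal sketch with example values (other keys omitted; the exact values worth trying depend on your setup):

    [property]
    # other keys (network-input-shape, tensor-name, custom-lib-path, ...) omitted
    # compute hardware for scaling: 0 = default, 1 = GPU, 2 = VIC (Jetson only)
    scaling-pool-compute-hw=1
    # interpolation used by NvBufSurfTransform, e.g. 0 = nearest, 1 = bilinear
    scaling-filter=1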

@mchi the results in the table above were produced on dGPU, and we’ve confirmed that we see similar results on Jetson.

On dGPU, we’ve tried setting scaling-pool-compute-hw=1 and different scaling-filter= values, but we see only minor differences in the results.

Hi @prominence_ai
Thanks! So “scaling-pool-compute-hw=0/1” is the key factor causing the accuracy difference, right? If so, that roughly aligns with our observation.

@mchi no, sorry for the confusion… I’m saying the opposite, that “scaling-pool-compute-hw=0/1” makes little to no difference in the results.

@mchi this issue has been discussed and solved here

Regards,
Robert

Hi @prominence_ai
Thank you so much for sharing!
