Detection result is different between Xavier and Orin for the same model and weights

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Xavier AGX / Orin NX
• DeepStream Version: 6.0 / 6.2
• JetPack Version (valid for Jetson only): 4.6 / 5.1
• TensorRT Version: 8.0.1.6 / 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only): Xavier AGX / Orin NX
• Issue Type (questions, new requirements, bugs): Question

We use the same weights to build the TensorRT engine through the “objectDetector_Yolo” sample in the DeepStream SDK. Afterwards, we use DeepStream for image inference, but the same image produces different inference results on the two machines.
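For context, a gst-nvinfer config for the objectDetector_Yolo sample typically looks roughly like the sketch below. The file names and values here are illustrative assumptions, not taken from this thread; nvinfer builds and caches the TensorRT engine from the Darknet cfg/weights on first run.

```
[property]
# Illustrative paths; substitute your own cfg/weights
custom-network-config=yolov3.cfg
model-file=yolov3.weights
# Engine cached by nvinfer after the first build
model-engine-file=model_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=80
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV3
```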

Hi @kpernos9 .
Can you elaborate on what the difference is? Is it a data-bit difference or a BBOX difference? If it’s a BBOX difference, which result is incorrect? Could you please share more details?

Hi @mchi,
The main difference is in the confidence scores. When running inference on an image, the two machines show a large variance in confidence and slightly different BBOXes, as shown in the result below.

The format is image name, class, confidence, x, y, w, h.

Hi @mchi ,
Take the detection result of 67.jpg as an example.
Two objects are detected on both Xavier and Orin, but their confidence scores differ significantly between the two platforms.
With a threshold of 0.5, both objects would be ignored on the Orin device.
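The thresholding effect described above can be sketched with a small example; the confidence values below are made up for illustration, not taken from the attached results:

```python
# Filter detections by a confidence threshold.
# Each detection is (class_name, confidence); values are illustrative.
def filter_by_threshold(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d[1] >= threshold]

# Hypothetical scores: the same two objects scored on each platform.
xavier = [("object_a", 0.62), ("object_b", 0.58)]
orin   = [("object_a", 0.41), ("object_b", 0.37)]

print(len(filter_by_threshold(xavier)))  # both objects kept
print(len(filter_by_threshold(orin)))    # both objects dropped
```

The detections themselves are the same; only the platform-dependent confidence shift pushes them below the cutoff.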
Please help to clarify.

Hi @kpernos9 , @lin.bruno
Sorry for the long delay!

I assume the application and its configs are exactly the same on Xavier and Orin, right?
Could you refer to DeepStream SDK FAQ - #9 by mchi to dump the input and output of inference on Xavier and Orin, to check in which part the difference is introduced?

What’s the inference precision?

Thanks. We are integrating the patch now.
And the inference precision is FP16.
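For reference, FP16 precision is selected in the nvinfer config file with the network-mode key (this fragment is illustrative, not the poster’s actual file):

```
[property]
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

Note that FP16 results can legitimately differ slightly across GPU architectures, which makes confidence scores more sensitive to small input differences.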

Hi @mchi ,
This is our input and output data, and the original image is also included.
Detection_result_different_io_data.7z (1.7 MB)
And I noticed that the input images dumped by Xavier and Orin are slightly different.

Ok, could you share the DeepStream config files?

This is my config file; I produced the result using deepstream-image-decode-app.
dstest_image_decode_pgie_config.txt (818 Bytes)

Can you share the application’s DeepStream config? I want to check the pre-processing part, e.g. whether VIC or GPU is used for the processing.

I am using the deepstream-image-decode-app application, and it only has this config, dstest_image_decode_pgie_config.txt.
This is a sample application of the DeepStream SDK; its path is /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-decode-test.

Hi @kpernos9 ,
Could you try using the GPU instead of the VIC to do scaling and conversion for the nvinfer, nvstreammux and nvvideoconvert plugins?

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvstreammux.html

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvvideoconvert.html
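Per those plugin docs, the compute hardware is selectable per plugin. A sketch of the relevant settings, assuming DS 6.x property names (0 = default, 1 = GPU, 2 = VIC on Jetson):

```
# gst-nvinfer: set in the [property] section of the config file
scaling-compute-hw=1

# nvvideoconvert (and, in recent releases, nvstreammux) expose an element
# property instead, e.g. in a pipeline description:
#   ... ! nvvideoconvert compute-hw=1 ! ...
```

Forcing everything to the GPU removes the VIC from the pre-processing path, so any remaining difference must come from elsewhere.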

Hi @mchi ,
We observed that the dumped input file varies with the scaling-compute-hw value, i.e. with whether the GPU or the VIC is used.
We want to clarify why the same settings produce different scaled input, which in turn leads to different detection results.
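A quick way to quantify how much two dumped raw input buffers differ is a byte-by-byte comparison (stdlib only; the file names in the usage comment are placeholders, not the thread’s attachments):

```python
# Compare two raw dump files byte by byte and report the differences.
def diff_buffers(a: bytes, b: bytes):
    """Return (number of differing bytes, maximum absolute difference)."""
    assert len(a) == len(b), "dumps must be the same size"
    n_diff = 0
    max_diff = 0
    for x, y in zip(a, b):
        d = abs(x - y)
        if d:
            n_diff += 1
            if d > max_diff:
                max_diff = d
    return n_diff, max_diff

# Usage (placeholder file names):
# with open("xavier_input.raw", "rb") as f1, open("orin_input.raw", "rb") as f2:
#     print(diff_buffers(f1.read(), f2.read()))
```

A max difference of ±1 per byte would suggest rounding differences in the scaler, while larger deltas would point to a different scaling filter or color-conversion path.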

I’m sorry, I uploaded the wrong configuration file earlier. The correct one is
dstest_image_decode_pgie_config.txt (780 Bytes), which doesn’t have settings for scaling-filter and scaling-compute-hw.

Hi @mchi, I tried modifying the code and config file to use the GPU, and this is my output data.
Detection_result_different_io_data_gpu.7z (1.4 MB)

Hi @kpernos9 @lin.bruno
What DeepStream version are you using?


Both are used: the Xavier AGX uses DS 6.0 and the Orin NX uses DS 6.2.

Hi @mchi ,
Is the difference in the scaled input caused by the DS version or by the HW platform?
Please advise. Thanks.

Hi @lin.bruno ,
Different HW (GPU, VIC) can produce slightly different output.
But if the GPU/CUDA kernel is used, I think DS 6.0 and DS 6.2 should generate the same output.
We are reproducing and looking into this issue and will get back to you later.

Sorry for the long delay!

Please help us dump the data before it enters the nvinfer plugin.

Below is a patch for reference:

Apply the attached
deepstream_image_decode_app_dump_data.patch (7.8 KB)
in

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-decode-test

It will dump data from nvv4l2decoder, nvstreammux and nvvideoconvert,
so that we can confirm which plugin causes the difference.

Thanks.