Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)**: Xavier AGX / Orin NX
**• DeepStream Version**: 6.0 / 6.2
**• JetPack Version (valid for Jetson only)**: 4.6 / 5.1
**• TensorRT Version**: 8.0.1.6 / 8.5.2.2
**• NVIDIA GPU Driver Version (valid for GPU only)**: N/A (Jetson)
**• Issue Type (questions, new requirements, bugs)**: Question
We convert the same weights to a TensorRT engine using the “objectDetector_Yolo” sample in the DeepStream SDK, then use DeepStream for image inference. However, the same image produces different inference results on different machines.
Hi @kpernos9 .
Can you elaborate on what the difference is? A data bit difference, or a BBOX difference? If it’s a BBOX difference, which one is incorrect? Can you please share more details?
Hi @mchi,
The main difference lies in the confidence scores. When inferring on an image, the two machines show a large variance in confidence and slightly different BBOXes, as shown in the result below.
The format is image name, class, confidence, x, y, w, h.
Hi @mchi ,
Take the detection result of 67.jpg as an example.
Two objects are detected on both Xavier and Orin, but their confidence scores differ significantly between the two devices.
If the threshold is 0.5, these two objects will be discarded on the Orin device.
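For reference, a small script like the following can flag exactly these threshold-crossing cases when comparing result files from the two devices. It assumes the line format described above (image name, class, confidence, x, y, w, h); the file names are placeholders, not paths from this thread.

```python
# Sketch: compare per-image detection results from two devices and report
# detections whose confidence crosses the threshold on one device only.
# Result line format (from this thread): image name, class, confidence, x, y, w, h

def parse_results(path):
    """Return {(image, class): (confidence, x, y, w, h)} from a result file."""
    results = {}
    with open(path) as f:
        for line in f:
            fields = [s.strip() for s in line.split(",")]
            if len(fields) != 7:
                continue  # skip malformed or empty lines
            image, cls = fields[0], fields[1]
            conf, x, y, w, h = map(float, fields[2:])
            results[(image, cls)] = (conf, x, y, w, h)
    return results

def compare(a, b, threshold=0.5):
    """Return [( (image, class), conf_a, conf_b )] where the threshold decision flips."""
    flipped = []
    for key in sorted(a.keys() & b.keys()):
        conf_a, conf_b = a[key][0], b[key][0]
        if (conf_a >= threshold) != (conf_b >= threshold):
            flipped.append((key, conf_a, conf_b))
    return flipped
```

Usage would be something like `compare(parse_results("xavier.txt"), parse_results("orin.txt"))`, where the two files hold each device’s detection output.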
Please help to clarify.
I think the application along with its configs is exactly the same on Xavier and Orin, right?
Could you refer to DeepStream SDK FAQ - #9 by mchi to dump the input and output of inference on Xavier and Orin, to check in which part the difference is introduced?
Hi @mchi ,
This is our input and output data, and the original image is also included. Detection_result_different_io_data.7z (1.7 MB)
I also noticed that the input images dumped on Xavier and Orin are slightly different.
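To quantify how different the dumped input buffers actually are, a simple byte-level diff can help (pure Python sketch; the paths are placeholders for the files dumped on each device):

```python
# Sketch: byte-wise comparison of two dumped raw input buffers.
# Substitute the actual dump file paths from each device.

def diff_raw(path_a, path_b):
    """Return (differing_byte_count, max_abs_diff) between two raw dumps."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    if len(a) != len(b):
        raise ValueError(f"size mismatch: {len(a)} vs {len(b)}")
    diffs = [abs(x - y) for x, y in zip(a, b) if x != y]
    return len(diffs), max(diffs, default=0)
```

A small max-abs-diff with many differing bytes would point at a different interpolation/scaling path rather than, say, a decode error.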
I am using the deepstream-image-decode-app application, and it only has this config file, dstest_image_decode_pgie_config.txt.
It is a sample application of the DeepStream SDK, located at /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-decode-test.
Hi @mchi ,
We observed that the dumped input file varies with the scaling-compute-hw value, i.e. whether the GPU or the VIC is used for scaling.
We would like to clarify why the same settings produce different scaled input, which in turn leads to different detection results.
I’m sorry, I uploaded the wrong configuration file earlier. The correct one is dstest_image_decode_pgie_config.txt (780 Bytes), which does not set scaling-filter or scaling-compute-hw.
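For what it’s worth, since the config does not pin the scaling path, the platform default may differ between Xavier and Orin. One way to remove the GPU-vs-VIC variable is to set both scaling properties explicitly in the `[property]` group of the nvinfer config (a sketch, not taken from the attached file; value meanings are from the gst-nvinfer property reference):

```
[property]
# Pin preprocessing scaling to the GPU on both devices
# (0 = platform default, 1 = GPU, 2 = VIC)
scaling-compute-hw=1
# Use the same interpolation filter on both devices (1 = Bilinear)
scaling-filter=1
```

With both devices forced onto the same scaling hardware and filter, the dumped inputs should be much closer, which would confirm that the scaling path is the source of the difference.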
Hi @lin.bruno ,
Different HW (GPU vs. VIC) can produce slightly different output.
But if the GPU/CUDA kernel is used, DS 6.0 and DS 6.2 should generate the same output.
We are reproducing and looking into this issue and will get back to you later.