Detection result is different between Xavier and Orin for the same model and weights

This is my config file, and I obtained the result by running deepstream-image-decode-app.
dstest_image_decode_pgie_config.txt (818 Bytes)

Can you share the application's DeepStream config? We want to check the pre-processing part, e.g. whether VIC or GPU is used for the processing.

I am using the deepstream-image-decode-app application and it only has this config, dstest_image_decode_pgie_config.txt.
This is a sample application of the DeepStream SDK, and its path is /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-decode-test.

Hi @kpernos9 ,
Could you try to use GPU instead of VIC to do scaling and conversion for nvinfer, nvstreammux and nvvideoconvert plugins?
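
For reference, here is a minimal sketch of the relevant pre-processing keys, assuming a DS 6.x gst-nvinfer configuration file (the values below are illustrative, not taken from the attached config):

```
[property]
# Scaling/conversion engine: 0 = platform default, 1 = GPU, 2 = VIC (Jetson only)
scaling-compute-hw=1
# Scaling interpolation filter, e.g. 1 = Bilinear
scaling-filter=1
```

nvstreammux and nvvideoconvert expose an equivalent compute-hw property that can be set on the element in code or on the command line.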

Hi @mchi ,
We observed that the dumped input file varies with the scaling-compute-hw value, i.e. whether GPU or VIC is used.
We want to clarify why the same settings produce different scaled input, which then leads to different detection results.

I’m sorry, I uploaded the wrong configuration file earlier. The correct one is
dstest_image_decode_pgie_config.txt (780 Bytes), which doesn’t have the settings for scaling-filter and scaling-compute-hw.

Hi @mchi, I tried modifying the code and config file to use GPU, and this is my output data.
Detection_result_different_io_data_gpu.7z (1.4 MB)

Hi @kpernos9 @lin.bruno
Which DeepStream version are you using?


Both platforms are used: Xavier AGX with DS 6.0 and Orin NX with DS 6.2.

Hi @mchi ,
Is the difference in the scaled input caused by the DS version or the HW platform?
Please advise. Thanks.

Hi @lin.bruno ,
Different HW (GPU, VIC) can produce slightly different output.
But if the GPU/CUDA kernel is used, I think DS 6.0 and DS 6.2 should generate the same output.
We are reproducing and looking into this issue, and will get back to you later.

Sorry for the long delay!

Please help us dump the data before it enters the nvinfer plugin.

Below is a patch for reference:

Apply the attached patch:
deepstream_image_decode_app_dump_data.patch (7.8 KB)


It will dump data from nvv4l2decoder, nvstreammux and nvvideoconvert,
so that we can confirm which plugin causes the difference.


Hi @junshengy,
Here is the output data based on the patch you provided, including the content of the patch linked in DeepStream SDK FAQ - #9 by mchi.
Detection_result_different_io_data_with_plugin.7z (2.8 MB)

From the data of each plugin dump:

1. There is no problem in nvv4l2decoder; it is OK.

2. nvstreammux scales the width and height from 512x432 to 512x416.
This plugin causes a big difference.
If GPU scaling is also used, as you describe, this is a little hard to explain.
Can you share the property settings of nvstreammux?

3. On Xavier, it looks like nvvideoconvert converts the data to RGB.
I think we need to confirm the caps negotiation from nvvideoconvert to nvinfer.

Can you update the JetPack and DeepStream versions on Xavier?
Differences caused by different versions are difficult to explain.


I think I’m using VIC for scaling; is that the default?

I have provided the modified source code and config file in the document below. The code is modified based on the patch file.

There will be two folders to distinguish files for different machines, namely “Xavier” and “Orin”.

The difference in their code is the modification I made to the “Xavier” app to enable it to save image files, while the “Orin” app remains unchanged in this aspect. Both apps output inference results.

Sorry, I cannot update the jetpack and deepstream version on Xavier.

detection_result_diff-relative_data.7z (37.6 KB)

I checked your patch. There are two places worth noting:

1. On Orin and Xavier, make sure nvstreammux scaling uses the GPU:

g_object_set (G_OBJECT (streammux), "compute-hw", 1, NULL);

2. On Orin, remove nvvideoconv before nvinfer:

// delete 
nvvideoconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter1");

nvstreammux output can be fed directly into nvinfer, to reduce some preprocessing.

Make sure the pipeline is the same on both Orin and Xavier.

Hey @junshengy,

I modified the code to make nvstreammux and nvvideoconvert use the GPU.
And I found that the results of GPU inference are strange, while the results of VIC inference are normal.

In the “orin” folder, you can find the results, along with the corresponding input and output data of VIC inference. The inference results for test17.jpg and test21.jpg are included.

In the “orin-gpu” folder, you will see test0~4.jpg, in which no objects are detected even though objects were originally present. Additionally, there is test22.jpg, whose inference result is strange, as it appears to have merged two images. These are all inference results obtained using the GPU.

The code used for this test is also included in the compressed file, “0523_detection_result_different_io_data.7z (2.7 MB)”.

I also tested this, and when I deleted the code related to nvvideoconv, I found that it apparently cannot be removed; doing so results in the following error message.

ERROR from element source: Internal data stream error.
Error details: gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:dstest-image-decode-pipeline/GstBin:source-bin-00/GstMultiFileSrc:source:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback

Sorry for the long delay!

1. jpegenc uses software buffers, which are not compatible with GPU buffers.

Remove this line and it will work:

g_object_set (G_OBJECT (nvvideoconvert), "compute-hw", 1, NULL);

2. Sorry for the mistake in my previous reply.
Due to the version differences, removing this plugin causes the pipeline to stop working; please ignore that patch.

3. I set the compute-hw property so that streammux and videoconvert use GPU scaling.
With test17.jpg and test21.jpg as input images, the inference result is null.
It is a bug; we will look into this issue and report back once there is any progress.

g_object_set (G_OBJECT (streammux), "compute-hw", 1, NULL);
g_object_set (G_OBJECT (nvvideoconv), "compute-hw", 1, NULL);


Please update once you have any finding.

We need to wait for a new release.

At present, it seems that 6.0 and 6.2 are not fully compatible. We recommend using the same version in the production environment.