TensorRT gives different results

I am running the same ONNX model with the same code in two places: on a PC with an RTX 3060 GPU and TensorRT 8.5.3, and on a Jetson Orin Nano with TensorRT 8.5.2. The RTX 3060 infers correctly, but the Jetson inference results are bad.

On the Jetson Orin Nano, when I set --fp16 there are lots of wrong detections, and when I set --noTF32 there are no detections at all.
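For reference, here is a minimal sketch (not the poster's actual code) of how those trtexec precision flags map onto the TensorRT Python builder API; the file path and error handling are placeholders:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, fp16=False, tf32=True):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    if fp16:       # equivalent to trtexec --fp16
        config.set_flag(trt.BuilderFlag.FP16)
    if not tf32:   # equivalent to trtexec --noTF32
        config.clear_flag(trt.BuilderFlag.TF32)

    return builder.build_serialized_network(network, config)
```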

On the Jetson AGX Orin, when I set --useDLACore=0 the results look good, but when I do not use DLA the results are wrong.
I suspect that it’s an issue with floating-point calculations, but I’m unable to solve it. Could you please help me?
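A hedged sketch of the builder-config equivalents of --useDLACore=0, continuing the build_engine() example above. Note that DLA only runs layers in FP16 or INT8, which is one reason DLA and GPU results can differ:

```python
import tensorrt as trt

def enable_dla(config: trt.IBuilderConfig, core: int = 0) -> None:
    # Run supported layers on a DLA core (trtexec --useDLACore=0)...
    config.default_device_type = trt.DeviceType.DLA
    config.DLA_core = core
    # ...and let unsupported layers fall back to the GPU
    # (trtexec --allowGPUFallback).
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)
    # DLA executes in FP16 or INT8 only, so FP16 must be enabled too.
    config.set_flag(trt.BuilderFlag.FP16)
```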

I changed the torch.einsum() node and exported the ONNX model with opset=11; it works now.
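A minimal sketch of the kind of rewrite described above: replacing a torch.einsum() call with equivalent primitive ops so the model exports cleanly with opset 11. The attention-style equation here is an assumption for illustration, not the poster's actual model:

```python
import torch

q = torch.randn(2, 4, 16, 8)   # (batch, heads, seq, dim)
k = torch.randn(2, 4, 16, 8)

# Original einsum formulation:
scores_einsum = torch.einsum("bhid,bhjd->bhij", q, k)

# Equivalent matmul formulation that exports with opset 11:
scores_matmul = torch.matmul(q, k.transpose(-1, -2))

assert torch.allclose(scores_einsum, scores_matmul, atol=1e-6)
```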

There has been no update from you for a while, so we assume this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Could you also attach the input data and the script that generates the output so we can reproduce this issue on our side?

Thanks
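A hedged sketch of the sort of repro script being asked for: run the shared input through ONNX Runtime as a reference and compare it against the saved TensorRT output. The file names and shapes are placeholders:

```python
import numpy as np
import onnxruntime as ort

input_data = np.load("input.npy")        # the shared input tensor
trt_output = np.load("trt_output.npy")   # output saved from the TensorRT run

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
ref_output = sess.run(None, {input_name: input_data})[0]

# Report the largest absolute difference so precision issues are quantified.
print("max abs diff:", np.max(np.abs(ref_output - trt_output)))
```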