Precision loss when migrating to Jetson Nano

hello,

we recently migrated a detection algorithm, developed with the TensorRT C++ API, to a Jetson Nano platform.

With the same configuration and the same library versions we observe different behavior in the output predictions: the predicted bounding boxes are shifted towards the bottom right, and the predicted scores contain errors. This happens in both FP32 and FP16 modes.

is there any known precision loss associated with one or more layers when running on the Nano?
we tested the same algorithm on many other desktop platforms and never had any issues;

we use CUDA 10.2 and TensorRT 7.1.3.4
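
for context, the precision mode is selected at engine build time roughly like this (a minimal sketch; the actual network definition is omitted and the workspace size is just a placeholder):

```cpp
// Illustrative only: how FP32/FP16 is toggled when building the engine.
// The network layers and weights are defined directly with the C++ API
// and are omitted here.
#include "NvInfer.h"
#include <iostream>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

nvinfer1::ICudaEngine* buildEngine(bool useFp16) {
    auto builder = nvinfer1::createInferBuilder(gLogger);
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);

    // ... network definition goes here ...

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);  // 256 MiB, value is illustrative
    if (useFp16 && builder->platformHasFastFp16())
        config->setFlag(nvinfer1::BuilderFlag::kFP16);

    return builder->buildEngineWithConfig(*network, *config);
}
```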

thanks in advance,

f

Hi,

We would expect the error to be small enough to be ignored.

We are not sure whether this is an issue or a difference in handling at the pre-processing stage.
Would you mind sharing a simple source that reproduces the comparison between TensorRT and your original framework?

Thanks.

Hi,
thanks for replying; it will take me some time to produce some minimal code that reproduces the issue;

meanwhile, I can assure you that the error magnitude is significant, as can be seen from the same experiment run on an RTX 2080 Ti vs a Jetson Nano; pre-processing, post-processing and all the ML work are performed by exactly the same code

thanks,

f

[Image: output RTX 2080 Ti]

[Image: output Jetson Nano]

P.S. the algorithm was implemented directly with the TensorRT C++ API, so no parsing operations from other frameworks are involved
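
P.P.S. for the minimal repro, my plan is to dump the raw output tensor produced on each platform to a binary file and diff the dumps offline; roughly along these lines (file names and the output size are just placeholders):

```cpp
// Illustrative sketch: dump the raw network output (host buffer, after the
// copy back from the device) on each platform, then compare the two dumps.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

void dumpOutput(const float* hostOutput, size_t count, const char* path) {
    FILE* f = std::fopen(path, "wb");
    std::fwrite(hostOutput, sizeof(float), count, f);
    std::fclose(f);
}

// Run offline on the two dumps, e.g. out_2080ti.bin vs out_nano.bin.
float maxAbsDiff(const char* pathA, const char* pathB, size_t count) {
    std::vector<float> a(count), b(count);
    FILE* fa = std::fopen(pathA, "rb");
    FILE* fb = std::fopen(pathB, "rb");
    std::fread(a.data(), sizeof(float), count, fa);
    std::fread(b.data(), sizeof(float), count, fb);
    std::fclose(fa);
    std::fclose(fb);

    float maxDiff = 0.f;
    for (size_t i = 0; i < count; ++i)
        maxDiff = std::max(maxDiff, std::fabs(a[i] - b[i]));
    return maxDiff;
}
```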

Can’t you just fix this manually? As peculiar as this error looks, it appears to be uniform on each box. Might be best to just scale the size of the boxes by whatever factor you calculate them to be in excess of the correct boxes. Good luck with your project!
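
If the shift and scale really are uniform, a stop-gap correction could look roughly like this (the box layout and the correction factors are purely illustrative and would have to be measured against your desktop output):

```cpp
// Illustrative stop-gap only: apply an empirically measured offset/scale
// to every predicted box. The factors are placeholders and must be
// calibrated against the reference (desktop) predictions.
#include <vector>

struct Box { float x, y, w, h; };  // top-left corner plus size

void correctBoxes(std::vector<Box>& boxes,
                  float offsetX, float offsetY, float scale) {
    for (Box& b : boxes) {
        b.x -= offsetX;   // undo the bottom-right shift
        b.y -= offsetY;
        b.w /= scale;     // undo the uniform size error
        b.h /= scale;
    }
}
```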