Performance impact with a JIT-converted model using libtorch on Jetson Xavier

Hi Aasta,

Any updated information? This is an important feature for our product; could you kindly help highlight it and move it forward? Thanks!


Sorry, since Torch is a third-party library, the internal discussion and resources are quite limited.

Just want to confirm again.
Do you also see the performance difference between libtorch and PyTorch on a desktop environment?
If yes, have you checked with the torch team about this issue?



Just some information.

We tested this issue again on Orin with the latest JetPack 5.0.2 and PyTorch v1.13.0+nv22.07.
The performance of libtorch and PyTorch is almost the same.
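For reference, the numbers above come from a warm-up-then-measure loop (run the model a few times first so CUDA kernels, allocators, and caches are initialized, then time the measured iterations). A minimal sketch of that pattern in plain Python; the `benchmark` helper and the stand-in workload are hypothetical, not the original test code, which would call the model's forward pass or a training step instead:

```python
import time

def benchmark(fn, warmup=10, iters=100):
    """Warm up fn, then return the total wall time of the measured iterations."""
    # Warm-up iterations are discarded: they absorb one-time costs such as
    # JIT compilation, CUDA kernel loading, and memory-pool growth.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Hypothetical stand-in workload; in the real comparison this would be
    # module.forward(input) (inference) or one optimizer step (training).
    cost = benchmark(lambda: sum(i * i for i in range(10_000)))
    print(f"infer cost {cost} s")
```

Comparing libtorch and PyTorch with the same warm-up count and iteration count is what makes the two numbers directly comparable.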


$ ./main
warming up,pls wait...
infer cost 1.35902 s
train cost 7.94993 s


$ python3
warming up,pls wait...
infer cost 1.661548376083374
train cost 7.868820667266846

