Why does the Nano run faster than the Orin Nano when I run CycleGAN inference with PyTorch?

I used the same code to run CycleGAN inference on an Orin Nano and a Nano: the Orin Nano takes around 0.28 s, while the Nano takes around 0.24 s. When I include the time to transfer the data from the CPU to the GPU, the Orin Nano is much faster than the Nano.
But I would expect the Orin Nano to run the inference itself faster as well. What should I do?
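One common source of misleading GPU timings is that CUDA kernels launch asynchronously, so a plain wall-clock measurement can stop before the GPU has finished. Below is a minimal, hedged timing sketch; the `Conv2d` layer is a hypothetical stand-in for the CycleGAN generator (any `nn.Module` is timed the same way), and the input shape is assumed:

```python
import time
import torch

# Hypothetical stand-in for the CycleGAN generator; replace with your model.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()

# Assumed input shape; use your real preprocessing instead.
x = torch.randn(1, 3, 256, 256, device=device)

with torch.no_grad():
    # Warm-up: the first passes include CUDA context setup and kernel selection.
    for _ in range(5):
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()  # drain queued GPU work before starting the clock
    start = time.perf_counter()
    runs = 20
    for _ in range(runs):
        model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()  # kernels are asynchronous; sync before reading the clock
    elapsed = (time.perf_counter() - start) / runs

print(f"mean inference time: {elapsed:.4f} s")
```

Without the warm-up and the `torch.cuda.synchronize()` calls, the two boards can appear to swap ranks even though the Orin Nano's GPU is doing the work faster.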

Hello,

Thanks for visiting the forums. Your topic will be best served in the Jetson category.

I will move this post over for visibility.

Cheers,
Tom

Hi,

Have you tried to maximize the device’s performance?

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.