Recommended PyTorch version for JetPack 5.1

Hi, I am trying to compare performance across bare-metal and docker on my AGX Orin devkit (JetPack version: 5.1).

For bare-metal, the official PyTorch 2.0 wheel seems to be torch-2.0.0+nv23.05-cp38 from here

For L4T-ML container, the PyTorch 2.0 version seems to be torch-2.0.0a0+ec3941ad.nv23.2-cp38 from here

How different are these two PyTorch versions? Is there any reason to choose different versions on bare metal vs. in a container?


These two are both based on PyTorch 2.0.0 but were built from different snapshots.
Since we publish roughly monthly PyTorch releases, the two packages simply picked up different snapshots.


Thanks for your response. A couple of follow-up questions:

I’m seeing a major performance difference between these two versions (a 2-2.5x slowdown). Is there a way I could compare the code changes that went into each of them?

torch-2.0.0+nv23.05-cp38 is newer than torch-2.0.0a0+ec3941ad.nv23.2-cp38, right?
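For reference, the `nvYY.MM` suffix on these wheel names encodes NVIDIA's monthly release (two-digit year, then month), so the tags can be compared directly as dates. The sketch below is illustrative, not an official parsing scheme:

```python
import re

def nv_tag(wheel_name: str) -> tuple:
    """Extract the (year, month) of the NVIDIA monthly release tag
    from a Jetson PyTorch wheel name, e.g. 'nv23.05' -> (23, 5)."""
    match = re.search(r"nv(\d+)\.(\d+)", wheel_name)
    if match is None:
        raise ValueError(f"no nvYY.MM tag found in {wheel_name!r}")
    return (int(match.group(1)), int(match.group(2)))

older = "torch-2.0.0a0+ec3941ad.nv23.2-cp38"   # February 2023 snapshot
newer = "torch-2.0.0+nv23.05-cp38"             # May 2023 release

print(nv_tag(older))                   # (23, 2)
print(nv_tag(newer))                   # (23, 5)
print(nv_tag(newer) > nv_tag(older))   # True -> nv23.05 is the newer build
```

Comparing the tuples as (year, month) pairs makes nv23.05 unambiguously later than nv23.2.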

Hi @virtual.ramblings, yes I believe that torch-2.0.0+nv23.05-cp38-cp38-linux_aarch64.whl is newer. Which is the version that is faster for you?

Also, the containers have been updated to use these newer wheels (and PyTorch 2.1); you can rebuild them or pull the updated images from DockerHub:

Thanks for your reply!

23.02 is much faster than 23.05 on the workload we are looking at (BERT for question answering).
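For comparisons like this, a median-of-runs timer with warmup iterations helps keep the numbers reproducible across the two images. A minimal sketch, where `workload` is a stand-in for the actual BERT inference step (not from the thread); for CUDA workloads you would also call `torch.cuda.synchronize()` before each clock read:

```python
import time
import statistics

def benchmark(workload, warmup: int = 3, runs: int = 10) -> float:
    """Return the median wall-clock time in seconds of `workload`
    over `runs` timed iterations, after `warmup` untimed iterations."""
    for _ in range(warmup):
        workload()
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        # For GPU code, synchronize the device here so queued
        # kernels are included in the measured interval.
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Hypothetical stand-in; replace with the real BERT QA inference call.
def workload():
    sum(i * i for i in range(10_000))

print(f"median latency: {benchmark(workload) * 1e3:.3f} ms")
```

Using the median rather than the mean reduces the influence of occasional slow iterations (e.g. clock throttling or background activity on the Orin).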

Do you recommend moving to 23.05? Is it a more stable version, even if slower?


As mentioned above, we have a container with a newer PyTorch version (PyTorch 2.1, nv23.06).
We recommend trying our latest release.