On the downloads page, a new version (2.5.0…) of PyTorch appears for JetPack 6.1.
However:
The installation (as per the guide) requires installing libcusparse, but the install script does not support the CUDA version shipped with JetPack 6.1, as the script only supports up to CUDA 12.4. The script also has issues when run as root.
The NumPy version it was compiled against seems to be 1.2(6?), meaning that simply installing NumPy, which now gives version 2.1.x, results in a broken PyTorch install.
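Until a NumPy-2-compatible wheel is available, a version guard like the following can avoid the breakage (a minimal sketch; the `numpy_compatible` helper and the `"numpy<2"` pin are illustrative workarounds, not an official fix):

```shell
# Hedged sketch: wheels built against NumPy 1.26 break under NumPy 2.x,
# so check the installed major version before relying on torch.
numpy_compatible() {
  # returns success (0) for 1.x versions, failure for 2.x and later
  case "$1" in
    1.*) return 0 ;;
    *)   return 1 ;;
  esac
}

# Example usage (assumes python3 and pip are on PATH):
# ver=$(python3 -c 'import numpy; print(numpy.__version__)')
# numpy_compatible "$ver" || pip install "numpy<2"
```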
It would be good to see a PyTorch build that supports NumPy 2.0 (PyTorch 2.3 already supports it), as well as updated install documentation and libcusparse scripts; the docs link currently on the page is broken.
That doesn’t resolve the issue. The problem is not the PyTorch version (the one you linked to is identical to the one mentioned in the original post), but rather that the libcusparselt script and the documentation haven’t been updated.
As brian.wu mentioned above, the cuSPARSE issue can be worked around by hacking the install script, though the correct fix is likely via the cuSPARSE link you provided, and that should probably be reflected in the docs as well.
The PyTorch version issue in JP 6.1 still remains: it’s a release candidate (2.5.0rcX) compiled against an old version of Python (3.10) as well as an old version of NumPy (1.26). It would be good to get the build instructions so that users can try building their own combinations.
Those build-from-source instructions haven’t been updated in quite a while; the latest patch is for PyTorch 1.11 (released March 2022). Are there updated patches or instructions available, or are none needed for modern versions?
Are there any differences between a self-compiled version and the prebuilt wheel in terms of optimization?
For recent PyTorch (2.x), it can be built without a custom patch.
The Orin GPU architecture is already included in the build config, so you don’t need to add it manually.
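For reference, the standard from-source steps can be sketched as below, wrapped in a function (a hedged sketch under default settings; the flags are real PyTorch build variables, but this has not been verified on JetPack 6.1):

```shell
# Hedged sketch: typical PyTorch 2.x from-source build; untested on JP 6.1.
build_pytorch() {
  git clone --recursive https://github.com/pytorch/pytorch &&
  cd pytorch || return 1
  export USE_CUDA=1
  export TORCH_CUDA_ARCH_LIST="8.7"   # Orin is SM 8.7; recent build configs already include it
  pip3 install -r requirements.txt
  python3 setup.py bdist_wheel        # resulting wheel lands in dist/
}
```

Setting `TORCH_CUDA_ARCH_LIST` explicitly is optional on recent versions, as noted above, but it keeps the build from compiling kernels for architectures the board can’t use.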