Building PyTorch with CUDA 10 support on JetPack 5.0 (CUDA 11)

Hello,
I recently upgraded my build machine, a Xavier AGX, to JetPack 5.0.2, which comes with built-in CUDA 11.4 support.
Now I would like to build a PyTorch wheel with CUDA 10.2 support, as my edge devices (Jetson Nano/TX2) are currently supported only on JetPack 4.6 (which ships CUDA 10.2).
What are my options?

  • I didn’t find an easy way to install CUDA 10.2 via apt on the Xavier AGX machine, as the new JetPack repository doesn’t provide that version.

Hi @timor.kalerman, I don’t believe there’s an officially-supported way to downgrade JetPack 5 to CUDA 10.2, so the recommended approach would be to re-flash your device with JetPack 4. I haven’t tried this, but you could attempt adding `deb https://repo.download.nvidia.com/jetson/common r32.7 main` to your apt sources and installing `cuda-toolkit-10-2`, but there’s no guarantee it wouldn’t break your environment in some way. You would probably also need to install the older libcudnn.
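If you do try the apt-source route, the steps would look roughly like this. This is an untested sketch: the repo line and `cuda-toolkit-10-2` come from the suggestion above, while the `libcudnn8` package names and install paths are assumptions based on what JetPack 4.6 / L4T R32.7 ships.

```shell
# Untested sketch: add the L4T R32.7 (JetPack 4.6) repo alongside the JetPack 5 sources
echo "deb https://repo.download.nvidia.com/jetson/common r32.7 main" | \
  sudo tee /etc/apt/sources.list.d/nvidia-l4t-r32.list

sudo apt update

# Install the CUDA 10.2 toolkit and the matching older cuDNN
# (package names assumed from the JetPack 4.6 repos)
sudo apt install cuda-toolkit-10-2 libcudnn8 libcudnn8-dev

# If it works, CUDA 10.2 should land under /usr/local/cuda-10.2,
# next to the JetPack 5 CUDA 11.4 install
ls /usr/local/ | grep cuda
```

As noted above, mixing repos from two JetPack generations can break your environment, so this is worth trying only on a machine you can re-flash.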

Could you use one of the pre-built PyTorch wheels from this thread instead? PyTorch for Jetson

IIRC newer versions of PyTorch have dropped support for Python 3.6, so those don’t build on JetPack 4 / Ubuntu 18.04 anyway.

Thanks for the answer, but my situation is slightly different.
I need both CUDA 11 and CUDA 10.2 available so I can compile PyTorch for both versions on the same machine.
Also, I need Python 3.10, so I can’t use the pre-built wheels, as they all come with Python 3.8.
I thought about using Docker for this, but it probably still needs the container toolkit to map CUDA into the container, and then it only maps the active CUDA, which is 11.

On JetPack 5, the CUDA Toolkit doesn’t get mounted into the containers - it’s installed inside them (e.g. in l4t-jetpack, l4t-pytorch, etc.). You could try building a container for CUDA 10.2 using the apt sources for L4T R32.7, use the l4t-jetpack container for CUDA 11, and build PyTorch inside these containers. If I recall correctly, building PyTorch doesn’t actually need to run CUDA/cuDNN - you just need nvcc and the CUDA+cuDNN libs/headers available at compile time.

OK, thank you. I will check this out and update if it works.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.