Hello, I want to install torch and torchvision on my Jetson Orin Nano devkit. I found this step-by-step guide (PyTorch for Jetson). I installed torch without any problems, but when I started the torchvision installation it showed this error:
@developer.makarov changing the CUDA version shouldn't be required. Did you build torchvision from source (as shown under the Installation section of the PyTorch for Jetson thread), or did you pip install torchvision? If the latter, you need to build it from source.
Also, I'm unsure how your virtualenv could be impacting the CUDA version it is reporting. What happens outside of the virtualenv? You can also use the l4t-pytorch container or the torchvision container from jetson-containers - those have working PyTorch/torchvision wheels in them that you can extract and use even if you don't want to use a container.
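For example (the image tag below is just an illustration, pick whichever matches your L4T/JetPack release):

```bash
# Run the pre-built PyTorch container with the NVIDIA runtime so CUDA is visible inside.
# The r36.2.0 tag here is only an example - use the tag matching your L4T version.
sudo docker run --runtime nvidia -it --rm dustynv/l4t-pytorch:r36.2.0

# Or, from a checkout of https://github.com/dusty-nv/jetson-containers,
# let the autotag tool pick a compatible image for you:
jetson-containers run $(autotag l4t-pytorch)
```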
Thanks for the reply, @dusty_nv. Of course I built from source, because installing from pip gives me torch with no CUDA support. I recently fixed the problem with nvcc; the solution was adding the correct paths:
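It was something along these lines in my ~/.bashrc (assuming the default JetPack install location for CUDA 12.2):

```bash
# Make nvcc and the CUDA 12.2 libraries visible to the shell and the dynamic linker
# (paths assume the default /usr/local/cuda-12.2 install location).
export PATH=/usr/local/cuda-12.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64:$LD_LIBRARY_PATH
```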
But now I have a problem with libcudnn.so.8 (the system can't find this file when importing the torch module).
I can't find it in the /usr/local/cuda-12.2/lib64 folder. Maybe I made some mistakes with the cuDNN installation.
I used this guide
but replaced the last command, sudo apt-get -y install cudnn, with sudo apt-get -y install cudnn-cuda-12. I will try the container, but if you can help me with this it would be great.
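For what it's worth, here is how I'm checking where cuDNN actually ended up (just standard dpkg/ldconfig queries):

```bash
# See which cuDNN packages apt installed and where the shared library is registered.
dpkg -l | grep -i cudnn
ldconfig -p | grep libcudnn

# On Jetson, apt-installed cuDNN usually lands in /usr/lib/aarch64-linux-gnu
# rather than /usr/local/cuda-12.2/lib64, so search more broadly:
find /usr -name "libcudnn.so*" 2>/dev/null
```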
OK great, glad to hear you got it working @developer.makarov! It's good that your system was reflashed with JetPack 6, so it has CUDA 12.2 on it straight away and the system doesn't get confused selecting the correct version (as you have found). For these kinds of reasons, I typically put different CUDA versions in different containers to keep them all straight.
I have also started uploading the pre-compiled pip wheels for torchvision/etc. that are built by jetson-containers, as detailed in this post:
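Once you have the index URL from that post, installing a pre-built wheel is just a pip call (the URL below is a placeholder, use the one given in the post):

```bash
# <JETSON_WHEEL_INDEX_URL> is a placeholder - substitute the pip index URL from the linked post.
pip3 install --extra-index-url <JETSON_WHEEL_INDEX_URL> torchvision
```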