I’m setting up PyTorch on the Jetson AGX Orin (Developer Kit). I installed it using the following wheel: torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
However, I can’t find a torchvision version that is compatible with this torch build. I tried installing torchvision 0.16.0, which should match my torch version, but I got the following error message:
ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. torchvision 0.16.0 requires torch==2.1.0, but you have torch 2.1.0a0+41361538.nv23.06 which is incompatible.
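For what it’s worth, the conflict is purely pip’s version matching: the NVIDIA wheel carries an alpha pre-release marker (`a0`) plus a local version tag, so it can never satisfy torchvision’s pinned `torch==2.1.0` requirement even though it is functionally that release. A quick sketch with the `packaging` library (assuming it is installed) shows this:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# torchvision 0.16.0 pins its torch dependency like this:
requirement = SpecifierSet("==2.1.0")

# ...but the NVIDIA wheel reports a pre-release with a local version tag:
nvidia_torch = Version("2.1.0a0+41361538.nv23.06")

print(nvidia_torch.is_prerelease)       # True ("a0" marks an alpha build)
print(nvidia_torch in requirement)      # False, hence pip's conflict error
```

This is why simply trying other torchvision releases from PyPI keeps failing: any of them that pins an exact `torch==` version will reject the NVIDIA build the same way.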
System specifications:
JetPack 5.1.3
L4T R35.5.0
CUDA 11.4
Any suggestions on which torchvision version to use?
I have also tried installing torchvision==0.16.2, but unfortunately it is not compatible either (error message below):
ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. torchvision 0.16.2 requires torch==2.1.2, but you have torch 2.1.0a0+41361538.nv23.06 which is incompatible.
I have searched for a compatible version but can’t find one.
Any suggestions on a fix?
Regarding jetson-containers, I have also looked there, but I can’t find a container for my specific JetPack version:
JetPack 5.1.3
L4T R35.5.0
@hry7999 you either need to build torchvision from source (like under the Installation section of this post) or use a jetson-container with torchvision in it. The containers for JetPack 5.1.2 / L4T R35.4 are compatible with JetPack 5.1.3; they don’t need to match exactly (or you can rebuild them if desired).
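A sketch of the from-source route, pairing torch 2.1.0 with the torchvision v0.16.0 tag (the tag, apt package names, and `BUILD_VERSION` value are assumptions based on the usual Jetson build steps; adjust to your environment):

```shell
# Build prerequisites for torchvision's image codecs (assumed package names)
sudo apt-get update && sudo apt-get install -y libjpeg-dev zlib1g-dev

# Check out the release tag matching torch 2.1.0
git clone --branch v0.16.0 --depth 1 https://github.com/pytorch/vision torchvision
cd torchvision

# Without BUILD_VERSION, setup.py appends a dev/git suffix to the version
export BUILD_VERSION=0.16.0
python3 setup.py install --user
```

Because this compiles against the torch already installed on the device, it sidesteps the pinned `torch==2.1.0` dependency check that rejects the NVIDIA wheel.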
I have tried pulling the Docker image dustynv/pytorch:2.1-r35.4.1.
However, torchvision is not installed in that container, and when I try to install it, pip uninstalls torch and installs a different torch build that does not detect my GPU: torch.cuda.is_available() returns False.
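One way to stop pip from replacing torch is `--no-deps`, which skips dependency resolution entirely. A minimal sketch, assuming torchvision 0.16.0 inside the container (note: the PyPI aarch64 torchvision wheel is built without CUDA support for its own ops, so building from source remains the more robust route):

```shell
# Install torchvision without letting pip touch the NVIDIA torch build.
# (version is an assumption matching torch 2.1.0; --no-deps skips all
# dependency checks, so verify the result yourself afterwards)
pip3 install --no-deps torchvision==0.16.0

# Confirm torch was not replaced and the GPU is still visible
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```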