Help with PyTorch, torchvision on JetPack 6

Hello,
I’m a newbie on NVIDIA hardware; I should have just gotten an x86 machine. Anyway, I have a Jetson Orin Nano (aarch64), and I managed to download and successfully install PyTorch:
    pip install https://developer.download.nvidia.cn/compute/redist/jp/v60dp/pytorch/torch-2.3.0a0+ebedce2.nv24.02-cp310-cp310-linux_aarch64.whl
But I get an error that Ultralytics requires torchvision, and when I try to pip install torchvision, it uninstalls the NVIDIA version of torch and installs the one it wants instead (the failing sequence is sketched after the version list below). When I run my model with that unwanted torch version, it works fine but only uses the CPU. If I try to run the model (YOLOv5) with the NVIDIA CUDA-enabled torch and without torchvision installed, it obviously fails because torchvision is not there. Here are the versions of what I have:

  • Ubuntu 22.04 on Jetson Orin Nano
  • NVIDIA-SMI 540.2.0
  • (BADCATCONDA1) martin@ubuntu:~$ nvcc --version
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2023 NVIDIA Corporation
    Built on Tue_Aug_15_22:08:11_PDT_2023
    Cuda compilation tools, release 12.2, V12.2.140
    Build cuda_12.2.r12.2/compiler.33191640_0
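
To illustrate the pip behavior described above, the failing sequence looks something like this (a hedged sketch; on aarch64, the PyPI torchvision wheel resolves its torch dependency to a generic CPU-only build, which replaces the NVIDIA wheel):

    # pip resolves torchvision's torch dependency from PyPI, uninstalling the
    # NVIDIA CUDA wheel and installing a CPU-only aarch64 torch in its place
    pip install torchvision

    # skipping dependency resolution keeps the NVIDIA torch, but the prebuilt
    # torchvision binary still doesn't match the custom torch build
    pip install torchvision --no-deps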

FYI, when I go into Python and run the two commands to check the NVIDIA PyTorch installation, it does detect CUDA and returns True. I’ve been at this for a couple of weeks. I tried installing torchvision without dependencies, which of course didn’t work, and given the recursive nature of dependencies, that’s a black hole I don’t want to enter. This is my first post. There has to be an easier way.
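
For reference, the usual way to verify this (the exact commands run aren’t quoted in the post, but this is the standard PyTorch API):

    # prints the installed torch version and whether CUDA is usable
    python3 -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"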

@martin225 Sorry for your troubles. The easier way is to just use the l4t-pytorch container, which already includes PyTorch, torchvision, torchaudio, etc., compiled/installed with CUDA support:
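A minimal invocation might look like the following (the image tag is an assumption; pick the tag matching your JetPack 6 / L4T r36.x release):

    # --runtime nvidia exposes the Jetson GPU inside the container
    sudo docker run -it --rm --runtime nvidia --network host dustynv/l4t-pytorch:r36.2.0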

The second-easiest way is to build torchvision from source, following the Installation section of this thread:
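The recipe from that thread is roughly the following (a sketch; the branch and BUILD_VERSION must match your installed torch, e.g. torchvision 0.17 pairs with torch 2.2):

    # build dependencies
    sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev \
        libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev

    # check out the torchvision release that matches your torch version
    git clone --branch v0.17.1 https://github.com/pytorch/vision torchvision
    cd torchvision
    export BUILD_VERSION=0.17.1  # otherwise the build is tagged as a dev version
    python3 setup.py install --user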

Thanks Dusty. I’ve only done Docker a few times, so I’d have to install it. But once it’s installed and the container is running, do I have to do anything special regarding where I run my model? Right now, I run train.py in /home/martin/yolo5. I think I’ll try your second easier way first.

Ah, I stand corrected. I already had that pip wheel (2.2.0); that’s the one I was trying to use, and it does recognize CUDA. The problem is, of course, torchvision. I guess I’ll roll with the container instead; I’m just not used to working with containers.

Actually, maybe I’ll just try installing torchvision from source, inside a conda env, and see what happens.

You’re a genius!!! Installing from source worked. I installed torchvision 0.17, and now I can see GPU memory being used. I am ridiculously stoked! Big thanks. Now I just don’t understand this “training” thing. lol. Once it’s done, I’m not really sure what the next step is… lol!
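
A quick sanity check that the two builds line up and CUDA is still visible (a minimal sketch):

    python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"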


OK, great that you got it working. And I forgot to mention: even if you don’t want to use the containers, mine have all the wheels they build under /opt, so you can just pull those out of the container and install them outside if you want.
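Something like this should work (a hedged sketch; the image tag and the wheel filenames under /opt vary by release):

    # create a stopped container just to copy files out of the image
    sudo docker create --name wheels dustynv/l4t-pytorch:r36.2.0
    sudo docker cp wheels:/opt ./jetson-wheels
    sudo docker rm wheels

    # install whichever wheels you need on the host
    pip install ./jetson-wheels/torchvision-*.whl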

When working on projects, I just mount a local folder into the container, and any changes you make on the host are reflected inside it. So you’d mount a directory with your code/models in it and run your Python scripts from inside the container; only now they’ll be able to import torch and such.
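
For example (a sketch using the paths from earlier in the thread; the in-container mount point /yolo5 is arbitrary):

    # bind-mount the project directory so host edits show up in the container
    sudo docker run -it --rm --runtime nvidia --network host \
        -v /home/martin/yolo5:/yolo5 \
        dustynv/l4t-pytorch:r36.2.0

    # then, inside the container:
    cd /yolo5
    python3 train.py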
