Problems with torch and torchvision on Jetson Nano

I’ve been trying to install this YoloV5 tracker ([ByteTrack]) on my Nano (JetPack 4.6) and can’t seem to get torch/torchvision to work.

So I started with a simple “pip3 install torch torchvision” and then launched the demo code (yes, the versions are OK).
I then got this error: “AssertionError: Torch not compiled with CUDA enabled”
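
A quick way to confirm that the pip wheel is a CPU-only build (which is what that assertion means) is something like:

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"

With the plain pip wheel this prints False for the CUDA check, which matches the assertion above.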
So then I followed the tutorial “PyTorch for Jetson” to install torch and torchvision.

I first installed “torch-1.9.0-cp36-cp36m-linux_aarch64.whl” and verified that CUDA was available with torch.cuda.is_available(), which returned True.
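
For reference, the install sequence was roughly what the tutorial lists (the exact apt package names may differ slightly, I’m copying from memory), with the wheel downloaded into the current directory:

sudo apt-get install -y python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.9.0-cp36-cp36m-linux_aarch64.whl
python3 -c "import torch; print(torch.cuda.is_available())"   # printed True on my Nano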

I then followed the steps to install torchvision from source (pip3 install torchvision kept giving me “couldn’t load custom C++ ops” when running the tracker demo).

So I downloaded torchvision (v0.10.0 per the compatibility matrix) and ran setup.py, but the compilation failed with “fatal error: ATen/cuda/Atomic.cuh: No such file or directory”.
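
The build itself was roughly the sequence from the tutorial (the apt build dependencies here may not be exact, again from memory):

sudo apt-get install -y libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
git clone --branch v0.10.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.10.0
python3 setup.py install --user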

Hi @marcb19990, on my end the PyTorch 1.9 wheel doesn’t contain that Atomic.cuh file, and torchvision v0.10.0 doesn’t reference it either. Are you sure that you are using torchvision v0.10.0? Did you clone it like this:

git clone --branch v0.10.0 https://github.com/pytorch/vision torchvision
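
If you want to double-check, something along these lines should confirm which tag the checkout is actually on, and whether the sources reference that header at all:

cd torchvision
git describe --tags                                   # should report v0.10.0
grep -rn "Atomic.cuh" . || echo "no references found"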

If you continue having problems, you could use the l4t-pytorch container, which already comes with PyTorch and torchvision pre-installed. Since you are on JetPack 4.6 (L4T R32.6.1), you would use nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.9-py3
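
Starting the container is a single command, something like:

sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.9-py3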

Like I said, I followed “PyTorch for Jetson”, which had me run exactly:

git clone --branch v0.10.0 https://github.com/pytorch/vision torchvision

I then installed it from there and it fails with that error. I am also not using containers since I have no idea how they work, and I don’t feel like spending hours understanding them. I will do a fresh install of PyTorch 1.9 and torchvision 0.10.0 tomorrow and paste the whole error log.
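
A quick sanity check after the reinstall would be something like:

python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"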

There are no references to Atomic.cuh in torchvision v0.10.0 source code, so I’m not sure where that error is coming from.

One of the points of containers is to save time with installation issues and such, so in the long run it may be to your benefit to give them a try if you can. This page explains how to start the container and mount in additional files if you want: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch
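
For example, mounting your tracker code into the container would look something like this (the host path is just a placeholder for wherever your ByteTrack clone lives):

sudo docker run -it --rm --runtime nvidia --network host \
  -v /path/to/ByteTrack:/workspace/ByteTrack \
  nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.9-py3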
