AssertionError: Torch not compiled with CUDA enabled

Hello dear all,

I want to port a deep learning project, originally developed with Anaconda on a desktop GPU, to my Jetson Nano 4GB. The project uses VGG19, and its benchmarks state that inference with that network is feasible on the Nano. I can run the project on the CPU without any error and get good results, too. But my problem is with one line of the project, which reads

     self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") if device == None else device

I changed it to

     self.device = torch.device("cuda")

When I don't change it, the project either runs on the CPU (which is not what I want) or raises AssertionError: Torch not compiled with CUDA enabled, even though I installed the CUDA-enabled version.
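For reference, hard-coding torch.device("cuda") only moves the failure: on a CPU-only build the AssertionError appears as soon as a tensor or model is sent to the GPU. A minimal sketch of a stricter helper (my own, not the project's code) that keeps the original fallback logic but fails loudly instead of silently dropping to the CPU:

    import torch

    def pick_device(device=None):
        # Respect an explicitly requested device, as in the project's original line
        if device is not None:
            return torch.device(device)
        # Fail loudly instead of silently falling back to the CPU
        if not torch.cuda.is_available():
            raise RuntimeError("CUDA is not available to this PyTorch build - "
                               "check that a GPU-enabled wheel is installed")
        return torch.device("cuda:0")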

My system info is:
JetPack: 4.6 (L4T 32.6.1)
CUDA: 10.2
Python: 3.7.1
PyTorch: 1.7.1
PyYAML: 5.4.1
Pillow: 8.2.0

Note: I ran some sample CUDA deep learning projects on the same board and they work fine on the GPU, so CUDA is of course installed with the JetPack image. But since the project I want to port was built with Anaconda, and Anaconda generates its own CMake files, could that be why the project cannot reach CUDA on my Nano?


ENV PATH="/usr/local/cuda/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
RUN echo "$PATH" && echo "$LD_LIBRARY_PATH"

I added this to my Dockerfile.pytorch as dusty_nv suggested in the jetson-containers/Dockerfile.pytorch at 6333e058495e689fcceebb6bc04eb72b5ab43893 · dusty-nv/jetson-containers · GitHub topic, but I'm still not able to run the project on the GPU.
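A quick way to confirm inside the container that those ENV lines actually took effect (a small sketch of my own, not from the project):

    import os
    import shutil

    # Confirm the CUDA paths set in the Dockerfile are visible to the process
    print("PATH:            ", os.environ.get("PATH"))
    print("LD_LIBRARY_PATH: ", os.environ.get("LD_LIBRARY_PATH"))
    print("nvcc found at:   ", shutil.which("nvcc"))   # expect /usr/local/cuda/bin/nvcc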

Hi @senasahin, how did you install PyTorch? If it was through Anaconda, then my guess is it installed a CPU-only wheel. Instead please install one of the PyTorch wheels from this topic that are built with GPU acceleration:

There are also the l4t-pytorch and l4t-ml containers from NGC if those are easier for you to use.

I found that Anaconda isn't supported on the Nano's aarch64 platform, so I installed PyTorch on the Nano from the PyTorch for Jetson wheels instead. The project is meant to run in an Anaconda environment and doesn't include any CMake or Makefile files, so I installed the required packages myself from wheels. But Torch still doesn't run GPU-enabled, sir.

Hi,

Could you test the following command on your environment first?

$ python3
>>> import torch
>>> torch.cuda.is_available()
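A slightly fuller version of the same check can also tell a CPU-only wheel apart from a runtime problem (just a sketch):

    import torch

    print(torch.__version__)
    print(torch.version.cuda)        # None means the wheel was built without CUDA
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))   # should report the Nano's GPU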

Also, do you run the sample with an environment management system like conda?
Thanks.

Hi, it returns False sir.

And yes, on my computer I run the project with conda, but on the Nano I'm not able to install conda. So I tried running the project in a Python environment where I installed the project requirements manually.


Hi @senasahin, I recommend first uninstalling PyTorch and then reinstalling with one of the wheels that I linked to:

pip3 uninstall torch
sudo pip3 uninstall torch
python3 -c 'import torch'      # this should not be successful

Then install one of the wheels that was built with GPU support from this topic: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048
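Once a GPU-enabled wheel is in place, a short smoke test (a sketch) should print True and succeed in allocating a tensor on the GPU instead of raising the AssertionError:

    import torch

    print(torch.cuda.is_available())        # should now print True
    x = torch.ones(4, device="cuda")        # raises the AssertionError on a CPU-only build
    print((x * 2).cpu())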

Hi and thanks a lot @dusty_nv and @AastaLLL

For those who come here later, my problem was solved after following these steps, as the moderators suggested:

  1. Added CUDA's path to .bashrc (not needed if you downloaded JetPack from NGC)

  2. Built the Docker container from a fresh build of GitHub - dusty-nv/jetson-containers: Machine Learning Containers for NVIDIA Jetson and JetPack-L4T

  3. Re-followed the steps of GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson. to make sure I got the cmake configuration right

  4. Removed the default PyTorch 1.6 (as part of the previous steps) and, after making sure that version was uninstalled, reinstalled the PyTorch 1.7 wheel (since I needed that version) from PyTorch for Jetson

Maybe only the last step seems like what solved the issue, but I had actually tried uninstalling and reinstalling the versions before steps 1-3 and it didn't change the error I was getting. So make sure the containers are built correctly on your system.
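For anyone verifying their own setup after these steps, a quick end-to-end check (a sketch; it assumes a matching torchvision build is installed alongside the GPU-enabled PyTorch wheel):

    import torch
    import torchvision.models as models

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("Running on:", device)             # should say cuda:0 now

    # One VGG19 forward pass on the GPU, the same network the project uses
    model = models.vgg19(pretrained=False).to(device).eval()
    dummy = torch.randn(1, 3, 224, 224, device=device)
    with torch.no_grad():
        out = model(dummy)
    print(out.shape)                          # torch.Size([1, 1000])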

Kind regards again to @dusty_nv @AastaLLL
