My Jetson Nano board returns 'False' for torch.cuda.is_available() locally

I’m trying to check whether my Jetson Nano board runs CUDA properly, but it doesn’t.

>>> import torch
>>> torch.cuda.is_available()
False

But inside a Docker container, the result is True.
I don’t know why.
Environment info is written below.

  • NVIDIA Jetson Nano (Developer Kit Version)
    • Jetpack 4.5.1 [L4T 32.5.1]
    • NV Power Mode: MAXN - Type: 0
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 10.2.89
    • cuDNN: 8.0.0.180
    • TensorRT: 7.1.3.0
    • Visionworks: 1.6.0.501
    • OpenCV: 4.1.1 compiled CUDA: NO
    • VPI: ii libnvvpi1 1.0.15 arm64 NVIDIA Vision Programming Interface library
    • Vulkan: 1.2.70

torch 1.9.0

Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux

Hi @kth397272, did you install the PyTorch wheel from this topic?
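
One quick way to check whether the installed wheel was built with CUDA support is to print torch.version.cuda; a CUDA-enabled build should report a version string rather than None. A minimal check:

# Print the torch version and the CUDA version it was built against (None means a CPU-only build)
python3 -c "import torch; print(torch.__version__, torch.version.cuda)"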

Can you build & run the deviceQuery sample from /usr/local/cuda?
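
For reference, building and running deviceQuery usually looks something like this, assuming JetPack installed the CUDA samples in the default /usr/local/cuda/samples location:

# Copy the samples somewhere writable, then build and run deviceQuery
cp -r /usr/local/cuda/samples ~/cuda_samples
cd ~/cuda_samples/1_Utilities/deviceQuery
make
./deviceQuery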

Hi, the thing is I’m having trouble installing pycuda as well.
I’m trying to install it with pip3 install pycuda --user, but I’m getting an extremely long red error log.


This screenshot shows only part of the error output.

FYI, if you run a command on the command line, you can easily capture a log without resorting to screenshots. Here is a contrived example with ls:
ls -l /dev/* 2>&1 | tee log_ls.txt

The “2>&1” redirects any error text to standard output, and the “| tee log_ls.txt” prints that output to the terminal as you would expect anyway, while also writing a copy to “log_ls.txt” (use whatever log name you want).
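
Applied to the pycuda install above, that would look something like this (the log file name is arbitrary):

# Capture the full pycuda build output, errors included, into a log file
pip3 install pycuda --user 2>&1 | tee log_pycuda.txt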


You have to add your CUDA toolkit paths to your environment before attempting to build PyCUDA - you can check the paths I set while building it in the container:

https://github.com/dusty-nv/jetson-containers/blob/6333e058495e689fcceebb6bc04eb72b5ab43893/Dockerfile.pytorch#L111
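
In a regular shell on the Jetson (outside the container), that roughly translates to something like the following before the install; the paths assume the default JetPack CUDA install under /usr/local/cuda:

# Make nvcc and the CUDA libraries visible to the pycuda build
export PATH=/usr/local/cuda/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}
pip3 install pycuda --user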

Thanks!! I successfully installed pycuda and torch.cuda.is_available() returns True as well. :D
