I would like to build a Docker image on top of l4t-base:r34.1, but it seems my installation process (building CuPy) cannot find CUDA and fails with an error like this: fatal error: cuda.h: No such file or directory
The nvcc command is also not found:
nvcc --version
bash: nvcc: command not found
I also tried the l4t-pytorch:r34.1.0-pth1.12-py3 image, which can find CUDA:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_11_23:44:05_PST_2021
Cuda compilation tools, release 11.4, V11.4.166
Build cuda_11.4.r11.4/compiler.30645359_0
Previously, I used l4t-base:r32.7.1, which also worked fine:
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_28_22:34:44_PST_2021
Cuda compilation tools, release 10.2, V10.2.300
Build cuda_10.2_r440.TC440_70.29663091_0
Do you have any advice on how to make the l4t-base:r34.1 container find CUDA?
Starting with the r34.1 release (JetPack 5.0 Developer Preview), the l4t-base container no longer mounts CUDA, cuDNN, and TensorRT from the host file system. l4t-base is meant to be used as the base container for containerizing applications for Jetson. Users can apt install JetPack packages and other software of their choice to extend the l4t-base Dockerfile (see above) while building application containers. All JetPack components are hosted on the Debian package management server.
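For example, a minimal Dockerfile sketch along those lines could look like the following. It assumes the NVIDIA L4T apt repository is already configured inside l4t-base:r34.1 (add the repository first if it is not), and that cuda-toolkit-11-4 is the CUDA toolkit package name for JetPack 5.0; verify both on your setup before relying on them.

# Sketch: extend l4t-base:r34.1 by apt-installing the CUDA toolkit in the image.
# Assumes the NVIDIA L4T/JetPack apt repository is configured in the base image,
# and that cuda-toolkit-11-4 is the correct package name (check with apt search cuda).
FROM nvcr.io/nvidia/l4t-base:r34.1

RUN apt-get update && \
    apt-get install -y --no-install-recommends cuda-toolkit-11-4 && \
    rm -rf /var/lib/apt/lists/*

# Make nvcc and the CUDA headers/libraries visible to builds such as CuPy's.
ENV PATH=/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH}

With an image built from a Dockerfile like this, nvcc --version and the cuda.h header should be available inside the container for the CuPy build.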
I encountered another problem where CUDA does not work.
My system is a Jetson Xavier NX running L4T 32.5.2.
The Docker image nvcr.io/nvidia/l4t-pytorch:r34.1.0-pth1.12-py3 is great for testing JetPack 5.0 with Python 3.8, PyTorch 1.12, and a CUDA environment, but when I use this image and run python3, CUDA problems arise as shown below:
(1) PyTorch cannot use CUDA:
>>> import torch
>>> torch.cuda.is_available()
False
(2) PyCUDA cannot use CUDA:
>>> import pycuda.autoinit
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/pycuda/autoinit.py", line 1, in <module>
    import pycuda.driver as cuda
  File "/usr/local/lib/python3.8/dist-packages/pycuda/driver.py", line 65, in <module>
    from pycuda._driver import *  # noqa
ImportError: /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1: file too short
Hi @kogran, the issue is that you are using a Docker container image built for L4T R34.1 (JetPack 5.0 DP) on L4T R32.5.2. Instead, you should run one of these l4t-pytorch images built for L4T R32.5:
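A quick way to double-check which L4T release the host is actually running, and therefore which tag to pick from the list that follows, is to read the standard Jetson release file:

# Print the host's L4T release string (on this system it should report R32 (release), REVISION: 5.2)
cat /etc/nv_tegra_release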