"libcudnn.so.8: file too short" running Pytorch in docker

Hello,
I’m trying to run a Docker application on my Jetson Xavier NX.
I pulled and ran the image from NVIDIA L4T ML | NVIDIA NGC
(the version for JetPack 4.6 (L4T R32.6.1)), but when I import torch in python3 I get:

OSError: /usr/lib/aarch64-linux-gnu/libcudnn.so.8: file too short

Outside of Docker, PyTorch 1.9 works fine.
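For context, this is roughly how I checked it on the host (a minimal sketch, not my exact session):

# quick check outside Docker; assumes the JetPack build of PyTorch 1.9 for python3
python3 -c 'import torch; print(torch.__version__, torch.cuda.is_available())'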

This is the output of nvcc --version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_28_22:34:44_PST_2021
Cuda compilation tools, release 10.2, V10.2.300
Build cuda_10.2_r440.TC440_70.29663091_0

Hi @andrea.macri92, what version of JetPack-L4T are you running (you can check this with cat /etc/nv_tegra_release) and what is the docker run command you are using to start the container?

Are you starting it with --runtime nvidia?

I’m using L4T R32.6.1

# R32 (release), REVISION: 6.1, GCID: 27863751, BOARD: t186ref, EABI: aarch64, DATE: Mon Jul 26 19:36:31 UTC 2021

And yes, I’m using the NVIDIA runtime; this is the command:

nvidia@tegra-ubuntu:~$ sudo nvidia-docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.6.1-py3

Are you running the Xavier NX devkit SD card image? Or did you flash with SDK Manager and allow SDK Manager to complete the post-install steps? Both ways should have installed the needed Docker components.

Here is another test you can try to confirm you have a working Docker / CUDA installation: try starting l4t-base container (with --runtime nvidia) and run python3 -c 'import tensorrt'
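For example, something along these lines (a sketch; I’m assuming the r32.6.1 tag of l4t-base here):

# sketch: start l4t-base with the NVIDIA runtime and try importing TensorRT
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.6.1 \
    python3 -c 'import tensorrt; print(tensorrt.__version__)'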

I flashed with SDK Manager and installed all components. Actually, at the beginning I didn’t encounter this problem; it appeared later.

Anyway I’m able to import tensorrt.

Hmm…had you recently done an apt upgrade by chance?

If so, can you try the steps from this post?

Also, here is what my libcudnn looks like on L4T R32.6.1:

# outside of container
ls -ll /usr/lib/aarch64-linux-gnu/libcudnn.so.8*                                          
lrwxrwxrwx 1 root root     17 May 24  2021 /usr/lib/aarch64-linux-gnu/libcudnn.so.8 -> libcudnn.so.8.2.1
-rw-r--r-- 1 root root 162280 May 24  2021 /usr/lib/aarch64-linux-gnu/libcudnn.so.8.2.1

# inside l4t-ml container
ls -ll /usr/lib/aarch64-linux-gnu/libcudnn.so.8*
lrwxrwxrwx 1 root root     17 Dec  2 17:08 /usr/lib/aarch64-linux-gnu/libcudnn.so.8 -> libcudnn.so.8.2.1
-rw-r--r-- 1 root root 162280 May 24  2021 /usr/lib/aarch64-linux-gnu/libcudnn.so.8.2.1

How does it look for you?

Hi,

Do you use l4t-ml:r32.6.1-py3 in a JetPack 4.6 environment?
We just confirmed that the PyTorch library can be loaded without error:

$ sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.6.1-py3
...
root@nvidia-desktop:/# python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>>

Thanks.

Yes, it was probably caused by that; I also had a bug with the NVIDIA runtime, which was then fixed by the latest updates. Anyway, the solution you linked doesn’t solve my problem.

Here is my libcudnn:

# outside of container
ls -ll /usr/lib/aarch64-linux-gnu/libcudnn.so.8* 
lrwxrwxrwx 1 root root     17 May 24  2021 /usr/lib/aarch64-linux-gnu/libcudnn.so.8 -> libcudnn.so.8.2.1
-rw-r--r-- 1 root root 162280 May 24  2021 /usr/lib/aarch64-linux-gnu/libcudnn.so.8.2.1

# inside of container
ls -ll /usr/lib/aarch64-linux-gnu/libcudnn.so.8*                                          
lrwxrwxrwx 1 root root 17 Aug  5 16:44 /usr/lib/aarch64-linux-gnu/libcudnn.so.8 -> libcudnn.so.8.2.1
-rw-r--r-- 1 root root  0 Jul 27 22:24 /usr/lib/aarch64-linux-gnu/libcudnn.so.8.2.1

I am noticing now that inside the container libcudnn has a size of 0, which explains the error. How is that possible?
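In case it helps, this is roughly how I looked at it inside the container (a rough check; I’m assuming the NVIDIA runtime bind-mounts these libraries from the host, so on a healthy setup they should show up as mounts):

# inside the container: check whether the cuDNN library is actually mounted from the host
mount | grep libcudnn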

Yes I do, thanks

Can you confirm the contents of your /etc/nvidia-container-runtime/host-files-for-container.d/cudnn.csv file?

$ cat /etc/nvidia-container-runtime/host-files-for-container.d/cudnn.csv
lib,  /usr/lib/aarch64-linux-gnu/libcudnn.so.8.2.1
sym,  /usr/lib/aarch64-linux-gnu/libcudnn.so.8
sym,  /usr/lib/aarch64-linux-gnu/libcudnn.so
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.2.1
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.2.1
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.2.1
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.2.1
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.2.1
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.2.1
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8
sym,  /usr/include/cudnn_adv_infer.h
sym,  /usr/include/cudnn_adv_train.h
sym,  /usr/include/cudnn_backend.h
sym,  /usr/include/cudnn_cnn_infer.h
sym,  /usr/include/cudnn_cnn_train.h
sym,  /usr/include/cudnn.h
sym,  /usr/include/cudnn_ops_infer.h
sym,  /usr/include/cudnn_ops_train.h
sym,  /usr/include/cudnn_version.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_adv_infer_v8.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_adv_train_v8.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_backend_v8.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_cnn_infer_v8.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_cnn_train_v8.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_ops_infer_v8.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_ops_train_v8.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_v8.h
lib,  /usr/include/aarch64-linux-gnu/cudnn_version_v8.h
sym,  /etc/alternatives/libcudnn
sym,  /etc/alternatives/libcudnn_adv_infer_so
sym,  /etc/alternatives/libcudnn_adv_train_so
sym,  /etc/alternatives/libcudnn_cnn_infer_so
sym,  /etc/alternatives/libcudnn_cnn_train_so
sym,  /etc/alternatives/libcudnn_ops_infer_so
sym,  /etc/alternatives/libcudnn_ops_train_so
sym,  /etc/alternatives/libcudnn_so
sym,  /etc/alternatives/cudnn_adv_infer_h
sym,  /etc/alternatives/cudnn_backend_h
sym,  /etc/alternatives/cudnn_cnn_train_h
sym,  /etc/alternatives/cudnn_ops_train_h
sym,  /etc/alternatives/cudnn_adv_train_h
sym,  /etc/alternatives/cudnn_cnn_infer_h
sym,  /etc/alternatives/cudnn_ops_infer_h
sym,  /etc/alternatives/cudnn_version_h
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer_static_v8.a
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train_static_v8.a
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_static_v8.a
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer_static.a
lib,  /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train_static.a
sym,  /usr/lib/aarch64-linux-gnu/libcudnn_static.a
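As a quick sanity check (a rough sketch, not something I normally need to run), every path listed in that CSV should exist and be non-empty on the host; something like this would list them all:

# rough check: list every host file referenced by the cuDNN CSV
awk -F',[ ]*' '{print $2}' /etc/nvidia-container-runtime/host-files-for-container.d/cudnn.csv | xargs ls -l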

If this issue persists, you may want to back up your work and re-flash the device or SD card rather than spend much time debugging it further. Or you can try re-installing these packages with apt (see the example command after the list):

libnvidia-container-tools - NVIDIA container runtime library (command-line tools)
libnvidia-container0 - NVIDIA container runtime library
nvidia-container-csv-cuda - Jetpack CUDA CSV file
nvidia-container-csv-cudnn - Jetpack CUDNN CSV file
nvidia-container-csv-tensorrt - Jetpack TensorRT CSV file
nvidia-container-csv-visionworks - Jetpack VisionWorks CSV file
nvidia-container-runtime - NVIDIA container runtime
nvidia-container-toolkit - NVIDIA container runtime hook
nvidia-docker2 - nvidia-docker CLI wrapper
nvidia-container - NVIDIA Container Meta Package
nvidia-container-csv-opencv - Jetpack OpenCV CSV file
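If you go the reinstall route, something like this should do it (a sketch; adjust the list to whatever apt reports as installed on your system):

# re-install the Jetson container packages (names taken from the list above)
sudo apt-get install --reinstall nvidia-docker2 nvidia-container-toolkit nvidia-container-runtime \
    nvidia-container-csv-cuda nvidia-container-csv-cudnn nvidia-container-csv-tensorrt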

I can confirm the problem was solved after re-installing the packages, as suggested by @dusty_nv.
Many thanks for the help.
