l4t-ml:r32.4.3-py3 import torch error

I’m using the docker image nvcr.io/nvidia/l4t-ml:r32.4.3-py3 without modification, and import torch gives me the following error:

python3

Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torch/__init__.py", line 188, in <module>
    _load_global_deps()
  File "/usr/local/lib/python3.6/dist-packages/torch/__init__.py", line 141, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcurand.so.10: cannot open shared object file: No such file or directory

Hmm, I just tested this again on a Nano with JetPack 4.4 (L4T R32.4.3) and did not see this issue.

Are you sure you are on JetPack 4.4 production release (L4T R32.4.3) and not JetPack 4.4 Developer Preview (L4T R32.4.2)?

If you are on JetPack 4.4 DP, pull nvcr.io/ea-linux4tegra/l4t-ml:r32.4.2-py3 instead.

If you are indeed on JetPack 4.4 / L4T R32.4.3, it seems something is off with your CUDA toolkit or Docker service, so you might want to re-install.
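
For example, to confirm which L4T release the host is actually running, you can check the release file that JetPack installs (the REVISION field tells the two apart):

cat /etc/nv_tegra_release
# "# R32 (release), REVISION: 4.3, ..." corresponds to JetPack 4.4 production
# "# R32 (release), REVISION: 4.2, ..." corresponds to JetPack 4.4 Developer Preview

You can also verify that libcurand is present on the host, assuming the CUDA toolkit was installed to the default location:

ls /usr/local/cuda/lib64/libcurand*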

I had not added the --runtime nvidia flag; it works now with it:
docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.4.3-py3
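
To double-check from inside the container, a one-liner like this should print True once the CUDA libraries are mounted correctly:

python3 -c "import torch; print(torch.cuda.is_available())"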

Aha ok, that makes sense - without --runtime nvidia, the CUDA libraries wouldn’t get mounted into the container from your host device. If you’d like to avoid having to set this flag every time you launch a container, you could instead set the default-runtime to nvidia in your Docker daemon configuration file:

https://github.com/dusty-nv/jetson-containers#docker-default-runtime
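
For reference, after following those instructions your /etc/docker/daemon.json would look roughly like this (then restart the Docker daemon, e.g. sudo systemctl restart docker, or reboot):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}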