Importing PyTorch fails in L4T R32.3.1 Docker image on Jetson Nano after successful install

I’m trying to install PyTorch 1.2.0 and Torchvision 0.4.0 from the l4t-base:r32.3.1 image on my Jetson Nano. Here are the relevant parts of my Dockerfile:

    numpy \
    pandas \
    cloudpickle \
    Cython \
    boto3 \
    && \
    zlib1g \
    zlib1g-dev \
    libjpeg-dev \
    && \
    wget -O torch-1.2.0a0+8554416-cp36-cp36m-linux_aarch64.whl && \
    pip3 install torch-1.2.0a0+8554416-cp36-cp36m-linux_aarch64.whl && \
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.0/targets/aarch64-linux/lib/ && \
    # Just to see files in folder
    ls /usr/local/cuda-10.0/targets/aarch64-linux/lib/ && \
    git clone --branch v0.4.0 https://github.com/pytorch/vision torchvision && \
    cd torchvision && \
    sudo python3 setup.py install && \
    cd ../ && \

I found the steps above in this post.

The build fails when the Torchvision build script attempts to import torch:

    Successfully installed torch-1.2.0a0+8554416
    # 4 lines below are from the ls command in the above Dockerfile
    Cloning into 'torchvision'...
    Traceback (most recent call last):
      File "", line 13, in <module>
        import torch
      File "/usr/local/lib/python3.6/dist-packages/torch/", line 81, in <module>
        from torch._C import *
    ImportError: cannot open shared object file: No such file or directory

Doing some searching, I found a user with a similar issue on these forums, but that user never explained how they solved it, and the file they were missing was a different one. As you can see in my Dockerfile, I followed some of the instructions from the Nvidia employee in that thread: I tried exporting the path, and I also listed the contents of the folder, which confirmed the file is missing.

Any ideas why this file might be missing? FWIW I was able to get this to install fine directly onto my Jetson Nano without Docker.
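One thing I noticed while debugging: `export` inside a RUN step only lasts for that single step, so even if the library were present, later layers would not see the path. If I understand Dockerfiles correctly, the persistent form would be something like this (sketch, same path as in my Dockerfile):

    # ENV persists for all later build steps and at runtime,
    # unlike export inside a single RUN step
    ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-10.0/targets/aarch64-linux/lib/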

Many thanks.


Upon further searching and reading, it looks like this might be a CUDA version issue? Am I supposed to bring CUDA in from my host system to the Docker container before I can install Torchvision?
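For what it's worth, the failing line is `from torch._C import *`, and `torch._C` is a compiled extension that links against the CUDA runtime, so a shared library the loader cannot resolve would produce exactly this ImportError. A minimal sketch I used to probe whether the dynamic loader can find a given library (`can_load` is a helper name I made up, and the CUDA library name in the comment is an assumption):

```python
import ctypes

def can_load(libname):
    """Return True if the dynamic loader can resolve libname."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# e.g. probe the CUDA runtime the Jetson wheel presumably needs:
# can_load("libcudart.so.10.0")
```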


Please note that there is a limitation in our L4T Docker image:

/usr/local/cuda is readonly

One of the limitations of the beta is that we are mounting the cuda directory from the host. This was done with size in mind: a development CUDA container weighs 3GB, and on Nano it is not always possible to afford such a huge cost. We are currently working towards creating smaller CUDA containers.

So the CUDA libraries are mounted only when the container is launched.
If you want to link against CUDA during the build stage, please copy the folder into the image manually.
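A sketch of what that could look like, assuming the CUDA libraries have first been staged from the host into the build context (the `cuda-libs` directory name is an illustration, not a tested recipe):

    # On the host, stage the mounted CUDA libraries into the build context:
    #   cp -r /usr/local/cuda-10.0/targets/aarch64-linux/lib cuda-libs
    # Then in the Dockerfile, before building torchvision:
    COPY cuda-libs/ /usr/local/cuda-10.0/targets/aarch64-linux/lib/
    ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-10.0/targets/aarch64-linux/lib/

With the libraries baked into a layer, `import torch` can resolve them at build time rather than only after the runtime mount.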