Hello, my team and I are trying to run a neural network on the Xavier for image segmentation. We decided to use a Docker container after having a lot of issues trying to install various versions of Python packages, including PyTorch and torchvision, in conda on the ARM architecture. We came across this resource: https://ngc.nvidia.com/catalog/containers/nvidia:l4t-ml
and successfully ran the l4t-ml:r32.4.3-p4 container.
We wanted to change the versions of PyTorch and torchvision and add other Python packages such as matplotlib, so we went to https://github.com/dusty-nv/jetson-containers to look at the original Dockerfiles and the scripts that build them. We altered the files to install additional packages like matplotlib and changed the PyTorch and torchvision versions as well. After running docker_build.sh to build the PyTorch container, we kept getting errors such as:
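For context, the kind of edit we made was roughly the following (a simplified sketch, not our exact Dockerfile; the base image tag and package are illustrative placeholders):

```dockerfile
# Sketch only: base tag and package are examples, not our exact versions
FROM nvcr.io/nvidia/l4t-ml:r32.4.3-py3
RUN pip3 install matplotlib
```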
OSError: libnvToolsExt.so.1: cannot open shared object file: No such file or directory
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory
My guess is that we need CUDA 10.0, presumably because the PyTorch build we installed was linked against it. But the l4t-ml container specifically asks for JetPack 4.4, which installs CUDA 10.2 by default, and the 10.2 libraries will not satisfy a 10.0 link dependency.
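This is how we have been checking what the container actually provides versus what PyTorch expects (a diagnostic sketch, meant to be run inside the container; each step falls back to a message if the component is missing):

```shell
# Show the CUDA toolkit version installed in the container (JetPack 4.4 ships 10.2)
cat /usr/local/cuda/version.txt 2>/dev/null || echo "no CUDA toolkit found"

# List which libcublas versions the dynamic loader can actually see
ldconfig -p | grep -i libcublas || echo "no libcublas on the loader path"

# Ask PyTorch which CUDA version it was compiled against
python3 -c "import torch; print(torch.version.cuda)" 2>/dev/null || echo "torch not importable"
```

If the last two lines disagree (e.g. torch reports 10.0 but only libcublas.so.10.2 is on the loader path), that would explain the ImportError above.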
Any help would be appreciated, whether it is a workaround for building the Dockerfile, or a way to install arbitrary versions of packages in our conda environment on the ARM architecture (pip and conda sometimes simply do not have the package versions available for installation). The latter would be the most useful, as learning the nuances of Docker has already used up a lot of time that we do not have.