Cross-compilation issues with nvidia-docker and the l4t-base image

Hey All,

I’m running into what seems to be a fairly niche issue:

I’m trying to set up an environment on an x86 machine for building Docker images that will run on the Xavier.

For context, I’m running Docker 19.03 on my Ubuntu 18.04 install with CUDA 10.1; everything is installed normally.

I’ve got the qemu aarch64 interpreter set up via docker with:
sudo docker run --rm --privileged hypriot/qemu-register

and I know it works because I can run the l4t-base image with:
sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-base:r32.3.1
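
As a quick sanity check (just an illustration, not part of my original setup notes), running uname -m in that same image confirms the emulation is in effect:
sudo docker run --rm nvcr.io/nvidia/l4t-base:r32.3.1 uname -m
# prints "aarch64" when the qemu binfmt handler is registered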

However, when I try to build OpenCV inside this l4t-base container, the build fails partway through with:
/bin/sh: 1: cicc: not found
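
For reference, the build is roughly the standard CUDA-enabled OpenCV configure/build (flags trimmed down here, so treat them as illustrative), and the error shows up once make reaches the CUDA sources:
cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="7.2" -D CMAKE_BUILD_TYPE=Release ..
make -j$(nproc)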

From a little research, it looks like the CUDA compiler in the container can’t find nvvm, and I confirmed this by checking /usr/local/cuda: the nvvm directory doesn’t exist there. So I assume the container is supposed to find nvvm on my host OS, where it does exist, but the container can’t see it. I’ve thought of simply mounting the directory into the container as a volume, but I don’t think that would work because of the architecture difference…or maybe the interpreter would handle that?
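
What I had in mind was something along these lines (purely illustrative; paths match my host’s CUDA 10.1 install):
sudo docker run -it --rm --runtime nvidia \
    -v /usr/local/cuda-10.1/nvvm:/usr/local/cuda/nvvm \
    nvcr.io/nvidia/l4t-base:r32.3.1
# but the host's nvvm/bin/cicc is an x86_64 binary, not aarch64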

Any advice would be appreciated. Thanks.

Hi @weisman_zachary, /usr/local/cuda is typically mounted from the Jetson host into the containers using the CSV mount plugins:

https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-Container-Runtime-on-Jetson#mount-plugins

However, since you are building on x86, this isn’t done. Instead, you could COPY the nvvm dir from your Jetson into the container you are building, or you could install the CUDA packages from your Jetson (found under /var/cuda-repo-*-local-*/) into your container. You are correct that you would need to use the version from the Jetson as opposed to the x86 version.
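
A rough sketch of the COPY approach, assuming you have pulled /usr/local/cuda-10.0/nvvm off your Jetson into the Docker build context (paths and versions depend on your JetPack release, so treat them as placeholders):
FROM nvcr.io/nvidia/l4t-base:r32.3.1
# nvvm directory copied from the Jetson's CUDA install; nvcc looks for cicc under <cuda root>/nvvm/bin
COPY nvvm/ /usr/local/cuda/nvvm/

The package route works the same way: COPY the .deb files from /var/cuda-repo-*-local-*/ on the Jetson into the image and install them with dpkg -i.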

Okay, so the only way to do this is to copy a pre-built aarch64 CUDA installation into the container?

Other than that, there shouldn’t be any issue running the aarch64 CUDA code within the container on x86, as long as I have the interpreter running and nvidia-container-runtime set as the runtime in my Docker daemon?

The aarch64 CUDA code isn’t actually going to run, because the x86 system doesn’t have Jetson’s integrated GPU. The L4T drivers are different from the discrete GPU PCIe drivers, so aarch64 CUDA isn’t going to be able to use the discrete GPU on your x86 host. It would need to be run on the Jetson itself. However, you should be able to compile CUDA code with nvcc, because that doesn’t actually use the GPU.
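
For example (the file name is just a placeholder), a compile-only invocation like this should succeed inside the emulated container even with no GPU available:
nvcc -arch=sm_72 -c my_kernel.cu -o my_kernel.o
# sm_72 is Xavier's compute capability; the resulting aarch64 object will only execute on the Jetson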

You could also try this approach: https://github.com/NVIDIA/nvidia-docker (Build and run Docker containers leveraging NVIDIA GPUs)