cicc: not found when building jetson-inference from Dockerfile

Hi friends,

I am attempting to build jetson-inference from a Dockerfile so that I can run code out of a docker container.

I have created the following Dockerfile that essentially follows the build steps from the Hello AI World example.

FROM nvcr.io/nvidia/l4t-ml:r32.4.2-py3
RUN apt-get update
RUN apt-get install -y git cmake libpython3-dev python3-numpy
RUN git clone --recursive https://github.com/dusty-nv/jetson-inference
# I found this line was required
RUN ln -s /usr/lib/aarch64-linux-gnu/libnvparsers.so.7.1.0 /usr/lib/aarch64-linux-gnu/libnvcaffe_parser.so
RUN echo $PATH && cd jetson-inference && mkdir build && cd build && cmake ../
# This build step fails. If this step is removed, and I ssh into the container created from the following steps, this step succeeds
RUN cd jetson-inference/build && make -d -j1 && make install && ldconfig

When I run the Dockerfile:

sudo docker build -t jetson-inference-build .

The build fails with the following error:

sh: 1: cicc: not found

The really strange thing is that if I omit the last step and start an interactive container from the image with the following commands, I am able to build successfully:

sudo docker run --gpus all --privileged -it --rm --runtime nvidia --network host jetson-inference-build
cd jetson-inference/build
make -d -j1

From digging around, it seems like /usr/local/cuda/nvvm/bin/cicc is available in the running container but not when I try building from the Dockerfile.
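To confirm that difference, a small check like the one below can be dropped into a RUN step (or run inside the container) to report whether cicc is present. The helper function and its name are just for illustration; the path is the standard CUDA location on L4T.

```shell
#!/bin/sh
# Report whether a given binary exists and is executable.
check_binary() {
    if [ -x "$1" ]; then
        echo present
    else
        echo missing
    fi
}

# During `docker build` without the nvidia runtime this prints "missing";
# inside a container started with `--runtime nvidia` it prints "present".
check_binary /usr/local/cuda/nvvm/bin/cicc
```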

I’m new to Docker, so I assume I’m missing some important nuance.

Many thanks for your help!

Hi @chrisk, can you try setting your default Docker runtime to nvidia?

https://github.com/dusty-nv/jetson-containers#docker-default-runtime

This makes the nvidia runtime be used during docker build operations as well, so those CUDA binaries (including cicc) should be mounted at build time.

That worked. Thanks!

Unfortunately the proposed solution of setting --runtime nvidia doesn’t work (can’t work) when cross compiling. :(

Is there another solution?