I am trying to build the YOLOv4 Custom BBox Parser library (more info here) in a multi-stage Docker container.
However, I am unable to access the NVIDIA libraries provided by
runtime: nvidia during the Docker image build.
My Dockerfile is as follows:
FROM nvcr.io/nvidia/deepstream-l4t:6.2-triton as libnvds_infercustomparser_builder

# Install Custom BBox Parser #

# Install Git LFS
WORKDIR /workspace
RUN curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash
RUN apt-get install git-lfs
RUN git lfs install

# Install Custom BBox Parser
WORKDIR /opt/nvidia/deepstream/deepstream/sources/
RUN git clone -b release/tao3.0_ds6.2ga https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps
WORKDIR /opt/nvidia/deepstream/deepstream/sources/deepstream_tao_apps
ENV CUDA_VER=11.4
RUN make
My docker-compose file is as follows:
services:
  dev:
    runtime: nvidia
    container_name: dev
    build:
      context: .
    entrypoint:
      - /bin/sh
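For reference, I build the image along these lines (the exact invocation should not matter, since any build of the dev service hits the same step):

# Build the image for the dev service. Note that "runtime: nvidia" in the
# compose file only applies to started containers, not to this build step.
docker-compose build dev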
When the build reaches the RUN make step, I get the following output:
Building dev
[+] Building 2.6s (13/14)
 => [internal] load .dockerignore                                                                           0.0s
 => => transferring context: 2B                                                                             0.0s
 => [internal] load build definition from Dockerfile                                                        0.0s
 => => transferring dockerfile: 3.81kB                                                                      0.0s
 => [internal] load metadata for nvcr.io/nvidia/deepstream-l4t:6.2-triton                                   2.0s
 => [auth] nvidia/deepstream-l4t:pull,push token for nvcr.io                                                0.0s
 => [ 1/10] FROM nvcr.io/nvidia/deepstream-l4t:6.2-triton@sha256:17e6c798d9772fa85d88594121fbdce9c4e25a94cf24196255ec08d49d5299f6  0.0s
 => CACHED [ 2/10] WORKDIR /workspace                                                                       0.0s
 => CACHED [ 3/10] RUN curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash  0.0s
 => CACHED [ 4/10] RUN apt-get install git-lfs                                                              0.0s
 => CACHED [ 5/10] RUN git lfs install                                                                      0.0s
 => CACHED [ 6/10] WORKDIR /opt/nvidia/deepstream/deepstream/sources/                                       0.0s
 => CACHED [ 7/10] RUN git clone -b release/tao3.0_ds6.2ga https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps  0.0s
 => CACHED [ 8/10] WORKDIR /opt/nvidia/deepstream/deepstream/sources/deepstream_tao_apps                    0.0s
 => ERROR [ 9/10] RUN make -C post_processor                                                                0.6s
------
 > [ 9/10] RUN make -C post_processor:
#0 0.418 make: Entering directory '/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/post_processor'
#0 0.427 deepstream-app: error while loading shared libraries: libnvdla_compiler.so: cannot open shared object file: No such file or directory
#0 0.428 g++ -o libnvds_infercustomparser_tao.so nvdsinfer_custombboxparser_tao.cpp -I/opt/nvidia/deepstream/deepstream-/sources/includes -I/usr/local/cuda-11.4/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-11.4/lib64 -lcudart -lcublas -Wl,--end-group
#0 0.480 nvdsinfer_custombboxparser_tao.cpp:25:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory
#0 0.480    25 | #include "nvdsinfer_custom_impl.h"
#0 0.480       |          ^~~~~~~~~~~~~~~~~~~~~~~~~
#0 0.480 compilation terminated.
#0 0.504 make: Leaving directory '/opt/nvidia/deepstream/deepstream-6.2/sources/deepstream_tao_apps/post_processor'
#0 0.504 make: *** [Makefile:49: libnvds_infercustomparser_tao.so] Error 1
------
Dockerfile:43
--------------------
  41 |     WORKDIR /opt/nvidia/deepstream/deepstream/sources/deepstream_tao_apps
  42 |     ENV CUDA_VER=11.4
  43 | >>> RUN make -C post_processor
  44 |     RUN ls /usr/lib/aarch64-linux-gnu/tegra
  45 |
--------------------
ERROR: failed to solve: process "/bin/sh -c make -C post_processor" did not complete successfully: exit code: 2
ERROR: Service 'dev' failed to build : Build failed
During my exploration, I found that the directory
/usr/lib/aarch64-linux-gnu/tegra/ does not exist during image build.
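To illustrate: the RUN ls line is the check visible at line 44 of the build trace above, and the second command is simply the same listing run inside the started container.

# During the image build this step fails -- the tegra directory is not mounted yet:
RUN ls /usr/lib/aarch64-linux-gnu/tegra

# Inside a container started with runtime: nvidia, the same listing succeeds:
ls /usr/lib/aarch64-linux-gnu/tegra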
If I omit the
make command and instead run it in the started container, it succeeds because it then has access to the NVIDIA runtime. I would rather not do this, as I want the resulting library file to be handed to the next stage of the multi-stage build using a
COPY --from=libnvds_infercustomparser_builder ... command.
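To sketch what I am aiming for, the next stage would look roughly like the following. The runtime base image and the destination path are assumptions on my part; the library name libnvds_infercustomparser_tao.so and the post_processor build directory are taken from the build output above.

# Assumed second stage -- shown only to illustrate the intended COPY --from usage
FROM nvcr.io/nvidia/deepstream-l4t:6.2-triton

# Copy the parser library built in the first stage (destination path is an assumption)
COPY --from=libnvds_infercustomparser_builder \
    /opt/nvidia/deepstream/deepstream/sources/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so \
    /opt/nvidia/deepstream/deepstream/lib/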
How do I run the
make command, which depends on the NVIDIA runtime, during the image build stage?
I would appreciate any suggestions on how to overcome this problem.
Thanks in advance,