Hi,
We’re looking into options for customizing the `nvcr.io/nvidia/tensorflow:20.03-tf2-py3` image for our specific CUDA requirements. Is it possible to find the Dockerfile for this image?
Hi,
Do you want a Docker image that can run on the Jetson platform?
If yes, there are currently only two images available:
nvidia:l4t-base : with only the CUDA toolkit installed.
nvidia:deepstream-l4t : with CUDA, cuDNN, TensorRT, and DeepStream installed.
Thanks.
Hi AastaLLL,
Is it possible to get their Dockerfiles?
You can use `docker image history --no-trunc` or `docker history --no-trunc` to get the history of how an image was built, and then reconstruct a Dockerfile from that. You can also use `docker image save` to get the filesystem image. Other `docker image` subcommands might be useful too; you can list them with `docker image help`.
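For example, to inspect the `l4t-base` image mentioned above (the registry path and tag here are just a guess at what you pulled; substitute your own):

```shell
# Show every layer-creating command, untruncated:
docker image history --no-trunc nvcr.io/nvidia/l4t-base:r32.3.1

# Export the image's filesystem as a tarball for offline inspection:
docker image save nvcr.io/nvidia/l4t-base:r32.3.1 -o l4t-base.tar
```

The history output is the commands as recorded at build time, so you may need to clean up shell-form `RUN` lines before they drop back into a Dockerfile.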
Here is a Dockerfile for an Ubuntu image with NVIDIA’s apt repos enabled. You can then install whatever CUDA packages you want with apt-get; however, some of the `nvidia-l4t` packages will not install for the moment (without ugly hacks). I will put up a proper repo later this week, but for now, here it is (use it under the MIT license).
FROM ubuntu:bionic

# This determines what <SOC> gets filled in in the nvidia apt sources list.
# Valid choices: t210, t186, t194
ARG SOC="t210"

# Because Nvidia has no keyserver for Tegra currently, we download the whole BSP tarball just for the apt key.
ARG BSP_URI="https://developer.nvidia.com/embedded/dlc/r32-3-1_Release_v1.0/t210ref_release_aarch64/Tegra210_Linux_R32.3.1_aarch64.tbz2"
ARG BSP_SHA512="13c4dd8e6b20c39c4139f43e4c5576be4cdafa18fb71ef29a9acfcea764af8788bb597a7e69a76eccf61cbedea7681e8a7f4262cd44d60cefe90e7ca5650da8a"

WORKDIR /tmp

# Install the apt key and configure apt sources.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
        wget \
    && BSP_SHA512_ACTUAL="$(wget --https-only -nv --show-progress --progress=bar:force:noscroll -O- ${BSP_URI} | tee bsp.tbz2 | sha512sum -b | cut -d ' ' -f 1)" \
    && [ "${BSP_SHA512_ACTUAL}" = "${BSP_SHA512}" ] \
    && echo "Extracting bsp.tbz2" \
    && tar --no-same-permissions -xjf bsp.tbz2 \
    && cp Linux_for_Tegra/nv_tegra/jetson-ota-public.key /etc/apt/trusted.gpg.d/jetson-ota-public.asc \
    && chmod 644 /etc/apt/trusted.gpg.d/jetson-ota-public.asc \
    && echo "deb https://repo.download.nvidia.com/jetson/common r32 main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
    && echo "deb https://repo.download.nvidia.com/jetson/${SOC} r32 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
    && rm -rf * \
    && apt-get purge -y --autoremove \
        wget \
    && rm -rf /var/lib/apt/lists/*
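The verify-while-downloading pattern in that `RUN` step can be tried out in isolation. A minimal sketch, with a local file standing in for the BSP tarball and `cat` playing the role of `wget -O-` (the file names here are made up for the demo):

```shell
set -e
printf 'dummy bsp contents\n' > bsp_src.bin
BSP_SHA512="$(sha512sum -b bsp_src.bin | cut -d ' ' -f 1)"

# Stream the data, save a copy to bsp.tbz2, and hash it in a single pass,
# exactly as `wget -O- ... | tee bsp.tbz2 | sha512sum -b` does above:
BSP_SHA512_ACTUAL="$(cat bsp_src.bin | tee bsp.tbz2 | sha512sum -b | cut -d ' ' -f 1)"

# Quoting both sides keeps the test from blowing up if a variable is empty:
[ "${BSP_SHA512_ACTUAL}" = "${BSP_SHA512}" ] && echo "checksum OK"
```

The nice part of this pattern is that the download hits the disk only once: `tee` writes the copy you will extract while `sha512sum` hashes the same byte stream.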
In my case I needed

cuda-compiler-10-0 \
cuda-minimal-build-10-0 \
cuda-libraries-dev-10-0 \

and that was enough.
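Assuming the Dockerfile above is saved in the current directory, a build for a Xavier-class board might look like this (the image tag is just an example name):

```shell
docker build --build-arg SOC=t194 -t l4t-cuda-base .
```

Leaving `--build-arg SOC=...` off gives you the `t210` (Nano/TX1) sources from the `ARG` default.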
You can purge the cuda dependencies once you’re done building with them, so your layer might look like this:
RUN apt-get update && apt-get install -y --no-install-recommends \
        cuda-thing-10-0 \
        other-cuda-thing-10-0 \
    && build_your_thing.sh \
    && apt-get purge -y --autoremove \
        cuda-thing-10-0 \
        other-cuda-thing-10-0 \
    && rm -rf /var/lib/apt/lists/*
When you `docker run` with `--runtime nvidia`, it will bind-mount `/usr/local/cuda` from the host. I have been told this behavior will change, though, so you may wish to leave the CUDA runtime dependencies in the image (the -dev packages can still go).
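To see the mount in action (the image name here is a placeholder for whatever you built):

```shell
# With the nvidia runtime, /usr/local/cuda shows up via the host bind mount:
docker run --runtime nvidia --rm -it l4t-cuda-base ls /usr/local/cuda
```

Running the same command without `--runtime nvidia` is a quick way to check whether your image really carries the runtime libraries it needs, rather than silently borrowing them from the host.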