- I’m looking at the NGC catalog entry for l4t-base (https://ngc.nvidia.com/catalog/containers/nvidia:l4t-base) and I can’t find a link to the GitHub repo / Dockerfile. Is there one and is it public?
- Is there any reason I can’t add the repository
`deb https://repo.download.nvidia.com/jetson/common r32.4 main`
to an image built from it? There are some packages there I’d like to use inside the containers I’m building from the image.
- The image is public. It needs no login to pull. I don’t think there is a public Dockerfile, so you’ll have to inspect the image for that.
- You can absolutely add the apt repositories inside l4t-base. I have no idea why this isn’t the default. Here is an image where that’s already done, and the Dockerfile is linked off that page.
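To illustrate, here is a minimal sketch of adding the common repo to an l4t-base image. The base tag, release tag (`r32.4`), and key URL are assumptions; verify them against your L4T version and NVIDIA’s release notes before relying on this.

```dockerfile
# Sketch only: base image tag and repo release must match your installed L4T.
FROM nvcr.io/nvidia/l4t-base:r32.4.3

# Trust NVIDIA's Jetson OTA signing key (URL assumed; confirm for your release),
# then register the device-independent "common" repo and refresh the package lists.
RUN apt-key adv --fetch-keys https://repo.download.nvidia.com/jetson/jetson-ota-public.asc \
 && echo "deb https://repo.download.nvidia.com/jetson/common r32.4 main" \
        > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
 && apt-get update
```

Sticking to the `common` repo (and skipping the per-SoC one) keeps the image device-independent, as asked above.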
OK … I really only need the “common” repo; I want to stay device-independent in the images I’m building.
You can use `--build-arg SOC=t210` at `docker build` to specify the SoC, but these files are mostly mounted over anyway at `docker run` by `--runtime nvidia`, so it doesn’t matter much what you pick at build time. I’d recommend doing something like:
```dockerfile
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        libcudafoo-dev \
    && build_the_thing.sh \
    && apt-get purge -y --autoremove \
        libcudafoo-dev \
    && rm -rf /var/lib/apt/lists/*
```
Basically, since these packages aren’t available normally at build time but will be mounted at runtime, just install them temporarily, build your thing, and then remove the packages you installed — unless they are among the ones that are not mounted, in which case keep them in the image.
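Putting the two halves together on the command line might look like the following. The image name and app are hypothetical; the point is that the build-time SoC choice is largely cosmetic while `--runtime nvidia` supplies the real libraries at run time.

```shell
# Build: the SOC arg picks which BSP files bake into the image,
# but those get mounted over at run time anyway.
docker build --build-arg SOC=t210 -t my-jetson-app .

# Run: --runtime nvidia bind-mounts the host's CUDA/L4T libraries
# into the container, replacing the build-time copies.
docker run --rm --runtime nvidia my-jetson-app
```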
There is a folder on the host with some .csv files listing what’ll be mounted by the nvidia runtime. You can add files to or remove them from that folder if you want to change its behavior.
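These files use a simple `type, path` line format. A hypothetical excerpt (the exact paths and entries vary by device and L4T release — check the files on your own board) might look like:

```csv
lib, /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so.1.0.0
dir, /usr/local/cuda-10.2/targets/aarch64-linux/include
dev, /dev/nvhost-ctrl-gpu
sym, /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so
```

Each line tells the runtime to mount a library, directory, device node, or symlink from the host into the container.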
Lastly, if you do include everything in your image and don’t use `--runtime nvidia`, my understanding is that stuff from t210 (Nano, TX1) should be compatible with everything.