Docker image sizes for r35.4.1, trimming libraries?

Since the libraries are now included in Docker images starting with R34, and the container sizes are so much larger (10GB for nvcr.io/nvidia/l4t-jetpack:r35.4.1), I’m wondering if there are any guidelines for slimming down the base images in the cases where the applications we’re running don’t require an entire set of all libraries. A 10GB container (actually over 12GB by the time we install our own code) isn’t too bad for an initial deployment, but it’s a LOT when doing remote upgrades.

Are there any guidelines for creating SMALLER images that don’t necessarily include all of the libraries? We can likely do it by hand if need be, but reverse engineering all the dependencies will be quite onerous and I’d prefer to avoid it.

Similarly (but not quite as large), I notice that most of the containers in the dusty-nv/jetson-containers repo (Machine Learning Containers for NVIDIA Jetson and JetPack-L4T) have build-essential as a dependency - but do we really need full C compilers in containers that are just running PyTorch-based applications?

Hi @riz94107, the containers from jetson-containers are essentially build/development containers that include the toolchains and dev packages. After your application’s container is built, you can make the slimmed-down version the way you typically would in Docker (i.e. with a multi-stage build, copying only the essential files/packages that you need for runtime, etc.).

There are smaller base container images than l4t-jetpack to derive your final deployment/production container from - for example, l4t-cuda:runtime and l4t-tensorrt:runtime.
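The multi-stage pattern described above might look roughly like this - a sketch only, where the build script, output directory, and entrypoint (`build.sh`, `/src/dist`, `/app/run`) are hypothetical placeholders for your own project, and the runtime tag should be checked against what's currently published on NGC:

```dockerfile
# Stage 1: build inside the full JetPack development image
FROM nvcr.io/nvidia/l4t-jetpack:r35.4.1 AS build
COPY . /src
RUN cd /src && ./build.sh    # hypothetical build step for your app

# Stage 2: copy only the runtime artifacts into a smaller base
FROM nvcr.io/nvidia/l4t-cuda:11.4.19-runtime
COPY --from=build /src/dist /app
CMD ["/app/run"]
```

The deployed image then carries only the second stage, so the dev toolchains (compilers, headers, build-essential) never ship to the device.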

I don’t know of a way to automatically determine/extract only the runtime dependencies of an arbitrary application - it’s an open-ended problem, given that packages can be installed from pip, apt, other configuration files, etc. Otherwise I’d probably have added that feature to jetson-containers to automatically build the ‘deployment’ container.
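That said, as a rough diagnostic you can at least enumerate the native shared objects a running Python application has actually mapped, which helps decide what to copy in a multi-stage build. A minimal sketch that reads `/proc/self/maps` on Linux (it won't catch lazily-loaded libraries, pip/apt metadata, or data files):

```python
def loaded_shared_objects():
    """Return the .so files currently mapped into this process."""
    libs = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            # The pathname, when present, is the last whitespace field.
            path = line.split()[-1]
            if path.endswith(".so") or ".so." in path:
                libs.add(path)
    return sorted(libs)

# Import your app's modules first (e.g. torch), then inspect the set:
for lib in loaded_shared_objects():
    print(lib)
```

Running this after `import torch` would show which CUDA/cuDNN libraries PyTorch actually pulled in at that point.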

OK, that’s actually helpful to know that there are smaller images. I guess I will look more closely at what’s there.

I do understand that there isn’t an automatic way to determine the dependencies - I was just taken aback that it seemed like the base was everything-and-the-kitchen-sink.

Thanks for the timely response!
+j

Are there any other smaller container images, or at least a description of how l4t-cuda:runtime and l4t-tensorrt:runtime were built? I’m currently struggling to figure out how to add needed dependencies - and only needed dependencies; the l4t-cuda:11.4.19-runtime image seemed the perfect place to start, except that adding (for example) PyTorch to it eventually dies with:
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory

…I guess I have to learn everything there is to learn about the infrastructure of all NVIDIA apps in order to build images that are of a reasonable runtime size? Where do I go to learn what the difference between “cuda” and “cudnn” is, and why the latter wouldn’t be available in a “cuda runtime” image?

@riz94107 cuDNN is a separate library from the CUDA Toolkit (likewise with TensorRT). Here is an example Dockerfile for l4t-jetpack:

You can take that and change the libraries it installs to include just what you want/need.
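For the specific `libcudnn.so.8` error above, the fix is to add the cuDNN runtime library on top of the CUDA runtime base. A sketch, assuming the NVIDIA apt repository is already configured in the base image - the exact package name varies by JetPack release, so check with `apt list 'libcudnn*'` on the device first:

```dockerfile
FROM nvcr.io/nvidia/l4t-cuda:11.4.19-runtime

# Install only the cuDNN runtime library (not the -dev headers),
# then clean the apt cache to keep the layer small.
RUN apt-get update && \
    apt-get install -y --no-install-recommends libcudnn8 && \
    rm -rf /var/lib/apt/lists/*
```

The same pattern applies for TensorRT or any other library an import error names: install the runtime package for just that library rather than pulling in the full JetPack stack.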

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.