Minimal CUDA runtime install on Ubuntu

I’m trying to create a MINIMAL Ubuntu 16.04 install with CUDA runtime support but I’m having trouble with the “minimal” part.
From a fresh, bare Ubuntu install (using about 450MB on disk), it’s straightforward to download the .deb install file from NVIDIA’s website, register it with dpkg, and issue

sudo apt-get install nvidia-390 cuda

to install a full CUDA environment that’s ready to use for running CUDA apps or doing full development. This installs everything from the CUDA libraries to GCC to the CUDA visual tools to cuBLAS. It also uses about 4 GB of space… far, far more than the bare Ubuntu itself. I want a tiny install so it’s fast and cheap to snapshot the whole disk without worry.

I’m trying to make a MINIMAL install with as few files as possible: just the runtime. No need for compilers or profilers, just the runtime libs and probably “nvidia-smi”. (Without installing CUDA you don’t have the shared libraries like libcudart.so.8.0.)

Unfortunately I don’t see an easy way to do this, beyond installing CUDA then going wild using “rm” on all the CUDA files I can find (mostly in /usr/local/cuda) except the runtime library itself.

I also looked at the legacy .run runfile installer to see if there was a minimal install option, but you basically only have the option for driver, toolkit, and samples, not “minimal runtime.”

Any suggestion for making this minimal “can run CUDA apps but with no developer bloat” environment?

As an alternative, is it possible to statically link all the CUDA libs into the executable? I’d expect this is a questionable idea (you can’t pick up any CUDA updates/bugfixes) even if it were possible.

Thanks!

For use of CUDA (i.e. running CUDA-compiled applications, not development), the only thing necessary to install on your machine is a proper GPU driver. (This will also include nvidia-smi.)

No aspect of the CUDA toolkit is required for basic functionality.

For applications written to conform to the driver API (e.g. vectorAddDrv) nothing else is required.

For applications written to conform to the runtime API (e.g. vectorAdd), the cudart library is required (and is not installed by the GPU driver). However, the default when compiling/linking with nvcc is to statically link cudart into the app. So in that case, nothing else is required beyond the same GPU driver install, which is sufficient for both driver API apps and runtime API apps that statically link cudart.
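One quick way to tell which case a given binary falls into is to check it with ldd; a small sketch (the ./myapp path is a placeholder for your own binary):

```shell
#!/bin/sh
# Report whether a binary has a dynamic dependency on libcudart.
# "./myapp" below is a placeholder -- substitute your own CUDA binary.
cudart_linkage() {
    if ldd "$1" 2>/dev/null | grep -q libcudart; then
        echo "dynamic: libcudart.so must be present at run time"
    else
        echo "no dynamic cudart dependency (static cudart, or driver API only)"
    fi
}
cudart_linkage ./myapp
```

With nvcc, --cudart=static is the default; passing --cudart=shared is what creates the dynamic dependency in the first place.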

For applications that use other CUDA libraries (e.g. CUBLAS, CUFFT, CUSOLVER, CURAND, NPP), I believe most of them have statically linkable versions.

For applications that don’t statically link a needed CUDA library, the right approach is for the application to bundle/redistribute the needed version of cudart and/or the other libraries with the app, and depend on the host machine only for a proper GPU driver install (same as the cases above).
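A sketch of the run-time side of that bundling approach, assuming a layout where the app ships a lib/ directory next to the binary (the myapp-dist name and layout are hypothetical):

```shell
#!/bin/sh
# Launch an app against a bundled copy of libcudart instead of a
# system-wide toolkit install. Assumed (hypothetical) layout:
#   myapp-dist/myapp
#   myapp-dist/lib/libcudart.so.9.1
run_bundled() {
    dist=$1; shift
    # prepend the bundled lib dir to the loader's search path
    LD_LIBRARY_PATH="$dist/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" "$@"
}
# e.g.: run_bundled ./myapp-dist ./myapp-dist/myapp
```

Baking an rpath of $ORIGIN/lib into the binary at link time achieves the same thing without a wrapper script.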

The redistributable libraries are listed in the EULA appendix A:

https://docs.nvidia.com/cuda/eula/index.html#attachment-a

You can also get a good idea of what is available in static or dynamic form just by looking at the contents of the

/usr/local/cuda/lib64

directory on a standard Linux CUDA toolkit install.
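For example, the two forms are easy to separate by suffix, since static variants end in .a and dynamic ones in .so plus a version number; a small sketch:

```shell
#!/bin/sh
# List the static (.a) and dynamic (.so*) CUDA libraries in a toolkit
# lib directory (defaults to the standard install location).
list_cuda_libs() {
    dir=${1:-/usr/local/cuda/lib64}
    echo "static archives:"
    ls "$dir"/*.a 2>/dev/null || echo "  (none found)"
    echo "shared libraries:"
    ls "$dir"/*.so* 2>/dev/null || echo "  (none found)"
}
list_cuda_libs
```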

There is no official roadmap for doing what you are asking. It should certainly be possible, with some thought given to the points above. The exact approach depends on which dynamically linked scenarios you want to support, and keep in mind that providing application support this way will only cover applications compiled against the specific libraries and versions you provide. If you depend on the app developer to do the bundling, then of course they know which API version their app was compiled against and can include the appropriate libraries. If you keep your target machine on the latest GPU drivers, this methodology should support all applications, regardless of which CUDA version they were compiled against.

txbob, thanks for the quick and useful reply!

For some reason I always assumed static linking of CUDA libs was Bad, and looking back at my Makefiles over the years shows I’ve just been cutting and pasting whatever I did last time. Static linking is indeed the default!


I solved my “minimal install” problem by ignoring the whole .deb apt-get system, since it introduces unneeded dependencies that were eating far too much disk space. I REMOVED all cuda and nvidia packages with

apt-get remove --purge 'cuda*' 'nvidia*'

which removed not only the CUDA files but literally dozens of other packages CUDA depended on. This gave me a tiny Ubuntu, which I reduced further by removing things like the Boost libraries, Qt, etc. (A fresh Ubuntu install would also have been a good idea, but I already had a lot of setup in this instance.)
Then at the end I downloaded the NVIDIA driver RUN package (not .deb) and used

./NVIDIA_driver390somefilename.run --no-opengl-libs

and got a quite small CUDA-ready driver running with no extra fluff.

But of course my old binaries won’t work; they need the shared libraries. (I could recompile them, but I also have a set of old historical versions I use for comparisons, and I don’t want to edit all their makefiles and rebuild…) So I did an evil hack: I created a new empty directory /usr/local/cuda-9.1/lib64 and copied JUST the libcudart.so files into it, using only about 100KB (as opposed to a 2GB CUDA toolkit). And then everything works fine, and I can snapshot the instance with it using only about 350MB on disk. Success!
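For anyone wanting to repeat this, the hack boils down to copying the libcudart.so* files (symlinks included) from a machine that still has the full toolkit, then making sure the loader can find them; a sketch with hypothetical source/destination paths:

```shell
#!/bin/sh
# Recreate just the runtime-library directory that old dynamically
# linked binaries expect, without installing the 2GB toolkit.
install_min_cudart() {
    src=$1   # lib64 dir of a machine with the full toolkit
    dst=$2   # e.g. /usr/local/cuda-9.1/lib64 on the minimal machine
    mkdir -p "$dst"
    # -a preserves the libcudart.so -> libcudart.so.X.Y symlink chain
    cp -a "$src"/libcudart.so* "$dst"/
}
# After copying, let the loader see the directory, e.g.:
#   echo /usr/local/cuda-9.1/lib64 > /etc/ld.so.conf.d/cuda-runtime.conf
#   ldconfig
# (or export LD_LIBRARY_PATH instead)
```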

Thanks again for the help, txbob!