Understanding datacenter GPU device driver installation

I am trying to install device drivers for Turing GPUs in the compute nodes of a small Linux cluster running CentOS 8.

As a first step, the intention is simply to make the GPUs accessible as accelerator cards; there will be no CUDA development on the compute nodes.

The resource “Installation Guide Linux :: CUDA Toolkit Documentation” indicates that precompiled drivers are available and explains why using them can be advantageous.
But all my attempts to install them by following the “NVIDIA Driver Installation Quickstart Guide :: NVIDIA Tesla Documentation” seem to end up at “dnf --installroot $CHROOT module install nvidia-driver:latest”, which does not fit our concept:

it will install

  • an X11 server on the compute nodes,
  • and the CUDA drivers,
  • while I actually just want to install the “nvidia-driver/default” profile…
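If I understand dnf modularity correctly, it might be possible to pick a narrower profile instead of the full “latest” one. A sketch of what I have in mind, assuming the NVIDIA CUDA repository is already enabled in the chroot and that $CHROOT points at the compute-node image (the exact stream and profile names are assumptions on my part):

```shell
# Show which streams (e.g. "latest") and profiles (e.g. "default",
# "ks", "fm", "src") the nvidia-driver module actually offers:
dnf --installroot $CHROOT module list nvidia-driver

# Install only a specific profile of a specific stream, using the
# general dnf syntax  module:stream/profile  — here the "default"
# profile, hoping it omits the X11/CUDA extras of "latest":
dnf --installroot $CHROOT module install nvidia-driver:latest/default
```

Is the `module:stream/profile` form the intended way to do this, or does the “default” profile still drag in the same dependencies?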

In another post, “CUDA and driver installation on a small cluster”, cluster packages are mentioned, but the related links now seem to point to the same resources I already mentioned…

Am I misunderstanding or doing something wrong here?
Isn’t it possible to use the GPUs on headless nodes?
Any help would be greatly appreciated!