nvidia-smi shows all GPUs although Slurm allocates only one

As mentioned, we recently configured a small Slurm cluster. Whenever I run nvidia-smi inside a Slurm job that allocates only one GPU, the output still lists every GPU on the node. That isn't the behaviour I'm used to from other clusters I've worked on, so I suspect there's a misconfiguration somewhere. Any ideas?
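
For example, a minimal reproduction (nothing special beyond the standard --gres request):

```
# request a single GPU and run nvidia-smi interactively
srun --gres=gpu:1 --pty nvidia-smi
# expected: only the allocated GPU; what I actually see: every GPU on the node
```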

Since this isn't my area, I asked our DevOps folks. They said there could be multiple reasons, but their best guess is that the cgroup plugin is either missing or not enabled to constrain devices for the gres/gpu resources.
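
For what it's worth, on clusters where nvidia-smi is correctly limited to the allocated GPUs, the relevant settings usually look roughly like the sketch below. The device paths and GPU count are assumptions for a generic 4-GPU node, so adjust to your hardware:

```
# slurm.conf (relevant excerpts)
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup,task/affinity
GresTypes=gpu

# cgroup.conf -- ConstrainDevices=yes is what hides unallocated GPUs from the job
CgroupAutomount=yes
ConstrainDevices=yes

# gres.conf -- device files are an assumption for a 4-GPU node;
# AutoDetect=nvml is an alternative if Slurm was built with NVML support
Name=gpu File=/dev/nvidia[0-3]
```

After changing these files they need to be in place on the compute nodes, and slurmd typically needs a restart for cgroup.conf changes to take effect.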