GPU driver verification failed when running an NGC container with Singularity

Hi, I’m trying to use one of the NGC container images at an HPC center, and I’m running into a driver version issue.

I followed the instructions here:
https://ngc.nvidia.com/catalog/containers/hpc:lammps

When I got to the step of starting the container, I ran:

singularity run --nv -B $(pwd):/host_pwd lammps_24Oct2018.simg /bin/bash

(By the way, there’s a typo in the command in the documentation: a space is missing after the -B flag.)
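In case it’s useful for debugging, here is the quick sanity check I would use to confirm the GPU and host driver are actually visible from inside the container. This is just generic Singularity / nvidia-smi usage, not something from the NGC instructions; it bypasses the image’s runscript and calls nvidia-smi directly:

singularity exec --nv lammps_24Oct2018.simg nvidia-smi

Since --nv binds the host driver libraries (and nvidia-smi) into the container, this should report the same driver version as on the host.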

The process stopped with:

WARNING: Underlay of /etc/hosts required more than 50 (78) bind mounts
WARNING: Underlay of /usr/bin/nvidia-cuda-mps-server required more than 50 (233) bind mounts
WARNING: Could not chdir to home: /pbs/home/a/myname
2019/10/22 15:36:49 GPU driver verification failed: Host driver 418.39 not compatible with container: >=410.48, ==384.00

The output of

nvidia-smi

is:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.39       Driver Version: 418.39       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  On   | 00000000:3B:00.0 Off |                    0 |
| N/A   28C    P0    25W / 250W |      0MiB / 32480MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

Since my driver version is 418.39, which satisfies >=410.48, I don’t understand why the check fails.
Is there a possible workaround?
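In case it’s relevant, this is how I would compare the driver version string the host reports against what the container sees (the --query-gpu flags are standard nvidia-smi options, so treat this as a generic check rather than part of the documented workflow):

nvidia-smi --query-gpu=driver_version --format=csv,noheader
singularity exec --nv lammps_24Oct2018.simg nvidia-smi --query-gpu=driver_version --format=csv,noheader

If the --nv binding is working, both commands should report the same version.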

If this post is not relevant to this section, feel free to move it.
Thanks!

Thanks for your answer.

Do you mean there’s a typo in the message, and “>=410.48” should be “>=418.48” instead?

Sorry, my previous post was incorrect; I misread the requirements.

I tried updating to driver version 418.87.01, but it didn’t solve the problem. Do you have any other ideas?