Reduced memory after 410.93 NVIDIA driver installation

Hello, I am managing a small ROCKS 7 cluster with seven compute nodes without GPUs and two with GPUs: one has a Tesla P100, the other a Quadro K5000. Recently I ran into a problem configuring a couple of my nodes. For job scheduling I use the Slurm workload manager, version 17.11.3-2. Before installing the NVIDIA driver, when I run my Slurm script and watch memory with free -h, the code runs properly and finishes after using about 20 GB of RAM. But when I run it again with the NVIDIA driver installed, I get a message that the host is out of memory, and the job stops somewhere around 10 GB of RAM used. I was counting on the CPU, not the GPU.
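To make the memory behavior easier to reproduce, here is a minimal sketch of the kind of allocation test I am running (a hypothetical example, not necessarily identical to the attached alloc.cpp): it allocates RAM in 1 GiB chunks and touches each chunk so the pages are actually committed, printing progress until allocation fails.

```cpp
#include <cstdio>
#include <cstring>
#include <new>
#include <vector>

int main() {
    const size_t chunk = 1ULL << 30;              // 1 GiB per allocation
    std::vector<char*> blocks;

    for (int i = 1; i <= 25; ++i) {               // try to reach ~25 GiB total
        char* p = new (std::nothrow) char[chunk];
        if (!p) {
            std::printf("allocation failed at %d GiB\n", i);
            break;
        }
        std::memset(p, 1, chunk);                 // touch pages so they are really committed
        blocks.push_back(p);
        std::printf("allocated and touched %d GiB\n", i);
    }

    for (char* p : blocks) delete[] p;
    return 0;
}
```

Before the driver installation this kind of test can use all the memory that free -h reports as available; after the installation the job is killed much earlier.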

P.S. This happens when I am using the intel-opencl library; running simple C++ code works fine in both cases (with and without the NVIDIA driver).
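Since the problem only shows up with the intel-opencl library, I also check which OpenCL platforms and devices are visible after the NVIDIA driver installation. Below is a small sketch of such a check (my own diagnostic code, assuming the standard OpenCL headers are installed; link with -lOpenCL):

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms) {
        char platName[256] = {};
        clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(platName), platName, nullptr);
        std::printf("platform: %s\n", platName);

        cl_uint numDevices = 0;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, nullptr, &numDevices);
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, numDevices, devices.data(), nullptr);

        for (cl_device_id device : devices) {
            char devName[256] = {};
            cl_ulong globalMem = 0;
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(devName), devName, nullptr);
            clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(globalMem), &globalMem, nullptr);
            std::printf("  device: %s, global memory: %llu MiB\n",
                        devName, (unsigned long long)(globalMem >> 20));
        }
    }
    return 0;
}
```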
nvidia-installer.log (84.3 KB)
alloc.cpp (98 Bytes)