Error on startup of DIGITS Caffe container 19.06

I am seeing the following error when I run:

docker run --name digits-caffe -d -p 8889:5000 -v digits:/data:ro -v jobs:/workspace/jobs --shm-size=1g --ulimit memlock=-1 nvcr.io/nvidia/digits:19.06-caffe

249c24540ae2873b41e16302d3cd7db72535fd443064831fa5cf9fca1124821f
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --video --require=cuda>=9.0 --pid=21500 /var/lib/docker/overlay2/193c130acba506ea82b9cc9de68ccd10701681c369096b253f7a8bfadeb7cf4b/merged]\\\\nnvidia-container-cli: initialization error: cuda error: unknown error\\\\n\\\"\"": unknown.

The current TensorFlow container starts and executes normally:
nvcr.io/nvidia/digits:19.06-tensorflow
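
For anyone hitting the same thing: "cuda error: unknown error" from nvidia-container-cli usually points at the host driver state rather than the image itself. A quick host-side sanity check (a sketch; the module reload assumes nothing else currently holds nvidia_uvm):

# Confirm the kernel driver is loaded and responding on the host
nvidia-smi

# Reloading the UVM module has cleared "cuda error: unknown error" in similar reports
sudo modprobe -r nvidia_uvm && sudo modprobe nvidia_uvm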

Did you find a solution?

I am getting a similar error after installing nvidia-docker2. When I run

docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi

I receive this error:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=10.1 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 --pid=3577 /var/lib/docker/overlay2/0398fa2992b22c147ce9ebf468e5e0e495e3b9b5e7225fcc5b8891101ef0a8e2/merged]\\\\nnvidia-container-cli: initialization error: driver error: failed to process request\\\\n\\\"\"": unknown.
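
"driver error: failed to process request" means the prestart hook could not talk to the driver at all. The container toolkit has a debug mode that prints far more detail than the docker error above (standard nvidia-container-cli flags: -k loads the kernel modules, -d redirects debug output, here to the terminal):

# Verbose dump of what the runtime hook can see; errors print to the terminal
sudo nvidia-container-cli -k -d /dev/tty info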

Unfortunately, no. I suspect it’s no longer being maintained.

What is your driver version?
Release 19.06 is based on NVIDIA CUDA 10.1.168, which requires NVIDIA Driver release 418.xx. However, if you are running on Tesla (Tesla V100, Tesla P4, Tesla P40, or Tesla P100), you may use NVIDIA driver release 384.111+ or 410. The CUDA driver’s compatibility package only supports particular drivers. For a complete list of supported drivers, see the CUDA Application Compatibility topic. For more information, see CUDA Compatibility and Upgrades. (See https://docs.nvidia.com/deeplearning/digits/digits-release-notes/rel_19-06.html#rel_19-06.) You might have to pull a DIGITS version specific to your driver and CUDA version.
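
To check your driver against those requirements, either of these works on the host (standard commands, nothing container-specific):

# Driver version as reported by the management tool
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Driver version as the kernel module reports it
cat /proc/driver/nvidia/version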

I was able to fix this issue by reinstalling my NVIDIA driver and CUDA toolkit.
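
For anyone who wants to try the same fix, here is a rough outline of that reinstall on Ubuntu using NVIDIA’s apt repository. The package names (nvidia-driver-418, cuda-toolkit-10-1) are examples matching the 19.06 requirements above and will differ on other distributions or driver branches:

# Remove the existing driver and toolkit (assumes Ubuntu + NVIDIA apt repo)
sudo apt-get purge 'nvidia-*' 'cuda-*'
sudo apt-get autoremove

# Install a driver/toolkit pair that satisfies the container's requirement
sudo apt-get install nvidia-driver-418 cuda-toolkit-10-1
sudo reboot

# After the reboot, verify the stack end to end
nvidia-smi
docker run --runtime=nvidia --rm nvidia/cuda:10.1-base nvidia-smi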