Hi there,
I am using nvidia-docker to run the TensorFlow Object Detection API on an AWS EC2 instance, and I am running into the "possibly insufficient driver version" error below. The CUDA version on my EC2 instance is 9.0, and I am using TensorFlow 1.9.0. I have read in some threads that this may be caused by an incompatibility between the cuDNN, CUDA, and TensorFlow versions. Is there any way to solve this problem? Thanks.
Rui
The following is the error:
2018-11-16 01:37:35.085953: E tensorflow/stream_executor/cuda/cuda_dnn.cc:332] could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2018-11-16 01:37:35.086177: E tensorflow/stream_executor/cuda/cuda_dnn.cc:340] possibly insufficient driver version: 396.44.0
Segmentation fault
nvidia-smi
Fri Nov 16 01:37:54 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.44                 Driver Version: 396.44                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   39C    P8    27W / 149W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
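
In case it helps narrow things down, here is a minimal check that can be run inside the container to see whether cuDNN initializes at all, independent of the Object Detection API. This is just a sketch: the tensorflow/tensorflow:1.9.0-gpu image tag is my assumption, and the conv2d is deliberate, since only cuDNN-backed ops (not, say, a plain matmul) actually trigger cuDNN initialization.

import tensorflow as tf

print(tf.VERSION)                    # should print 1.9.0
print(tf.test.is_built_with_cuda())  # should print True in a -gpu build

# conv2d forces cuDNN to initialize; this is where
# CUDNN_STATUS_NOT_INITIALIZED would surface.
x = tf.random_normal([1, 64, 64, 3])
y = tf.layers.conv2d(x, filters=8, kernel_size=3)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y).shape)         # (1, 62, 62, 8) if cuDNN works

If this small snippet also fails at session run, the problem would seem to be the driver/CUDA/cuDNN combination in the image rather than anything in the detection code itself.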