cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version

Hi,
I'm using an EC2 Deep Learning Windows 10 g2.2xlarge instance. I run into this problem when I try to run an implementation in a Jupyter notebook.

In the terminal I see:
"cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version"

This is my code:

from keras.applications.resnet50 import ResNet50

# define ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')

output:

InternalError                             Traceback (most recent call last)
<ipython-input> in <module>()
      2
      3 # define ResNet50 model
----> 4 ResNet50_model = ResNet50(weights='imagenet')

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\keras_applications\resnet50.py in ResNet50(include_top, weights, input_tensor, input_shape, pooling, classes)
    215                       padding='valid',
    216                       name='conv1')(x)
--> 217     x = layers.BatchNormalization(axis=bn_axis, name='bn_conv1')(x)
    218     x = layers.Activation('relu')(x)
    219     x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
    458             # Actually call the layer,
    459             # collecting output(s), mask(s), and shape(s).
--> 460             output = self.call(inputs, **kwargs)
    461             output_mask = self.compute_mask(inputs, previous_mask)
    462

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\keras\layers\normalization.py in call(self, inputs, training)
    181             normed_training, mean, variance = K.normalize_batch_in_training(
    182                 inputs, self.gamma, self.beta, reduction_axes,
--> 183                 epsilon=self.epsilon)
    184
    185             if K.backend() != 'cntk':

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\keras\backend\tensorflow_backend.py in normalize_batch_in_training(x, gamma, beta, reduction_axes, epsilon)
   1833     """
   1834     if ndim(x) == 4 and list(reduction_axes) in [[0, 1, 2], [0, 2, 3]]:
-> 1835         if not _has_nchw_support() and list(reduction_axes) == [0, 2, 3]:
   1836             return _broadcast_normalize_batch_in_training(x, gamma, beta,
   1837                                                           reduction_axes,

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\keras\backend\tensorflow_backend.py in _has_nchw_support()
    287     """
    288     explicitly_on_cpu = _is_current_explicit_device('CPU')
--> 289     gpus_available = len(_get_available_gpus()) > 0
    290     return (not explicitly_on_cpu and gpus_available)
    291

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\keras\backend\tensorflow_backend.py in _get_available_gpus()
    273     global _LOCAL_DEVICES
    274     if _LOCAL_DEVICES is None:
--> 275         _LOCAL_DEVICES = get_session().list_devices()
    276     return [x.name for x in _LOCAL_DEVICES if x.device_type == 'GPU']
    277

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\keras\backend\tensorflow_backend.py in get_session()
    181         config = tf.ConfigProto(intra_op_parallelism_threads=num_thread,
    182                                 allow_soft_placement=True)
--> 183         _SESSION = tf.Session(config=config)
    184     session = _SESSION
    185     if not _MANUAL_VAR_INIT:

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\tensorflow\python\client\session.py in __init__(self, target, graph, config)
   1558
   1559     """
-> 1560     super(Session, self).__init__(target, graph, config=config)
   1561     # NOTE(mrry): Create these on first __enter__ to avoid a reference cycle.
   1562     self._default_graph_context_manager = None

C:\ProgramData\Anaconda3\envs\MXNet\lib\site-packages\tensorflow\python\client\session.py in __init__(self, target, graph, config)
    631     if self._created_with_new_api:
    632       # pylint: disable=protected-access
--> 633       self._session = tf_session.TF_NewSession(self._graph._c_graph, opts)
    634     # pylint: enable=protected-access
    635     else:

InternalError: Failed to create session.
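
From the traceback, the failure happens as soon as Keras creates its TensorFlow session, so presumably even this minimal snippet (just creating a bare session, no Keras involved; a sketch assuming the TensorFlow 1.x API shown in the traceback) reproduces the same error on this instance:

import tensorflow as tf

# Creating a bare session initializes CUDA and should fail with the same
# "CUDA driver version is insufficient for CUDA runtime version" InternalError
# if the installed driver is older than the CUDA runtime TensorFlow was built against.
sess = tf.Session()
print(sess.list_devices())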

How can I solve this problem?

Hi,

It sounds like you are using an AWS image with Windows. This forum supports the NVIDIA images, which only run on AWS P3 instances with Ubuntu. We don't know exactly how AWS sets up their images, so you will need to get help from them.
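
As a quick check on your side before contacting AWS, something like the sketch below (assuming TensorFlow 1.x and that nvidia-smi is on the PATH; the exact calls are illustrative, not a supported diagnostic) will show which NVIDIA driver is installed and whether TensorFlow can initialize CUDA at all. If the driver reported by nvidia-smi is older than what the CUDA runtime bundled with your TensorFlow build requires, the same "driver version is insufficient" error should appear here.

import subprocess

import tensorflow as tf
from tensorflow.python.client import device_lib

# Report the installed NVIDIA driver version (requires nvidia-smi on the PATH).
try:
    driver = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"]
    ).decode().strip()
    print("NVIDIA driver version:", driver)
except (OSError, subprocess.CalledProcessError):
    print("nvidia-smi not found or failed; the driver may not be installed correctly.")

# Ask TensorFlow to enumerate local devices; this initializes CUDA and should
# raise the same InternalError if the driver is too old for the CUDA runtime.
try:
    for d in device_lib.list_local_devices():
        print(d.device_type, d.name)
except tf.errors.InternalError as e:
    print("TensorFlow could not initialize CUDA:", e)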

Thanks, I'll follow your recommendation.