Hi,
I have a conda virtual environment where I’ve installed CUDA toolkit 10.1.243 and tensorflow-gpu 2.3.0rc0. My NVIDIA driver supports CUDA 11.0.
To test whether TensorFlow was installed with GPU support, I ran the following from a Python session inside the environment:
import tensorflow as tf
tf.test.is_built_with_cuda()
True
tf.config.list_physical_devices('GPU')
Found device 0 with properties:
pciBusID: 0000:01:00.0 name: Quadro M2000M computeCapability: 5.0
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
However, actually running a computation on the GPU fails:
python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000,1000])))"
tensorflow.python.framework.errors_impl.InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: device kernel image is invalid
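For reference, here are the same checks condensed into one self-contained snippet (the final line is the one that raises the error above):

import tensorflow as tf

# Confirm the wheel was built with CUDA and that the GPU is visible.
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Physical GPUs:", tf.config.list_physical_devices('GPU'))

# A small computation on the GPU; this is where the InternalError is raised on my machine.
print(tf.reduce_sum(tf.random.normal([1000, 1000])))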
I am not sure how to troubleshoot this. I have a feeling it is related to the prebuilt binary not being compiled with support for my device's compute capability (5.0), and that I may need to rebuild TensorFlow, but I am not sure how to proceed. Below is roughly how I was planning to check what the installed wheel was built with.
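(This is just a sketch: I'm assuming tf.sysconfig.get_build_info() is available in this release, and I'm not sure the returned dict includes a cuda_compute_capabilities entry in every version.)

import tensorflow as tf

# Print whatever build metadata the installed wheel exposes
# (CUDA/cuDNN versions, and possibly the compute capabilities it was built for).
info = tf.sysconfig.get_build_info()
for key, value in info.items():
    print(f"{key}: {value}")

# If this key exists, check whether 5.0 (Quadro M2000M) is in the list.
caps = info.get("cuda_compute_capabilities")
if caps is not None:
    print("Compiled for compute capabilities:", caps)

Thank you!!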