Like the link says, I just installed everything fresh, all the proper versions, and it's simply not working. I've looked through the code, and it's all good. I've been modifying it for my own purposes.
I don't know where to ask, and I feel stuck, not being able to fully achieve my goals :(
import tensorflow as tf

# tf.config.gpu.set_per_process_memory_fraction/_growth existed only in the
# TF 2.0 alphas; in released TF 2.x use set_memory_growth instead (a hard
# cap is possible via set_virtual_device_configuration's memory_limit).
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# your model creation, etc.
model = MyModel(...)
Some users seem to have experienced this on Windows when other TensorFlow or similar processes were using the GPU at the same time, whether started by you or by other users: https://stackoverflow.com/a/53707323/10993413
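If you want to rule that out quickly, something like the following should list the processes currently holding GPU memory (a minimal sketch; it assumes nvidia-smi is on your PATH and uses its documented query flags):

import subprocess

# List processes currently holding GPU memory (requires nvidia-smi on PATH).
out = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory",
     "--format=csv"],
    capture_output=True, text=True)
print(out.stdout)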
As always, make sure your PATH variables are correct. If you tried multiple installations and didn't clean things up properly, PATH may resolve to the wrong version first and use it, causing the issue. New paths added to the beginning of PATH are found first: https://www.tensorflow.org/install/gpu#windows_setup
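To see which CUDA-related entries are resolved first, a quick sketch like this can help (it just scans PATH for anything mentioning CUDA or cuDNN):

import os

# Print CUDA/cuDNN-related PATH entries in search order; earlier entries
# shadow later ones, so a stale install can mask the right one.
for i, entry in enumerate(os.environ.get("PATH", "").split(os.pathsep)):
    if any(s in entry.lower() for s in ("cuda", "cudnn")):
        print(i, entry)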
As mentioned on your Stack Overflow post by another user, you could try upgrading to a newer version of CUDNN, though I'm not sure this will help since your config is listed as supported on the TF docs: Build from source | TensorFlow. If this does solve it, it may have been a PATH issue after all, since you will likely update the PATHs after installing the newer version.
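If your TensorFlow build is new enough (2.3+), you can also check which CUDA/cuDNN versions the wheel itself was compiled against, which is what your installed DLLs have to match (a sketch; this function doesn't exist on older releases):

import tensorflow as tf

# TF 2.3+ only: report the CUDA/cuDNN versions this wheel was built with.
info = tf.sysconfig.get_build_info()
print("CUDA:", info.get("cuda_version"), "cuDNN:", info.get("cudnn_version"))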
I had a similar issue with the same error (CUDA 10.0, Windows 10). I realized it was happening because I was running the program from within an IDE. Once I ran the program from cmd, it worked perfectly.
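A quick way to compare the two environments is to run the same one-line check from a plain cmd window and from the IDE's console (tf.test.is_gpu_available is the TF 1.x-era call; on TF 2.1+ prefer tf.config.list_physical_devices('GPU')):

import tensorflow as tf

# TF 1.x-era check; prints True if TensorFlow can see and use a GPU.
print(tf.test.is_gpu_available())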
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
I am receiving a similar error training a pix2pix model.
Hardware:
1080 Ti in slot 1
2080 Ti in slot 2
MSI Tomahawk AC X299 mobo (both PCIe x16 slots)
OS: Windows 10
TensorFlow 1.15
CUDA 10, cuDNN 7.6
I only get the allocation error when trying to use the 2080 Ti in the second PCIe slot on my motherboard. I need it there for thermal reasons (the 1080 Ti overheats with the 2080 Ti above it).
I have tried with just the 2080 Ti installed (this succeeded), as well as using CUDA_VISIBLE_DEVICES to select only the 2080 Ti when both were installed (this caused the error).
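For reference, a minimal sketch of that selection (assuming the 2080 Ti enumerates as device 1; check nvidia-smi for the actual index, and the variable must be set before TensorFlow initializes CUDA):

import os

# Must be set before TensorFlow first touches CUDA, hence before the import.
# Device index 1 is an assumption -- confirm against nvidia-smi.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf
print(tf.config.experimental.list_physical_devices('GPU'))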
Is there some hardware limitation with allocating to the second PCIe device?