CUDA_ERROR_OUT_OF_MEMORY: out of memory when there is actually no large tensor to allocate

2019-12-27 17:30:16.733664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-27 17:36:44.441597: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-27 17:36:44.443680: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2019-12-27 17:36:44.445285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2019-12-27 17:36:44.454656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1376 MB memory) -> physical GPU (device: 0, name: GeForce GTX 860M, pci bus id: 0000:01:00.0, compute capability: 5.0)
2019-12-27 17:36:44.582148: I tensorflow/stream_executor/cuda/cuda_driver.cc:830] failed to allocate 1.34G (1443813632 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-12-27 17:36:44.674909: I tensorflow/stream_executor/cuda/cuda_driver.cc:830] failed to allocate 1.21G (1299432192 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-12-27 17:36:44.736202: I tensorflow/stream_executor/cuda/cuda_driver.cc:830] failed to allocate 1.09G (1169488896 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory

This time it’s a test program, and no large tensors are used in the computation. Why does this error happen again?
(Timestamps are in UTC+8.)

Hi,

Given that your GPU appears to have only ~1.3 GB of free memory (per the log), OOM errors are likely even for modest workloads. Moreover, by default TensorFlow allocates most of the available GPU memory up front, so even a small task can trigger this error. You could try the code snippets in this post, for either TF1 or TF2, to enable dynamic memory growth and see if that fixes your issue: https://devtalk.nvidia.com/default/topic/1068031/cudnn/geforce-gtx-1660-super-cuda-not-working-in-anaconda/post/5411179/#5411179
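For reference, the TF2 version of the fix looks roughly like this (a sketch, assuming TensorFlow 2.x is installed; it must run before any GPU tensors are created):

```python
import tensorflow as tf

# Enable dynamic memory growth so TensorFlow allocates GPU memory
# on demand instead of grabbing nearly all of it at startup.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

print('GPUs found:', len(gpus))
```

In TF1 the equivalent is setting `config.gpu_options.allow_growth = True` on a `tf.ConfigProto` and passing it to `tf.Session(config=config)`. Note that with growth enabled you can still hit OOM if the model genuinely needs more memory than the card has; this only prevents the up-front over-allocation.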