What's the difference between the CPU version of TF and the GPU version when the GPU is disabled?

2019-12-13 15:12:36.715761: W tensorflow/core/grappler/optimizers/implementation_selector.cc:310] Skipping optimization due to error while loading function libraries: Invalid argument: Functions '__inference___backward_cudnn_lstm_with_fallback_13456_14935' and '__inference___backward_standard_lstm_16028_16629_specialized_for_StatefulPartitionedCall_1_at___inference_distributed_function_16817' both implement 'lstm_7158499a-7bdb-44d9-aab7-5a6ed24ca091' but their signatures do not match.

What’s the effect of this mismatch?
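From what I can tell, Keras registers two implementations of the LSTM layer (a cuDNN kernel and a "standard" one), and Grappler's implementation selector picks between them at runtime; the warning seems to say it couldn't match the two recorded functions and skipped that optimization. My rough understanding (not verified, just a sketch) is that only the default layer configuration stays eligible for the cuDNN path, roughly:

```python
import tensorflow as tf

# Default arguments keep the layer eligible for the cuDNN kernel
# (activation="tanh", recurrent_activation="sigmoid",
#  recurrent_dropout=0.0, unroll=False, use_bias=True).
cudnn_eligible = tf.keras.layers.LSTM(128)

# Changing any of those forces the generic implementation, e.g.:
standard_only = tf.keras.layers.LSTM(128, recurrent_dropout=0.2)
```

If that's right, the worst case should just be a fallback to the standard implementation rather than a wrong result, but I'd like to confirm.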

2019-12-13 16:44:31.327560: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Resource exhausted: OOM when allocating tensor with shape[100,32,300] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node transpose}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
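The tensor in that message is small by itself (100 * 32 * 300 float32 values * 4 bytes ≈ 3.8 MB), so I assume the process is running out of host RAM from the total of all live allocations rather than from this one tensor. Following the hint, this is roughly how I understand report_tensor_allocations_upon_oom can be enabled with the session-style API (just an illustrative sketch with a placeholder of the same shape, only relevant if you drive the graph through a session):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Dump the list of allocated tensors if an OOM happens during this run.
run_options = tf.compat.v1.RunOptions(report_tensor_allocations_upon_oom=True)

# Placeholder graph mirroring the shape from the OOM message.
x = tf.compat.v1.placeholder(tf.float32, shape=[100, 32, 300])
y = tf.transpose(x, perm=[1, 0, 2])

with tf.compat.v1.Session() as sess:
    sess.run(y,
             feed_dict={x: np.zeros((100, 32, 300), np.float32)},
             options=run_options)
```

With Keras model.fit there doesn't seem to be a direct way to pass RunOptions, so reducing the batch size or sequence length is what I usually end up trying.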

This often happens when I use TF-GPU with the GPU disabled; with TF-CPU, the error never occurs. I wonder why.
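The usual ways of disabling the GPU are along these lines (I don't think the exact mechanism matters here; this is just for reference):

```python
import tensorflow as tf

# Hide all GPUs from TensorFlow before building any ops; alternatively,
# export CUDA_VISIBLE_DEVICES=-1 in the environment before starting Python.
tf.config.experimental.set_visible_devices([], "GPU")
```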

Hi,

I wasn’t able to find much information on this. However, I would guess that TF-GPU keeps CPU support for compatibility, so it can still run in cases where the GPU driver/CUDA cannot be loaded, but I would think the main point of TF-GPU is to be run on a GPU.

For TF-CPU, I would think there may be extra CPU-specific optimizations, but I don’t know why those wouldn’t also be added to TF-GPU.
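One quick thing you could check is what each installed build actually reports about itself and which devices it sees after you disable the GPU, something along these lines:

```python
import tensorflow as tf

print("TF version:     ", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:   ", tf.config.experimental.list_physical_devices("GPU"))
```

If the GPU build reports no visible GPUs, both packages should be running the same CPU kernels, so any difference you see is more likely down to how the two packages are built, though that's just a guess.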

This may be a better question for https://github.com/tensorflow/tensorflow/issues.