GPU memory usage is too high with Keras

Hello, I’m doing deep learning on my Nano with an HDF5 dataset, so it shouldn’t eat as much memory as loading all the images into memory at once. It works: on my Ubuntu VM it uses about 1 GB of memory, but that runs on the CPU, not CUDA. On my Nano, when I load the HDF5 dataset and start training, it uses about 2.4 GB of memory with the same model. Is it because I’m using the GPU, so processing is faster and more memory is in use, or something else? Since I’m feeding the network images in batches of 64, that shouldn’t change anything, I think…
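
For context, what I mean is loading batch by batch from the HDF5 file, roughly along these lines (simplified sketch, assuming h5py; the file and dataset names are just placeholders):

import h5py
import numpy as np

def hdf5_batch_generator(path, batch_size=64):
    # Read one batch at a time from disk instead of loading everything into RAM
    with h5py.File(path, "r") as f:
        images = f["images"]  # placeholder dataset names
        labels = f["labels"]
        n = images.shape[0]
        while True:
            for start in range(0, n, batch_size):
                end = min(start + batch_size, n)
                yield np.asarray(images[start:end]), np.asarray(labels[start:end])

# Used with e.g. model.fit_generator(hdf5_batch_generator("train.h5"), steps_per_epoch=num_batches)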

Hi,

Would you mind running the training task in CPU mode on the Nano first?

There are many components in your experiment that can consume memory.
Assuming you are using TensorFlow as the backend framework, the memory may be occupied by TensorFlow itself rather than by the HDF5 loader.
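
For example, one way to force CPU-only execution (assuming the TensorFlow backend) is to hide the GPU from CUDA before TensorFlow is imported:

import os

# Hide the GPU so TensorFlow falls back to CPU-only execution
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import tensorflow as tf  # import after setting the variable so it takes effect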

Thanks.

@AastaLLL Thanks for the reply! I tried what you suggested, and it really is TensorFlow! When I start training on the CPU it uses only about 800 MB of RAM. Thanks for that info! By the way, can I do anything about it? Is there some way to lower it?

Hi,

You can limit TensorFlow’s GPU memory allocation with this configuration:

import tensorflow as tf

# Allow TensorFlow to allocate at most ~40% of the GPU memory (TF 1.x API)
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config)

But this may not always work, especially if the model’s minimum memory requirement is higher than the configured amount.
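
If you are using standalone Keras on top of TensorFlow (as the topic title suggests), also note that the session has to be registered with Keras, otherwise Keras creates its own default session. A minimal sketch, assuming Keras 2.x with the TF 1.x backend, with allow_growth shown as an alternative option:

import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
# config.gpu_options.allow_growth = True  # alternative: grow the allocation on demand

# Register the memory-limited session so Keras actually uses it for training
K.set_session(tf.Session(config=config))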

Thanks.