On a Jetson
To save memory I’m trying to fit a model using
steps_per_epoch = int(trainNumber/batchSize),
validation_steps = int(validationNumber/batchSize),
verbose = False,
callbacks = callback)
I’m using a Sequence to load images of shape (400, 400, 3) and a batch size of 4. After loading 2 sequences I get an out-of-memory error on the GPU. I was expecting the GPU memory to be cleared after each sequence. Am I missing something?
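For scale, a back-of-the-envelope check (assuming the images are loaded as float32, 4 bytes per value) shows that a single batch is only a few MB, so the batches themselves shouldn’t be what exhausts memory:

```python
# Rough per-batch memory for 4 images of shape (400, 400, 3),
# assuming float32 inputs (4 bytes per value).
height, width, channels = 400, 400, 3
batch_size = 4
bytes_per_value = 4  # float32

batch_bytes = batch_size * height * width * channels * bytes_per_value
print(batch_bytes, "bytes ≈", round(batch_bytes / 2**20, 1), "MiB")
# 7680000 bytes ≈ 7.3 MiB per batch
```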
Update: I ran top and it was the RAM that was exhausted. I tried the same code on a Raspberry Pi (with TensorFlow 1.14.0) and memory usage stayed below 1.5 GB during training.
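For context, the loading pattern is along these lines (a minimal sketch of the Sequence idea; the class name and numpy stand-ins for the real on-disk image loading are illustrative, and the actual class subclasses tf.keras.utils.Sequence):

```python
import math
import numpy as np

class ImageSequence:
    """Yields one batch at a time, so only batch_size images
    should be resident in memory at once. (Sketch: the real class
    subclasses tf.keras.utils.Sequence and decodes image files;
    here zero arrays stand in for loaded images.)"""

    def __init__(self, num_images, batch_size=4, shape=(400, 400, 3)):
        self.num_images = num_images
        self.batch_size = batch_size
        self.shape = shape

    def __len__(self):
        # Number of batches per epoch.
        return math.ceil(self.num_images / self.batch_size)

    def __getitem__(self, idx):
        start = idx * self.batch_size
        n = min(self.batch_size, self.num_images - start)
        # Real code would open and decode n image files here.
        return np.zeros((n, *self.shape), dtype=np.float32)

seq = ImageSequence(num_images=10)
print(len(seq), seq[2].shape)  # 3 batches; the last one holds 2 images
```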