Issues with RAM usage


I’m facing some issues while training a simple neural network on the Jetson Nano with Keras/TensorFlow-GPU. Within a short time the RAM usage hits the limit and the training crashes. This happens even though I don’t use that many images and have already disabled the display manager.

Does anybody know if these problems can be avoided? Or is the Jetson Nano not suited for training neural networks, only for running them?

Please check

To overcome the RAM shortage, you will need to mount a swap file.
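For reference, creating and enabling a swap file typically looks like this (the 8 GB size and the path are illustrative; adjust them for your card):

```shell
# Create an 8 GB swap file (size and path are examples)
sudo fallocate -l 8G /mnt/8GB.swap
sudo chmod 600 /mnt/8GB.swap       # swap files must not be world-readable
sudo mkswap /mnt/8GB.swap          # format it as swap space
sudo swapon /mnt/8GB.swap          # enable it immediately
# Make it persistent across reboots
echo '/mnt/8GB.swap none swap sw 0 0' | sudo tee -a /etc/fstab
```

Verify the result with `free -h` or `swapon --show`.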

You can do some limited training; see dusty_nv’s answer here. However, if your system is swapping, it may not run out of memory, but things will go very slowly. A microSD card is not very fast compared to actual RAM.

Thank you for the suggestions. I have set the swap to 8GB, but this is not being utilised. TensorFlow allocates a certain amount of RAM to the GPU for training, but this seems to be very low (~80-500MB) since a large portion of RAM is already in use by the system. Since TensorFlow does not use the swap partition for allocating GPU memory, these solutions do not work.

Is there any way to limit the “normal” RAM usage so more memory is available to allocate for GPU usage by TensorFlow?
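One knob worth trying on the TensorFlow side (a sketch, not something confirmed in this thread): since the Nano’s CPU and GPU share the same physical RAM, letting TensorFlow grow its GPU allocation on demand, or pinning an explicit limit, can stop it from grabbing a fixed slice up front. The calls below are real TF 2.x APIs; the 2048 MB limit is an illustrative value.

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Option A: allocate GPU memory incrementally, only as needed.
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # Option B (alternative, pick one): cap the pool at a hard limit.
    # Must be called before any logical GPU device is created.
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=2048)])
```

On TF 1.x builds for Jetson, the equivalent is `tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))` passed to the session.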

If you already did a “sudo systemctl isolate multi-user.target” to switch to CLI-only mode, then there probably isn’t much more you can do. You can run “systemctl” on its own to see running services, stop the ones you don’t need with “systemctl stop”, and use “systemctl disable” to keep them from starting again at boot. If you don’t need Docker, for example, you can “sudo systemctl stop docker”.
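Put together, a typical cleanup session might look like this (docker is just an example; check your own service list first):

```shell
sudo systemctl isolate multi-user.target    # drop to CLI-only mode (GUI off)
systemctl --type=service --state=running    # list services currently running
sudo systemctl stop docker                  # stop a service right now
sudo systemctl disable docker               # keep it from starting at boot
```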

There should be 80-90% available with the GUI gone. Note that the “free” command is confusing:

(from a headless server:)

$ free -h
              <b>total        used</b>        free      shared  buff/cache   <b>available</b>
Mem:            <b>31G        2.6G</b>        8.8G        8.1M         19G         <b>28G</b>
Swap:          <b>7.4G         46M</b>        7.4G

Just ignore columns other than the first two and the last. The rest is used by the OS for cache and is freed when needed.
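To make that concrete, here is a small illustrative snippet (not from the thread) that pulls the numbers that matter out of `free -b` output. Note how “available” is far larger than “free”, because the buff/cache portion can be reclaimed:

```python
# Sample `free -b` output matching the ~31G server above (values in bytes).
sample = """\
              total        used        free      shared  buff/cache   available
Mem:    33285996544  2791728742  9448928051     8493465 21045339751 30064771072
Swap:    7945689497    48234496  7897455001
"""

def parse_free(text):
    """Return (total, used, available) in bytes from `free -b` output."""
    for line in text.splitlines():
        if line.startswith("Mem:"):
            fields = line.split()
            # Columns: total, used, free, shared, buff/cache, available
            return int(fields[1]), int(fields[2]), int(fields[6])
    raise ValueError("no Mem: line found")

total, used, available = parse_free(sample)
print(f"{available / total:.0%} of RAM is actually available")
```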