TensorFlow always out of memory

Hey, I tried running an FCN-8-like network using TensorFlow in Python, but whatever I try, the machine always runs out of memory and kills the process.
I even tried a frozen model optimized with TensorRT for FP16, but nothing helps.
When I start the program, the machine already uses around 1.4 of the free 3.87 GB; the program then increases its memory usage until it reaches the maximum and the process gets killed. Even setting per_process_gpu_memory_fraction to 0.1 does not help.
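For reference, this is roughly the session setup I experimented with (a sketch using the TensorFlow 1.x API; the 0.1 fraction and the allow_growth flag are the values I tried):

```python
# Sketch of the session setup (TensorFlow 1.x API).
import tensorflow as tf

config = tf.ConfigProto()
# Cap how much GPU memory TF may claim up front...
config.gpu_options.per_process_gpu_memory_fraction = 0.1
# ...and/or let the allocator grow on demand instead of pre-allocating.
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    # run inference here
    pass
```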

What should I do? If I cannot even run a simple network like this one, the Nano becomes effectively useless for me, and I am left asking what models the Nano is actually able to run.

Hi jonaswolf4793, see the benchmarks section of this blog for some example networks whose performance we released for Jetson Nano. The inferencing for these was done through TensorRT, which has improved performance and memory efficiency. You can see these GitHub repos for examples of how to use TensorRT with TensorFlow:

For a quick fix to try to get your model running in your TensorFlow code as-is, have you tried mounting a swap file?


Hi Jonas,
in addition to what dusty_nv wrote, an FCN is a very parameter-intensive model and will need a lot of memory. Nvidia used a U-Net in their benchmark, with a 300x300 input resolution. The input resolution will be key to how much memory is used. Can you give more details about your model, such as the input size and number of parameters, or which and how many layers you are using?
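To give a feeling for the numbers, here is a small back-of-the-envelope helper (plain Python; the helper names are my own, not from any library) that estimates a conv layer's parameter count and the memory one FP32 activation map takes. Even the first 3x3 conv of a VGG-style FCN, going from 3 to 64 channels at 300x300, already produces about 23 MB of activations:

```python
# Back-of-the-envelope memory estimates for a conv layer (hypothetical helper names).

def conv_params(kernel, in_ch, out_ch):
    """Weights (kernel * kernel * in_ch * out_ch) plus one bias per output channel."""
    return kernel * kernel * in_ch * out_ch + out_ch

def activation_bytes(height, width, channels, bytes_per_value=4):
    """Memory for one FP32 activation map of the given shape."""
    return height * width * channels * bytes_per_value

# First conv of a VGG-style FCN: 3x3 kernel, 3 -> 64 channels, 300x300 input.
params = conv_params(3, 3, 64)                    # 1792 parameters
act_mb = activation_bytes(300, 300, 64) / 1e6
print(params, round(act_mb, 1))                   # 1792 23.0
```

Doubling the input resolution roughly quadruples the activation memory, which is why the input size matters so much here.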

Also, I suspect that the default 1.4 GB memory usage of the Ubuntu desktop could be reduced with a lightweight desktop; I'm hoping Nvidia or the community will create something in the future. Raspbian and Raspbian Lite consume less than 200 MB of memory on the RPi. The first thing I did was deactivate the beautiful desktop background and the high visual settings, but a lot more can surely be done.