How to set a limit on GPU memory usage?

The TX2 has 8 GB of memory shared between the GPU and CPU. How is this memory divided between them, or is it allocated dynamically?

For example, a TensorFlow model running on the GPU uses around 7.0 GB of memory, as shown below.

GPU memory usage: used = 7400.89, free = 452.121 MB, total = 7853.01 MB
GPU memory usage: used = 7400.91, free = 452.105 MB, total = 7853.01 MB
GPU memory usage: used = 7701.21, free = 151.805 MB, total = 7853.01 MB
GPU memory usage: used = 7745.49, free = 107.52 MB, total = 7853.01 MB
GPU memory usage: used = 7756.38, free = 96.6367 MB, total = 7853.01 MB
GPU memory usage: used = 7757.27, free = 95.7383 MB, total = 7853.01 MB
GPU memory usage: used = 7757.3, free = 95.707 MB, total = 7853.01 MB
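(For reference, a minimal sketch of how such a log can be produced from Python through the CUDA runtime's cudaMemGetInfo; the library name is an assumption for a standard JetPack install.)

import ctypes

# Sketch: query free/total device memory through the CUDA runtime.
# "libcudart.so" assumes a standard JetPack install; adjust the path if needed.
cudart = ctypes.CDLL("libcudart.so")

free = ctypes.c_size_t()
total = ctypes.c_size_t()
ret = cudart.cudaMemGetInfo(ctypes.byref(free), ctypes.byref(total))
assert ret == 0, "cudaMemGetInfo failed with error %d" % ret

mb = 1024.0 * 1024.0
print("GPU memory usage: used = %.2f, free = %.4g MB, total = %.2f MB"
      % ((total.value - free.value) / mb, free.value / mb, total.value / mb))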

When most of the memory is used by the GPU, does that affect the operation of the CPU?

So I’d like to ask: is there a way to limit GPU memory usage from the Jetson system side, without modifying the running TensorFlow code?

Thanks.

Hi,

We don’t divide the physical memory into a CPU part and a GPU part.

If memory is allocated by the CPU, it is CPU memory.
If memory is allocated by the GPU, it is GPU memory.
The memory subsystem controls and monitors this.
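You can see this sharing directly: the total of ~7853 MB in your log matches the full system RAM of the 8 GB TX2, which suggests cudaMemGetInfo is reporting against the shared DRAM. So a large CPU-side allocation should shrink the memory visible as free to the GPU. A rough sketch (library name assumed for a JetPack install, as in the query above):

import ctypes
import numpy as np

cudart = ctypes.CDLL("libcudart.so")  # library name assumed for a JetPack install

def free_mb():
    free, total = ctypes.c_size_t(), ctypes.c_size_t()
    assert cudart.cudaMemGetInfo(ctypes.byref(free), ctypes.byref(total)) == 0
    return free.value / (1024.0 * 1024.0)

print("free before CPU allocation: %.1f MB" % free_mb())
buf = np.ones((512, 1024, 1024), dtype=np.uint8)  # ~512 MB, written so pages are committed
print("free after CPU allocation:  %.1f MB" % free_mb())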

TensorFlow provides an API to limit GPU memory allocation:
https://www.tensorflow.org/guide/using_gpu

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # cap at 40% of GPU memory
session = tf.Session(config=config)

This limits the TensorFlow process to allocating no more than 40% of the GPU memory.
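The same guide also documents an allow_growth option, if you prefer TensorFlow to start small and grow its allocation on demand rather than reserving a fixed fraction up front:

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory only as needed
session = tf.Session(config=config)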

Thanks.