The TX2 has 8 GB of memory shared between the GPU and the CPU. How is that memory divided between the two, and is the split handled dynamically?
For example, I have a TensorFlow model running on the GPU that takes around 7.0 GB of memory, as shown below.
GPU memory usage: used = 7400.89, free = 452.121 MB, total = 7853.01 MB
GPU memory usage: used = 7400.91, free = 452.105 MB, total = 7853.01 MB
GPU memory usage: used = 7701.21, free = 151.805 MB, total = 7853.01 MB
GPU memory usage: used = 7745.49, free = 107.52 MB, total = 7853.01 MB
GPU memory usage: used = 7756.38, free = 96.6367 MB, total = 7853.01 MB
GPU memory usage: used = 7757.27, free = 95.7383 MB, total = 7853.01 MB
GPU memory usage: used = 7757.3, free = 95.707 MB, total = 7853.01 MB
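(For context, a readout like the one above can be reproduced independently of the model with a minimal sketch such as the following; pycuda is just my assumed tool here for illustration and is not part of the running model.)

# Minimal sketch: print a used/free/total line like the log above.
# Assumes pycuda is installed on the board; the running TensorFlow process is not touched.
import pycuda.autoinit   # creates a CUDA context on the default device
import pycuda.driver as cuda

free_b, total_b = cuda.mem_get_info()   # free and total device memory, in bytes
mb = 1024.0 ** 2
print("GPU memory usage: used = %.2f, free = %.4g MB, total = %.2f MB"
      % ((total_b - free_b) / mb, free_b / mb, total_b / mb))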
When most of the memory is consumed by the GPU like this, does that not affect the operation of the CPU?
So I'd like to ask: is there a way to limit GPU memory usage from the Jetson system side, without modifying the running TensorFlow code?
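For reference, I know the usual approach is to cap the allocation inside the TensorFlow script itself, roughly like the sketch below (TF 1.x API; the fraction value is only an example), but that is exactly the kind of code change I'm hoping to avoid:

# Sketch of the in-code memory cap (TF 1.x) that I would like to avoid.
import tensorflow as tf

gpu_options = tf.GPUOptions(
    per_process_gpu_memory_fraction=0.5,  # cap this process at ~50% of device memory (example value)
    allow_growth=True)                    # allocate memory incrementally instead of all at once
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
# ... build and run the model with `sess` as usual ...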
Thanks.