I'm doing some TensorFlow development on a Jetson, and I've found that if GPU memory is set to automatic allocation, it eats all of my memory. On my desktop, however, I need automatic allocation to handle memory sensibly. How can I tell whether the current GPU is using memory shared with the CPU or its own on-board memory?
In other words: how can I identify whether the memory used by the current graphics card comes from the system RAM modules or from the card's own on-board memory?
You can limit the maximum memory used by TensorFlow with the following configuration:
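A minimal sketch of the two usual approaches in TensorFlow 2.x: enabling on-demand memory growth, or setting a hard cap with a logical device configuration. The 2048 MB limit is just an example value; adjust it for your board, and apply either option before any tensors are created.

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Option 1: allocate GPU memory on demand instead of grabbing it all up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Option 2 (alternative to Option 1): hard cap, e.g. 2048 MB.
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=2048)])
```

On a Jetson this matters more than on a desktop, because the "GPU memory" TensorFlow grabs is the same physical RAM the rest of the system is using.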
Note that the integrated GPU on all Jetson devices shares memory with the system (the iGPU is wired directly to the memory controller), so there is no separate GPU memory. Discrete GPUs in desktop PCs have their own memory.
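Since every Jetson iGPU shares system RAM, one pragmatic heuristic for your script is simply to detect whether it is running on a Tegra platform. The file path below (`/etc/nv_tegra_release`, created by the JetPack/L4T installation) is an assumption about your setup; discrete-GPU desktops won't have it.

```python
import os

def gpu_memory_is_shared() -> bool:
    """Heuristic: on Jetson (Tegra/L4T) platforms the iGPU shares system RAM.

    Assumes a standard JetPack install, which ships /etc/nv_tegra_release.
    Returns False on typical desktop machines with a discrete GPU.
    """
    return os.path.exists("/etc/nv_tegra_release")

if gpu_memory_is_shared():
    print("Integrated GPU: memory is shared with the CPU")
else:
    print("Assuming discrete GPU with its own on-board memory")
```

This lets one codebase pick a hard memory cap on Jetson and automatic allocation on the desktop.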