Jetson TX2 GPU memory?

Only when a friend asked me, after my impulse order of the TX2, did I realize that I knew quite a few of the specs but had no idea about the GPU memory, and I haven't found anything about it so far.

Before I try to dig into this the hard way on my own with the on-board tools available:
Can somebody shed some light? (It must be shared memory, as opposed to anything I know from NVIDIA so far, but how much of it is accessible, and with what throughput?)

TIA
G.

The GPUs on Tegra TK1, TX1, and TX2 do not have their own memory. The GPU is hard-wired to the memory controller and shares system RAM. This also means the GPU is not limited by PCIe bus speeds, and the PCIe management functions of a discrete GPU don't apply.

I do not know what limits there might be on how much system RAM can be used by the GPU.

Basically all of system RAM can be used by the GPU; maybe 100 MB or so is reserved for the kernel. Using CUDA zero-copy mapped memory, the CPU and GPU can physically share the same buffers without memcpys.
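A minimal sketch of what that zero-copy path looks like in code (the buffer size and kernel here are illustrative, not from the thread; on Tegra the "device pointer" resolves to the same physical DRAM as the host pointer, so no copy ever happens):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel that operates directly on host-mapped (zero-copy) memory.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *h_buf = nullptr, *d_buf = nullptr;

    // Allocate host memory that is mapped into the GPU address space.
    cudaSetDeviceFlags(cudaDeviceMapHost);
    cudaHostAlloc(&h_buf, n * sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer(&d_buf, h_buf, 0);

    for (int i = 0; i < n; ++i) h_buf[i] = 1.0f;

    // The GPU reads and writes the very same buffer the CPU filled.
    scale<<<(n + 255) / 256, 256>>>(d_buf, n, 2.0f);
    cudaDeviceSynchronize();

    printf("h_buf[0] = %f\n", h_buf[0]);
    cudaFreeHost(h_buf);
    return 0;
}
```

On a discrete GPU the same API works but the accesses go over PCIe; on Tegra it is genuinely the same RAM, which is the point being made above.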

Thanks, guys, for confirming what my gut told me, especially that the GPU can use most of the 8 GB of RAM.

Hey @dusty_nv, does NVIDIA have any real-world deep learning numbers on how this Pascal / ARM 8 GB shared-memory system compares to a 1050 4 GB GDDR5 / Kaby Lake setup, given the quite different trade-offs?

Thanks
G.

Can I use it directly on a Jetson Nano?
My Jetson Nano shows 2 GB of free memory as soon as it is turned on. Does that mean I can map at most 2 GB of memory to the GPU?

Jetson TX2 4GB will allow developers to run neural networks with double the compute performance (https://developer.nvidia.com/embedded/faq)

The Nano is wired the same way as the TX2, so whatever RAM is not used by the operating system can be used by the GPU. The Nano doesn't have as many CUDA cores, so it probably won't consume as much RAM for a given application (think of what happens when only part of the cores are doing something versus all of them; if a TX2 has 256 cores and uses them all, then a Nano with 128 cores would tend to keep fewer buffers in flight at once).

FYI, on a TX2 with R32.1 and the GUI running (which uses GPU cores), I am seeing about 600 MB of RAM used by everything put together without CUDA. It might be practical to say roughly 1.25 GB could be used by a Nano for CUDA (this assumes the operating system requires about 0.75 GB and will kill applications approaching that mark due to lack of memory; a little headroom of spare RAM is needed).
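To get a rough idea of that headroom on a running board, the free RAM can be checked from the shell (the awk fields pick the "total" and "available" columns of `free`; on Tegra this is the same pool the GPU allocates from):

```shell
# Print total and available system RAM in MB. Whatever is "available"
# minus a safety margin is roughly what CUDA allocations can take.
free -m | awk '/^Mem:/ {print "total MB: " $2 ", available MB: " $7}'
```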

If you run the code from https://devtalk.nvidia.com/default/topic/491518/cuda-programming-and-performance/cudamemgetinfo-how-does-it-work-33-/post/3522842/#3522842, you'll see that even on a system where gdm has been killed, some amount of GPU RAM is still consumed by unknown things.
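The linked code essentially boils down to a `cudaMemGetInfo` call; a minimal standalone version (assuming the CUDA toolkit is installed) would be:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_b = 0, total_b = 0;
    // On Tegra, both numbers refer to the shared system RAM pool,
    // so "total" roughly matches the board's RAM and "free" shrinks
    // as the OS and other processes consume memory.
    cudaError_t err = cudaMemGetInfo(&free_b, &total_b);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("free:  %zu MB\n", free_b  >> 20);
    printf("total: %zu MB\n", total_b >> 20);
    return 0;
}
```

Comparing its "free" figure against `free -m` output before and after killing the display manager shows how much the GUI and the mystery reservations are taking.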