TX1: ARM and GPU communication and memory handling

Hello,

On the TX1, we are currently launching two different applications on two different ARM cores.
Only one of these ARM cores will use the GPU. How is communication handled between the ARM cores and the GPU?

The 4 GB of available memory (LPDDR4 on the TX1) is exposed to both the ARM cores and the GPU. Can we limit the memory, say to 2 GB, for GPU processing only, leaving the remainder for the ARM cores and other processes? How can a user control the memory distribution across applications?

Regards,
njs

Hi,

The GPU has its own scheduler.
If your application occupies all of the GPU resources, other jobs may need to wait until resources are free.

We don’t provide a mechanism to limit the memory for a specific process.
You may be able to handle this yourself by checking the information returned by cudaMemGetInfo():
[url]https://devtalk.nvidia.com/default/topic/1013464/jetson-tx2/gpu-out-of-memory-when-the-total-ram-usage-is-2-8g/post/5168834/#5168834[/url]
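To illustrate the suggestion above, here is a minimal host-side sketch of a cudaMemGetInfo() check. The 2 GiB reserve and the 256 MiB request size are hypothetical values chosen to match the asker's desired split; this is an application-level convention, not an NVIDIA-provided limit:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;

    // Query how much of the (unified, on TX1) memory the GPU can still use.
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaMemGetInfo failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    std::printf("free: %zu MiB / total: %zu MiB\n",
                free_bytes >> 20, total_bytes >> 20);

    // Illustrative policy: keep roughly 2 GiB of the shared LPDDR4 for the
    // ARM side by refusing GPU allocations that would dip below that mark.
    const size_t reserve = 2ull << 30;    // 2 GiB reserved for CPU processes
    const size_t request = 256ull << 20;  // hypothetical 256 MiB GPU buffer
    if (free_bytes < request + reserve) {
        std::printf("skipping allocation to keep the CPU reserve intact\n");
        return 0;
    }

    void* buf = nullptr;
    if (cudaMalloc(&buf, request) == cudaSuccess) {
        // ... use the buffer ...
        cudaFree(buf);
    }
    return 0;
}
```

Compile with nvcc on the Jetson. Note the check is advisory only: another process can still allocate memory between the cudaMemGetInfo() query and the cudaMalloc() call, so treat it as a soft budget rather than a hard partition.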

Thanks.