TX1: ARM and GPU communication and memory handling


On the TX1, we are currently launching two different applications on two different ARM cores. Only one of these ARM cores will use the GPU. How is communication handled between the ARM cores and the GPU?

The 4 GB of memory available (LPDDR4 on the TX1) is exposed to both the ARM cores and the GPU. Can we limit the memory so that, say, only 2 GB is used for GPU processing and the remainder for the ARM cores and other processes? How can a user handle memory distribution across applications?



The GPU has its own scheduler.
If your application occupies all of the GPU resources, other jobs may have to wait for resources to become free.

We don't provide a mechanism to limit the memory available to a specific process.
You may be able to manage this yourself by checking the values returned by cudaMemGetInfo().
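A minimal sketch of that idea: query free/total device memory with cudaMemGetInfo() and apply an application-level budget before allocating. The 2 GiB headroom and 256 MiB allocation size below are hypothetical values chosen for illustration; this is an application-side policy, not a driver feature.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;

    // Query current free/total device memory. On the TX1 the GPU shares
    // the 4 GB LPDDR4 with the CPU, so "free" reflects system-wide pressure.
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    printf("GPU memory: %zu MiB free of %zu MiB total\n",
           free_bytes >> 20, total_bytes >> 20);

    // Example policy (an assumption, not a CUDA mechanism): skip the
    // allocation if it would leave less than a chosen headroom, e.g.
    // 2 GiB reserved for the ARM cores and other processes.
    const size_t headroom = 2ULL << 30;   // hypothetical 2 GiB headroom
    const size_t want     = 256ULL << 20; // hypothetical 256 MiB request
    if (free_bytes < want + headroom) {
        fprintf(stderr, "Skipping allocation: would cut into reserved headroom\n");
        return 0;
    }

    void* buf = nullptr;
    if (cudaMalloc(&buf, want) == cudaSuccess) {
        printf("Allocated %zu MiB on the device\n", want >> 20);
        cudaFree(buf);
    }
    return 0;
}
```

Note that cudaMemGetInfo() reports a system-wide snapshot, so two applications checking it concurrently can still race each other; it gives a best-effort budget, not a hard partition.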