I'm new to CUDA, and I now need to run a CUDA program on a cluster. The program works fine on my single-GPU machine with a GTX 980 Ti, but on the cluster, which has several Titan X GPUs, it fails with an out-of-memory error.
How can I check which GPU has free memory at runtime and set the device accordingly?
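Here is a minimal sketch of what I have in mind (the function name `pickFreestDevice` is just my own, and I'm ignoring error checking for brevity): loop over the devices with `cudaGetDeviceCount`, query each one with `cudaMemGetInfo`, and select the one reporting the most free memory via `cudaSetDevice`. Is this a reasonable approach on a shared cluster, or is there a better way?

```cpp
#include <cstdio>
#include <cstddef>
#include <cuda_runtime.h>

// Sketch: pick the device with the most free memory at startup.
int pickFreestDevice() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    int bestDevice = 0;
    size_t bestFree = 0;

    for (int dev = 0; dev < deviceCount; ++dev) {
        // cudaMemGetInfo reports memory for the current device,
        // so switch to each device before querying it.
        cudaSetDevice(dev);

        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);
        printf("Device %d: %zu MiB free of %zu MiB\n",
               dev, freeBytes >> 20, totalBytes >> 20);

        if (freeBytes > bestFree) {
            bestFree = freeBytes;
            bestDevice = dev;
        }
    }

    // All subsequent allocations and kernel launches in this thread
    // will target the chosen device.
    cudaSetDevice(bestDevice);
    return bestDevice;
}

int main() {
    int dev = pickFreestDevice();
    printf("Selected device %d\n", dev);
    return 0;
}
```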