Best way to tell which GPU is free on a multi-GPU server?

Hi all, so far I have only coded on a single-GPU workstation, but I may need to move to a multi-GPU environment. I want to submit independent jobs to each GPU.

The question is: how do I tell which GPU is free? I would like to do this from within my C++ program itself, rather than from the command line before launching each job.
method 1: probe each device with cudaSetDevice?
method 2: use some interface from NVML?
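A caveat on method 1: in the default compute mode, `cudaSetDevice` succeeds even on a busy GPU, so it only detects "in use" when the devices are in exclusive-process mode. A rough heuristic that does work from the runtime API is to compare free memory per device with `cudaMemGetInfo`. This is a minimal sketch, not a definitive implementation (compile with nvcc; requires an NVIDIA driver):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) return 1;
    for (int i = 0; i < count; ++i) {
        cudaSetDevice(i);                 // establish a context on device i
        size_t freeB = 0, totalB = 0;
        cudaMemGetInfo(&freeB, &totalB);  // free/total memory on this device
        printf("GPU %d: %zu / %zu MiB free\n", i,
               freeB >> 20, totalB >> 20);
    }
    return 0;
}
```

A device whose free memory is far below its total is likely running someone else's job, but this is only a heuristic; NVML gives a more direct answer.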

Any advice is welcome, thanks.

Problem solved. I wrote a checker using NVML's nvmlDeviceGetComputeRunningProcesses.
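For anyone finding this later, here is a sketch of how such a checker might look; the OP did not post their code, so this is my own assumption of typical NVML usage, not their implementation. `nvmlDeviceGetComputeRunningProcesses` called with a count of 0 returns `NVML_SUCCESS` when no compute processes are running on the device, and `NVML_ERROR_INSUFFICIENT_SIZE` otherwise. Link with `-lnvidia-ml`:

```cpp
#include <nvml.h>
#include <cstdio>

// Return the index of the first GPU with no running compute
// processes, or -1 if none is free (or NVML fails to initialize).
int firstFreeGpu() {
    if (nvmlInit() != NVML_SUCCESS) return -1;
    unsigned int count = 0;
    nvmlDeviceGetCount(&count);
    int freeIdx = -1;
    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        if (nvmlDeviceGetHandleByIndex(i, &dev) != NVML_SUCCESS) continue;
        unsigned int nProcs = 0;
        // With nProcs == 0 this query only reports whether any compute
        // processes exist, without copying their details.
        nvmlReturn_t r =
            nvmlDeviceGetComputeRunningProcesses(dev, &nProcs, nullptr);
        if (r == NVML_SUCCESS && nProcs == 0) { freeIdx = (int)i; break; }
    }
    nvmlShutdown();
    return freeIdx;
}

int main() {
    int idx = firstFreeGpu();
    if (idx >= 0) printf("GPU %d appears free\n", idx);
    else          printf("no free GPU found\n");
    return 0;
}
```

Note that NVML device indices can differ from CUDA runtime ordering (NVML follows PCI bus order unless `CUDA_DEVICE_ORDER=PCI_BUS_ID` is set), so map indices carefully before calling `cudaSetDevice` with the result.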