Hello. The CUDA runtime API struct cudaDeviceProp (queried via cudaGetDeviceProperties) contains the pciDomainID, pciBusID and pciDeviceID fields, but it does not expose the PCI function (there is no “pciFunctionID” field), which is needed to build the full PCI ID [domain]:[bus]:[device].[function].
Is the missing PCI function field actually necessary to map a PCI ID to a unique GPU device index?
I ask also because, curiously, the related function cudaDeviceGetByPCIBusId accepts not only the full format [domain]:[bus]:[device].[function] but also the partial format [domain]:[bus]:[device]. Does that mean the PCI function field is not necessary? For a multi-function device, what device index will cudaDeviceGetByPCIBusId return when given the latter format, without [function]? An error? A “random” GPU index among the several corresponding to different PCI functions? Or does it not matter, i.e. is it guaranteed that at most one GPU index can match the partial [domain]:[bus]:[device]?
This is blocking me because, incidentally, I believe there is a bug in cudaDeviceGetByPCIBusId (at least in CUDA 10): it results in spurious pinned-memory allocations on the default GPU (index 0) during subsequent calls to cudaMalloc / cudaMallocHost, even though a different GPU was selected before performing the allocations. So I am forced to use cudaGetDeviceProperties to link the GPU index to the PCI ID, and cannot use cudaDeviceGetByPCIBusId for that. But cudaDeviceProp does not provide the PCI function value, so I am stuck, unless [domain]:[bus]:[device] alone binds to a unique GPU index.
I hope I can find a solution. Thanks in advance.