CUDA Quantum code built with multi-GPU support behaves as if only a single GPU were available

I am evaluating CUDA Quantum; the goal is to build and run, with multi-GPU support, the example cuquantum_backends.cpp located in examples/cpp/basics inside the official container image.
On an HPC system, I reserve 2 GPUs from a DGX Ampere node and use enroot as the container engine.

I build as follows, with no errors:

nvq++ cuquantum_backends.cpp -o cuquantum_backends.x --qpu cuquantum --platform mqpu 

as shown in the GTC talk "Inside CUDA Quantum".

To the original code I added the following:

auto &platform = cudaq::get_platform();
printf("Num QPU %zu\n", platform.num_qpus());
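For context, the check above fits into a minimal self-contained program like the following (a sketch assuming the standard cudaq.h header; this is just the two added lines wrapped in a main, not the full example):

```cpp
#include <cudaq.h>
#include <cstdio>

int main() {
  // Query the active quantum platform; with --platform mqpu this should
  // report one simulated QPU per visible GPU.
  auto &platform = cudaq::get_platform();
  printf("Num QPU %zu\n", platform.num_qpus());
  return 0;
}
```

Compiled with the same nvq++ invocation shown above, this prints the number of QPUs the platform exposes at runtime.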

Running the code, I get:

[ ... ]
Num QPU 1

My understanding is that, through the cuQuantum library, each GPU simulates one QPU, so I would expect:

[ ... ]
Num QPU 2
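To illustrate why the QPU count matters, here is roughly how work would be distributed across the simulated QPUs once both are visible (a sketch assuming the cudaq::sample_async API of the mqpu platform; the GHZ kernel and qubit count are hypothetical illustrations, not part of the original example):

```cpp
#include <cudaq.h>
#include <vector>

// Hypothetical illustration: a small GHZ-state kernel to sample.
struct ghz {
  void operator()(int n) __qpu__ {
    cudaq::qvector q(n);
    h(q[0]);
    for (int i = 0; i < n - 1; ++i)
      x<cudaq::ctrl>(q[i], q[i + 1]);
    mz(q);
  }
};

int main() {
  auto &platform = cudaq::get_platform();
  std::size_t num_qpus = platform.num_qpus();

  // Launch one asynchronous sampling job per reported QPU;
  // with num_qpus == 2, the jobs run on the two GPUs in parallel.
  std::vector<cudaq::async_sample_result> futures;
  for (std::size_t i = 0; i < num_qpus; ++i)
    futures.emplace_back(cudaq::sample_async(i, ghz{}, 10));

  for (auto &f : futures)
    f.get().dump();
  return 0;
}
```

With Num QPU 1, the loop above would submit everything to a single GPU, which is the behavior I am seeing.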

As a check I ran nvidia-smi both inside and outside the container; both GPUs are visible in both cases, so it seems the code itself sees only one QPU/GPU.

I see several possible causes for this behavior, among them enroot, or me missing something in how CUDA Quantum and cuQuantum work. Does anyone have any suggestions?

Thanks for the help.


If this issue has not been resolved yet, could you please specify how you actually run your binary on your DGX system? In particular, do you request 2 MPI processes?
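If the mqpu platform in this release does map one simulated QPU to each MPI rank, the launch would look roughly like this (a sketch under that assumption; the binary name matches the build command in the question, and the exact launcher on a DGX/enroot setup may differ):

```shell
# Hypothetical launch: request 2 MPI processes, one per reserved GPU,
# so the mqpu platform can expose one simulated QPU per rank.
mpiexec -np 2 ./cuquantum_backends.x
```

Running the binary directly (a single process) would then explain seeing Num QPU 1 even though nvidia-smi reports both GPUs.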