I want to use CUDA-Q to run large-scale simulations, targeting 1e9 shots per circuit.
However, cudaq.sample(…) crashes if I ask for more than about 1M shots, even for a Bell-state circuit.
The code below works for shots=1024*1000:
got GPU, run 1024000 shots
{ 00:512936 11:511064 }
but if I ask for shots=1024*1024, it crashes:
got GPU, run 1048576 shots
Segmentation fault
This is the software stack I’m using:
# pip3 list|grep cuda
cuda-quantum 0.7.1
cupy-cuda12x 13.2.0
CUDA-Q Version 0.7.1 (https://github.com/NVIDIA/cuda-quantum 1f8dd79d46cad9b9bd0eb220eb04408a2e6beda4)
The GPU I run on is an A100:
# nvidia-smi
Thu Jun 20 18:17:40 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-PCI... Off | 00000000:C3:00.0 Off | 0 |
| N/A 35C P0 40W / 250W | 418MiB / 40960MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
Demonstrator code:
import cudaq

print(cudaq.__version__)

qubit_count = 2

@cudaq.kernel
def kernel(qubit_count: int):
    qvector = cudaq.qvector(qubit_count)
    h(qvector[0])
    for i in range(1, qubit_count):
        x.ctrl(qvector[0], qvector[i])
    mz(qvector)

print(cudaq.draw(kernel, qubit_count))

cudaq.set_target("nvidia")

shots = 1024 * 1024
print('got GPU, run %d shots' % shots)
result = cudaq.sample(kernel, qubit_count, shots_count=shots)
print(result)
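In case it helps anyone hitting the same crash before the fix is available: one possible workaround is to split the total shot count into batches below the observed ~1M-shot threshold and merge the resulting counts. The `sample_in_batches` helper below is a sketch of mine, not an official CUDA-Q API; it wraps any callable that maps a shot count to a counts dict, and the commented CUDA-Q adapter at the bottom is an untested assumption.

```python
from collections import Counter

def sample_in_batches(sampler, total_shots, batch_size=1_000_000):
    """Run `sampler(shots)` repeatedly and merge the bitstring counts.

    `sampler` is any callable mapping a shot count to a dict such as
    {'00': 512936, '11': 511064}. `batch_size` is kept below the
    shot count at which the crash was observed.
    """
    merged = Counter()
    remaining = total_shots
    while remaining > 0:
        shots = min(batch_size, remaining)
        merged.update(sampler(shots))  # add this batch's counts
        remaining -= shots
    return dict(merged)

# With CUDA-Q this might be wired up as follows (untested assumption
# about how SampleResult converts to a dict):
# sampler = lambda n: {k: v for k, v in
#                      cudaq.sample(kernel, qubit_count, shots_count=n).items()}
# counts = sample_in_batches(sampler, 1024 * 1024)
```

Statistically this is equivalent to one large run, since shots are independent; the only cost is the per-batch launch overhead.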
Hi @janb. Thank you so much for raising this issue and using CUDA-Q. Our engineering team has fixed the issue, and the PR can be found here. Please let us know if you have any other questions.
Hi, may I ask when the PR will be merged into the master branch?
We are also using a Dockerfile to customize the cuda-quantum environment. Is there anything else we need to change?
This is the podman file we are using:
FROM ubuntu:22.04
ARG arch=x86_64
Hi @ziqinguse. There are two primary options. 1) The change was approved and should be included in our nightly Docker release tomorrow. You can use the Docker image found here. 2) Alternatively, you can install from the source code linked in the PR above.
Hi, understood. For the nightly version (CUDA Quantum (nightly) | NVIDIA NGC),
I cannot see the details of the layers. May I know where to check the Dockerfile?
Hi @ziqinguse, there should be a “layers” tab next to the “overview” and “tags” tabs on the top of the page. This tab should list the Docker layers. Please let me know if this is not what you are seeing.