I have compiled the following code:
#include <iostream>
#include <algorithm>
#include <execution>
#include <numeric>

int main() {
    int number_vals = 130'000'000;
    double* Data = new double[number_vals]{};
    // std::fill(std::execution::par_unseq, Data, Data + number_vals, 1.);
    double result = std::reduce(std::execution::par_unseq, Data, Data + number_vals, 0.0, std::plus<>{});
    std::cout << "Hello World!" << std::endl;
    std::cout << "The result is: " << result << std::endl;
}
using the command:

nvc++ -std=c++20 -stdpar -g main.cc
But when I start the resulting binary in cuda-gdb, the application freezes at startup. The problem does not seem to occur when I omit the -std=c++20 flag. I am working with the latest version of the HPC SDK on Ubuntu 24.04, using an RTX 4090.
nvidia-smi output:
Mon Nov 17 12:14:29 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        Off |   00000000:01:00.0 Off |                  Off |
|  0%   49C    P8             27W /  450W |      40MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            2032      G   /usr/lib/xorg/Xorg                        9MiB |
|    0   N/A  N/A            2186      G   /usr/bin/gnome-shell                     10MiB |
+-----------------------------------------------------------------------------------------+