Problem with cl_khr_int64_base_atomics extension

Hi,

For some of my kernels I need an atomic add on doubles, which I built myself using atom_cmpxchg from the cl_khr_int64_base_atomics extension. Somewhat to my surprise, I discovered that the NVIDIA drivers will happily let me use the cl_khr_int64_base_atomics extension (and at least atom_cmpxchg seems to work as expected), but won't report it in the list of supported extensions returned when querying clGetDeviceInfo with CL_DEVICE_EXTENSIONS. This happens with both a GTX 470 and a Tesla C2050 using the CUDA 3.2.1 / 260.19.12 drivers on 64-bit Linux, and with a GTX 570 using CUDA 4.0.1 / 270.61 on Windows 7.
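For reference, the compare-and-swap loop I use looks roughly like this (a sketch only; atomic_add_double is my own helper name, and it additionally assumes cl_khr_fp64 for double support):

```c
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
#pragma OPENCL EXTENSION cl_khr_int64_base_atomics : enable

/* Atomic add on a double in global memory, emulated via 64-bit
 * compare-and-swap.  The double is reinterpreted as a ulong so that
 * atom_cmpxchg can operate on its bit pattern. */
void atomic_add_double(volatile __global double *addr, double val)
{
    union { ulong u; double d; } old_val, assumed, next;

    old_val.d = *addr;
    do {
        assumed = old_val;
        next.d = assumed.d + val;
        /* Returns the previous value; if it differs from `assumed`,
         * another work-item updated the location first, so retry. */
        old_val.u = atom_cmpxchg((volatile __global ulong *)addr,
                                 assumed.u, next.u);
    } while (old_val.u != assumed.u);
}
```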

So - is the cl_khr_int64_base_atomics extension supported or not? If not, why does the NVIDIA OpenCL compiler let me use it? And if it is, why doesn't clGetDeviceInfo report it as supported?

I’ve just run into this same issue. The CUDA programming guide claims that 64-bit atomics on global memory are supported by all GPUs of compute capability 1.2 or later, and they seem to work fine. But OpenCL doesn’t report that as a supported extension. What’s actually the case?

Peter