cudaMalloc cuComplex data types

Hi,

I wanted to know if it is possible to allocate memory with cudaMalloc for cuComplex data types, so that a CUDA kernel can accept arrays of cuComplex. I guess it is always possible to use two separate arrays for the real and imaginary parts and pass them individually, but it would be nice, for example, to copy from a C++ array of complex numbers to a CUDA array of complex numbers with a short syntax.

Thank you for your answers.

Yes, it’s possible.

Include the header file:

#include <cuComplex.h>

If you study that header file, you’ll find all sorts of datatype definitions and useful helper functions.

For example, on a default linux install it would be in:

/usr/local/cuda/include/cuComplex.h

If you do a google search using the following terms:

site:devtalk.nvidia.com cuComplex

you will get various examples, such as this one:

https://devtalk.nvidia.com/default/topic/858473/please-help-for-using-cublas-zgemm-/
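To make the pattern concrete, here is a minimal sketch (my own, not from the linked thread) that allocates a device array of cuComplex with cudaMalloc, copies data to it, and squares each element in a kernel using the cuCmulf helper from cuComplex.h. The kernel name and problem size are made up for illustration:

```cuda
#include <cuComplex.h>
#include <cstdio>

// Hypothetical kernel: multiply each element by itself using the
// cuCmulf helper defined in cuComplex.h.
__global__ void square_elements(cuComplex* v, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] = cuCmulf(v[i], v[i]);
}

int main()
{
    const int n = 4;
    cuComplex h_v[n];
    for (int i = 0; i < n; ++i) h_v[i] = make_cuComplex((float)i, 1.0f);

    cuComplex* d_v;
    cudaMalloc(&d_v, n * sizeof(cuComplex));  // note: sizeof(cuComplex)
    cudaMemcpy(d_v, h_v, n * sizeof(cuComplex), cudaMemcpyHostToDevice);

    square_elements<<<1, n>>>(d_v, n);

    cudaMemcpy(h_v, d_v, n * sizeof(cuComplex), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("(%f, %f)\n", cuCrealf(h_v[i]), cuCimagf(h_v[i]));
    cudaFree(d_v);
    return 0;
}
```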

Thanks txbob.

I already use cuComplex, but I was wondering if I could pass cuDoubleComplex* arrays as parameters to the kernel from the host. That would mean using cudaMalloc in some way like

cudaMalloc((void**)&d_cVector, sizeof(double)*2*size);
cudaMemcpy(d_cVector, h_cVector, sizeof(double)*2*size, cudaMemcpyHostToDevice);

kernel_func<<<dimGrid,dimBlock>>>(d_cVector);

where the kernel is declared as

__global__ void kernel_func(cuDoubleComplex *d_cVector);

Note that I am assuming that the size of a cuDoubleComplex type is twice the size of double (which could be wrong).

Sorry txbob, I already saw in your link that sizeof(cuDoubleComplex) has to be used.

Thnx.

I think if you study the header file as I suggested, you can figure these things out. And, as I suggested, there are numerous examples of usage of cuComplex.

Instead of assuming what the size of the cuDoubleComplex type is, you could actually look at the header file, as I suggested, and deduce it:

/* Double precision */
typedef double2 cuDoubleComplex;

So it is the same as a double2. A double2 is indeed twice the size of a double.

If you’re not sure what a double2 is, you can find its definition in vector_types.h (same location as cuComplex.h):

struct __device_builtin__ __builtin_align__(16) double2
{
    double x, y;
};

nvcc automatically includes vector_types.h when you compile. It does not automatically include cuComplex.h.

Since std::complex<double> and cuDoubleComplex are layout-compatible, you can copy a host vector to device memory directly:

std::vector<std::complex<double>> h_cplx;
...
cuDoubleComplex* d_cplx;
cudaMalloc(&d_cplx, h_cplx.size()*sizeof(cuDoubleComplex));
cudaMemcpy(d_cplx, h_cplx.data(), h_cplx.size()*sizeof(cuDoubleComplex), cudaMemcpyHostToDevice);