Complex Numbers

What is the current state of using complex types on GPUs?

The std::complex types seemed to compile for the most part, but anything other than + and * threw compile-time errors about trying to make host calls from the device.

Are functions like abs() and conj() simply not defined on the GPU? I saw an earlier post by cbuchner1 with a base class that I could probably work with; however, it would be nice to know if there is a proper CUDA complex type with at least most of the operators in place.

Ben

I find the CUDA cuComplex type rather difficult to work with because it offers very few operators. You can probably add most of the operators you need by defining additional non-member operator overloads, which is possible without changing NVIDIA's SDK header files.
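Something along these lines should do it (a minimal sketch, assuming the helper functions that cuComplex.h already provides, such as cuCaddf and cuCmulf):

#include <cuComplex.h>

// Non-member operator overloads wrapping the cuComplex.h helpers,
// usable from both host and device code.
__host__ __device__ inline cuComplex operator+(cuComplex a, cuComplex b)
{ return cuCaddf(a, b); }

__host__ __device__ inline cuComplex operator-(cuComplex a, cuComplex b)
{ return cuCsubf(a, b); }

__host__ __device__ inline cuComplex operator*(cuComplex a, cuComplex b)
{ return cuCmulf(a, b); }

__host__ __device__ inline cuComplex operator/(cuComplex a, cuComplex b)
{ return cuCdivf(a, b); }

Note that abs() and conj() equivalents already exist in cuComplex.h as cuCabsf() and cuConjf(), so those you can call directly in device code.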

Using cuComplex has the advantage that it is the type used by the CUBLAS and CUFFT libraries.
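To illustrate (a sketch only, using the standard CUFFT C API; cufftComplex is just a typedef for cuComplex, so a device array of cuComplex can be handed to CUFFT directly):

#include <cuComplex.h>
#include <cufft.h>

// In-place forward FFT of N cuComplex values already resident on the device.
void forward_fft(cuComplex* d_signal, int N)
{
    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);                   // 1D complex-to-complex plan
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD); // in-place forward transform
    cufftDestroy(plan);
}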

Christian

Alright, sounds good. Your class is good enough for me!

Thanks for the class, thanks for the response.

Ben

At the company I work for, a student intern tested a 4x4 matrix inversion based on cuComplex and compared the resulting performance with an implementation based on my own class. We found that the cuComplex version was around 20% faster. I believe this is because cuComplex wraps CUDA's native float2 type, so the compiler can load the entire float2 (both real and imaginary parts) in a single memory instruction.

In hindsight, what I should have done when designing my class is use a float2 instead of separate float real; float img; members, or, better yet, store a cuComplex internally in my complex class.
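Roughly like this (a sketch only; the class name and members here are mine, not from any SDK header):

#include <cuComplex.h>

// Complex class that stores a cuComplex (i.e. a float2) internally,
// so both parts are fetched in one 64-bit memory transaction instead
// of two separate 32-bit loads.
struct Complex
{
    cuComplex v;  // v.x = real part, v.y = imaginary part

    __host__ __device__ Complex() : v(make_cuFloatComplex(0.0f, 0.0f)) {}
    __host__ __device__ Complex(float re, float im) : v(make_cuFloatComplex(re, im)) {}
    __host__ __device__ Complex(cuComplex c) : v(c) {}

    __host__ __device__ float real() const { return cuCrealf(v); }
    __host__ __device__ float imag() const { return cuCimagf(v); }

    __host__ __device__ Complex operator+(Complex b) const { return Complex(cuCaddf(v, b.v)); }
    __host__ __device__ Complex operator*(Complex b) const { return Complex(cuCmulf(v, b.v)); }
};

The implicit conversion to and from cuComplex also means arrays of this class can be passed straight to CUBLAS and CUFFT without copying.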

Christian

Ah, okay, thanks for the advice.

I’ll have to actually get this kernel working before I spend too much time worrying about the details :).

Ben