Can I use double types in place of float?

I am trying to modify some simulation C code so it can use CUDA on a Windows x64 machine. All of the C code uses doubles, not floats. I notice that most of the functions I have been looking at in the runtime API reference (e.g. cudaMalloc3D, ChannelFormatKind) either take only floats as arguments or are enumerations that do not include a double option. Are doubles used in Windows CUDA programming?

Yes, you can use the double type. CUDA (including on Windows) supports most of C++. I don’t see anything in cudaMalloc3D() that references a floating-point type of any kind, much less float. You can certainly do an allocation for double using cudaMalloc3D().
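For illustration, here is a minimal sketch of allocating a 3D grid of doubles with cudaMalloc3D(). The grid dimensions (nx, ny, nz) are made up for the example; the key point is that the extent width is specified in bytes, so you size it with sizeof(double):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    // Example grid dimensions (assumed for illustration only)
    const size_t nx = 128, ny = 128, nz = 64;

    // For linear (non-array) memory the extent width is in bytes,
    // so size it for double elements.
    cudaExtent extent = make_cudaExtent(nx * sizeof(double), ny, nz);

    cudaPitchedPtr devPitchedPtr;
    cudaError_t err = cudaMalloc3D(&devPitchedPtr, extent);
    if (err != cudaSuccess) {
        printf("cudaMalloc3D failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // devPitchedPtr.pitch is the padded row width in bytes; use it when
    // indexing rows in kernels or when setting up cudaMemcpy3D.
    printf("pitch = %zu bytes\n", devPitchedPtr.pitch);

    cudaFree(devPitchedPtr.ptr);
    return 0;
}
```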

ChannelFormatKind is a descriptor used in texture and surface programming. Textures and surfaces are specific to CUDA GPUs (nominally not part of C++) and do not support double. You can, however, do texture or surface operations on 64-bit types, so it is possible to use a backdoor method to texture as double, if that is what you want.
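One common form of that backdoor is a sketch like the following (names are illustrative, not from this thread): describe the double data to the texture machinery as int2 and reassemble each element in the kernel with the __hiloint2double() intrinsic:

```cpp
#include <cuda_runtime.h>

// Sketch: read doubles through a texture object by describing the buffer
// as int2 elements and reassembling each double from its two 32-bit halves.
__global__ void readDoublesViaTexture(cudaTextureObject_t tex, double *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int2 v = tex1Dfetch<int2>(tex, i);    // fetch 64 bits as two ints
        out[i] = __hiloint2double(v.y, v.x);  // high word, low word -> double
    }
}

int main()
{
    const int n = 1024;
    double *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, n * sizeof(double));
    cudaMalloc(&d_out, n * sizeof(double));

    // Describe the double buffer as linear memory holding int2 elements.
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypeLinear;
    resDesc.res.linear.devPtr = d_in;
    resDesc.res.linear.desc = cudaCreateChannelDesc<int2>();
    resDesc.res.linear.sizeInBytes = n * sizeof(double);

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &resDesc, &texDesc, nullptr);

    readDoublesViaTexture<<<(n + 255) / 256, 256>>>(tex, d_out, n);
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(tex);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```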

But if you are adapting existing simulation C code to CUDA and you are asking questions like these, my advice to you is to not delve into texture/surface work for a baseline initial port or adaptation to CUDA. Reserve that for when you are more comfortable with CUDA and have identified a specific performance bottleneck to target.


Got it. I am still a novice at CUDA, though I am a fluent C programmer. CUDA seems to have a fairly steep learning curve.

That might be another reason to reserve texture/surface methods for later, if this is your “first foray” into CUDA.

There are organized learning resources available, as well as documentation, including the CUDA C++ Programming Guide.