I have a GTX 260 and the specification says that it supports double precision arithmetic. When I use doubles in the kernel it just returns 0 but when I change it to floats it gives the correct output. Am I supposed to do something special to enable double precision?
No, but you need to do what tmurray said: go into your project properties and set that compiler flag. By default, nvcc (which Visual Studio calls to compile the device code) targets the older architectures that don’t support the double type.
Basically, if you are using a CUDA build rule that doesn’t expose the “-arch” flag, you need to change your .cu file’s properties to compile with a “Custom Build Step”. The command line then becomes nvcc … and that is where you want to add -arch sm_13.
To get the complete command line, you can copy/paste the one from your CUDA rules, if that’s what you are using now.
Try opening an SDK sample, right-click on its main .cu file (the one that doesn’t have a tiny red icon on it), go into Properties, and you can edit the Custom Build Step there. Add the nvcc flags to that command line.
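For a quick sanity check, here is a minimal sketch of a double-precision test (the file name double_test.cu and the kernel name squareDouble are my own, not from the thread). Build it once with and once without -arch sm_13 and compare the output; without the flag, the compiler falls back to an older target where double isn’t supported.

```cuda
// double_test.cu -- minimal double-precision check (illustrative sketch).
// Build with:   nvcc -arch sm_13 double_test.cu -o double_test
// The GTX 260 is compute capability 1.3, so sm_13 is the right target.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void squareDouble(const double *in, double *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * in[i];   // real double multiply only on sm_13 and up
}

int main()
{
    const int n = 4;
    double h_in[n] = {1.5, 2.5, 3.5, 4.5}, h_out[n] = {0};

    double *d_in, *d_out;
    cudaMalloc((void **)&d_in,  n * sizeof(double));
    cudaMalloc((void **)&d_out, n * sizeof(double));
    cudaMemcpy(d_in, h_in, n * sizeof(double), cudaMemcpyHostToDevice);

    squareDouble<<<1, n>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(double), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i)
        printf("%f^2 = %f\n", h_in[i], h_out[i]);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

If the printed squares are correct, double precision is working; if they come back wrong or zero, the kernel was almost certainly compiled for a pre-sm_13 target.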