I’m working on a project using CUDA 3.2, developed on Windows 7 with Visual Studio. I’m having a weird problem with a GeForce GTX 580 and a Tesla C2050 (each in a different machine): the Tesla GPU returns correct floating-point numbers to the code, while the GTX 580 returns a series of NaNs (Not a Number).
- The GPUs are used for parallel computing rather than for graphics.
- The code on each machine is identical, since the project folder was simply copied over.
- The code did not produce correct floating-point numbers until it was moved from the GTX 580 machine to the Tesla machine.
Does anyone know how the difference in graphics cards could cause this problem?
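In case it helps to narrow things down, here is a minimal, hypothetical sketch of how I could check the CUDA runtime error state around the kernel launch (the kernel name, launch configuration, and sizes are placeholders, not my actual code). My understanding is that a silent launch failure on one card (e.g. the binary not being built for that card's compute capability) leaves device memory uninitialized, which can show up on the host as NaNs:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel standing in for the real computation.
__global__ void myKernel(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 1.0f;
}

int main()
{
    const int n = 1024;
    float h_out[n];
    float *d_out = NULL;

    cudaMalloc((void **)&d_out, n * sizeof(float));
    myKernel<<<(n + 255) / 256, 256>>>(d_out, n);

    // cudaGetLastError reports launch failures, e.g. "invalid device function"
    // when the compiled code does not match the card's compute capability.
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("kernel launch failed: %s\n", cudaGetErrorString(err));

    // The copy back would also surface errors from the (asynchronous) kernel.
    err = cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    if (err != cudaSuccess)
        printf("memcpy failed: %s\n", cudaGetErrorString(err));

    cudaFree(d_out);
    return 0;
}
```

If the 580 machine prints an error here while the Tesla machine doesn't, that would point at the build/driver configuration rather than at the kernel math itself.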