GPU programming with GTX 750 Ti

Hi, I have an NVIDIA GeForce GTX 750 Ti. What’s the best way to use it for GPU computing with Fortran? PGI’s CUDA Fortran and their OpenACC do not support it. What would you recommend?
Thanks, georg

How did you establish that CUDA Fortran does not support the GeForce GTX 750 Ti? Could you point to the relevant reference?

It’s based on the information I got from PGI. I posted the problem in the PGI user forum and asked, among other things, the following:

"Does it mean that PGI’s CUDA Fortran does not work with the GeForce GTX 750 Ti ? "

and their reply was:

"Correct, CUDA Fortran isn’t supported on Maxwell either. We officially support all NVIDIA Tesla products as well as several AMD devices. There are some GeForce and Quadro products that share the same architecture as a Tesla product, in which case these products will work as well. Look for the compute capability (currently supported are 2.x and 3.x) or architecture name (Fermi or Kepler)."
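For reference, the GTX 750 Ti is a first-generation Maxwell part (GM107) with compute capability 5.0, which falls outside the 2.x/3.x range quoted above. If the CUDA toolkit is installed, the device’s compute capability can be confirmed with a small host-side query; a minimal sketch (compile with `nvcc`, requires an NVIDIA GPU and driver):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main(void) {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    // A GTX 750 Ti should report compute capability 5.0 (Maxwell).
    std::printf("%s: compute capability %d.%d\n",
                prop.name, prop.major, prop.minor);
    return 0;
}
```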

Thanks. I spent a few minutes on an internet search but was not able to find a statement to that effect.

If your existing code is written in Fortran, or you must develop in Fortran, I do not know of an alternative to the PGI product. By analogy with how “supported” is used for CUDA itself, I assume the statement above means that the compiler has only been tested on the supported platforms; it may well work on other GPUs, but this is not guaranteed, and if something goes wrong you would be on your own.

The possible alternatives I see are the obvious ones: (1) develop in the C++ subset provided by CUDA (note that CUDA 7.0 adds C++11 features), or (2) acquire one of the hardware platforms supported by PGI.
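For option (1), development would move to CUDA C/C++, where Maxwell parts are ordinary compute-capability 5.0 targets. As a minimal, hypothetical sketch (the kernel and all names here are illustrative, not from this thread), a vector addition in CUDA C++ looks like:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: elementwise c = a + b.
__global__ void vadd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed (unified) memory keeps the example short; it needs CUDA 6.0+
    // and a cc 3.0+ device, which a Maxwell card satisfies.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    std::printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A Maxwell-native build would use `nvcc -arch=sm_50 vadd.cu`, which CUDA 6.5 and later support.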

What happens if you compile your code for a cc3.x target and specify -Mcuda=nordc (CUDA Fortran) or -ta=nvidia,kepler,cuda6.5,nordc (OpenACC)?

I will try that; I am not sure whether the second option will work, because the GeForce GTX 750 Ti is a Maxwell and not a Kepler architecture.

Thanks for your comments.