High Level API for CUDA: for or against?

Hi all,

I’ve seen several “high-level APIs” that let you use Nvidia devices without touching CUDA directly. The latest one I have in mind is “Rapidmind”.

I would like to know if anybody already uses this kind of technology.

Do you think these solutions really exploit the potential of Nvidia devices?

Thanks for your answers :)

Larry35

Generally, you’d use something like Rapidmind if you want your code to be directly portable to other GPUs and parallel devices. This probably means you can only use the subset of CUDA features that also exist on other hardware. It might also incur some performance overhead, but that depends on how Rapidmind converts your code to CUDA.
If you want to use the full potential of the device, then just use CUDA.
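To make the contrast concrete, here is a minimal sketch of what “just use CUDA” looks like: a kernel that explicitly stages data through on-chip shared memory, the kind of device-specific feature a portability layer may not expose. The kernel name, sizes, and launch configuration are purely illustrative.

[codebox]
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: scales an array, staging data through shared memory
// just to show a CUDA-specific feature being used explicitly.
__global__ void scaleWithSharedMem(const float *in, float *out, float factor, int n)
{
    __shared__ float tile[256];                   // fast on-chip shared memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;   // stage one element per thread
    __syncthreads();                              // whole block reaches the barrier

    if (i < n)
        out[i] = tile[threadIdx.x] * factor;
}

int main()
{
    const int n = 1024;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));

    scaleWithSharedMem<<<n / 256, 256>>>(d_in, d_out, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    printf("done\n");
    return 0;
}
[/codebox]

A higher-level layer would typically hide the block/thread launch and the shared-memory staging behind its own abstractions, which is exactly where the portability-versus-control trade-off shows up.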

For - http://forums.nvidia.com/index.php?showtopic=65905

:D

If you try rapidmind, could you let us know what you think of it in terms of ease of use?

Another option is to compile CUDA kernels for multi-core processors:
http://www.crhc.uiuc.edu/IMPACT/ftp/report…08-01-mcuda.pdf

Cheers,
John Stone