CUDA vs. OpenVidia

I haven’t investigated these APIs too closely yet, but from my understanding, OpenVidia uses OpenGL to accomplish basically the same thing as CUDA (GPGPU). With all the resource management, textures, and HLSL removed, isn’t CUDA the clear winner?

So if you’re writing something like SIFT image recognition on the GPU, is there any reason to go with OpenVidia over CUDA? Backwards compatibility is not an issue in this case; DX10 cards only are fine. My concern about CUDA is that it’s still very fresh: it’s unknown how well optimized it is, how well the drivers support it, how stable it is, or what its future holds. OpenVidia seems like the safer choice, even if it is slower and more cumbersome to program for. My background is in graphics (PS2, Xbox, PS3, Xbox 360), so I’m quite familiar with Cg/HLSL and that whole architecture. But on the other hand, CUDA could end up being faster, more stable, and easier to maintain.

I would love to see performance comparisons between OpenVidia and CUDA. Also, has anyone done perf comparisons between OpenVidia’s SIFT and the SiftGPU project?

And finally, are there any DirectX GPGPU libraries comparable to OpenVidia?

Tomas Vykruta

CUDA also has some extra features, such as scattered writes, shared memory, and better support for integer operations. Using these might make an implementation a lot faster or easier than with shaders, or with wrappers built on top of them.
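To make that concrete, here is a minimal CUDA kernel sketch illustrating two of those features. The kernel name, array names, and the fixed block size of 256 are all hypothetical, and this would of course need an NVIDIA GPU and the CUDA toolkit to build and run; it is only meant to show the pattern, not a real implementation.

```cuda
// Hypothetical sketch: stage data in fast on-chip shared memory, then
// perform a scattered write. A fragment shader can only write to its own
// fixed output location (gather, not scatter), so this pattern is awkward
// or impossible with shader-based GPGPU.
__global__ void scatterExample(const float *in, float *out,
                               const int *targetIdx, int n)
{
    __shared__ float tile[256];   // one element per thread in the block

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Stage the input in shared memory, visible to the whole block.
    tile[threadIdx.x] = in[i];
    __syncthreads();

    // Scattered write: each thread chooses its own destination index.
    out[targetIdx[i]] = tile[threadIdx.x] * 2.0f;
}
```

In the shader world you would instead have to invert the mapping and render a gather pass into a texture; here the scatter is expressed directly.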

It appears NVidia is quite serious about it. The only reason to use something else would be ATI compatibility; if NVidia-only is acceptable, then CUDA is the best choice.