OpenCL == CUDA?

Mac OS X 10.6 will have something called “OpenCL,” which is a C-like interface to GPU computation.

http://www.apple.com/pr/library/2008/06/09snowleopard.html

Is this a variation of CUDA?
A wrapper?
A competitor?
A Larrabee-only library and compiler?

I know that there’s not much info, but that press release is just about ALL that’s public.
Very curious indeed…

I suspect that OpenCL will be very similar to CUDA, but with multi-vendor support. Currently shipping Apple hardware has GPUs from Intel (MacBook, Mac Mini), ATI (iMacs, Mac Pro options), and NVIDIA (high end iMac, Mac Pro option, and MacBook Pro). Clearly if they want OpenCL to be useful on their product line, they will need to build something which can compile down to any of the above GPU architectures. Moreover, I suspect that OpenCL will also support efficient CPU-only compilation, much like the researchers did with MCUDA.

I don’t know the other GPU architectures well enough to postulate how well such an abstraction will perform. Shared memory is a big advantage of CUDA, and it would be unfortunate if the OpenCL abstraction hid that.
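To make the shared-memory point concrete, here is a minimal sketch (a hypothetical kernel, not from any vendor sample) of the kind of optimization CUDA's on-chip shared memory enables, and which a too-abstract portability layer might hide:

```cuda
// Each block stages its slice of the input in fast on-chip shared memory,
// then cooperatively reduces it there, so slow global memory is touched
// only once per element. Block size is assumed to be 256 threads.
__global__ void blockSum(const float *in, float *blockSums, int n)
{
    __shared__ float tile[256];          // on-chip, shared by the whole block

    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    tile[tid] = (i < n) ? in[i] : 0.0f;  // one global-memory read per thread
    __syncthreads();

    // Tree reduction entirely in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            tile[tid] += tile[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        blockSums[blockIdx.x] = tile[0]; // one global-memory write per block
}
```

If an abstraction only exposes global memory, the inner loop above would have to hit DRAM on every step, which is exactly the performance cliff being worried about here.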

It’s CUDA, I guess :)

http://news.cnet.com/8301-13579_3-9962117-37.html

Maybe including nice bindings to OS X stuff. Apple is probably not going to reimplement any GPU programming libraries. Pretty cool decision on their part.

There is an HPCwire.com article that mentions OpenCL as a possible cross-platform GPGPU counterpart to OpenGL, which has been submitted to the Khronos Group for approval as an actual standard. (Sort of a non-hardware-specific GPGPU language, similar to CUDA, Brook+, and others.)


This is the Wikipedia def: http://en.wikipedia.org/wiki/OpenCL

OpenCL (Open Computing Language) is a language for GPGPU based on C99, created by Apple in cooperation with others. The name recalls OpenGL and OpenAL, which are open industry standards for 3D graphics and computer audio respectively; OpenCL similarly aims to extend the power of the GPU beyond graphics.

Apple has submitted OpenCL to the Khronos Group, where on June 16th, 2008 a Compute Working Group was formed[2] to carry out the standardization work.

OpenCL is scheduled to be introduced in Mac OS X v10.6 (‘Snow Leopard’).[3] According to the press release:[3]

Snow Leopard further extends support for modern hardware with Open Computing Language (OpenCL), which lets any application tap into the vast gigaflops of GPU computing power previously available only to graphics applications. OpenCL is based on the C programming language and has been proposed as an open standard.

The initial OpenCL implementation is reportedly built on LLVM and Clang compiler technology.[citation needed]

Just noticed this http://www.amd.com/us-en/Corporate/Virtual…~127451,00.html from AMD. If OpenCL is just based on CUDA and AMD is now supporting CUDA… does it mean that AMD is going to make something really similar to CUDA run on its hardware?
It would be really cool…

From the SIGGRAPH 2008 paper, OpenCL is quite similar to CUDA, but it’s more “open.” For example, its memory hierarchy is more flexible. Each “work item” (similar to a “thread” in CUDA) has its own private memory, each “compute unit” (similar to an MP) has its own local memory (similar to shared memory in CUDA), and all compute units have access to a shared global memory. It also has a qualifier for constant memory.

It has a set of required features, which are mostly the same as CUDA 1.0. Optional features, such as double precision, atomic operations (on global and local memory), and rounding-mode selection, are mostly in line with later CUDA versions.

OpenCL is also designed to share data with OpenGL, so an application can use OpenGL for visualization.
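The terminology mapping described above can be sketched with a hypothetical CUDA kernel annotated with its OpenCL equivalents (the kernel and names are illustrative, not taken from either specification):

```cuda
// Rough CUDA-to-OpenCL vocabulary, per the SIGGRAPH 2008 description:
//   thread               -> work item
//   thread block         -> work group
//   multiprocessor (MP)  -> compute unit
//   __shared__ memory    -> __local memory
//   __constant__ memory  -> __constant memory
//   global memory        -> __global memory
//   per-thread registers -> __private memory
__constant__ float scale;                     // OpenCL: __constant qualifier

__global__ void scaleAndStage(const float *in, float *out, int n)
{
    __shared__ float stage[128];              // OpenCL: __local memory,
                                              // private to one work group
    int tid = threadIdx.x;                    // OpenCL: get_local_id(0)
    int i   = blockIdx.x * blockDim.x + tid;  // OpenCL: get_global_id(0)

    float x = 0.0f;                           // OpenCL: __private memory
    if (i < n)
        x = in[i] * scale;

    stage[tid] = x;
    __syncthreads();                          // OpenCL: barrier(CLK_LOCAL_MEM_FENCE)

    if (i < n)
        out[i] = stage[tid];                  // __global memory, visible to all
}
```

The near one-to-one correspondence is what makes the “it’s CUDA with different names” impression plausible.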

OpenCL is not CUDA. It will be a frontend for all graphics cards: a language that wraps access to GPUs of every brand through library support.

The question is: will NVIDIA implement the OpenCL layer?

No, they don’t need to implement anything. They’ll keep working on CUDA, and Apple will develop a higher-level language providing access to all GPUs, whatever their brand, through a single programming interface and set of routines.

NVIDIA needs a cut, though…