Is the Apple OpenCL platform based on CUDA?

I was wondering whether the Apple OpenCL platform, which supports NVIDIA GPUs, hands the code off to the CUDA compiler for optimization, or whether it goes the whole way and generates and optimizes PTX using its LLVM infrastructure. I’m trying to figure out why OpenCL code that is semantically equivalent to CUDA code runs about half as fast.

Also, can anyone point me to instructions for how to access the PTX produced by the Apple OpenCL compiler?
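For reference, the only route I’m aware of is the generic clGetProgramInfo query sketched below (rough and unchecked; `program` and `num_devices` are assumed to come from an earlier clCreateProgramWithSource/clBuildProgram). I believe NVIDIA’s own OpenCL platform returns PTX text for this query, but I don’t know what Apple’s implementation hands back, which is part of what I’m asking.

```c
/* Rough sketch: dump whatever binary the platform's OpenCL compiler
 * produced for a built program. On NVIDIA's OpenCL this blob is PTX
 * text; whether Apple's compiler exposes PTX here is what I'm unsure
 * about. `program` and `num_devices` come from an earlier build. */
#include <stdio.h>
#include <stdlib.h>
#include <OpenCL/opencl.h>   /* <CL/cl.h> on non-Apple platforms */

static void dump_program_binaries(cl_program program, cl_uint num_devices)
{
    size_t *sizes = malloc(num_devices * sizeof(size_t));
    clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES,
                     num_devices * sizeof(size_t), sizes, NULL);

    unsigned char **binaries = malloc(num_devices * sizeof(unsigned char *));
    for (cl_uint i = 0; i < num_devices; ++i)
        binaries[i] = malloc(sizes[i]);

    clGetProgramInfo(program, CL_PROGRAM_BINARIES,
                     num_devices * sizeof(unsigned char *), binaries, NULL);

    for (cl_uint i = 0; i < num_devices; ++i) {
        /* Write each device's binary to a file for inspection. */
        char name[64];
        snprintf(name, sizeof(name), "program_%u.bin", i);
        FILE *f = fopen(name, "wb");
        fwrite(binaries[i], 1, sizes[i], f);
        fclose(f);
        free(binaries[i]);
    }
    free(binaries);
    free(sizes);
}
```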

Thanks,
Cyrus Omar

I’m pretty sure Apple’s OpenCL uses its own compiler (although the API is built on top of the CUDA driver API); we do the low-level PTX-to-binary translation and optimization.

Probably best to ask Apple this kind of thing.

Short answer: no.

OpenCL is designed to run on multiple devices, using LLVM to generate device-specific code. For example, it can also generate efficient code to run an OpenCL program on a CPU if no GPU is available (slower, but still much faster than a sequential implementation of the program). I have read that OpenCL also uses Apple’s Grand Central Dispatch on the Mac platform. I recommend reading this Ars Technica article about OpenCL: “Mac OS X 10.6 Snow Leopard: the Ars Technica review” (http://arstechnica.com/apple/reviews/2009/...s-x-10-6.ars/14)
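To illustrate the device-fallback idea, here is a minimal (untested) host-side sketch using only standard OpenCL calls: ask for a GPU device first, fall back to the CPU device if none is found, and then build the same kernels for whichever device you got.

```c
/* Minimal sketch of the device fallback described above: request a GPU
 * first and fall back to the CPU device if none is available. All of
 * this is standard OpenCL 1.0 host API; nothing here is Apple-specific. */
#include <stdio.h>
#include <OpenCL/opencl.h>   /* <CL/cl.h> on non-Apple platforms */

int main(void)
{
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    cl_device_id device;
    cl_int err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS) {
        /* No GPU found: compile and run the same kernels on the CPU instead. */
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    }

    char name[128];
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Using device: %s\n", name);
    return err == CL_SUCCESS ? 0 : 1;
}
```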
