Ocelot 1.0 Alpha Release: High-Performance GPU and Multi-core CPU Targets

Any concerns about NVIDIA IPR hidden in the CUDA APIs or programming model?

Wow, I love you guys! I am going to try this out tonight on our code. (I have an algorithm that is super-fast in CUDA, but I can’t get the CPU version very fast at all.)

Everything was done using publicly available documentation. We do not use any portion of the Open64 compiler or any other tool released by NVIDIA. For the CUDA runtime, we reimplemented it completely from scratch using only the CUDA Reference Manual as a guide.
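To give a rough idea of what reimplementing the runtime involves (an illustrative sketch, not Ocelot's actual code): a replacement runtime only has to export the documented entry points that nvcc-generated host code calls, and route them to its own device abstraction. For example, a CPU backend could back device allocations with plain host memory:

```cpp
// Illustrative sketch only, NOT Ocelot's actual implementation.
// Signatures follow the CUDA Reference Manual; the definitions here
// stand in for libcudart at link time.
#include <cstdlib>

typedef int cudaError_t;                          // stand-in for the real enum
static const cudaError_t cudaSuccess = 0;
static const cudaError_t cudaErrorMemoryAllocation = 2;

// nvcc-generated host code calls these C entry points.
extern "C" cudaError_t cudaMalloc(void** devPtr, size_t size) {
    *devPtr = std::malloc(size);                  // CPU backend: "device" memory is host memory
    return *devPtr ? cudaSuccess : cudaErrorMemoryAllocation;
}

extern "C" cudaError_t cudaFree(void* devPtr) {
    std::free(devPtr);
    return cudaSuccess;
}
```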

Very interesting! Will definitely give it a try!

I have another, off-topic question: were you involved in creating the GPU VSIPL++ package that I believe originated at Georgia Tech?

I would be interested in seeing some of your performance numbers for FIR (TDFIR) filters at large data sizes. I want to see how my implementation compares :)
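For reference, by TDFIR I mean the plain time-domain convolution y[n] = sum_k h[k] * x[n-k]; a naive CUDA kernel for it, just to fix terminology (illustrative, not my tuned implementation), looks like:

```cpp
// Naive time-domain FIR: one thread per output sample.
// A tuned version would stage x and h in shared memory and
// process multiple samples per thread.
__global__ void tdfir(const float* x, const float* h,
                      float* y, int numSamples, int numTaps) {
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= numSamples) return;

    float acc = 0.0f;
    for (int k = 0; k < numTaps; ++k) {
        if (n - k >= 0)                // zero-padded boundary
            acc += h[k] * x[n - k];
    }
    y[n] = acc;
}
```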

I personally wasn’t involved in the development of GPU VSIPL, but Andrew Kerr, the other main contributor to Ocelot, worked on GPU VSIPL. I’ll forward your question to him.

Is this better than compiling with -deviceemu? :P

Very much looking forward to trying this out. Thanks for all your work.

It should be significantly faster, especially for programs with a large number of threads. The current version has roughly a 10-20 cycle context-switch overhead between threads in the same CTA; I think context-switch cost was the main problem with deviceemu. You also don’t have to recompile your program to switch between executing on a CPU and a GPU (see the sketch below).

On the other hand, you won’t be able to call printf from within a kernel. :)
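To make the "no recompile" point concrete, here is a minimal sketch of the idea, treating the CPU backend as just another enumerated CUDA device (device names and ordering here are illustrative):

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    // With the runtime reimplemented, a CPU backend can appear as an
    // ordinary CUDA device, so one binary picks its target at run time.
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("device %d: %s\n", i, prop.name);
    }
    cudaSetDevice(0);  // choose the CPU or GPU target here, no recompile
    // ... launch kernels as usual ...
    return 0;
}
```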

Does your CPU path support zero-copy? Could it run cuPrintf?

The CPU path does support zero-copy, although we don’t have any regression tests more complicated than the simpleZeroCopy SDK example. I haven’t looked at cuPrintf in enough detail to say whether or not it would work, but as long as it only uses CUDA API calls internally, it should work.
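For reference, the zero-copy pattern exercised is essentially the one from the simpleZeroCopy sample; a minimal sketch from memory, with illustrative names (not our regression test code):

```cpp
#include <cuda_runtime.h>

__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;

    // Enable mapping of host allocations into the device address space.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Allocate page-locked host memory the device can access directly.
    float* hostPtr = 0;
    cudaHostAlloc((void**)&hostPtr, n * sizeof(float), cudaHostAllocMapped);

    // Get the device-side alias for the same memory; no cudaMemcpy needed.
    float* devPtr = 0;
    cudaHostGetDevicePointer((void**)&devPtr, hostPtr, 0);

    scale<<<(n + 255) / 256, 256>>>(devPtr, n);
    cudaThreadSynchronize();  // results are now visible through hostPtr

    cudaFreeHost(hostPtr);
    return 0;
}
```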

thanks!

Oh hey, this is on Slashdot now. Good job!

You list the library “rt” as a dependency. What library is this? It is hard to search for, as “rt” is quite common in package names and on the internet. Or is this library already installed by default?

That is librt, the library of POSIX real-time extensions for Linux (linked with -lrt). Almost all flavors of Linux that I am aware of ship it by default.
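For example, it provides the POSIX high-resolution timer interface:

```cpp
// librt provides POSIX real-time facilities such as clock_gettime.
// Build with: g++ example.cpp -lrt
#include <time.h>
#include <stdio.h>

int main() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);  // nanosecond-resolution clock
    printf("%ld.%09ld s\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}
```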

Thanks, I was surprised that it went through. Hopefully this generates some more interest in CUDA and Ocelot.

UPDATE: There is a tech report available describing the implementation: http://www.cercs.gatech.edu/tech-reports/t…stracts/18.html

as well as some preliminary performance numbers: http://www.gdiamos.net/files/cpusAndGpus.png (log scale warning)

Nehalem: Intel Core i7 920
Phenom: AMD Phenom 9550
Atom: Intel Atom N270

Hmmm, I’m sure I’ve asked this before, but Ocelot does support the driver API too, right?

P.S. That was a good paper. If you’re still going to work on Ocelot, I might make a few completely ridiculous feature requests…

CUDA for GPUs, FPGAs, and now CPUs :)

It’s -24 °C outside, so this paper can be a good holiday diversion!

There is not currently a driver-level API implementation in Ocelot. It would be possible to add one in the future without too much effort (our implementation of the CUDA runtime is about 2-3k lines, and I wouldn’t expect the driver-level API to be much more complex than that), but we don’t have anyone actively working on it.
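To give a sense of the scope: the driver-level API works at the module and function level rather than through nvcc-generated stubs. A minimal sketch of the flow, based on the reference manual (file and kernel names here are illustrative, and error checking is omitted for brevity):

```cpp
#include <cuda.h>

int main() {
    // Explicit initialization, context, and module management.
    cuInit(0);

    CUdevice dev;
    cuDeviceGet(&dev, 0);

    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // Kernels are loaded from PTX/cubin modules by name.
    CUmodule mod;
    cuModuleLoad(&mod, "kernel.ptx");

    CUfunction fn;
    cuModuleGetFunction(&fn, mod, "myKernel");

    CUdeviceptr buf;
    cuMemAlloc(&buf, 1024 * sizeof(float));

    // Arguments and launch configuration are marshalled by hand.
    cuFuncSetBlockShape(fn, 256, 1, 1);
    cuParamSetv(fn, 0, &buf, sizeof(buf));
    cuParamSetSize(fn, sizeof(buf));
    cuLaunchGrid(fn, 4, 1);

    cuCtxDestroy(ctx);
    return 0;
}
```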

Feel free to make any suggestions, we would welcome any input that you have.

I am planning on working directly on Ocelot for the remainder of my time at Georgia Tech (1-1.5 years). I think that Andrew Kerr, the other main contributor, is as well. We are also starting a few side projects next semester, headed by other PhD students working on CUDA-related topics, that will add features to Ocelot.