Image processing with OpenCL

Hi everyone, I’m Paolo and I’m working on a thesis on OpenCL-based image processing.

First of all I have to apologize for my English, but I’m Italian, so… sorry!

I have been studying this subject for four weeks and I understand general context creation and kernel execution. I followed some tutorials (I think the best is at: really… LOOK AT THIS!!!) and I did some basic examples like array operations and matrix multiplication.

What I’m really looking for is a tutorial (or more than one, if you have them) about the use of image2d_t and image3d_t with samplers, and the whole mechanism OpenCL provides for image processing. I searched the internet, but every, and I mean EVERY, tutorial uses other structures such as plain arrays. The only example I found is Apple’s Grass and Terrain sample, but it is quite big and hard to read.
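For what it’s worth, the mechanism is fairly compact once the pieces are together: a `sampler_t` that says how coordinates are interpreted, plus `read_imagef`/`write_imagef` inside the kernel. Here is a minimal sketch (a hypothetical image-invert kernel written for illustration, not taken from any SDK sample):

```c
/* OpenCL C kernel code (device side), not host C. */

/* The sampler: unnormalized integer coords, clamp at edges, no filtering. */
__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE |
                           CLK_FILTER_NEAREST;

__kernel void invert(__read_only  image2d_t src,
                     __write_only image2d_t dst)
{
    /* One work-item per pixel: global size = (width, height). */
    int2 pos = (int2)(get_global_id(0), get_global_id(1));

    float4 px = read_imagef(src, smp, pos);       /* sampled read  */
    write_imagef(dst, pos, (float4)(1.0f) - px);  /* inverted write */
}
```

On the host side you would create the two image objects, set them as kernel arguments, and enqueue the kernel over a (width, height) global work size.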

I’m here to ask: has any of you seen any kind of image processing done with OpenCL?

Thanks in advance,


Have a look at oclBoxFilter and oclVolumeRender in NVIDIA’s GPU Computing SDK.

Thanks for the reply.

I had a look, but I didn’t find any use of images in the files you pointed me to.

I was looking for a simple example.

Thank you, bye.

Which version of the SDK are you using? In the 3.0 beta you will find it in oclBoxFilter.cpp, at line 169.
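For reference, the host-side part of the image setup looks roughly like this (a sketch against the OpenCL 1.0/1.1 API; `context`, `width`, `height` and `host_pixels` are assumed to already exist, and error checking is omitted):

```c
/* Create a 2D image object from host pixel data (RGBA, 8 bits per channel). */
cl_image_format fmt = { CL_RGBA, CL_UNORM_INT8 };
cl_int err;

cl_mem img = clCreateImage2D(context,
                             CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                             &fmt,
                             width, height,
                             0,            /* row pitch: 0 lets CL compute it */
                             host_pixels,  /* copied because of COPY_HOST_PTR */
                             &err);
```

The resulting `cl_mem` is then passed to the kernel with `clSetKernelArg` like any other buffer.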

all the examples are quite simple )

I’m using SDK 2.3 on Mac OS X Snow Leopard. I compiled the CUDA examples, but compilation didn’t work for the OpenCL examples.

Simple…? Ehm… -.-

It’s just that I read the OpenCL specification and followed some tutorials, but I didn’t find comprehensive guides or books on the topic, and in three weeks I haven’t gained enough knowledge to understand these examples.

Sometimes I need short, clear examples to understand the basics; then, once I have good general knowledge of the topic, I move on to other people’s code.

With OpenCL I’m having a bit of trouble because it’s a new programming technique for me (I have academic knowledge of C++, C and Java, and a little bit of assembly… really a little bit). The most difficult parts are memory optimization and bank conflicts; all that stuff is quite a mystery for now. I’m going to study it, but I need some time before I can practice with it.

Anyway, thanks for the reply, but maybe I need some more time to understand that code.



Cool, thanks, I didn’t see it. On the NVIDIA site I found the 2.3 version (the Mac OS X download gets you the 2.3 version).

Thanks, I’ll take a look at it.

Do you know any other guide (possibly a complete one) about OpenCL? I have to do a thesis on it and, apart from the stuff I found on the web, I have no idea where to look.

Regards, Paolo.

who is looking for… will always find something ) 21st century, mate ))

I am doing a thesis as well, in applied geological modeling, 3rd year actually. I worked for a year with CUDA, then switched to CL a month ago to do benchmarking on NV/ATI GPUs, to find out which one is better )) Look through the NVIDIA CUDA and AMD Stream docs.

World’s First Textbook on Programming Massively Parallel Processors…processors.html


Thanks, I’ll take a look.

And I’d like to speak with all of you about something.

I installed the CUDA driver, toolkit and SDK on Ubuntu 9.04 with an 8600 GT. It works pretty well and all the benchmarks run smoothly.

I tried to build the OpenCL SDK samples, but during compilation I generally get this error:

[codebox]/usr/bin/ld: cannot find -lOpenCL[/codebox]

It seems that no OpenCL libraries are installed. What should I do?
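One way to check whether the library is visible to the linker at all (these commands are environment-dependent, so the output varies by system):

```shell
# Is libOpenCL registered with the dynamic linker?
ldconfig -p | grep -i opencl

# If nothing shows up, search the filesystem directly:
find /usr -name "libOpenCL*" 2>/dev/null
```

If the library exists but lives outside the default search paths, passing its directory to the linker with `-L` (or adding it to `/etc/ld.so.conf` and re-running `ldconfig`) should fix the `-lOpenCL` error.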

Isn’t it enough to have a working CUDA environment? I followed the Linux installation guide:

Verify the system has a CUDA-capable GPU OK

Verify the system has a supported version of Linux. OK

Download and install the NVIDIA driver OK

Download and install the NVIDIA GPU Computing SDK OK

Build the SDK “super-makefile” (Makefile) OK

Do you have any advice?

Did I miss any part?

Thank you in advance, regards Paolo.


Is anyone there?

Do you have any advice about my situation?

Thanks, bye

A quick Google search suggests that your system may not have the right drivers installed. Look into that.

As for learning more about the platform, the book by Kirk and Hwu really is quite good at teaching how to do parallel computation, although it’s in CUDA; the last chapter is on OpenCL. OpenCL is still very much a growing platform, and until it’s big enough you aren’t going to see too many learning resources. The best way is to understand parallel computation and then sift through SDK code to understand how it’s implemented. Matrix multiplication is a great place to start, as is vector reduction. To understand more about memory optimizations, look through the best-practices guide and the programming guide; to understand why something is better, you need some understanding of the architecture.

This is about 100 times the amount of help I received when I got stuck; I mostly had someone scream at me to go read the programming guide. Also, the examples in the SDK are generally simple and quite readable. Not sure why you are doing a thesis on a subject you appear to have very little familiarity with, though.
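One practical tip for the matrix-multiplication exercise: keep a plain serial reference on the host (a hypothetical helper like the one below, not from any SDK) and compare the GPU result against it, so you know immediately when the kernel is wrong.

```c
#include <assert.h>
#include <stddef.h>

/* Serial reference: C = A * B for row-major n x n matrices. */
static void matmul_ref(const float *A, const float *B, float *C, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (size_t k = 0; k < n; ++k)
                acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = acc;
        }
    }
}
```

Running both versions on the same random input and checking element-wise agreement (within a small floating-point tolerance) catches indexing bugs early.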

+1 :)

Thank you. Really. I appreciate everything you wrote here.

I wrote a matrix multiplication example. I spent more than four days on it, but I did it, and I’m not ashamed to admit it. Now I can say that I fully understand what I wrote. This is my way of working.

I had a look at the CUDA guides. I thought they were only for CUDA, but I discovered that they really have good material for OpenCL too. I’m going to study them when I finish the OpenCL specification. I’m almost at the end, so maybe in one or two days…

I like challenges. Maybe now I’m facing a big one, maybe too big for me, but that’s what a challenge really is, don’t you think?

Again, I appreciate what you said above; I really thank you for the time you spent, and I hope you don’t think you wasted it on a clueless, unwilling guy. I’m really putting everything I have into this research, and I hope that’s enough for those who help me.

Thanks to everyone here.

Good job to all of you.

Sincerely, Paolo.