I am working on the following setup: there are three passes in every frame update of my program:
pass #1: OpenGL renders color, normal and position values to off-screen textures.
pass #2: OpenCL acquires ownership of these textures and processes their values to apply a lighting effect;
the resulting pixels are written to an output texture.
pass #3: the output texture is drawn onto a full-screen quad to display the end result.
Now the problem: performance drops considerably when the viewport, and hence all the off-screen textures, are resized (the off-screen textures always have the same size as the viewport).
At first I thought this was due to the increased number of pixels that need to be processed (longer kernel execution time),
but after some experiments I realized that the problem is related to the size of the bound textures!
To test this I wrote a very trivial kernel that writes a constant color directly to the output texture,
so there is no per-pixel processing at all:
the kernel execution time does not depend on the input and is limited only by the texture-write speed.
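For reference, the trivial test kernel looks roughly like this (a sketch; the kernel and argument names are illustrative, not copied from my code):

```c
// Trivial OpenCL kernel: ignores all inputs and writes a constant
// color to the output image, so execution time depends only on the
// texture-write speed.
__kernel void fill_constant(__write_only image2d_t output)
{
    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    write_imagef(output, pos, (float4)(1.0f, 0.0f, 0.0f, 1.0f));
}
```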
Then I varied the number of textures acquired by OpenCL at full resolution (1680x1050 pixels).
I added the following textures in the given order and measured the total kernel preparation and execution time (acquiring the OpenGL objects, setting the kernel arguments, launching the kernel, releasing the OpenGL objects and calling clFinish, as recommended in NVIDIA's OpenCL guides):
- 1: output texture (RGBA8)
- 2: color map (RGBA8)
- 3: position map (RGBA32F)
- 4: normal map (RGBA32F)
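The timed section is essentially the standard interop sequence. A sketch of what I measure, assuming the context, queue, kernel and the cl_mem images (created with clCreateFromGLTexture2D) already exist; variable names are mine and error checking is omitted:

```c
/* Timed per-frame section (sketch). Timing starts before the acquire
   and stops after clFinish returns. */
cl_mem gl_objects[] = { output_img, color_img, position_img, normal_img };
cl_uint n = 4; /* varied from 1 to 4 in the experiment */

glFinish(); /* make sure OpenGL is done with the textures first */

clEnqueueAcquireGLObjects(queue, n, gl_objects, 0, NULL, NULL);

clSetKernelArg(kernel, 0, sizeof(cl_mem), &output_img);
/* ...set the remaining image arguments when they are acquired... */

size_t global[2] = { 1680, 1050 };
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);

clEnqueueReleaseGLObjects(queue, n, gl_objects, 0, NULL, NULL);
clFinish(queue);
```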
I measured the following times with the trivial kernel:
bound OpenGL objects: 1 -> 8.0 ms
bound OpenGL objects: 1, 2 -> 15.7 ms
bound OpenGL objects: 1, 2, 3 -> 45.9 ms
bound OpenGL objects: 1, 2, 3, 4 -> 75.7 ms
I think the problem is obvious:
the first two textures are the same size and both RGBA8;
adding one of the RGBA8 textures adds about 7-8 ms of overhead.
The last two textures are RGBA32F; adding one of them adds about 30 ms of overhead.
The latency seems to be proportional to the memory consumed by the texture.
I am also sure that I am using the OpenGL interoperability extension correctly:
I checked the extension string and performed memory read tests, which all passed.
I also verified that the OpenCL images are created correctly from the OpenGL textures;
they are fine too (I get the desired color in my output texture).
So I do not think I am doing anything wrong in setting up the OpenCL-OpenGL interface.
I expected this to run as fast as the shader version of the same function, but it does not.
It seems that some internal memory operations are happening under the hood.
What could be the cause of the overhead when OpenCL acquires the OpenGL textures?
Can it be overcome?
Environment: MSVC 2003 .NET