I am trying to implement reflections on the GPU without raytracing. The algorithm is based on the paper “GPU-driven interactive reflections on curved objects” by Estalella, Martin, Drettakis and Tost. In a first render pass I have to render the positions and normals of the reflector into a texture. I have done this with GLSL, where the fragment shader outputs positions/normals instead of colors. After that I upload every vertex of my objects to CUDA and compute the reflected positions into one vertex buffer object.
The “PostProcessGL” example renders the current frame into a PBO, but the positions are interpreted as colors, so x, y and z are limited to a maximum value of 1.0, while my coordinates are often larger than that.
Other examples that work with textures just copy the data of an image into a texture, so they don’t help me either.
So here is my question: how can I render the current frame, copy it into a texture, and access that texture from CUDA like a uniform texture in GLSL? Or is there a way to interpret the PBO with values bigger than one (1.0)?
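To make clearer what I am after, here is a sketch of the kind of setup I am imagining. It assumes a floating-point texture (e.g. internal format GL_RGBA32F_ARB, so values are not clamped to [0, 1]) and the CUDA graphics-interop API; `glTex`, `reflectKernel` and `runPass` are placeholder names, not code from my project, and I have not tested this.

```cuda
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Texture reference through which the kernel reads the GL texture,
// similar to a uniform sampler2D in GLSL.
texture<float4, cudaTextureType2D, cudaReadModeElementType> posTex;

__global__ void reflectKernel(float4* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Full-range float position from the first render pass.
    float4 p = tex2D(posTex, x + 0.5f, y + 0.5f);
    out[y * width + x] = p;  // ... compute the reflected position here ...
}

// Register the GL texture with CUDA, bind it, and run the kernel.
void runPass(GLuint glTex, float4* d_out, int width, int height)
{
    cudaGraphicsResource* res = 0;
    cudaGraphicsGLRegisterImage(&res, glTex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);
    cudaGraphicsMapResources(1, &res, 0);

    cudaArray* arr = 0;
    cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);
    cudaBindTextureToArray(posTex, arr);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    reflectKernel<<<grid, block>>>(d_out, width, height);

    cudaUnbindTexture(posTex);
    cudaGraphicsUnmapResources(1, &res, 0);
    cudaGraphicsUnregisterResource(res);
}
```

Is something along these lines the right direction, or is there a better way to get the PBO contents into CUDA without the clamping?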
If somebody can help me or has another idea, that would be very nice!!!
If somebody doesn’t understand my problem, I can post some sample code!! :">