Render-to-texture help needed: rendering with CUDA to an OpenGL-usable texture

Hi all,

I am writing a dense vector field visualisation program in CUDA. Basically I implement the noise advection and blending on the GPU using CUDA, but then I want to use the results as a texture in OpenGL.

My original solution works, but I know it's not optimal:

void display() {
    // Advect the noise field on the device, then copy the result
    // back to the host so OpenGL can upload it as a texture.
    advect<<<512,256>>>(Nb_dev);
    cudaMemcpy(Nb, Nb_dev, MTex*NTex*2*sizeof(float),
               cudaMemcpyDeviceToHost);
    glTexImage2D(GL_TEXTURE_2D, 0, 2, NTex, MTex, 0,
                 GL_LUMINANCE_ALPHA, GL_FLOAT, Nb);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(0,0);
    glTexCoord2f(0.0,1.0); glVertex2f(MTex-1,0);
    glTexCoord2f(1.0,1.0); glVertex2f(MTex-1,NTex-1);
    glTexCoord2f(1.0,0.0); glVertex2f(0,NTex-1);
    glEnd();
    glutSwapBuffers();
}

Now this works, but what I really want is to avoid the memcpy from device to host (it seems silly to have to do this). After many hours of digging through the SDK examples I have attempted to use the CUDA/OpenGL interop functions to avoid it; my current attempt uses pixel buffer objects:

glGenBuffers(1, PBO);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *PBO);
glBufferData(GL_PIXEL_UNPACK_BUFFER,
             NTex*MTex*2*sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

cudaGraphicsGLRegisterBuffer(&pbo_resource, *PBO, cudaGraphicsMapFlagsNone);

. . .

void display() {
    float* tmp;
    size_t num_bytes;
    cudaGraphicsMapResources(1, &pbo_resource, 0);
    cudaGraphicsResourceGetMappedPointer((void**)&tmp, &num_bytes, pbo_resource);
    advect<<<512,256>>>(tmp);
    cudaGraphicsUnmapResources(1, &pbo_resource, 0);
    cudaThreadSynchronize();

    glBindTexture(GL_TEXTURE_2D, tex[0]);
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, *PBO);
    glTexImage2D(GL_TEXTURE_2D, 0, 2, NTex, MTex, 0,
                 GL_LUMINANCE_ALPHA, GL_FLOAT, NULL);
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(0,0);
    glTexCoord2f(0.0,1.0); glVertex2f(MTex-1,0);
    glTexCoord2f(1.0,1.0); glVertex2f(MTex-1,NTex-1);
    glTexCoord2f(1.0,0.0); glVertex2f(0,NTex-1);
    glEnd();
    glutSwapBuffers();
}

But all I get is a blank white texture… I know I am missing something. I also had a look at cudaGraphicsGLRegisterImage and the associated functions, but it seemed to me that the textures there are always read-only.

Anyway, if anyone knows how to create a texture using CUDA and texture-map a quad with it in OpenGL without copying data to the host, please show me how you did it, or let me know if it can't be done.

Dave

I am using PyCUDA and PyOpenGL, but I hope I can help.

I was trying to create and use a 3D texture; I decided to use shaders.

According to the documentation (the wiki on the OpenGL site), using mapped textures as buffers requires special treatment of the texture object in the shader.

Shader (the texture is 3D, 256x256x256):

[codebox]uniform samplerBuffer signalTexture;

in vec3 joinedTextureCoordinate;

void main() {
    // Flatten the 3D coordinate into a linear offset into the buffer texture.
    int offset = 0;
    offset += int(255*joinedTextureCoordinate.z)*256*256;
    offset += int(255*joinedTextureCoordinate.y)*256;
    offset += int(255*joinedTextureCoordinate.x);
    gl_FragColor = texelFetch(signalTexture, offset);
}
[/codebox]

Preparation of the buffer object:

[codebox]gl_zero = glGenBuffers(1)
textureId = glGenTextures(1)
glBindBuffer(GL_TEXTURE_BUFFER, gl_zero)
glBufferData(GL_TEXTURE_BUFFER, data, GL_DYNAMIC_DRAW)
# Buffer textures bind to the GL_TEXTURE_BUFFER target
glBindTexture(GL_TEXTURE_BUFFER, textureId)
glTexBuffer(GL_TEXTURE_BUFFER, OpenGL.raw.GL.ARB.texture_rg.GL_R32F, gl_zero)
[/codebox]

The most important part of solving my texture problems was using the function texelFetch (not texture3D/texture) in the shader.

Hope it helps.

To get an optimal solution, use CUDA/OpenGL interop. Write the data to a CUDA “surface” (which uses a CUarray), and use that CUarray with an OpenGL-registered CUDA texture. The CUDA C Programming Guide has the details.
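For reference, here is a rough sketch of what that looks like with the runtime API, assuming a GL texture `tex` already created with glTexImage2D and a float4 internal format; the names (`writeTex`, `surfRef`, `texRes`) are illustrative, not from any SDK sample, so treat this as an outline to check against the programming guide rather than a drop-in implementation:

```cuda
// Surface reference through which the kernel writes into the GL texture's array.
surface<void, cudaSurfaceType2D> surfRef;

__global__ void writeTex(int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    float4 value = make_float4(x / (float)width, y / (float)height, 0.f, 1.f);
    surf2Dwrite(value, surfRef, x * sizeof(float4), y);  // x offset is in bytes
}

// Once at startup: register the texture image for surface load/store access.
cudaGraphicsResource* texRes;
cudaGraphicsGLRegisterImage(&texRes, tex, GL_TEXTURE_2D,
                            cudaGraphicsRegisterFlagsSurfaceLoadStore);

// Each frame: map the resource, bind its cudaArray to the surface, launch.
cudaGraphicsMapResources(1, &texRes, 0);
cudaArray* texArray;
cudaGraphicsSubResourceGetMappedArray(&texArray, texRes, 0, 0);
cudaBindSurfaceToArray(surfRef, texArray);
dim3 block(16, 16), grid((width + 15) / 16, (height + 15) / 16);
writeTex<<<grid, block>>>(width, height);
cudaGraphicsUnmapResources(1, &texRes, 0);
// After unmapping, the texture can be bound and drawn with in GL as usual,
// with no device-to-host copy.
```

The key point is cudaGraphicsRegisterFlagsSurfaceLoadStore: without it the registered image is read-only from CUDA, which matches the impression from cudaGraphicsGLRegisterImage above.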

Thanks indy, I know how to do CUDA/GL interop for vertex buffers; the CUDA examples were nice for that. I just wanted a similar example for textures. I'm happy to dig through the programming manual though… I just like to see working examples.
