I have a very annoying problem with some CUDA code I have written. I suspect the root cause is a gap in my understanding of how CUDA and OpenGL are supposed to interact.
The code isn't exactly complicated, but it does involve a pipeline of kernels.
Basically I have two meshes: a low-resolution mesh with about 3,000 nodes and a high-resolution mesh with about 60,000 nodes. Each node in the low-res mesh is also a mass in a mass-spring system.
The pipeline is roughly
Repeated 50 times per frame (to keep the time step small enough; I probably need to move this to another thread):
Calculate forces on masses (from springs/external forces)
Calculate new mass positions
Calculate face normals for the low-res mesh
Calculate vertex normals for the low-res mesh
Calculate new high-res vertex positions based on the mapping from the low-res mesh
Calculate high-res face normals
Calculate high-res vertex normals
The high-res mesh is then rendered; I may also render the masses/springs of the low-res mesh.
Wherever possible I've just used CUDA arrays to store the data, but there are four exceptions. The vertex positions and normals for each mesh are stored in OpenGL vertex buffer objects, which I register with cudaGLRegisterBufferObject and map with cudaGLMapBufferObject to make the data available to my kernels. This is so I can use the data directly for rendering.
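For reference, my per-frame interop pattern looks roughly like this (simplified sketch with hypothetical names; error checking omitted, and it needs a live GL context and device to actually run):

```cuda
// vboHighRes is a GL buffer created with glBufferData(GL_ARRAY_BUFFER, ..., GL_DYNAMIC_DRAW).

// Once at startup, after creating the buffer:
cudaGLRegisterBufferObject(vboHighRes);

// Each frame:
float4 *d_verts = 0;
cudaGLMapBufferObject((void **)&d_verts, vboHighRes);

// Kernels in the pipeline write the new vertex positions into d_verts here,
// e.g. updateHighResVertices<<<grid, block>>>(d_verts, d_lowResPositions);

cudaGLUnmapBufferObject(vboHighRes);  // unmap before GL touches the buffer again

// Then render directly from the VBO:
glBindBuffer(GL_ARRAY_BUFFER, vboHighRes);
glVertexPointer(4, GL_FLOAT, 0, 0);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, 0);
```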
This all works fine for thousands of frames, and then I either get a kernel timeout on the "calculate high-res face normals" step, or the whole thing just freezes completely. In either case my system is basically kaput and I have to reboot. I'm using CUDA 1.1 under Linux with an 8800 GTX.
Is there a problem with updating an OpenGL vertex buffer from CUDA and then using it to render?
Is GL_DYNAMIC_DRAW the wrong usage when allocating the buffers?
Any suggestions on things to look for which might be causing this?