Allocating memory simultaneously with GL: is it correctly implemented?

My CUDA/GL program runs fine. But after I created another texture (without even using it), the program crashed.

The texture creation is like this.





It’s created after allocating some CUDA memory.

The texture is 800x600. My program uses less than 200 MB of memory, and I'm using an 8800GTX with 768 MB.

Sorry, I used the wrong constant in teximage. But still, why does CUDA crash on a GL error? The texture isn't used at all…
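For reference, since the original snippet was lost, a typical creation of an 800x600 texture with consistent format constants might look like the sketch below; the specific constants are assumptions, not the original code:

```cuda
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Internal format, pixel format, and type must agree with each other
// and with the data you upload; a mismatched constant here is a common
// source of GL_INVALID_ENUM / GL_INVALID_OPERATION errors.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 800, 600, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Worth checking right after creation, before any CUDA call runs:
GLenum err = glGetError();
```

Checking `glGetError()` immediately after the `glTexImage2D` call helps separate a plain GL mistake from a genuine CUDA/GL interaction problem.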

Hard to say without seeing the rest of your code (which I assume is fairly complex).
Perhaps the texture id is leaking somewhere into a CUDA access :ermm: .

AFAIK, it should work (some SDK examples do exactly this). Can you replicate the error with a small kernel? Perhaps it is just a bug.
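For what it's worth, the SDK examples mix GL and CUDA allocations through the (CUDA 1.x-era) cudaGL* interop API, roughly as sketched below; the buffer size and usage here are placeholders, not anyone's actual code:

```cuda
#include <cuda_gl_interop.h>

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
// Allocate GL-owned storage (800x600 RGBA here, as an example).
glBufferData(GL_PIXEL_UNPACK_BUFFER, 800 * 600 * 4, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

cudaGLRegisterBufferObject(pbo);       // register once, after creation

void *devPtr = NULL;
cudaGLMapBufferObject(&devPtr, pbo);   // map before each CUDA use
// ... launch a kernel that writes into devPtr ...
cudaGLUnmapBufferObject(pbo);          // unmap before GL touches it again

cudaGLUnregisterBufferObject(pbo);     // unregister before glDeleteBuffers
```

The strict register/map/unmap/unregister ordering matters; touching the buffer from the wrong side while it is mapped is exactly the kind of thing that crashes in non-obvious places.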

Indeed, “the rest of my code” is 5000+ lines of CUDA (with my macros) and a few hundred lines of I (a language I invented and have not yet released). Also, legally they don't belong to me; otherwise I'd just post them.

That's quite unlikely… They're written in different languages and compiled into different files (one CUDA exe and one GL dll). The exe doesn't have a pointer to the GL window class where the texture id is stored.

I’d try…

It's worth mentioning that I bumped into GL/CUDA memory issues a lot back in 0.8. In those days, I used 10+ PBOs and a similar number of CUDA allocations. All of them were doubling vectors. That version of my program ran for a few iterations, but never long enough to produce any useful result. In the end I rewrote the entire thing (which contained loads of geometry shaders) in CUDA, and it worked. I haven't tried the same setup in 1.0 yet.
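By “doubling vector” I mean a device buffer that reallocates at twice its capacity whenever it fills up; a minimal host-side sketch (the names are mine, not from the original program):

```cuda
#include <cuda_runtime.h>

// Hypothetical grow-by-doubling device buffer.
struct DevVec {
    float *d_data;
    size_t size, capacity;   // in elements
};

void devvec_push(DevVec *v, const float *h_elems, size_t n) {
    if (v->size + n > v->capacity) {
        size_t newCap = v->capacity ? v->capacity : 64;
        while (newCap < v->size + n) newCap *= 2;
        float *d_new;
        cudaMalloc((void **)&d_new, newCap * sizeof(float));
        // Preserve the existing contents, then release the old block.
        cudaMemcpy(d_new, v->d_data, v->size * sizeof(float),
                   cudaMemcpyDeviceToDevice);
        cudaFree(v->d_data);
        v->d_data = d_new;
        v->capacity = newCap;
    }
    cudaMemcpy(v->d_data + v->size, h_elems, n * sizeof(float),
               cudaMemcpyHostToDevice);
    v->size += n;
}
```

Every growth step is a fresh cudaMalloc/cudaFree pair, so a dozen of these churning alongside 10+ PBOs puts a lot of pressure on the driver's memory manager, which may be why that version was so fragile.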

I've been suspecting some memory-management issues between CUDA and certain OpenGL functionality. I would've counted on texturing and PBO support being solid, given the SDK examples, but we've come to learn that many strange things can happen with CUDA… :yes:

As a side note, I was messing with textures backed by cudaArrays (no OpenGL) and my simple matrix-mul kernel would give back bogus results every now and then. For some reason, after I rebuilt the executable (and the planets aligned), the damn thing started to work every single time. :magic: I just sat there and said “whatever…”.
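For context, that cudaArray test bound a 2D float texture roughly as below (the CUDA 1.x texture-reference API; the kernel and names here are placeholders, not my actual test):

```cuda
#include <cuda_runtime.h>

// File-scope texture reference, as the old API requires.
texture<float, 2, cudaReadModeElementType> tex;

__global__ void readKernel(float *out, int w) {
    int x = threadIdx.x, y = blockIdx.x;
    // +0.5f samples the texel center with unnormalized coordinates.
    out[y * w + x] = tex2D(tex, x + 0.5f, y + 0.5f);
}

void setup(const float *h_data, int w, int h) {
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaArray *arr;
    cudaMallocArray(&arr, &desc, w, h);
    cudaMemcpyToArray(arr, 0, 0, h_data, w * h * sizeof(float),
                      cudaMemcpyHostToDevice);
    cudaBindTextureToArray(tex, arr);   // kernel now reads via tex2D
}
```

Intermittent bogus results with an otherwise-correct binding like this smelled more like a toolchain or driver issue than a logic bug, which fits the “rebuild fixed it” experience.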

Btw, this test used several kernels in the same .cu, and I saw some problems with shared memory in that setup. Perhaps there are still some hidden bugs in the compiler. If this is your case, you could try separating your kernels into different files and see what happens… who knows…