Crash in cudaMalloc when OpenGL interop is used

Hi

I have this very simple code, but it crashes reproducibly, on both Windows and Linux, inside the OpenGL/CUDA libraries:

#include <stdexcept>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

int main(int, char**)
{
    if (cudaGLSetGLDevice(0) != cudaSuccess)
        throw std::runtime_error("Unable to set gl device");
    void* data;
    cudaMalloc(&data, 666);
}

It crashes inside cudaMalloc, and the backtrace is:

#0 0x00007ffff4bddf40 in ?? () from /usr/lib/libnvidia-glcore.so.260.19.21
#1 0x00007ffff4b9e131 in ?? () from /usr/lib/libnvidia-glcore.so.260.19.21
#2 0x00007ffff4b9ea25 in ?? () from /usr/lib/libnvidia-glcore.so.260.19.21
#3 0x00007ffff7253900 in ?? () from /usr/lib/libcuda.so.1
#4 0x00007ffff724df6b in ?? () from /usr/lib/libcuda.so.1
#5 0x00007ffff72a0574 in ?? () from /usr/lib/libcuda.so.1
#6 0x00007ffff7bc6da9 in ?? () from /opt/cuda/lib64/libcudart.so.3
#7 0x00007ffff7bbcfe8 in ?? () from /opt/cuda/lib64/libcudart.so.3
#8 0x00007ffff7bb6f89 in cudaMalloc () from /opt/cuda/lib64/libcudart.so.3
#9 0x0000000000401f66 in main (argc=1, argv=0x7fffffffd898) at /home/bschindl/scivis-exercises/cuda-tests/main.cpp:21

I know I could use cudaSetDevice instead, but my real application uses the OpenGL interop (CUDA 3.1), so this is an idealized version of my application from the CUDA perspective.
Am I really doing something wrong, or have I found a bug?

Thank you

I believe that you need to actually initialize OpenGL (i.e., create a GL context) before initializing CUDA for GL/CUDA interop to work. Are you doing that?
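
For what it's worth, a minimal sketch of that ordering, assuming GLUT is available (untested here, and the window title is a placeholder; any GL toolkit that creates a context before the first CUDA call should do):

```cuda
#include <stdexcept>
#include <GL/glut.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

int main(int argc, char** argv)
{
    // 1. Initialize GLUT and create a window first; this creates the GL
    //    context that cudaGLSetGLDevice expects to already exist.
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutCreateWindow("interop test");

    // 2. Only now bind the CUDA device to the current GL context.
    if (cudaGLSetGLDevice(0) != cudaSuccess)
        throw std::runtime_error("Unable to set gl device");

    // 3. CUDA allocations should now succeed.
    void* data;
    if (cudaMalloc(&data, 666) != cudaSuccess)
        throw std::runtime_error("cudaMalloc failed");
    cudaFree(data);
    return 0;
}
```

The key point is purely the call order: the GL context must exist before cudaGLSetGLDevice (and before any other CUDA runtime call that would implicitly initialize a context).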