I am working with the OptiX 8.0.0 samples and trying to run the optixPathTracer example, but I am getting this exception: “GL interop is only available on display device, please use display device for optimal performance. Alternatively you can disable GL interop with --no-gl-interop and run with degraded performance.”
Following this post, I tried to fix it with a clean re-installation of the NVIDIA drivers and CUDA, but it still doesn't help.
My system:
Dell laptop with two graphics cards
Intel UHD Graphics
NVIDIA RTX A3000 12 GB laptop GPU (RTX driver release 570)
CUDA 12.1
Windows 11
I used cudaGetDevice to check the device properties, and it points to the NVIDIA GPU. But CUDA_CHECK( cudaDeviceGetAttribute( &is_display_device, cudaDevAttrKernelExecTimeout, current_device ) ); shows it is not the display device. How do I set it as the display device? I have already used the NVIDIA Control Panel settings to set the preferred graphics processor to NVIDIA.
I tried deactivating the Intel UHD graphics, but then I got an error initializing the GLFW window: GLFW Error 65542: WGL: The driver does not appear to support OpenGL
Have you tried using the Nvidia Control Panel to set your A3000 to be the default GPU for optixPathTracer? You can set the preferred GPU either globally or on a per-application basis.
Does the sample run fine when using --no-gl-interop? Do you just want to make sure you’re getting optimal performance, and/or see what the performance difference is?
To clarify, the “degraded performance” the sample is referring to does not mean that ray tracing itself will slow down. It means the application has to copy the framebuffer from your NVIDIA GPU to the display GPU every frame. This is a relatively fast operation and may or may not be noticeable; it will not prevent you from reaching 60 fps, and chances are it will not matter at all if your render kernel already takes longer than the 16 milliseconds available per frame at 60 fps (if you use a high samples-per-pixel setting, for example). Moreover, this only applies to interactive applications; it does not apply when writing your ray tracing results to an image file, since in that case the framebuffer copy to the host is unavoidable anyway.
I’m only explaining this in case the “degraded performance” warning was scaring you more than it should. Internally we use --no-gl-interop fairly often, and there aren’t many cases where it matters. If we’re doing careful benchmarking on a display GPU, for example, then yes, we’ll try to make sure GL interop is working. But more often than not I actually prefer benchmarking on a non-display GPU, because it’s faster and the timings are more stable: the display GPU is always doing other work (driving the display, compositing the desktop) on top of your app’s ray tracing or compute.