Error while trying to execute an OptiX 6.5 precompiled samples


I am new to OptiX 6.5 and I am trying to run the samples in the SDK-precompiled-samples directory. But I get the following error:
OptiX Error: Unknown error (Details: Function "_rtBufferCreateFromGLBO" caught exception: Encountered a CUDA error: cuGLGetDevices() returned (999): Unknown)
I have read a thread on the same topic (Error while trying to execute an OptiX program).
That thread suggests plugging the monitor directly into the NVIDIA GPU, but I am physically separated from the server and access the NVIDIA GPU through a remote desktop.
It could also be that OpenGL is not using the NVIDIA graphics, because I do have two VGA compatible controllers: "lspci | grep -i vga" lists NVIDIA Corporation Device 2230 (rev a1) and ASPEED Technology, Inc. ASPEED Graphics Family (rev 52).
On my computer the OpenGL version is 3.1 Mesa 21.2.6. I have tried "sudo prime-select nvidia", and when I query prime-select it reports nvidia, but OpenGL is still using Mesa.

Can anyone help me with this? Thanks in advance!


First of all, if you’re new to OptiX, please do not use any of the legacy OptiX versions (1 - 6) for new developments.
There have already been six releases of OptiX 7, which implements a more modern, explicit API, is better integrated with CUDA, contains more features, and is generally faster than any older OptiX API.

Now to your issue:
Most of the OptiX SDK examples use CUDA-OpenGL interoperability to display the image ray traced with OptiX (CUDA) via a textured rectangle in OpenGL. The interop works by allocating an OpenGL pixel buffer object (PBO), which resides on the GPU device and is mapped to a CUDA pointer into which the OptiX device programs can render directly.
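As a rough sketch, the interop setup in the OptiX 6.x samples looks roughly like the following. This assumes a current OpenGL context and an initialized RTcontext already exist; the function name, dimensions, and RT_FORMAT_FLOAT4 format are illustrative, not the exact SDK code.

```cpp
#include <optix.h>
#include <optix_gl_interop.h>
#include <GL/gl.h>

// Illustrative helper: create an OptiX output buffer backed by an OpenGL PBO.
RTbuffer createInteropOutputBuffer(RTcontext context, unsigned width, unsigned height)
{
    // 1) Allocate an OpenGL pixel buffer object (PBO) on the GPU device.
    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * sizeof(float) * 4,
                 nullptr, GL_STREAM_DRAW);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    // 2) Wrap that PBO in an OptiX buffer. This is the call that throws the
    //    cuGLGetDevices() (999) error when OpenGL is not running on the
    //    NVIDIA device, because CUDA cannot find a CUDA-capable GL context.
    RTbuffer buffer = nullptr;
    rtBufferCreateFromGLBO(context, RT_BUFFER_OUTPUT, pbo, &buffer);
    rtBufferSetFormat(buffer, RT_FORMAT_FLOAT4);
    rtBufferSetSize2D(buffer, width, height);
    return buffer;
}
```

After rendering, the samples bind the same PBO as the source for a texture upload, so the image never leaves the device.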

That CUDA-OpenGL interop mechanism can only work with an NVIDIA OpenGL implementation running on the same device.
Since you’re using Mesa’s OpenGL implementation, these CUDA-OpenGL interop functions, including rtBufferCreateFromGLBO (where GLBO stands for OpenGL buffer object), cannot work and will fail in the described way.

To get these working you either need to install the NVIDIA display driver in a way which uses the NVIDIA OpenGL implementation, or if that is not possible (e.g. because the system is a compute system without graphics subsystem, or under Windows when running devices in Tesla Compute Cluster (TCC) mode) you would need to disable the CUDA-OpenGL interop code paths completely.

Many of the OptiX SDK examples have a command line option --nopbo which disables the OpenGL pixel buffer object interop code path. The ray traced image is then copied from the device to the host and uploaded to whatever OpenGL implementation is running via a glTexImage2D call. Search for that option inside the OptiX SDK source code and you’ll find it.
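For example, something along these lines will locate the flag and try it out (the SDK install path and the optixHello sample name are just placeholders for whatever you have installed):

```shell
# Search the OptiX SDK sources for the --nopbo handling (path is an example).
grep -rn "nopbo" ~/NVIDIA-OptiX-SDK-6.5.0-linux64/SDK/

# Run a precompiled sample with the OpenGL interop code path disabled.
cd ~/NVIDIA-OptiX-SDK-6.5.0-linux64/SDK-precompiled-samples
./optixHello --nopbo
```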

The same OpenGL interop issues are explained here:

If you’re running multiple OpenGL implementations in parallel, the application code would need to be enhanced to pick the proper OpenGL implementation.
The OptiX SDK examples do not do that; most likely they pick the first matching OpenGL pixelformat, irrespective of the vendor.
This means it might be possible to overcome the problem with more sophisticated OpenGL pixelformat selection code, but that is not really within the scope of the simple OptiX SDK examples.

That CUDA-OpenGL interop behavior is exactly the same for the OptiX 7 versions; the difference is that the buffer resource management happens explicitly through CUDA host calls.
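To make that concrete, in OptiX 7 the same interop is done with the CUDA runtime's graphics interop calls rather than an rtBuffer API. A minimal sketch, assuming an existing OpenGL context, a registered PBO, and a CUDA stream (names are illustrative):

```cpp
#include <cuda_gl_interop.h>
#include <cuda_runtime.h>
#include <GL/gl.h>

// Illustrative helper: obtain a CUDA device pointer into an OpenGL PBO,
// suitable for passing to optixLaunch as the output buffer.
void* mapPboForLaunch(GLuint pbo, cudaGraphicsResource_t* resource, cudaStream_t stream)
{
    // Register the OpenGL PBO with CUDA (done once per buffer). This fails
    // just like rtBufferCreateFromGLBO does when OpenGL is not running on
    // the NVIDIA device.
    cudaGraphicsGLRegisterBuffer(resource, pbo, cudaGraphicsRegisterFlagsWriteDiscard);

    // Map the resource and get a device pointer the ray tracer can write to.
    cudaGraphicsMapResources(1, resource, stream);
    void* devicePtr = nullptr;
    size_t size = 0;
    cudaGraphicsResourceGetMappedPointer(&devicePtr, &size, *resource);
    return devicePtr;
}
```

After the launch, the application unmaps the resource with cudaGraphicsUnmapResources before OpenGL uses the PBO again.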

Thanks for the rapid response!

I am using OptiX 6.5 because we have bought the VTD software from VIRES and we want to run simulations with it using OptiX. VTD uses OptiX 6.5, and when I run the lidar plugin it gives this error. I then tried the samples in the OptiX 6.5 SDK and they give the same error.

Thanks for the explanation and the solutions. I have found the --nopbo option in the SDK samples. But is there any way to switch the implementation to NVIDIA OpenGL?
There is no --nopbo option in the VTD VIRES lidar program, and I don’t really know how to change their code right now.
I am using Ubuntu 20.04 LTS. I have tried reinstalling the newest drivers (510 and also 515.48.07) and switching prime-select to nvidia. I have also tried using NVIDIA Settings to change the PRIME graphics, but I don’t even see the Preferred Graphics Processor option described in Error while trying to execute an OptiX program - #6 by sienaiwun.
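For reference, these are the kinds of checks one can run on such a setup (glxinfo comes from the mesa-utils package; the two environment variables are NVIDIA's documented PRIME render offload switches and require a driver built with that support):

```shell
# Show which OpenGL implementation the default GLX stack uses.
glxinfo | grep "OpenGL vendor"

# With NVIDIA's PRIME render offload, explicitly request the NVIDIA GLX stack.
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
    glxinfo | grep "OpenGL vendor"
```

If the second command reports "NVIDIA Corporation" while the first reports Mesa, launching the application with those same variables set may route its OpenGL onto the NVIDIA device.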

Many thanks in advance!

But is there any way to switch the implementation to NVIDIA OpenGL?

That is outside my expertise for Linux. I’m not using Linux at all and need to leave that to someone with more Linux driver knowledge.

Searching all sub-forums here for “ASPEED” turned up this, for example:
Though the thread about switching the X11 configuration to the NVIDIA device also recommended attaching a monitor to that dedicated device.

If none of the threads in that search provides a sufficient answer, maybe post a general question about changing the OpenGL implementation selection order in the OpenGL or Linux driver sub-forums.

In any case, if an application requires CUDA-OpenGL interop but neither picks an OpenGL implementation that supports it nor checks the respective code paths against the required features, then that is foremost an application issue you’d need to report to the application vendor. This is a developer forum, not an end-user application support forum.

Thanks for the answers! I will then ask in other forums!


Please read the other threads from the search first.
Then contact the software vendor to verify what system configurations are supported.

I will search first! Thanks!