Optix on Tesla K40

Hello developers,

I have been working with OptiX on my GTX Titan X card for about a year, and everything was working fine. Now I have changed my GPU to a Tesla K40. Since it has no display outputs, I am using the CPU's integrated graphics for the display.

When I ran my application it gave me this error:

OptiX Error: Unknown error (Details:
Function “_rtBufferCreateFromGLBO” caught exception: Encountered a CUDA error: cuGLGetDevices() returned (999): Unknown, [3801508])

I know it's something about the display output. Does anybody know how I can fix this?


Tesla K40, Windows 10, CUDA 7.5.

And can a GPU without display outputs, like a Tesla, be used for OptiX development at all?

The Tesla boards run in a different driver model (Tesla Compute Cluster, TCC), and there is no OpenGL implementation running on them, which means OpenGL interoperability can't work. rtBufferCreateFromGLBO() is trying to create a buffer from an OpenGL Buffer Object, which can't exist there.

Other than that, Tesla boards are excellent for OptiX ray tracing or any other CUDA-based work because they are not limited by the two-second Windows Timeout Detection and Recovery (TDR) limit per launch like boards running the Windows Display Driver Model (WDDM).

Simply make sure you do not use any OpenGL interop functions and create your buffers on the host as usual; they are then transferred to the device and back via PCI-E using map(), write/read, unmap(). After that you can do whatever you need with the data.
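As a sketch of that pattern with the OptiX 3.x C API (this requires the OptiX SDK headers and libraries; the buffer size and format here are placeholder values, not from the original post):

```cpp
#include <optix.h>
#include <string.h>

// Create a plain output buffer with rtBufferCreate() instead of
// rtBufferCreateFromGLBO(), so no OpenGL context is required.
void createOutputBuffer(RTcontext context, RTbuffer* buffer)
{
    rtBufferCreate(context, RT_BUFFER_OUTPUT, buffer);
    rtBufferSetFormat(*buffer, RT_FORMAT_UNSIGNED_BYTE4); // e.g. RGBA8
    rtBufferSetSize2D(*buffer, 512, 512);                 // placeholder size
}

// After rtContextLaunch2D(), map the buffer to read the result on the host.
void readResult(RTbuffer buffer, void* dst, size_t bytes)
{
    void* mapped = 0;
    rtBufferMap(buffer, &mapped); // makes the device data visible on the host
    memcpy(dst, mapped, bytes);
    rtBufferUnmap(buffer);        // release the mapping again
}
```

The same map/unmap calls work for input buffers in the other direction: map, write the data on the host, unmap, then launch.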

If you look at the optixConsole example, that’s not using any OpenGL mechanism to display the results and should work just fine on the Tesla board without changes.

Check whether other examples have a command line option to disable the pixel buffer object. Try --help for the command line options help output and look for --nopbo. If some examples fail on the Tesla with the rtBufferCreateFromGLBO() error mentioned above, add the --nopbo option and try again.

Thank you so much for your response, I will try that. One more question: can I use my Titan X just for the display and the Tesla card for computation, or both in parallel? If yes, how?


Thanks a lot, it worked.

Yes. You should be able to select which installed board is visible to CUDA either inside the NVIDIA Control Panel or via the environment variable CUDA_VISIBLE_DEVICES. Please find more information in this recent thread: https://devtalk.nvidia.com/default/topic/1028647/rendering-problems-on-drivers-above-383
I don’t have any GeForce board and don’t know how the control panel looks there. My Quadro control panel has an option to “Manage GPU Utilization”.
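For example, from a shell (the application name is hypothetical, and the device index depends on how CUDA enumerates your boards):

```shell
# Make the enumeration order match the PCI slot order, then expose only one
# device to CUDA. Index 1 is an assumption; verify with the deviceQuery sample.
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export CUDA_VISIBLE_DEVICES=1

echo "CUDA will only see device: $CUDA_VISIBLE_DEVICES"
# ./myOptixApp   (hypothetical application, launched from this same shell)
```

Any CUDA or OptiX application started from that shell will then only enumerate the selected board.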

Not with OptiX 3.9.1. That only supports homogeneous multi-GPU configurations, basically only GPUs of the same architecture. The rules are a little more intricate, but in your case the Tesla K40 is a Kepler GPU and the Titan X is at least a Maxwell GPU, which is newer and should also be faster.
By default OptiX 3.9.1 would select the board with the higher Streaming Multiprocessor version, in that case the Titan X.
Though you could have two processes running, each using a different device, which you would need to select inside the application via the rtContextSetDevices() function.
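A minimal sketch of that per-process device selection with the OptiX 3.x C API (needs the OptiX SDK; the device ordinal 0 is just an assumption for whichever board this process should use):

```cpp
#include <optix.h>

int main(void)
{
    RTcontext context;
    rtContextCreate(&context);

    // Restrict this context to a single CUDA device. Each of the two
    // processes would pass a different ordinal here.
    int device = 0; // CUDA ordinal of the desired board (assumption)
    rtContextSetDevices(context, 1, &device);

    // ... set up programs and buffers, rtContextLaunch2D(), ...

    rtContextDestroy(context);
    return 0;
}
```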

Heterogeneous multi-GPU systems are only supported in OptiX 4 and 5, and since you have a configuration which OptiX 5.0.0 can handle, I would recommend switching to that.
Note that selecting multiple devices in one OptiX context also does not allow the use of OpenGL interop.
A lot of information about why can be found here:

Performance of a heterogeneous multi-GPU system might also not be optimal. First, the kernel needs to be compiled for both architectures, so higher startup times should be expected. Then the PCI-E connection can influence the performance; ideally all boards sit in slots with 16 electrical PCI-E lanes.

OptiX 5.0.0 also contains the optixConsole example.
If examples support an option to toggle the use of OpenGL interop as explained, you should be able to figure out which code paths to take inside the OptiX examples to not reach any rtBufferCreateFromGLBO() calls.