No OpenGL option for CUDA install. How to render?

I found the Julia Set exercise from the CUDA by Example book and was working on it today.

I have a Titan X installed in my Linux system, used exclusively for GPU processing; my monitor is connected to the Intel integrated graphics. The Julia Set code seems to want the OpenGL/GLUT headers (#include <GL/glut.h>), but I know that when I install CUDA I should NOT choose the “install OpenGL” option (otherwise I’d be back in the infernal login loop).

I guess I’m a bit confused here, since I wouldn’t know what to do if I want to run the Julia Set code. What is the workaround? Plug my monitor into the Titan X, use it for rendering as well, and install OpenGL? Would any parallel processing capability be reduced if I did this?

As far as I know (I could be wrong?), a graphics card can be used either for GPU parallel processing or for rendering (driving a monitor), but not for both at the same time.

Certainly one approach would be to use the Titan X for graphics, display, and CUDA: install the OpenGL libs and run the X desktop on the Titan X. In this way you could run the Julia sample.

If you don’t like the idea of the graphics card being shared between CUDA and display tasks, another common approach is to run two NVIDIA graphics cards, one handling display tasks and the other handling CUDA tasks.

In either scenario it would be possible to run the CUDA/OpenGL interop Linux sample codes or your Julia code.

You may also want to read this:

There may even be other approaches (perhaps more complicated) than what I have outlined here.

I’ve had no problem at all running CUDA on the same graphics card my screen is plugged into, and the same goes for multiple machines here; just in case you wanted to hear that someone else has done it ;).

How does that affect the workload balance? Essentially my question is: by plugging my monitor into the Titan X and using it for rendering too, am I consuming 50% of my GPU’s resources, or only say 5-10%? Since I have a GPU intended for parallel processing rather than rendering, having an idea of this would be useful.
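Rather than guessing, you can measure it directly on your own desktop. A sketch using nvidia-smi’s sampling query (the flags below are real nvidia-smi options; the 1-second interval is just a choice, and on an idle desktop the display workload typically shows up as only a few percent utilization):

```shell
# Print GPU name, compute utilization, and memory used once per second.
# Watch this while the desktop is idle vs. while your CUDA app runs to
# see how much the display work actually costs on this machine.
nvidia-smi --query-gpu=name,utilization.gpu,memory.used --format=csv -l 1
```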

Thanks for all other replies!