TESLA drivers: separate them from graphics drivers

All,

I think it is a good time to separate the TESLA drivers from Graphics drivers.

Any plans for this?

Why is this a good time? Why is this desirable at all? You're basically saying "kill interop, kill the ability for a graphics board to run CUDA," and so on, because having separate drivers while maintaining the level of interconnectedness needed to make these things work is not possible.

Graphics drivers should still support CUDA. I am not asking to kill that support. Everyone needs it, of course.

TESLA, which is meant for HPC, needs to be separate from graphics. It really is odd to see it listed as a graphics card when it does not have a graphics output anyway; hence the request.

One could directly bypass "X" issues, "watchdog" issues, the "Vista multi-GPU" issue, etc., for TESLAs at least. (I know the watchdog is fixed; I am just citing it to show the range of problems the community has faced that are purely graphics related.)

I don't think it is a good time to remove the graphics API from Tesla, but how can I exploit all the nice rendering features of a graphics API, like framebuffer objects, PBOs, VBOs, and geometry shaders? There is nothing like that with Tesla, so why keep it if we cannot use it?

An easier solution would be to not use Vista if it is causing problems for you. Or perhaps do your programming with DirectX compute shaders when that becomes available later this year; it probably won’t run as fast, but it may be more ‘compatible’ with Windows than CUDA…but who knows what nVidia has up their sleeve before then ;)

This is complete nonsense. You can use OpenGL interop with a Tesla. If you map a PBO or VBO, the CUDA runtime automatically transfers the data from the Tesla card to your other video card, where the OpenGL program is executing.
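For concreteness, here is a minimal sketch of that mapping path using the CUDA 2.x-era OpenGL interop API. The fill_vbo kernel, buffer size, and launch configuration are placeholders, and the buffer is assumed to have been created with glGenBuffers/glBufferData in the GL context of the display card:

```
// Minimal sketch of the CUDA 2.x OpenGL buffer interop path described above.
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

__global__ void fill_vbo(float4 *pos, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] = make_float4(i * 0.001f, 0.0f, 0.0f, 1.0f);  // placeholder data
}

void update_vbo(GLuint vbo, int n)
{
    float4 *d_pos = 0;

    // Registration would normally be done once, right after creating the buffer.
    cudaGLRegisterBufferObject(vbo);

    // Mapping hands CUDA a device pointer; the runtime takes care of moving
    // the data between the compute device (e.g. a Tesla) and the card that
    // owns the GL context.
    cudaGLMapBufferObject((void **)&d_pos, vbo);

    fill_vbo<<<(n + 255) / 256, 256>>>(d_pos, n);

    // Unmap so OpenGL can source the buffer for rendering again.
    cudaGLUnmapBufferObject(vbo);
}
```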

As I understand it, the GPUs on the boards stem from the same design, and hence use the same driver. Do you really want to limit the group in which your 'Tesla' drivers are tested to only the Tesla users, instead of the millions of 'graphics' card users? I would prefer a thoroughly tested driver over some futile cosmetic advantages.

And OpenGL works with TESLA because TESLA is a graphics card according to the system, so it is not really complete nonsense. (My own guess; I have no working knowledge of OpenGL at the moment.)

This is a good point. Maybe the same driver can kick in but expose the TESLA as a non-graphics card. But if this interferes with OpenGL functionality, then there is no point…

Thanks for your answers.

Have you ever tried framebuffer objects or geometry shaders with a Tesla? If you succeed in using them, tell me how; it would be great.

Well, depending on what exactly you mean, there are two options:

1: Configure the X server to use a virtual resolution and set the "UseDisplayDevice" option to "none" to create an X screen on the Tesla; subsequently you can create a window with a GLX context to run whatever OpenGL/shader stuff you want (a rough xorg.conf sketch follows below). The only drawback is that this is not supported by NVIDIA, so YMMV. (I don't know if this is at all possible under Windows.)

2: Convert your geometry shader to CUDA and use OpenGL interop to transfer data from the Tesla (processing) card to your other (display) card. See for instance the SimpleGL or postProcessGL sample applications.
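Regarding option 1, a rough sketch of the relevant xorg.conf pieces; the BusID and virtual resolution are placeholders, and as noted above this configuration is not supported by NVIDIA:

```
Section "Device"
    Identifier "TeslaDevice"
    Driver     "nvidia"
    BusID      "PCI:2:0:0"               # placeholder: the Tesla's PCI bus ID
    Option     "UseDisplayDevice" "none" # no physical display attached
EndSection

Section "Screen"
    Identifier "TeslaScreen"
    Device     "TeslaDevice"
    SubSection "Display"
        Virtual 1280 1024                # placeholder virtual resolution
    EndSubSection
EndSection
```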

I would argue the opposite. It’s time for TESLA, at least the C cards, to become a true graphics device with a video out. What’s the point of performing interactive computations if you can’t see what’s going on?

It's possible that the absence of graphics-related circuitry has freed up transistors and space to host more MPs or more memory or whatever… So it's a good thing.

Not everyone needs to visualize their computations in real time. Some (most?) of the people using Teslas with Linux are using them from a shell, perhaps even remotely (via SSH or whatever).

Like Sarnath said, adding graphics connectors increases the complexity of the card (not by much probably, but still); I think most people are running them either in a server environment, or have another low-end nVidia card that actually connects to the monitor to display their results.

In fact, in my (limited) experience, I haven’t yet written a single kernel that actually outputs any sort of graphics, since I mostly do numerical/linear algebra code with CUDA.

Sure, so leave the 1U Teslas with no video out. People will realistically use those just as computing devices. However, keeping the C cards without a video out is just limiting their usefulness. A lot of (maybe most?) people using them so far are using CUDA simply for numerical computing. There are a lot of other exciting possibilities available though, even with boring old numerical computing, once you can see what’s going on while it’s going on.

Wait for a GUI-based debugger for CUDA. That will help you visually see what is going on even inside a TESLA. This does NOT necessitate a graphics output on the TESLA.

No, but you would need another graphics card to display the GUI (or use some kind of remoting). Either way, you still wouldn’t need a graphics output.

StickGuy, if they add a graphics port to the Tesla cards, what is the difference at that point between a Tesla and a video card with a bit less RAM? And what if adding it drives the price up some, especially given that most people probably wouldn't use it?

You can still visualize the Tesla output without a monitor port…you just need to look at doing it in a different way (output to console, render to another graphics card in the computer, output to a file and view via web browser, etc.)
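As one concrete take on the "output to a file" route, here is a minimal sketch that copies a result array off the device and writes it as an 8-bit PGM image you can open in an image viewer (or convert for a browser). The field contents, dimensions, and the assumption that values lie in [0,1] are placeholders:

```
// Sketch: copy a width x height float field off the device and dump it as
// an 8-bit grayscale PGM image. Assumes values are already in [0,1].
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

void dump_pgm(const char *path, const float *d_field, int width, int height)
{
    size_t n = (size_t)width * height;
    float *h = (float *)malloc(n * sizeof(float));

    // Pull the result off the Tesla into host memory.
    cudaMemcpy(h, d_field, n * sizeof(float), cudaMemcpyDeviceToHost);

    FILE *f = fopen(path, "wb");
    fprintf(f, "P5\n%d %d\n255\n", width, height);
    for (size_t i = 0; i < n; ++i) {
        float v = h[i];
        if (v < 0.0f) v = 0.0f;          // clamp to [0,1]
        if (v > 1.0f) v = 1.0f;
        fputc((int)(v * 255.0f + 0.5f), f);
    }
    fclose(f);
    free(h);
}
```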

Current NVIDIA prices are something like: Quadro FX 5800: $3000, Tesla C1060: $1500, GeForce 285: $400. While these cards are not entirely comparable, the jump from $1500 to $3000 for a video out is quite steep. The step down from $1500 to $400 costs you 3 GB of RAM, which is also a steep trade-off.

I find it most interesting how against the idea of a video out you are given that the target market for the C cards is desktop (super)computers, not computers in some data center that are only accessed remotely. With desktop supercomputing, users are more likely (in my opinion) to run relatively short jobs where feedback on running processes could be provided in a timely fashion. There are also many problems where user input in the form of computational steering during computation is desirable. Just because visual feedback isn’t necessary to develop, e.g., a solver (a questionable proposition at best), doesn’t mean that people applying solvers wouldn’t benefit from visual feedback.

Get a TESLA + GeForce 285 for $1900. That's good enough. You get huge memory and huge compute power, with a display as well.

And a bandwidth of only 6 GB/s between them. Just what I wanted! Thanks for your outstanding suggestion!!!

Well, I did not know people have apps that keep transferring data back and forth between graphics cards… :-) lol…

OTOH,

Seen in the light of moving data out for visualization… ah yes, I can see your point. It's a bit of a pain. But worth it for $1900, isn't it?

You could even use your GeForce for immediate visualization and the TESLA for delayed visualization… Think of a pipeline-parallel pattern.

OR

Look at it this way: CUDA 2.2 has zero-copy memory… So just make sure your TESLA does the computation and updates your host memory, and let your 285 display it…

In any case, you need to transfer the data at 6 GB/s or whatever to display it, don't you?

Does this not sound like an outstanding suggestion? Tmurray would agree… You can hide this 6 GB/s transfer latency this way.
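For what it's worth, a minimal sketch of what CUDA 2.2 zero-copy (mapped pinned host memory) looks like; the produce kernel and buffer size are placeholders, and the device has to report canMapHostMemory:

```
// Sketch of CUDA 2.2 zero-copy: the Tesla writes results directly into
// pinned host memory that is mapped into its address space, so another
// card (e.g. a GTX 285) can pick the data up from host RAM for display
// without an explicit cudaMemcpy.
#include <cuda_runtime.h>

__global__ void produce(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = i * 0.5f;               // placeholder computation
}

int main(void)
{
    const int n = 1 << 20;
    float *h_buf = 0, *d_buf = 0;

    // Must be set before the CUDA context is created on the device.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Pinned host allocation, mapped into the device's address space.
    cudaHostAlloc((void **)&h_buf, n * sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&d_buf, h_buf, 0);

    // The kernel writes straight to host memory over PCIe.
    produce<<<(n + 255) / 256, 256>>>(d_buf, n);
    cudaThreadSynchronize();

    // h_buf now holds the results on the host side, ready for the display
    // card to upload and render.
    cudaFreeHost(h_buf);
    return 0;
}
```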

OR

Even better, you could have a framebuffer inside your GTX 285 and use the TESLA to update it via zero-copy. I don't know if CUDA would allow it. If it did, it would be GREAT!

After all, device memory also sits at a particular system memory address and is non-pageable once you have allocated it. Technically, it should make no difference whether that memory sits in RAM or on any other device.

Tmurray, Any comments?

@Tim,

Although my comments above were mostly in a light vein, the last point seems to hold some water. Can you say something about that?