Several H/W related Questions

Hi all,

I’m a newbie with CUDA, so I have several questions.

I’m planning to build quite a robust ray tracing system with CUDA, so my first task is to select
graphics cards. :rolleyes:

  • It seems to me that Quadro Plex products are far more expensive than GeForce series cards, while the
    difference in their computation power (in terms of flops) is not too big (e.g., GTX 280/295 vs. Quadro Plex D2).
    So my first question is whether I should buy a Quadro or a GeForce.

  • If I use two GTX 280 cards and interconnect them using SLI, can CUDA use them as if they were a single
    device? If so, does that mean a total of 2GB of memory is available? (The GTX 280 has 1GB of memory.)

  • Finally, in a system with an arbitrary non-CUDA graphics card (ATI or old NVIDIA) and a CUDA-ready card,
    can I use the CUDA card for computation and the other card for display? I’ve seen several postings on this issue,
    but they are quite confusing. (Maybe the answer is “it depends”? :shifty: )

Many thanks in advance.

  • I think you can use two cards simultaneously for computing if you disable SLI, but in that case they won’t share memory.

  • It’s possible. You only have to configure your system properly.

You can use a non-CUDA card to drive the display. In fact, the Tesla boards don’t even have a display port. You will just need to copy the rendering from the CUDA device(s) to an OpenGL/DirectX/whatever texture/surface on the display card. In my experience you lose a few fps doing it this way due to the slowness of the PCI bus transfer, but it does work.
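A minimal sketch of that copy path, assuming the kernel has already rendered an RGBA8 frame into device memory (the function and buffer names here are hypothetical, not from any SDK sample):

```cuda
// Hedged sketch: copy a frame rendered by a CUDA kernel back to host
// memory, then upload it to an OpenGL texture owned by the display card.
// d_frame, width, height and tex are assumed to exist already.
#include <cuda_runtime.h>
#include <GL/gl.h>
#include <stdlib.h>

void present_frame(const uchar4 *d_frame, int width, int height, GLuint tex)
{
    size_t bytes = (size_t)width * height * sizeof(uchar4);

    // Staging buffer on the host; pinned memory (cudaMallocHost)
    // would speed the transfer up.
    uchar4 *h_frame = (uchar4 *)malloc(bytes);

    // Device -> host over the PCI bus (this is the slow part).
    cudaMemcpy(h_frame, d_frame, bytes, cudaMemcpyDeviceToHost);

    // Host -> display card as a texture update.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, h_frame);

    free(h_frame);
}
```

The round trip through host memory is exactly the fps cost mentioned above; on a single CUDA-capable card you would use OpenGL interop instead and skip both copies.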

It depends. If you are looking for a development platform: go GTX 280. If you need massive amounts of memory (4GiB), go Tesla 1060. If you are deploying a dense server that will run jobs 24/7, go Tesla 1070. If … (there are a lot more ifs). And this is only my opinion, search for some of the other “selecting a GPU” threads to see many arguments for and against Tesla (let’s keep it out of this thread guys, that horse has been beaten to death).

No. SLI bonds the two cards together for graphics only. CUDA only sees one of the two cards in SLI. If you disable SLI, however, CUDA sees two cards which you can run separate applications on. Or, if you write your application with one thread controlling each GPU, you can use both at once.

It depends :) If you are running linux, you can mix ATI and NVIDIA no problem. If you are running windows, forget about even trying. Tim Murray at NVIDIA says that it is possible in Windows XP, but people attempting so on the forums have reported issues (i.e. it doesn’t work).

As for mixing old NVIDIA with new NVIDIA cards: download the release notes for the latest CUDA driver (177.80 I think) and see which old GPUs it supports. NVIDIA drops the oldest cards off the bottom every once in a while, so if you want the system to stay future-proof and use the latest CUDA for a few years, get a more recent card.

If you do not need the extra memory, get a GeForce.

No, they are separate devices.

On linux you can have cards from different manufacturers.

On XP it should also be possible.

On Vista it is not possible.

When using older cards from NVIDIA you have to be careful that they are not too old, e.g. they should be supported by the same driver. Maybe it is possible to have 2 NVIDIA drivers loaded at the same time, but my guess is that it will at least be a painful experience.

Aaah, good point. My idea of an “old” card was still an NVIDIA 7800 :) Since having driver issues with ATI boards some years ago I’ve only ever used NVIDIA.

Paul Thurott was talking about Windows 7 and apparently it might be possible again to have multiple graphics drivers installed.

But that’s for Windows 7… and still not confirmed yet.

I guess it will be presented as a great new innovation ;)

Quadro cards are professional graphics cards. They are probably more thoroughly tested and also they have more memory typically and are powerful for visualization.

My understanding is that even if you have two GTX 280 cards, you need two CPU threads, each launching kernels on one of them. You can’t combine the two cards logically into a single device with 2GB of memory; rather, kernels execute in parallel on the two cards, each with its own 1GB of memory.
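A sketch of the one-host-thread-per-GPU pattern described above, assuming the CUDA toolkits of this era (where each host thread owns at most one device context); the kernel itself is just a placeholder:

```cuda
// Hedged sketch: one pthread per GPU, each bound to its own device.
#include <cuda_runtime.h>
#include <pthread.h>
#include <stdio.h>

__global__ void work(float *out) { out[threadIdx.x] = threadIdx.x; }

static void *worker(void *arg)
{
    int dev = *(int *)arg;
    cudaSetDevice(dev);          // bind this host thread to its own card

    float *d_out;
    cudaMalloc(&d_out, 256 * sizeof(float));
    work<<<1, 256>>>(d_out);     // each card runs its own kernel copy
    cudaThreadSynchronize();     // pre-CUDA-4.0 spelling of device sync
    cudaFree(d_out);
    return NULL;
}

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);  // e.g. 2 with SLI disabled

    pthread_t threads[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < count && i < 2; ++i)
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    for (int i = 0; i < count && i < 2; ++i)
        pthread_join(threads[i], NULL);
    printf("ran kernels on %d device(s)\n", count);
    return 0;
}
```

Each thread sees only its own 1GB; any data shared between the cards has to go through host memory explicitly.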

Yes, you can use a non-CUDA card for display, and when executing your kernel you can specify the device (the CUDA-enabled card) on which to run it using the cudaSetDevice API (please refer to the CUDA reference manual that comes with the SDK for details).
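For example, a small program like this (just a sketch) lists what the runtime can see and then picks a device explicitly; a non-CUDA display card simply won’t appear in the list:

```cuda
// Hedged sketch: enumerate CUDA devices, then select one by index.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s (compute %d.%d, %lu MB)\n",
               i, prop.name, prop.major, prop.minor,
               (unsigned long)(prop.totalGlobalMem >> 20));
    }

    cudaSetDevice(0);   // all subsequent kernels go to device 0
    return 0;
}
```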

MisterAnderson42 gave an excellent answer.

Thanks to all of you for the kind answers!

So let me see if I have this right before I trash my currently working system :)

System is Linux 64-bit (ubuntu) with an old GeForce 7600 GT graphics card (not CUDA-capable).

To try out CUDA without paying for a Tesla, I have got hold of a GeForce GTX 260 (cheap but CUDA-capable).

I would like to keep using the old 7600 for graphics and the new 260 for CUDA programs.

I have downloaded the CUDA driver cudadriver_2.3_linux_64_190.18.run

I assume this is based on (identical to???) the ‘normal’ 190.18 driver. The release notes for this (http://www.nvnews.net/vbulletin/showthread.php?t=136281) say it supports “GeForce 6xxx and newer NVIDIA GPUs”. So I take this to mean that if I install the above CUDA driver it will run my old graphics card for doing graphics (despite the fact that card does not support CUDA), and my new card for doing CUDA.

Please, if I am wrong about this, let me know! And if I am right, when I put in the new card, (with the old card still connected to the display), will the system figure out to keep running the graphics on the old card? And will the CUDA system give me the choice of just one device (the new card) to run CUDA programs? Or will I have to do any further configuration?

Thanks in advance

Gareth Williams

Yes, this is correct. All NVIDIA drivers are CUDA capable these days.

Sometimes adding a new card will change the PCI ID of the old one. So X may not start right off the bat. You may need to edit your xorg.conf with the new correct PCI ID.
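If X does fail to start, the fix is a one-line change to the Device section. The bus address below is just a placeholder; run `lspci | grep -i vga` to find the real one for your old card:

```
# /etc/X11/xorg.conf -- pin the display to the old card.
# "PCI:1:0:0" here is hypothetical; use the address lspci reports.
Section "Device"
    Identifier "GeForce7600GT"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
EndSection
```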