Hello, is it true that if you run CUDA kernels your screen will freeze if you only have one graphics card? I am afraid that when I run kernels that take a long time, I will not be able to use my computer at all. I am getting the HP Elite 450t with a 2 GB GeForce GT 420. Should that be enough to let me use my computer while running moderate-sized kernels? I would like to at least surf the web and listen to music while a CUDA program runs. I do not plan to do anything extreme, but it would be very annoying to not be able to do anything.
I am really excited to start programming with CUDA; it is part of my research project. Since that desktop only has one graphics card slot, should I cancel my order and get a desktop that has two GPU slots? Or is it not worth it, since the freeze is very minimal? The desktop also has 9 GB of RAM.
I am a little nervous and not sure what to do. Thanks for the feedback, guys.
It doesn’t matter how fast your GPU is, unfortunately. Currently there is no preemptive multitasking for CUDA devices, so a GPU must be 100% devoted to one task at a time. The timeslice for CUDA is a single kernel call (or other operation), not the duration of the entire program. Most programs I write have kernel calls that last less than 10 milliseconds, which would be bearable (though a little jerky) for concurrent CUDA/display use. However, it really depends on what you want to do. Almost any CUDA kernel that takes several seconds can probably be broken down into several shorter kernels, and on a display GPU it has to be: kernels longer than roughly 5 seconds are terminated by the display driver’s watchdog.
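For example, a long pass over a big array can be sliced into many short launches so the display driver gets a chance to run between them. This is just a rough sketch, with a made-up processChunk kernel and arbitrary sizes:

    #include <cuda_runtime.h>

    // Hypothetical kernel: processes one chunk of a large array
    // (the actual work here is just a placeholder).
    __global__ void processChunk(float *data, int offset, int n)
    {
        int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 2.0f;   // placeholder work
    }

    // Instead of one kernel over all n elements, launch many short
    // kernels so no single launch approaches the watchdog limit.
    void processAll(float *d_data, int n)
    {
        const int chunk   = 1 << 20;   // tune so each launch takes at most a few ms
        const int threads = 256;
        for (int offset = 0; offset < n; offset += chunk) {
            int remaining = (n - offset < chunk) ? (n - offset) : chunk;
            int blocks    = (remaining + threads - 1) / threads;
            processChunk<<<blocks, threads>>>(d_data, offset, n);
            cudaDeviceSynchronize();   // drain the queue so the GPU can service the display
        }
    }

With 4-byte elements and a chunk of about a million elements, each launch only touches a few megabytes, so each call should stay in the millisecond range, well under the watchdog limit.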
A potentially bigger issue is that you cannot use the live debugging tools (cuda-gdb or Parallel Nsight, formerly codenamed Nexus) on the display GPU for the same reason. If you prefer a live debugger, you will probably want to get a second cheap graphics card and use it as the primary display. Life is easier if the second card also has an NVIDIA GPU, since operating systems vary in how well they handle graphics drivers from multiple vendors.
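If you do end up with two cards, you can tell which GPU the driver watchdogs by querying the device properties. A minimal sketch (nothing here is specific to any particular card):

    #include <cstdio>
    #include <cuda_runtime.h>

    // List the CUDA devices and prefer one without the display
    // watchdog, which is normally the card with no monitor attached.
    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            printf("Device %d: %s, compute %d.%d, watchdog %s\n",
                   d, prop.name, prop.major, prop.minor,
                   prop.kernelExecTimeoutEnabled ? "on (likely display)" : "off");
            if (!prop.kernelExecTimeoutEnabled)
                cudaSetDevice(d);   // run long kernels on the compute-only card
        }
        return 0;
    }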
To avoid disappointment when comparing execution times on GPU vs CPU, I’d recommend adding a more powerful GPU for CUDA instead of a cheaper one for display. While 2 GB of GPU memory sounds impressive, its computational power is no match for a quad-core Core i7.
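When you do compare, it’s worth timing just the kernel with CUDA events rather than putting a wall-clock timer around the whole program, so transfer and startup overhead don’t muddy the picture. A minimal sketch, with scale as a made-up stand-in kernel:

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *x, int n)   // made-up kernel to time
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    int main()
    {
        const int n = 1 << 22;
        float *d_x;
        cudaMalloc(&d_x, n * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);               // GPU-side timestamp before the launch
        scale<<<(n + 255) / 256, 256>>>(d_x, n);
        cudaEventRecord(stop);                // GPU-side timestamp after the kernel
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("kernel time: %.3f ms\n", ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_x);
        return 0;
    }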
I totally missed that the original poster was going to use a GT 420, which I had never heard of until now. (I misread it as GT 240.) Although it doesn’t sound super fast, I am intrigued that the GT 420 appears to be a single-slot Fermi card. Are there any other single-slot cards out there with compute capability 2.0 or higher? I could use one for a small modular compute node I’m designing.
Originally I planned on getting an HPE-450t from shopping.hp.com. It comes with a quad-core Intel Core i7, 8 GB of RAM, and a 2 GB NVIDIA GeForce GT 420. I had trouble finding information about this card, but I came across the “Comparison of Nvidia graphics processing units” Wikipedia page and found the OEM GeForce GT 420, which appears to have the same amount of RAM. I was excited to learn that this was one of the first graphics cards of the Fermi generation.
However, with the screen-freeze issue, I may need to get the HPE-480t, which comes with two PCI-E x16 slots (among other expandability). Although I am only a beginner, I am worried that the lack of live debugging on a single graphics card may negatively affect my CUDA experience. I am willing to get the HPE-480t with only a 1 GB NVIDIA GeForce 315, since the price is about the same for both models. What do you guys think?
Yes, but it’s easier than you think. Look in your closet or ask your friends for some lame archaic NVIDIA card lying around. Everybody has that 2004-era box, or at least an old GeForce 7600 or whatever, just gathering dust.
But the second secret… you can use any PCIe card in an x1 slot! PCIe autodetects the lanes available and uses what it can negotiate.
But x1 slots can’t PHYSICALLY hold an x16 card… so you just modify the card! It’d be scary if it were an expensive card, but for a throwaway, it’s not so frightening.
Wow, that sounds really tempting, SPWorley. If I can find a cheap x1 graphics card, then I can still get the 2 GB NVIDIA card! :) Is there a problem if the graphics card you use only for display purposes is much older than the CUDA graphics card? That link you posted on eBay, is that a PCI-E x1 card?
It sounds like I can keep my original build (HPE-450t), since I can use an old GeForce card to display my debugging tools while I run a CUDA app. Right?
It’s nearly impossible to find an x1 PCIe graphics card. The point is that you can use an x16 card in an x1 slot, and it just downgrades its communication to x1.
That link is just a random eBay card to show you they really are just $10. Pick whatever you like, though one with DVI output may be better than the really old VGA-only ones.
It’s OK to use an older NVIDIA card… the current unified drivers still support even ancient cards. Just make sure it’s PCIe.