As my newly purchased laptop will sadly not run CUDA at full functionality (graphics interoperability won’t work it seems), I would like to hear other people’s experience with running CUDA on their laptops. What computer did you use, and what drivers did you install? How does CUDA function on it? Were there any specific steps needed to make it work?
I’m sorry that your laptop is not working with all CUDA functionality yet, but I expect it will work once you are able to install the correct display driver (from your other thread I see that you haven’t been able to install the drivers released with CUDA). NVIDIA does not provide drivers for laptops publicly because we have to work with each individual laptop manufacturer to support custom features (such as display power management). Therefore you need to contact your laptop manufacturer to request that they provide a CUDA-capable display driver.
If and when we find bugs with graphics interop on mobile GPUs, we will address them ASAP.
Today, I actually managed to install the driver released with CUDA by using an .inf file from a different version. The penalty for this was simply that the laptop’s screen is unable to recover from suspend mode. Somewhat annoying, but it would be a low price to pay if it enabled all CUDA functionality. However, it did not improve anything at all. The graphics interop examples still give the same error message about not being supported on multi GPU systems.
As far as I can tell, all example programs that do not display graphics are working fine. But as I want to use CUDA for programs that need to visualize the results, this is of little comfort. It’s still possible to read back results and render with OpenGL in the same program though, but that means doing slow transfers between device memory and host memory.
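For reference, the readback fallback described above looks roughly like this. This is a sketch, not SDK code: renderKernel, the buffer names, and the texture setup are all assumptions for illustration.

```cuda
#include <cuda_runtime.h>
#include <GL/gl.h>

// Hypothetical kernel that fills an RGBA8 image; stands in for whatever
// computation produces the data to visualize.
__global__ void renderKernel(uchar4 *out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        out[y * width + x] = make_uchar4(x % 256, y % 256, 0, 255);
}

// Without interop, every frame takes a round trip through host memory:
// kernel -> cudaMemcpy(DeviceToHost) -> glTexSubImage2D upload.
void displayFrame(uchar4 *d_pixels, uchar4 *h_pixels,
                  GLuint tex, int width, int height)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    renderKernel<<<grid, block>>>(d_pixels, width, height);

    // The slow step: pull the frame back across the bus...
    cudaMemcpy(h_pixels, d_pixels, width * height * sizeof(uchar4),
               cudaMemcpyDeviceToHost);

    // ...then push it straight back to the GPU as a texture update.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, h_pixels);
}
```

With working interop the two copies disappear: the kernel writes directly into a mapped OpenGL buffer that is then drawn without leaving device memory.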
Have you tried http://laptopvideo2go.com? They provide INFs that solve “screen blank after suspend” and “incorrectly reported dimensions - black strip on the screen” problems with NVidia Go GPUs. Worked for me with a 5-year old Toshiba with 420 Go as well as with a new VAIO + 7400 Go.
I just bought my own macbook pro, and I’ve been working on getting first-hand information about CUDA problems on laptops.
(No, I’m not running CUDA in OS X – that’s a whole 'nother ball of wax.)
I have installed apple bootcamp and Windows XP and used an internal prerelease NVIDIA display driver to get CUDA working on my mac. This post is not about where to get an NVIDIA driver that supports CUDA for your laptop. That is also another whole ball of wax since technically NVIDIA cannot release drivers for laptops since the OEMs control release of their own drivers. We are working on that issue also. For now you’ll have to use the various .INF tricks to get a driver installed (see other posts on this thread, but note that we can’t officially support this).
This post is about graphics interop on laptops with CUDA.
It turns out that most G8X laptops, the macbook pro included, appear to report multiple displays in Windows despite having only one GPU. Unfortunately we added some code to the SDK samples that checks for multiple displays and exits with the message “Graphics interoperability on multi GPU systems currently not supported.”
However, at least on the macbook pro, if you comment out the call to “isInteropSupported()” in the sample code, the samples that use interop will work. For example, in the fluidsGL sample, comment out lines 316-318.
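Concretely, the guard being removed looks something like the following (paraphrased; the exact code and line numbers vary by sample and SDK version):

```cpp
// In fluidsGL.cpp (and the other interop samples), early in main():
//
//     if (!isInteropSupported()) {   // falsely reports a multi-GPU
//         cudaThreadExit();          // system on many G8X laptops,
//         return 1;                  // because the single panel is
//     }                              // enumerated as two displays
//
// Commenting out (or short-circuiting) this check lets the sample
// proceed to create its OpenGL buffers and register them with CUDA.
```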
I’m working on fixing this bug for the next release of the CUDA SDK. (And in general the CUDA team is working on making interop work with multiple GPUs for a future release of the CUDA Toolkit.)
Please let us know in a reply to this thread if this works for you.
I’ve just tried the samples simpleGL, fluidsGL and postProcessGL from the cudaSDK and everything seems to work fine after removing the isInteropSupported() check.
My laptop is a Zepto 6324W with a GF8600M GT Card, driver version 163.44 (with modified .inf) and SDK version 1.0, running on WinXP(32).
Thanks Mark! Removing the “isInteropSupported()” call did the trick, and I can finally run all SDK examples after having installed the 163.44 driver nutti mentioned.
However, the more robust solution you provided does not seem to work on my 6224W (and presumably not on nutti’s 6324W either, as only the plastic case differs AFAIK); it just gives the same old error message. After playing around with the code, my 6224W version of your fix looks like this instead:
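(The code block from the original post appears to have been lost. What follows is a hypothetical reconstruction based on the description in this post; moreDevicesToEnumerate() is a made-up placeholder, not actual SDK code.)

```cpp
// Hypothetical sketch: force the display-enumeration do-loop inside
// isInteropSupported() to run at most once, so the laptop panel that
// Windows reports twice is only counted as one device.
int deviceCount = 0;
DWORD devNum = 0;
do {
    // ... enumerate display device devNum and bump deviceCount ...
    devNum++;
    break;   // the hack: bail out after the first iteration
} while (moreDevicesToEnumerate(devNum));   // now never evaluated
```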
As you can see, I just put a hack in there that prevents the do-loop from executing more than once. I’ll try to investigate where the problem lies in the upcoming week.
So I got the fluids demo working; very sweet and quick on the 8600M GT 256 MB MacBook Pro.
Hey Mark Harris, I’m attempting to build your N-body simulator from GPU Gems 3 and I’m getting:
Error 1 error C2440: ‘type cast’ : cannot convert from ‘ParamBase **’ to ‘std::_Vector_const_iterator<_Ty,_Alloc>’ c:\documents and settings\m3the01\desktop\gpugems3 (d)\content\31\demo\common\src\paramgl.cpp 153
I encountered the same error
error C2440: ‘type cast’ : cannot convert from ‘ParamBase **’ to ‘std::_Vector_const_iterator<_Ty,_Alloc>’
Any help would be appreciated!