MacBookPro6,1, CUDA 3.0: Device not found (Beginner's Quest)

About a week ago, I downloaded and installed the CUDA 3.0 SDK and driver on my new MacBookPro (MacBookPro6,1, 2.53 GHz Intel Core i5, 8GB) which comes with a built-in Intel HD Graphics adapter, and an NVIDIA GeForce GT 330M. Immediately after installing the software, I did a “make” on the C programs, and everything went well. I could execute the compiled programs, and it was nice to see the output of ./deviceQuery. I then started reading a book about CUDA, which took me a few days. When I finally felt ready to try out a few things, I was quite surprised that the installed software did not find my CUDA card! Here’s the output of ./deviceQuery:

[font="Courier New"][indent]Carsten-Kuckuks-MacBook-Pro:release carsten$ ./deviceQuery
./deviceQuery Starting…

CUDA Device Query (Runtime API) version (CUDART static linking)

There is no device supporting CUDA

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 55683, CUDA Runtime Version = 0.0, NumDevs = 0

PASSED

Press &lt;Enter&gt; to Quit…

[/indent][/font]

A reboot does not help.
Reinstalling the CUDA driver or the SDK does not help.
Running a video in the background (hoping it would trigger a switch to the CUDA card) does not help either.

Does anybody here on this forum have any idea what I’m doing wrong here?!

Regards,

Carsten

Here’s an update: I installed the application gfxCardStatus, which shows which card is active and also lets me force OS X to use a specific graphics card ( [url="http://codykrieger.com/gfxCardStatus/"]gfxCardStatus by cody krieger[/url] ). When I force OS X to use the Intel card, I get the behaviour described in my first post. When I force OS X to use the nVidia card, everything works nicely.

So the issue boils down to: How do I programmatically force OS X to activate the nVidia card?
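For what it's worth, here is a sketch of the one approach I'm aware of (untested on my part, and the trigger mechanism is an assumption): on 10.6 the automatic graphics switching apparently only watches for OpenGL clients, not CUDA, so creating a hardware-accelerated OpenGL context via CGL and keeping it alive for the duration of the CUDA work might be enough to wake the GT 330M:

```c
/* Sketch: wake the discrete GPU by holding a hardware-accelerated
 * OpenGL context. ASSUMPTION: creating such a context triggers OS X's
 * graphics switching -- I have not verified this on a MacBookPro6,1.
 * Build with: cc force_gpu.c -framework OpenGL
 */
#include <OpenGL/OpenGL.h>
#include <stdio.h>

int main(void) {
    /* Ask for a hardware-accelerated renderer; omitting
     * kCGLPFAAllowOfflineRenderers means the active (online)
     * GPU must satisfy the request. */
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated,
        kCGLPFANoRecovery,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    CGLContextObj ctx = NULL;

    if (CGLChoosePixelFormat(attribs, &pix, &npix) != kCGLNoError || !pix) {
        fprintf(stderr, "CGLChoosePixelFormat failed\n");
        return 1;
    }
    if (CGLCreateContext(pix, NULL, &ctx) != kCGLNoError) {
        CGLDestroyPixelFormat(pix);
        fprintf(stderr, "CGLCreateContext failed\n");
        return 1;
    }

    /* ... do the CUDA work here, while the context is alive ... */

    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    return 0;
}
```

The context would need to stay alive for as long as the CUDA application runs; whether releasing it immediately switches the machine back to the Intel GPU is something I'd want to test before relying on it.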

This issue also raises the question: Is it the responsibility of an application programmer to activate the nVidia card on the MBP, or should it be the responsibility of the CUDA SDK? In my judgement, the case could be made for both…

Hi Carsten,

I had similar issues with a fresh CUDA 3.1 driver/toolkit/SDK install on my i7 MBP running 10.6.4. Your gfxCardStatus tip fixed it for me as well. Thanks. I too would be interested to know more about why some applications (e.g. some of the ones I tried in the SDK) can find the 330M just fine, while others (e.g. deviceQuery) seem to think there are no CUDA devices available when I’m not forcing the use of the nVidia card.
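To make that comparison easy to reproduce, here's a minimal device-count check I put together against the CUDA runtime API (my own sketch, not one of the SDK samples) that you can run under each forced mode in gfxCardStatus:

```c
/* Minimal CUDA device enumeration check.
 * Build with: nvcc -o devcount devcount.c
 * Run it once with the Intel GPU forced and once with the nVidia
 * GPU forced, and compare the output. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices visible: %d\n", count);
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess)
            printf("  device %d: %s\n", i, prop.name);
    }
    return 0;
}
```

On my machine I'd expect this to report zero devices with the Intel card forced and one (the GT 330M) with the nVidia card forced, but I haven't compared it against the SDK apps that do find the card.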

Eric

I’d like to know how to programmatically switch between integrated graphics and the NVIDIA GPU. I guess I’ll take a look at the gfxCardStatus source code, which should show how it’s done.