Hi everyone. I was running 3.1 flawlessly on both my computers (a 2008 8-core Mac Pro with an NVIDIA GeForce GTX 285 and a 2010 13-inch MacBook Pro with a GeForce 320M). I decided to upgrade to the 3.2 RC, first deleting the previous install (/usr/local/cuda, /Library/Frameworks/CUDA.framework, /System/Library/Extensions/CUDA.kext, etc.). I rebooted and reinstalled the developer driver, toolkit, and SDK, all at version 3.2. On my MacBook Pro, after I make the examples and run ./deviceQuery, I get:
cudaGetDeviceCount FAILED CUDA Driver and Runtime version may be mismatched.
I tried rebooting, uninstalling and reinstalling, and running with sudo; nothing works. The examples in the OpenCL directory work fine. I reinstalled 3.1 and everything worked again. Is there a problem with 3.2 and the 320M?
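One way to confirm whether the driver and runtime really disagree is to query both versions directly; these two calls report versions even when device enumeration fails. This is a generic diagnostic sketch (not one of the SDK samples), compiled with nvcc:

```cuda
// version_check.cu -- compare CUDA driver and runtime versions.
// Build (assuming a standard toolkit install): nvcc version_check.cu -o version_check
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int driverVersion = 0, runtimeVersion = 0;

    // Both queries succeed even when cudaGetDeviceCount fails,
    // so they are a useful first step for the "mismatched" error.
    cudaDriverGetVersion(&driverVersion);
    cudaRuntimeGetVersion(&runtimeVersion);

    // Versions are encoded as 1000*major + 10*minor (e.g. 3020 for 3.2).
    printf("Driver:  %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    if (driverVersion < runtimeVersion)
        printf("Driver is older than the runtime -- reinstall the developer driver.\n");
    return 0;
}
```

If the driver reports an older version than the runtime, the 3.2 RC driver did not actually load, which would explain the deviceQuery failure.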
Hi. As I mentioned in my original post, 3.1 works fine; the problem only appears with the 3.2 RC. Although 3.1 works, I would like to have 3.2, as that is the version I run on my Mac Pro, and I would rather work with the same version on all my machines to avoid problems.
Neither 3.1 nor 3.2 works on my MacBook Pro (mid-2010, with a GT 320M).
I always get the message:
cudaGetDeviceCount FAILED CUDA Driver and Runtime version may be mismatched.
I tried reinstalling and removed everything beforehand, i.e. I unloaded the kext, deleted it, and removed the framework, the preference pane, the SDK, etc. It doesn't help.
This lets you decide which of the two graphics chips is currently in use (either the Intel or the NVIDIA one). Switch to "NVIDIA only" mode and reinstall the driver. While in "NVIDIA only" mode, the device should be detected when you run CUDA applications. After reinstalling the driver, you can switch back to dynamic mode whenever you want; just make sure to enter "NVIDIA only" mode again when you want to run a CUDA application.
Got it working. The problem was with the Boost headers that NVIDIA included: I replaced the C/src/interval/boost/numeric headers folder with the latest one from Boost 1.44, and now it compiles fine.