No CUDA device after rebooting...

Hi there,

I have a MacBook Pro with a single GPU (GeForce 8600M GT), running the latest version of Leopard (10.5.8), and I am struggling with the installation of all the packages.

I installed CUDA 2.3 (the 2.3.0 driver, the SDK, and the toolkit).

After rebooting, I always get:

So I can’t use CUDA on my GPU.

When I re-install the drivers, I get:

So it is very annoying to have to reinstall the drivers every time I want to use CUDA.

Does anyone have the same issue?

Any idea how to solve this?

Cheers

I think the topic below describes a similar issue:
http://forums.nvidia.com/index.php?showtopic=104375

Could you check the permissions on the /dev/nvidia* device files?
This tip comes from the last comment in that thread.

Thanks for the tip, but there are no /dev/nvidia* device files on my computer.

However, a few days ago I installed the 2.3a packages, and the problem seems to be gone: I can still use CUDA after rebooting the computer. :-D

I have 2.3a, I have the same problem, and there are no /dev/nvidia* files. I have to reinstall the kext after every reboot. This never used to happen once the install order was sorted out.

I had the same problem using 2.3a on Snow Leopard 10B504: either the “There is no device supporting CUDA.” error, or just the emulator showing up. Looking in the log file, I saw:

Sep 16 00:33:28 MacBook-Pro SystemStarter[21]: "/System/Library/StartupItems/CUDA" failed security check: group writable

Changing the permissions on /System/Library/StartupItems/CUDA to 755 appears to have fixed the problem.

Thanks for the response here and in the other thread. Can you be more explicit - is this a chmod command, or what? I am sorry to say I do not know the spell needed to implement what you have kindly pointed out!

Sorry - found the spell now in one of maolimu’s posts:

sudo chmod g-w /System/Library/StartupItems/CUDA/*

sudo chmod g-w /System/Library/StartupItems/CUDA/

It does require having and remembering an administrator password, since the commands use sudo…
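Putting the two commands together: the idea is to strip group-write permission from the CUDA startup item and everything inside it, so SystemStarter’s security check passes. A sketch, using the paths from the posts above:

```shell
# Show current permissions on the CUDA startup item and its contents
ls -ld /System/Library/StartupItems/CUDA
ls -l /System/Library/StartupItems/CUDA/

# Remove group-write permission from the directory and its files
sudo chmod g-w /System/Library/StartupItems/CUDA
sudo chmod g-w /System/Library/StartupItems/CUDA/*

# The directory should now show as drwxr-xr-x (i.e. 755)
ls -ld /System/Library/StartupItems/CUDA
```

This is equivalent to the `chmod 755` mentioned earlier if the items started out as 775 (group-writable).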

Hi,

I’m also having this problem, but the chmod suggestions haven’t helped me. I have an interesting behavior, though, that might help someone figure out what is going wrong.

If I run my test application (basically, “hello world”) I get an error on the first cudaMalloc of “38” (cudaErrorNoDevice). If I go to the SDK in /Developer/GPU Computing/C/bin/darwin/release and run ./Mandelbrot, then the application window opens normally and runs. I can zoom in and out. If I press “s” to switch to the CPU implementation, the frame rate falls by a factor of 10. The GPU is definitely up and running.

If I leave the Mandelbrot application running, then, when I run “hello world” again, it runs to completion.

If I close the Mandelbrot application, then, when I run “hello world”, it gives me the same error (38).

Similarly, if I try to run the SDK’s “deviceQuery”, I see error 38 – but if I run deviceQuery while Mandelbrot is running, then it behaves normally.

deviceQuery output without Mandelbrot running:

$ ./deviceQuery
[deviceQuery] starting...
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 38
-> no CUDA-capable device is detected
[deviceQuery] test results...
FAILED

Press ENTER to exit...

deviceQuery output with Mandelbrot running:

$ ./deviceQuery
[deviceQuery] starting...
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Found 1 CUDA Capable device(s)

Device 0: "GeForce GT 330M"
  CUDA Driver Version / Runtime Version          4.0 / 4.0
  CUDA Capability Major/Minor version number:    1.2
  Total amount of global memory:                 256 MBytes (268107776 bytes)
  ( 6) Multiprocessors x ( 8) CUDA Cores/MP:     48 CUDA Cores
  GPU Clock Speed:                               1.10 GHz
  Memory Clock rate:                             790.00 Mhz
  Memory Bus Width:                              128-bit
  Max Texture Dimension Size (x,y,z)             1D=(8192), 2D=(65536,32768), 3D=(2048,2048,2048)
  Max Layered Texture Size (dim) x layers        1D=(8192) x 512, 2D=(8192,8192) x 512
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 16384
  Warp size:                                     32
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             256 bytes
  Concurrent copy and execution:                 Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   No
  Alignment requirement for Surfaces:            Yes
  Device has ECC support enabled:                No
  Device is using TCC driver mode:               No
  Device supports Unified Addressing (UVA):      No
  Device PCI Bus ID / PCI location ID:           1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce GT 330M
[deviceQuery] test results...
PASSED

Press ENTER to exit...

This odd behavior showed up after the first reboot following my installation of the CUDA libraries and the SDK.

Any help would be appreciated.

Hi all!

I’ve had the same problem. In my case, every time I ran a CUDA binary from /Developer/GPU Computing/C/bin/darwin/release, the error was “cudaSafeCall() Runtime API error 38: no CUDA-capable device is detected”.

This occurs because the kernel extension CUDA.kext (located in /System/Library/Extensions/) isn’t loaded automatically at system startup. In fact, after loading this extension manually (with the command kextload /System/Library/Extensions/CUDA.kext), everything works.

To solve it, first of all make sure everything is installed correctly: read the Getting Started guide for Mac and follow those steps strictly.

I uninstalled everything, then reinstalled it all, following the steps explained in the guide.

Right after the installation, everything seems to work, because the CUDA.kext extension is loaded (as the guide says, running kextstat | grep -i cuda shows that the extension is loaded correctly).

But after a reboot, the system doesn’t autoload CUDA.kext (in fact, running kextstat | grep -i cuda again shows nothing).
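Before automating anything, the diagnosis and manual workaround can be done by hand; these are just the commands from the posts above collected in one place:

```shell
# After a reboot, check whether the CUDA kext is resident
kextstat | grep -i cuda

# If the command prints nothing, load the kext manually (requires root)
sudo kextload /System/Library/Extensions/CUDA.kext

# Check again; a com.nvidia.CUDA line should now appear
kextstat | grep -i cuda
```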

To solve it:

First, create a .plist file like this:

<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">

<plist version="1.0">

<dict>

        <key>Label</key>

        <string>com.nvidia.cuda.launchd</string>

        <key>ProgramArguments</key>

        <array>

                <string>/sbin/kextload</string>

                <string>/System/Library/Extensions/CUDA.kext</string>

        </array>

        <key>RunAtLoad</key>

        <true/>

        <key>UserName</key>

        <string>root</string>

</dict>

</plist>

and save it to /System/Library/LaunchDaemons/ (I think you could use /Library/LaunchDaemons/ instead, but since CUDA.kext is in /System/Library/, I use /System/Library/LaunchDaemons/).

Make sure it has the right ownership by running

sudo chown 0:0 /System/Library/LaunchDaemons/[filename]

(in my case, sudo chown 0:0 /System/Library/LaunchDaemons/com.nvidia.cuda.launchd.plist). This .plist file tells launchd to load CUDA.kext at boot.
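One caveat worth knowing: launchd is picky about daemon plists. The file must be owned by root:wheel and must not be group- or world-writable, or the job is rejected. A sketch of the full ownership/permission fixup (the filename is the one from this post; the explicit chmod 644 is my addition, but it is the conventional mode for LaunchDaemons):

```shell
PLIST=/System/Library/LaunchDaemons/com.nvidia.cuda.launchd.plist

# Owner must be root:wheel (uid 0, gid 0)
sudo chown 0:0 "$PLIST"

# Readable by everyone, writable only by root
sudo chmod 644 "$PLIST"

ls -l "$PLIST"
```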

Finally, to enable the daemon that will load CUDA.kext, use launchctl:

sudo launchctl load -w /System/Library/LaunchDaemons/com.nvidia.cuda.launchd.plist

And now everything works for me!
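A quick way to confirm the fix took effect after the next reboot (the job label is the one from the plist above):

```shell
# The launchd job should be registered under root
sudo launchctl list | grep com.nvidia.cuda.launchd

# And the kext should now be resident without any manual kextload
kextstat | grep -i cuda
```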

(Edited to correct the steps.)