Mac Pro, OS X 10.5.4 with 8800 GT: CUDA cannot find my card

System: Mac Pro with OS X 10.5.4 and an NVIDIA GeForce 8800 GT.

cuInit() in the test file below returns CUDA_ERROR_NO_DEVICE. Any ideas?

System specifics:

    Chipset Model: NVIDIA GeForce 8800 GT
    Type: Display
    Bus: PCIe
    Slot: Slot-1
    PCIe Lane Width: x16
    VRAM (Total): 512 MB
    Vendor: NVIDIA (0x10de)
    Device ID: 0x0602
    Revision ID: 0x00a2
    ROM Revision: 3233

Modified from code posted by Mass Fatica (NVIDIA):

   /*
    * suggested:
    *   g++ -o gpumeminfo gpumeminfo.c -I /usr/local/cuda/include/ -L /usr/local/cuda/lib/ -lcuda
    * worked:
    *   nvcc gpumeminfo.c -lcuda
    */
   #include <cuda.h>
   #include <stdio.h>

   static unsigned long inKB(unsigned long bytes) { return bytes / 1024; }
   static unsigned long inMB(unsigned long bytes) { return bytes / (1024 * 1024); }

   static void printStats(unsigned long free, unsigned long total)
   {
       printf("^^^^ Free : %lu bytes (%lu KB) (%lu MB)\n", free, inKB(free), inMB(free));
       printf("^^^^ Total: %lu bytes (%lu KB) (%lu MB)\n", total, inKB(total), inMB(total));
       printf("^^^^ %f%% free, %f%% used\n",
              100.0 * free / (double)total, 100.0 * (total - free) / (double)total);
   }

   int main(int argc, char **argv)
   {
       unsigned int free, total;   /* the CUDA 2.x driver API takes unsigned int here */
       int gpuCount, i;
       CUresult res;
       CUdevice dev;
       CUcontext ctx;

       CUresult e = cuInit(0);
       printf("error %d\n", e);    /* prints: "error 100" (CUDA_ERROR_NO_DEVICE) */

       cuDeviceGetCount(&gpuCount);
       printf("Detected %d GPU\n", gpuCount);

       for (i = 0; i < gpuCount; i++)
       {
           cuDeviceGet(&dev, i);
           cuCtxCreate(&ctx, 0, dev);          /* a context is needed before cuMemGetInfo */
           res = cuMemGetInfo(&free, &total);
           if (res != CUDA_SUCCESS)
               printf("!!!! cuMemGetInfo failed! (status = %x)\n", res);
           printf("^^^^ Device: %d\n", i);
           printStats(free, total);
           cuCtxDetach(ctx);
       }
       return 0;
   }

It works fine on my Mac Pro (CUDA 2.0, 10.5.5).

Compiled with:
gcc check_mem.c -I/usr/local/cuda/include -L/usr/local/cuda/lib -lcuda

$ ./a.out
error 0
Detected 1 GPU
^^^^ Device: 0
^^^^ Free : 415762688 bytes (406018 KB) (396 MB)
^^^^ Total: 536674304 bytes (524096 KB) (511 MB)
^^^^ 77.470206% free, 22.529794% used

Check your CUDA installation. What is the output of deviceQuery?
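
If the SDK samples aren't built, a minimal driver-API check along the lines of the program above can stand in for deviceQuery. This is only a sketch (compile and link it the same way, with -lcuda); it just enumerates devices and prints the name and compute capability of each:

   /* Minimal stand-in for deviceQuery using the driver API (sketch only). */
   #include <cuda.h>
   #include <stdio.h>

   int main(void)
   {
       int count = 0, i, major, minor;
       char name[256];
       CUresult res = cuInit(0);

       if (res != CUDA_SUCCESS) {
           /* 100 is CUDA_ERROR_NO_DEVICE, the error reported above */
           printf("cuInit failed with error %d\n", res);
           return 1;
       }
       cuDeviceGetCount(&count);
       printf("Found %d CUDA device(s)\n", count);

       for (i = 0; i < count; i++) {
           CUdevice dev;
           cuDeviceGet(&dev, i);
           cuDeviceGetName(name, sizeof(name), dev);
           cuDeviceComputeCapability(&major, &minor, dev);
           printf("  Device %d: %s (compute capability %d.%d)\n", i, name, major, minor);
       }
       return 0;
   }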

I’m having the same problem on 10.5.5. I have an 8800GT in slot 1 (configured x16) and a 7300GT in slot 4 (x8). The 7300GT is driving two displays. The 8800GT has no displays attached to it (it wouldn’t play nice with my KVM switch).

Anyway:

emp% /Developer/CUDA/bin/darwin/release/deviceQuery
There is no device supporting CUDA.

Device 0: “Device Emulation (CPU)”
Major revision number: 9999
Minor revision number: 9999
Total amount of global memory: 4294967295 bytes
Number of multiprocessors: 16
Number of cores: 128
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 8192
Warp size: 1
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 262144 bytes
Texture alignment: 256 bytes
Clock rate: 1.35 GHz
Concurrent copy and execution: No

Test PASSED

I have tried reinstalling the CUDA driver package, moving the card from slot 1 to slot 4, plugging a monitor into the 8800 to make sure it was initialized, etc.
Nothing works…

It isn’t a problem with my installation technique or some such, as the same setup works fine on my MacBook Pro.

Reinstall the toolkit and double-check that CUDA.kext is actually installed (select Customize during the installation). It seems to get deselected with no rhyme or reason, so that might be the cause.
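
A couple of quick sanity checks from a shell (paths assumed from the standard CUDA installer layout on 10.5, so adjust if yours differs):

   kextstat | grep -i cuda                          # is the CUDA kernel extension loaded?
   ls /System/Library/Extensions/ | grep -i cuda    # is the kext installed at all?
   ls -l /usr/local/cuda/lib/libcuda.dylib          # driver library the -lcuda link needs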

Perfect, that worked. Thanks.

-jm