Can't Compile Examples

I’m running OS X 10.8.2. I downloaded and installed CUDA 5.0 for Mac OS X without a problem, but when I try to compile the examples per the instructions on the “getting started” page, I get this message:

Makefile:79: *** MPI not found, not building simpleMPI… Stop.
make: *** [0_Simple/simpleMPI/Makefile.ph_build] Error 2

The PATH and DYLD_LIBRARY_PATH variables are defined as described, and I’ve also tried compiling for the 64-bit architecture (make x86_64=1), but it didn’t help. Is there something I’ve forgotten to install? Something not mentioned in the “getting started” document?

Thanks!
~N

MPI is an additional library that may or may not be present on a given system. The Makefile was inadvertently constructed so that it errors out instead of simply skipping this app when MPI is not present. You can invoke make with the -k flag to skip over this build failure and keep building the other samples; alternatively, you could install MPI on your machine.
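For example, from the samples directory where you ran make before:

make -k

The -k flag tells make to keep going after an error, so it will still report the simpleMPI failure but continue building the rest of the samples.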

Ah, thanks! As I understand it, MPI is a library for distributing work over a network, is that right? If so, how does it interact with CUDA?

Okay so some things compiled! :)
I’m getting this output when I run deviceQuery:

nat-oitwireless-inside-vapornet100-c-14559:deviceQuery Zephyr$ ./deviceQuery
./deviceQuery Starting…

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GT 650M"
  CUDA Driver Version / Runtime Version          5.0 / 5.0
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 1024 MBytes (1073414144 bytes)
  ( 2) Multiprocessors x (192) CUDA Cores/MP:    384 CUDA Cores
  GPU Clock rate:                                775 MHz (0.77 GHz)
  Memory Clock rate:                             2000 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 262144 bytes
  Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65536), 3D=(4096,4096,4096)
  Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Maximum sizes of each dimension of a block:    1024 x 1024 x 64
  Maximum sizes of each dimension of a grid:     2147483647 x 65535 x 65535
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 5.0, CUDA Runtime Version = 5.0, NumDevs = 1, Device0 = GeForce GT 650M
nat-oitwireless-inside-vapornet100-c-14559:deviceQuery Zephyr$

TL;DR: the test didn’t pass.

It actually looks like the test wasn’t run. What does this mean?

That is the test, and it ran correctly (assuming that’s your card and everything).

To answer the question about MPI: To first order, the use of MPI is orthogonal to the use of CUDA. When you have a cluster of CUDA-enabled nodes, you can use MPI to parallelize work across the nodes of the cluster, while CUDA is used to parallelize the work assigned to each node. The simpleMPI app shows how this multi-layer partitioning of work can be done, which is useful for programmers who work in a cluster environment and are already familiar with MPI but new to CUDA. The basics of MPI aren’t hard to pick up, so if you have access to a cluster of some sort, give it a try.
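To make that pattern concrete, here is a rough sketch of the idea (this is not the actual simpleMPI source; the kernel, buffer sizes, and names are made up for illustration):

#include <mpi.h>
#include <cuda_runtime.h>

// Toy kernel standing in for whatever GPU work each node does.
__global__ void scaleKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int perNode = 1 << 20;      // elements per node (arbitrary choice)
    float *local = new float[perNode];
    float *full  = NULL;

    if (rank == 0) {
        // The root rank owns the whole dataset.
        full = new float[(size_t)perNode * size];
        for (int i = 0; i < perNode * size; ++i)
            full[i] = (float)i;
    }

    // MPI layer: scatter one slice of the data to every node.
    MPI_Scatter(full, perNode, MPI_FLOAT,
                local, perNode, MPI_FLOAT, 0, MPI_COMM_WORLD);

    // CUDA layer: each node processes its slice on its local GPU.
    float *d_data;
    cudaMalloc(&d_data, perNode * sizeof(float));
    cudaMemcpy(d_data, local, perNode * sizeof(float), cudaMemcpyHostToDevice);
    scaleKernel<<<(perNode + 255) / 256, 256>>>(d_data, perNode);
    cudaMemcpy(local, d_data, perNode * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_data);

    // MPI layer again: gather the processed slices back on the root.
    MPI_Gather(local, perNode, MPI_FLOAT,
               full, perNode, MPI_FLOAT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        delete[] full;
    delete[] local;

    MPI_Finalize();
    return 0;
}

You would compile something like this with nvcc (pointing it at your MPI headers and library) and launch it with mpirun; each MPI rank then drives the GPU on its own node.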