SDK compile can't find includes?

Installed the latest driver & CUDA toolkit. Installed the SDK (in /home/me/cuda), changed to the C directory, and typed “make”. Got “Error: builtin_types.h: No such file or directory”.

builtin_types.h does exist, in /usr/local/cuda/include. Obviously the build isn’t picking up the right include path, or the CUDA include files aren’t getting installed in the proper location, or something. In my own code I could easily fix it with a -I switch to the compiler, but I don’t really want to go through every Makefile in the SDK and do that. Is there an easy alternative?
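
For my own stuff that’s just something like this (myprog.cu standing in for whatever file I’m building):

nvcc -I/usr/local/cuda/include myprog.cu -o myprog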

Also, is it possible to compile & link CUDA code on a machine that doesn’t have an NVidia card or driver? I work mostly on my notebook, which just has Intel GMA 960 graphics (which is fine since I don’t do extensive graphics), and often at locations where I don’t have a connection to my desktop. I’d like to be able to write & compile on the notebook, then test the code when I have the desktop handy. The “Getting Started” guide says that the driver etc. needs to be installed, but is that just describing the typical setup?

Thanks,
James

CUDA_INSTALL_PATH=/usr/local/cuda make

should probably work.
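
If you want that to stick so a plain “make” keeps working, exporting it from your shell startup file should do it (assuming bash; adjust for your shell):

export CUDA_INSTALL_PATH=/usr/local/cuda

If I remember right, the SDK projects all include a shared common.mk, so that would be the single file to edit instead of touching every project Makefile.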

You don’t need the card and driver to build most code (if you need NVIDIA’s OpenGL headers and libraries, they have to be extracted from the standard driver bundle). And if you relink your code with Ocelot, you can execute and debug it without access to a CUDA GPU as well.
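
As a quick sanity check that the toolchain alone is enough, something like the following works on a machine with no NVIDIA hardware at all (vecadd.cu being just some small .cu file of yours); only the last step actually needs a device:

nvcc -c vecadd.cu -o vecadd.o   # device code gets compiled and embedded here, no GPU needed
nvcc vecadd.o -o vecadd         # links against libcudart, still no GPU needed
./vecadd                        # only this needs a CUDA device (or an Ocelot relink)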
