Cannot open include file 'cutil_inline.h'

I’m setting up a new Windows 7 workstation with CUDA, and I can’t seem to get past this error.

fatal error C1083: Cannot open include file ‘cutil_inline.h’

I have read previous forum postings that say I have to tell the compiler where this file is using the “-I” flag. The problem is that I can’t find the file on my system. Other forum postings claim the file is in the “common” directory of the CUDA SDK installation folder. My “C:\CUDA” directory doesn’t have a common folder.

Do I need to install something else? All I have installed is the CUDA Toolkit and Developer Drivers for my card (and Visual C++ as a compiler).

Also, Windows search claims the file cutil_inline.h does not exist anywhere on my system.

  • Bill

Found the solution…

Go to the main CUDA download page, which at the time of this post is:
http://developer.nvidia.com/object/cuda_3_0_downloads.html

If that link is dead, it will probably be the first or second hit for a Google search on “CUDA SDK”.

The file that must be downloaded and installed is:
“GPU Computing SDK code samples”

The default installation directory is c:\users\yourname\application data… — something long, which isn’t convenient to point your build at. I recommend installing to a more convenient location.

Don’t ask me why this header file is not included in the CUDA toolkit, or why it is packaged with a download labelled “code samples.”

This header is part of a utility library designed to make the CUDA example programs shorter. You are strongly discouraged from using it directly in your programs. If you do want to use parts of it in your programs, you should copy out the part you want into your code base. NVIDIA does not consider cutil part of the CUDA API and can change it without warning.

What headers would you recommend instead?

Oh, sorry, I was answering the specific question as to why the cutil headers are not in the CUDA toolkit.

Are you trying to compile the GPU Computing SDK examples, or some other code?

PHamnett: Thank you for asking the logical followup question.

seibert: I’m compiling some other code. I inherited a CUDA project written by someone else and I have been teaching myself CUDA by extending it. Apparently whoever developed the original code was not aware that it is bad practice to use this library directly. Given that every CUDA program I have ever seen uses these functions, I thought they were part of the language. I guess if I have to learn “real” CUDA, I have to read the cutil source code.

This is an unfortunate side effect of the SDK examples making heavy use of it; most people learning from them naturally pick up the conventions. I wouldn’t go nuts trying to remove cutil from an existing application. Just be aware of the issue in case something unusual comes up.

I find the “check error and abort if something is wrong” macro from cutil very useful, so I have lifted it into my own headers. However, I’m glad I read the header before copying it into my code, because these macros in cutil compile to nothing unless the _DEBUG preprocessor symbol is defined. I would bet more than a few people have thought they were checking errors with these macros when in fact they were not. :)

(And for further confusion: the CUDA SDK is not really an SDK, which also contributes to the assumption that cutil is part of the API. It’s just a bunch of example programs. The CUDA Toolkit is what other people would typically call an “SDK”. Early bad naming choices became stuck due to tradition.)

In one post you have just brought the entire confusing mess into perspective for me. I spent 2 hours yesterday trying to figure out why I could download the SDK code samples, but not the SDK itself. And not only did NVIDIA choose to name their “example programs” a “developers kit”, they also embedded a real “developers kit” (cutil) into their “code samples.” If they knew these functions were so helpful, why not finish developing them, write a little documentation, and put them in the toolkit?

Thank you for your helpful reply.

I’m not one to speak for NVIDIA, but looking through cutil, I see a lot of simple error-checking macros and I/O functions to read basic ASCII data and image files into memory. The latter have nothing to do with CUDA and would unnecessarily bloat the API if they were formally added. The error-checking macros are useful boilerplate if “abort instantly if anything goes wrong” is a good error handling approach for your application, though.

There’s other stuff that I would consider not applicable to CUDA either, like timers and host thread creation, which are better handled by native APIs or existing cross-platform libraries. I do second a recent request that perhaps some of the 2/3/4-component vector operations in cutil_math be cleaned up and put into an optional header in CUDA proper.

So aside from the error checking and the vector math, the rest of the common folder doesn’t really have anything to do with CUDA and just gets the I/O and platform specific code out of the examples. I don’t think anyone at NVIDIA wants to see cutil become another mediocre cross-platform library that breaks people’s programs when someone changes something.

(I would really like that vector math header, though… :) )

For me, cutil_inline.h was in the hidden ProgramData folder, as follows:

C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK 4.2\C\common\inc

I fixed the C1083 issue by doing this:
Right-click on your project (VS8) > Properties
Custom build step
Command line > add this: -I"C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK 4.2\C\common\inc"
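If you build from the command line instead of Visual Studio, the equivalent is passing the same path to nvcc with -I (here `kernel.cu` is a placeholder for your own source file):

```shell
nvcc -I"C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK 4.2\C\common\inc" kernel.cu -o kernel.exe
```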

Hope this can help you…