printf in device kernels and <stdio.h>

I wonder why using printf in CUDA kernels requires including the standard, host-oriented <stdio.h> header file.

Consider a program from the CUDA Programming Guide 4.0:

// printf() is only supported
// for devices of compute capability 2.0 and above
#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ < 200)
#define printf(f, ...) ((void)(f, __VA_ARGS__),0)
#endif

__global__ void helloCUDA(float f)
{
    printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

int main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaDeviceReset();
}

If I compile it like this:

> nvcc -arch=sm_20 test.cu

I get these errors:

test.cu(10): error: identifier "printf" is undefined

1 error detected in the compilation of "/tmp/tmpxft_000008bc_00000000-9_test.cpp4.ii".

I guess the *.cpp4.ii file is used in a compilation phase where C++ templates are processed by cudafe++.
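(If you want to inspect the intermediate files and the individual compilation steps yourself, nvcc's -keep and -v options should preserve the temporaries and print each step, assuming your toolchain version supports those flags:

> nvcc -arch=sm_20 -keep -v test.cu
)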

For some reason cudafe++ does not recognize printf as an internal CUDA keyword, nor is it given a proper C++ declaration of printf unless one provides it with

#include <stdio.h>

which is against the rules (host functions cannot be invoked from device code).

I think this shows an inconsistency in the current implementation of the CUDA language (other CUDA keywords, like threadIdx, come with a proper C++ declaration).

What makes you think printf is an internal CUDA keyword? It is a library function, just like it is in native C. You include the header, the toolchain automagically overlays the prototype with a CUDA version, and the runtime device and host functions needed to support printf get “silently” included into the code emitted by nvcc. Just like if you were programming in native C.


“printf” is not a keyword in CUDA, nor is it in C/C++ on which CUDA is modelled. It is a library function that requires a prototype which is supplied by a header file, and thus inclusion of that header file. Similarly, inclusion of stdlib.h is required when using “malloc”, again just like in C. By contrast threadIdx is part of the core language. The inclusion of host header files has been around in CUDA from the start, except it wasn’t always visible. math.h for example is being pulled in by one of the CUDA header files. My understanding is that the use of host header files is helpful in providing the consistency required for the tight integration of host and device code that CUDA offers.
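For reference, here is a minimal sketch of the corrected program once the header is included (on a compute capability 2.0 device the fallback macro is never expanded, so the real device-side printf is used):

#include <stdio.h>

// printf() is only supported
// for devices of compute capability 2.0 and above
#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ < 200)
#define printf(f, ...) ((void)(f, __VA_ARGS__),0)
#endif

__global__ void helloCUDA(float f)
{
    // device-side printf: each of the 5 threads prints its own line
    printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

int main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    // flush the device-side printf buffer before the program exits
    cudaDeviceReset();
}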


OK, but “texture” is sort of a reserved word in CUDA, isn’t it?

I’m just recovering from a stupid error of moving texture references from a header to an object file, and I admire how NVIDIA forced the texture<,> template to link statically, against all C++ rules.

So before sending a file to gcc, you do some “NVIDIA” preprocessing, yeah? You precede “texture” with the C++ static keyword and get rid of any "extern"s. Yes, it works!

If you don’t believe texture is a CUDA keyword, try this:

struct texture {};

int main()
{
}
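For what it’s worth, the clash happens because the CUDA headers already declare a texture<DataType, Type, ReadMode> class template at global scope, so the plain struct collides with it. A typical use of that template looks roughly like this (a sketch using the classic texture reference API; texRef, readTex, out and n are just placeholder names):

// file-scope texture reference, as required by the classic texture reference API
texture<float, 1, cudaReadModeElementType> texRef;

__global__ void readTex(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(texRef, i);  // fetch through the global texture reference
}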

In a similar way, printf should be declared somewhere as a device function, OR the user guide should inform users that they need to include the standard header files.

As it is now, printf is another “magic” CUDA “keyword”.
