How to tell the compiler that a pointer points to global memory

I have a kernel in which I have to calculate the sum of many individual vectors (all of the same size). So what I did was the following:

__global__ void test_kernel(float *output, float *input1, float *input2,
                            float *input3, float *input4, float *input5,
                            float *input6, float *input7)
{
    unsigned int tid = threadIdx.x;
    float *vector;

    __shared__ float sdata[blockSize];   // blockSize is a compile-time constant

    // each block picks the input vector it will reduce
    switch (blockIdx.x) {
        case 0: vector = input1; break;
        case 1: vector = input2; break;
        case 2: vector = input3; break;
        case 3: vector = input4; break;
        case 4: vector = input5; break;
        case 5: vector = input6; break;
        case 6: vector = input7; break;
    }

    // here comes reduction code (modified from the example) that works on vector

    if (tid == 0)
        output[blockIdx.x] = sdata[0];
}

Now the compiler is complaining that it cannot determine which type of memory vector points to (and luckily assumes global memory, so my results are correct). But adding __global__ in front of float *vector gives me this warning:

"./test_kernel.cu", line 39: warning: invalid attribute for variable "vector"

     __attribute__((__global__)) float *vector;

So how should I instruct the compiler that vector points to global memory?

Yes, these are advisory warnings. The CUDA manual talks about this restriction. The compiler can usually take care of it: if you have a pointer that points to shared memory, the compiler will figure that out. So there's nothing to worry about.

Check out section 4.2.2.4 (restrictions) of the CUDA 1.1 manual. There's a paragraph that starts with “Pointers…”.
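For instance, in a hypothetical kernel like this (not from the original post) the pointer can only ever refer to shared memory, so the compiler resolves it on its own:

__global__ void shared_only(float *out)
{
    __shared__ float sdata[128];

    float *p = sdata;                    // p can only point into shared memory
    p[threadIdx.x] = (float)threadIdx.x;
    __syncthreads();

    out[threadIdx.x] = p[threadIdx.x];   // out, by contrast, is clearly global memory
}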

So I guess I can add an item to the wishlist: I don't mind telling the compiler which kind of memory my pointer points to. I am currently trying to 'sell' CUDA to our processing department, so a clean compilation stage would be very nice to have at this time :)

"./test_kernel.cu", line 39: warning: invalid attribute for variable "vector"

    __attribute__((__global__)) float *vector;

How did you set the nvcc command options to get this message? :)

I don't think it is a special compiler option; I get the same warning. It is due to the fact that one cannot declare a __global__ pointer, because of the restrictions in section 4.2.2.4 of the 1.1 CUDA programming guide.

What is possible, though, is to use __device__ for a variable at file scope. This variable will live in global memory and be available in the kernel to all blocks and threads in a straightforward manner. In the host code one has to use cudaGetSymbolAddress, cudaMemcpyToSymbol, and cudaMemcpyFromSymbol to access it properly.
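A minimal sketch of that pattern (the variable d_scale, the kernel, and the sizes are just illustrative, not from this thread):

#include <cstdio>
#include <cuda_runtime.h>

// file-scope variable living in global (device) memory,
// visible to all blocks and threads of every kernel
__device__ float d_scale;

__global__ void scale_kernel(float *data)
{
    // no kernel parameter is needed to reach d_scale
    data[threadIdx.x] *= d_scale;
}

int main()
{
    float h_scale = 2.0f;
    // write the file-scope device variable from the host
    cudaMemcpyToSymbol(d_scale, &h_scale, sizeof(float));

    float *d_data;
    cudaMalloc((void **)&d_data, 32 * sizeof(float));
    cudaMemset(d_data, 0, 32 * sizeof(float));
    scale_kernel<<<1, 32>>>(d_data);

    // read it back, or get its raw device address
    cudaMemcpyFromSymbol(&h_scale, d_scale, sizeof(float));
    void *addr = 0;
    cudaGetSymbolAddress(&addr, d_scale);
    printf("scale = %f at %p\n", h_scale, addr);

    cudaFree(d_data);
    return 0;
}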

What I am wondering about is the fact that almost none of the SDK examples use __device__ variables. Instead these are defined without any CUDA qualifiers and passed to all kernel calls… I wonder why? Is it for performance reasons? I thought that global variables should be used where possible to improve performance and avoid long parameter lists for kernels… :huh:
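For contrast, a sketch of the pattern the SDK samples typically follow (kernel and buffer names are made up): the device pointer is an ordinary host-side variable filled by cudaMalloc and simply handed to each kernel launch.

#include <cuda_runtime.h>

__global__ void add_one(float *data)    // the device pointer arrives as a kernel parameter
{
    data[threadIdx.x] += 1.0f;
}

int main()
{
    float *d_data = 0;                  // plain pointer, no CUDA qualifier on the declaration
    cudaMalloc((void **)&d_data, 256 * sizeof(float));
    cudaMemset(d_data, 0, 256 * sizeof(float));
    add_one<<<1, 256>>>(d_data);        // passed explicitly to every kernel call
    cudaFree(d_data);
    return 0;
}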

Haha, that's not what I meant. I only want to know how to get those warning or error messages :haha: (I don't know how.)

A pointer can be either in global or shared memory. But what it points to can also be global or shared memory. We are worried about what it points to – not where the pointer resides.
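A small hypothetical kernel makes the distinction concrete: the pointer itself sits in a register, but what it points to depends on a runtime condition, which is exactly what the compiler cannot resolve at compile time:

__global__ void ambiguous(float *gdata)
{
    __shared__ float sdata[256];

    float *p;                     // the pointer itself lives in a register
    if (threadIdx.x & 1)
        p = sdata;                // ...but it may point to shared memory
    else
        p = gdata;                // ...or to global memory

    p[threadIdx.x] = 0.0f;        // which space this store touches is unknown at compile time
}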

I think you need to check how to do that in Visual C++. nvcc always emits these warnings; you just want Visual C++ to show them to you.