How to handle an Advisory warning

Hi All,

I wrote some code as follows:

After compiling in Release mode, I got:

Can anyone tell me how to remove this Advisory warning?

Thanks,

Kundan

These Advisory warnings are there to tell you that the compiler has assumed that the pointer used at that line points to global memory. If this is the intended behaviour, you can leave it.

Otherwise, you need to change your code so that the compiler makes the assumption you intend. There is no direct way to force the compiler's assumption, but tricks like the following can help. For example, if you know for sure that the pointer will point to shared memory at run-time, consider dummy-initializing the pointer to shared memory at the very start, just to give a HINT to the compiler that this pointer actually points to shared memory.
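A minimal sketch of that trick (the kernel and variable names are made up for illustration; this assumes you control every assignment to the pointer):

```cuda
// Illustrative only: dummy-initialize a pointer to shared memory so the
// compiler assumes it points to shared memory rather than global memory.
__global__ void kernel(float *globalData)
{
    __shared__ float sharedBuf[256];

    // The initializer is the "hint": ptr starts out pointing to shared memory.
    float *ptr = sharedBuf;

    ptr = &sharedBuf[threadIdx.x];   // later uses also stay within shared memory
    *ptr = globalData[threadIdx.x];
    __syncthreads();
}
```

The hint only works if the pointer never needs to point anywhere else; if it is later assigned a global-memory address, you are back to the mixed-usage problem described below.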

In general, you should NOT use the same pointer to point to different memory spaces at run-time…

For example:

ptr = &sharedArray[5];
... do some processing ...

ptr = &globalArray[10];
... do some processing ...

It is possible that the compiler will generate the wrong kind of load/store instructions here. Since CUDA does NOT have pointer-qualification support (you can only declare where the pointer itself resides, not what kind of memory it points to), you need to be careful.

Hope this helped.

Also, this subject has been discussed many times. If forum search does NOT help you, use google.com and specify “site:forums.nvidia.com” along with your keywords.

Hi Sarnath,

Thanks for your attention.

I will try your suggestion. What I think is that pointer members inside a struct, or a pointer to a struct, may cause this problem, since as you say one should NOT use a pointer to point to different memory spaces at run-time. But when I ignored this Advisory warning and executed the program sample.cu, a memory crash occurred.

Could you clarify how to prevent this crash?

Does CUDA have any restriction on using pointers inside a struct, or on pointers to structs?

You need to look at the line number specified in the warning and check your usage there. The warning says that the compiler generated global-memory load/store instructions for that code. If that is NOT what you want, then there is a problem. Otherwise there is no problem.

Usually there is no problem. Some of my kernels produce more than 10 or 20 warnings like these, and we don't even bother to look at what they are…

NOTE: These warnings come from the GPU compiler, for the GPU section of your code.

Is the crash the same problem as the one reported by Manjunath Gudissi? His code was trying to dereference a GPU pointer in host code. I think someone replied to him stating the problem, but his confusion remained…

Better to post a portion of your program. (Don't post your entire program; not many will read it carefully.)

Hi Sarnath,

What I am trying to say is:

[b]I have a structure:

typedef struct tag {
    unsigned long uLong;
    float fType;
    My_Enum eType;              // My_Enum is the enum defined below.
    unsigned short int *usInt;
    unsigned char *uChar;
} tag;

enum My_Enum {
    ME_1 = 0,
    ME_2 = 1
};

I am calling a CPU function from main (i.e. host_function_1(//args)) in which I write this code:

tag *GPUStr;

CUDA_SAFE_CALL( cudaMalloc( (void **)&GPUStr, sizeof(tag) ) );
CUDA_SAFE_CALL( cudaMalloc( (void **)&GPUStr->uLong, sizeof(unsigned long) * 1000 ) );
CUDA_SAFE_CALL( cudaMalloc( (void **)&GPUStr->fType, sizeof(float) * 1000 ) );
CUDA_SAFE_CALL( cudaMalloc( (void **)&GPUStr->eType, sizeof(My_Enum) ) );
CUDA_SAFE_CALL( cudaMalloc( (void **)&GPUStr->usInt, sizeof(unsigned short int) * 1000 ) );
CUDA_SAFE_CALL( cudaMalloc( (void **)&GPUStr->uChar, sizeof(unsigned char) * 1000 ) );

and calling the global function:

device_fun<<<1,1>>>(GPUStr);

Inside the global function I call a device function:

d_function(GPUStr->uChar, GPUStr->fType);

Then my system crashed because of these pointer accesses.[/b]

What you have described is the same as Manjunath's problem. Let me answer it again.

tag *GPUStr;
CUDA_SAFE_CALL( cudaMalloc( (void **)&GPUStr, sizeof(tag) ) );

At the end of the “cudaMalloc” call, you have a pointer to GPU memory stored in a host pointer variable called “GPUStr”.

Note that this pointer can be dereferenced only by your GPU kernel.

Outside your GPU functions, you need to use “cudaMemcpy” to write anything into it.

CUDA_SAFE_CALL( cudaMalloc( (void**)&GPUStr->uLong,  (sizeof( unsigned long )*1000) ) );

This is WRONG, because “GPUStr” contains a pointer into the GPU memory space, and the statement above is executed by the CPU.

It will crash. It is the same as dereferencing a NULL or otherwise bad pointer.

What you need to do is create the structure in normal CPU memory first, for example:

tag cpuStr;

cpuStr.field1 = ....;
cudaMalloc( (void **)&cpuStr.pointerField, .... );

and then finally

cudaMalloc( (void **)&GPUStr, sizeof(tag) );
cudaMemcpy( GPUStr, &cpuStr, sizeof(tag), cudaMemcpyHostToDevice );

Does this help?
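To put the whole pattern in one place, here is a self-contained sketch based on a reduced two-field version of the struct above (error checking omitted for brevity; field and function names follow the thread but the body is illustrative):

```cuda
#include <cuda_runtime.h>

// Reduced version of the struct under discussion.
typedef struct tag {
    float *fData;
    unsigned char *uChar;
} tag;

__global__ void device_fun(tag *s)
{
    // Safe: s, s->fData, and s->uChar are all device pointers,
    // and they are dereferenced on the device.
    s->fData[threadIdx.x] = (float)s->uChar[threadIdx.x];
}

int main()
{
    // 1. Build the struct in host memory, filling the pointer
    //    fields with device allocations.
    tag cpuStr;
    cudaMalloc((void **)&cpuStr.fData, sizeof(float) * 1000);
    cudaMalloc((void **)&cpuStr.uChar, sizeof(unsigned char) * 1000);

    // 2. Allocate the struct itself on the device, then copy the
    //    host copy (which holds device pointers) into it.
    tag *GPUStr;
    cudaMalloc((void **)&GPUStr, sizeof(tag));
    cudaMemcpy(GPUStr, &cpuStr, sizeof(tag), cudaMemcpyHostToDevice);

    device_fun<<<1, 1>>>(GPUStr);
    cudaDeviceSynchronize();

    // 3. Free device memory using the host copies of the pointers.
    cudaFree(cpuStr.fData);
    cudaFree(cpuStr.uChar);
    cudaFree(GPUStr);
    return 0;
}
```

The key point is that the CPU only ever writes through cudaMemcpy/cudaMalloc; the nested device pointers are dereferenced only inside the kernel.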

Hi ,

I read the solutions to the problem reported by Manjunath Gudissi. But the cause of the problem given by smokyboy (to Manjunath Gudissi) is:

But in the CUDA SDK matrixMul project I observed that:

This works fine in Release mode. If this works, then why doesn't the same thing work with a struct (in my code)?

There is no question of debug/release mode.

The matrixMul code is fine; it does a cudaMemcpy.

In your code, by contrast, you are trying to write into GPU memory using a pointer operation; you need to use cudaMemcpy to do it.

Kundan,

You need to understand two things:

  1. Where a pointer resides.
  2. Where a pointer points to.

A “float *” pointer declared in your code resides on the host side.
A “__device__ float *” pointer resides in the device's global memory.

Where pointer points to – depends on what is stored in it.

float *p = (float *)malloc(1000);   /* p points to HOST memory */
cudaMalloc( (void **)&p, 1000 );    /* p still resides in host memory, but now points to device memory */

“*p” or “p->” will NOT work in CPU code. Use cudaMemcpy.

Thanks a lot, Sarnath. I will correct myself.