static memory control

Hi.

I have two questions.

[1.]

I want to use static memory in a CUDA device function, but CUDA does not support declaring static variables within a device function.

Therefore, I use static_variables[idx] in global memory, allocated with cudaMalloc. In my algorithm, the device function always checks whether static_variables[idx] is 0 or not, so this global-memory read is a bottleneck.
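Roughly, my current approach looks like this (a simplified sketch; the real names differ):

__device__ int check_flag(const int *static_variables, int idx) {
    // every call reads the flag from global memory -- this read is the bottleneck
    if (static_variables[idx] == 0) {
        return 0;   // zero case
    }
    return 1;       // nonzero case
}

__global__ void GPUbody_current(int *static_variables) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    check_flag(static_variables, idx);
}

// host side: the flags live in global memory
//   int *d_flags;
//   cudaMalloc(&d_flags, N * sizeof(int));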

“CUDA does not support declaring static variables within a device function.”

Does that mean declaring static variables in a __global__ function is allowed?

In a __global__ function or the main body, is it possible to declare

__shared__ static int variables[SUB_N];

like this?

main() {
    GPUbody<<<1, SUB_N>>>();
}

__global__ void GPUbody() {
    int tid = threadIdx.x;

    __shared__ static int variables[SUB_N];   // ###### is it possible ???? ######
    variables[tid] = 1;

    int a = GPUkernel(tid);   // this __device__ function uses the static variables
}

__device__ int GPUkernel(int tid) {
    if (variables[tid]) {
        ...
    }
    else {
        ...
    }
}

[2.]

And I’m wondering: if I define MM as below, where is it allocated? I will use MM in a device function.

#define MM 10000

Is the constant allocated in a register, in constant memory, or in global memory?
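(For concreteness, I mean usage like this; GPUkernel2 is just a placeholder name.)

#define MM 10000

__device__ float GPUkernel2(float x) {
    // the preprocessor textually replaces MM with the literal 10000
    // before the compiler ever sees this function
    return x * MM;
}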

No, [1.] (declaring static variables in a __global__ function) is explicitly not allowed, unfortunately.

The Programming Guide, section 4.2.1.4 “Restrictions”, says:

“__device__ and __global__ functions cannot declare static variables inside their body.”
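What does work is a file-scope __device__ variable, or a plain __shared__ array inside the kernel (no static, no initializer). A minimal sketch, with placeholder names and sizes:

#define SUB_N 256                      // placeholder block size

__device__ int g_variables[SUB_N];     // file-scope __device__ array: allowed

__global__ void GPUbody() {
    int tid = threadIdx.x;

    __shared__ int variables[SUB_N];   // allowed: no static, no initializer
    variables[tid] = 1;                // initialize explicitly, one element per thread
    __syncthreads();                   // make the writes visible to the whole block

    // __device__ helpers can take variables as an argument,
    // or read g_variables directly
}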