static memory control


I have two questions.


I want to use static memory in a CUDA device function,

but CUDA does not support declaring static variables within a device function.

Therefore, I use static_variables[idx] in global memory, allocated with cudaMalloc. In my algorithm, my device function always checks whether static_variables[idx] is 0 or not, so this access is a bottleneck.
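For reference, the global-memory workaround described above might look something like this (a sketch; the array name matches the post, but the sizes and launch configuration are assumptions):

__global__ void GPUbody(int *static_variables) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    // every thread checks its flag in global memory -- this read is the bottleneck
    if (static_variables[idx] == 0) {
        static_variables[idx] = 1;
    }
}

int main() {
    const int N = 1024;
    int *static_variables;
    cudaMalloc(&static_variables, N * sizeof(int));
    cudaMemset(static_variables, 0, N * sizeof(int));  // all flags start at 0
    GPUbody<<<N / 256, 256>>>(static_variables);
    cudaFree(static_variables);
    return 0;
}

Every check goes through global memory, which is why it shows up as a bottleneck compared to registers or shared memory.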

Given that "CUDA does not support declaring static variables within a device function",

does that mean declaring a static variable in a global function is allowed?

In a global function (the main kernel body), is it possible to write

__shared__ static variables[sub_idx] = 1;




__global__ void GPUbody() {

    int tid = threadIdx.x;

    __shared__ static int variables[sub_idx];  // ###### is it possible ???? ######
    variables[tid] = 1;

    int a = GPUkernel(tid);  // these __device__ functions use the static variables
}

__device__ int GPUkernel(int tid) {

    if (variables[tid]) {
        ...
    } else {
        ...
    }
}
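For comparison, here is a sketch of what CUDA does allow (SUB_IDX and the initialization pattern are assumptions, not from the original post): a __shared__ array declared without static, sized by a compile-time constant, and initialized by the threads themselves.

#define SUB_IDX 128  // shared arrays need a size known at compile time

__device__ int helper(const int *vars, int tid) {
    // shared data is passed in as an argument instead of
    // being a static variable inside the function body
    return vars[tid] ? 1 : 0;
}

__global__ void GPUbody() {
    __shared__ int variables[SUB_IDX];  // legal: __shared__, but not static
    int tid = threadIdx.x;
    variables[tid] = 1;       // each thread initializes its own slot
    __syncthreads();          // make the writes visible to all threads in the block
    int a = helper(variables, tid);
}

Shared memory cannot carry an initializer, so the threads have to write the initial values and synchronize before anyone reads them.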



And I'm wondering:

if I define MM as below, where is it allocated? I'll use MM in a device function.

#define MM 10000

Is such a constant allocated in a register, in constant memory, or in global memory?

No, it's explicitly not allowed, unfortunately.

The "Restrictions" section of the programming guide says:

"__device__ and __global__ functions cannot declare static variables inside their body."
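A common workaround (a sketch under assumed names and sizes, not something stated in the guide text quoted above) is to move the variable to file scope with the __device__ qualifier. A file-scope __device__ array has static storage duration, lives in global memory, and persists across kernel launches:

#define N 256  // assumed array size for illustration

__device__ int variables[N];  // file scope: legal, zero-initialized

__global__ void GPUbody() {
    int tid = threadIdx.x;
    if (variables[tid] == 0) {   // first time this slot is touched
        variables[tid] = 1;
    }
}

This still reads through global memory, so it does not remove the bottleneck by itself, but it gives the static lifetime the original static declaration was after.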