I’m working on a student project using CUDA, and I wrote an algorithm that uses a few pointers.
The two main structures are:
[codebox]typedef struct Case {
    void * ptr;         // actually points to a Path
    struct Case * suiv; // next Case in the list
} Case;

typedef struct Chemin {
    bool top, bottom;
    Case * begin, * end;
    short nb;
} Path;[/codebox]
All of those pointers point into shared memory, and every instance of these structures is declared in arrays in shared memory.
The problem appears when I try to access a Path’s nb field through a Case *, for example:
[codebox]
Case * myCase = ...; // some Case living in shared memory
short nb = ((Path *)myCase->ptr)->nb; // ptr is a void *, so it has to be cast[/codebox]
Something like that gives me this advisory:
The problem is, first, that nvcc is assuming the wrong memory space, and then I get an error:
which I assume is related to the advisory.
I looked around the internet for answers but didn’t find much. One post said I should check every such access to determine whether the compiler’s assumption was right, but here the assumption is wrong in every case (about 10 of them), and I don’t know how to change that.
I would be really glad if someone could help me; my entire algorithm is based on this kind of thing, so there is no quick magic workaround.
Pointers to pointers generally lead to pain in CUDA, but most commonly when dealing with things in global memory. I’ve never seen anyone try to build this kind of data structure in shared memory before.
The usual solution is to use integer offsets to an array of your structs as “pointers”. A little more cumbersome to dereference, but then there is no ambiguity for the compiler to deal with.
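As a rough sketch of what that could look like for the structures above (the array sizes and the NIL sentinel are just illustrative assumptions, not part of the original post):

```cuda
#define NIL (-1) // sentinel for "no element", replacing a NULL pointer

typedef struct Case {
    short path; // index into the shared Path array (replaces void *ptr)
    short suiv; // index of the next Case, or NIL
} Case;

typedef struct Chemin {
    bool top, bottom;
    short begin, end; // indices into the shared Case array
    short nb;
} Path;

__global__ void walkPaths(void)
{
    __shared__ Case cases[64];
    __shared__ Path paths[10];

    // Dereferencing becomes an explicit array lookup, so the compiler
    // always knows the access is in shared memory -- no advisory:
    short c = paths[0].begin;
    while (c != NIL) {
        short nb = paths[cases[c].path].nb;
        c = cases[c].suiv; // follow the "pointer"
        (void)nb;
    }
}
```

Since the struct members are shorts rather than pointers, this also makes the structures smaller, which matters with only 16 KB of shared memory per block.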
The shared-space reduction operations in that error refer to atomic operations. Compute capability 1.1 devices only support atomic operations on 32-bit words, and only in global memory. Capability 1.2 and 1.3 devices add atomic operations on 64-bit words and atomics in shared memory. If you have a capability 1.3 device, you need to pass a compiler flag (e.g. -arch=sm_13) to take advantage of it.
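For instance, a kernel using a shared-memory atomic will only compile when you target at least compute capability 1.2 (the histogram kernel here is just an illustration, not code from the original post):

```cuda
// Compile with: nvcc -arch=sm_12 histo.cu   (or -arch=sm_13 on a 1.3 device)
__global__ void histogram16(const int *data, int *out, int n)
{
    __shared__ int bins[16];
    if (threadIdx.x < 16) bins[threadIdx.x] = 0;
    __syncthreads();

    // Shared-memory atomics: requires compute capability >= 1.2
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        atomicAdd(&bins[data[i] & 15], 1);
    __syncthreads();

    // Global-memory atomics: available from compute capability 1.1
    if (threadIdx.x < 16)
        atomicAdd(&out[threadIdx.x], bins[threadIdx.x]);
}
```

Without the -arch flag, nvcc targets the lowest capability by default and will reject the shared-memory atomicAdd.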