Strange behaviour with constant memory

Hi All,

As I stated in a previous topic, I am implementing an unsteady vector field visualisation program. The algorithm I am using is called Lagrangian-Eulerian Advection (LEA), and it is based on texture advection. Without going into too much detail about the algorithm, it basically requires that a random 3% of the texels are flipped each iteration.

To explain the problem I really need to give some visuals. Here is a video of the algorithm running with the random flip:

and here is one without it, just so you can see that it is necessary:

My random number generator is a simple linear congruential generator (LCG), given by:

__device__ unsigned int LCG(unsigned int X0) {
    unsigned int a = 0x15A4E35;
    X0 = (a * X0 + 1) & 0xFFFFFFFF;  // mask is redundant for a 32-bit unsigned, but harmless
    return X0;
}
Anyway, now finally to my problem… The algorithm has a lot of constants; up to now I have been passing them as arguments to the kernel call, or hard-coding them. So I thought I would start to use some constant memory, but if I put in the lines,

__constant__ float alpha_dev;
__constant__ float i_dev;

and change NOTHING ELSE… these two variables are never used; I don't even do a cudaMemcpyToSymbol(). All that is there is the declaration.
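For completeness, this is roughly how I intended to use the constants eventually (a sketch only; the host-side copy is the step I have not actually added yet, which is why the declarations alone triggering the bug is so strange):

```cuda
#include <cuda_runtime.h>

__constant__ float alpha_dev;
__constant__ float i_dev;

// Host side: copy values into constant memory before the kernel launch.
void setConstants(float alpha, float i) {
    cudaMemcpyToSymbol(alpha_dev, &alpha, sizeof(float));
    cudaMemcpyToSymbol(i_dev, &i, sizeof(float));
}
```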

Now if I compile and run without the random flip, all is fine… (see the attached constantMem.avi)

but when I put the random flip back in, look at the result… I get these funny artifacts (or incorrect flips)… (see the attached constantMem.avi)

I have been tearing my hair out over this… It really looks to me as though the constant declarations are interfering with my LCG function, but I have no idea how that could happen, and it makes no sense to me… does anyone have any ideas?

Thanks in advance…