<NUMREP repeats of code block above, with different constants>
In one particular case, if NUMREP is 105 or less, my kernel uses 31 registers or fewer and performance is 114 Mpix/s. But if I make NUMREP=106, register usage jumps to 35. I guess that is not so bad, but at the same time performance immediately drops to 92 Mpix/s, and this happens just by adding 4 lines of code. From what I can tell, adding another REP should (in a perfect world) not increase register usage at all, since no new variables are introduced. Shared mem usage is 40 bytes in both cases, while constant mem usage increases from 24 to 32 bytes.
Does anyone have a guess about what happens? Am I running into some nvcc compiler limit that makes it choose another code generation strategy?
Thanks E.D. I just tried using -maxrregcount=31. It ends up using a bit of local memory instead, but I still get the same sharp performance drop when going from 105 to 106 REPs. I had a quick look at the generated code using decuda, but couldn't see any major differences.
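For anyone reproducing this: the per-kernel register, shared, constant and local memory figures quoted in this thread can be printed at compile time by passing the verbose flag through to ptxas. A sketch of the two relevant nvcc invocations (kernel.cu is a placeholder file name):

```shell
# Print per-kernel register / smem / cmem / lmem usage at compile time
nvcc -Xptxas -v -o kernel kernel.cu

# Cap register usage at 31; ptxas spills the remainder to local memory
nvcc -Xptxas -v -maxrregcount=31 -o kernel kernel.cu
```

The second invocation is what produces the "bit of local memory instead" behaviour described above.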
I guess it could be some cache effect, where going from 105 to 106 REPs makes the last REP evict the texture cache data used by the previous ones. The CUDA profiler doesn't show any major difference between the 105 and 106 versions; it would be useful to be able to get the texture cache hit rate from the profiler. Still, the performance decrease is surprisingly large: when going from 10 to 105 REPs, performance slowly drops from 151 to 112 Mpix/s, but from 106 onward (up to 120+) it immediately falls to 92 Mpix/s.