I am trying to reduce the register usage of the following quaternion class. In particular, I need to reduce the register usage of the rotateVector method. From what I have read in the forum, there is no deterministic rule for how the compiler assigns registers. I got stuck with the following code, which I tried to optimize by trial and error. Hopefully you can give me some hints on how I can further reduce the register usage.
Hello, it is true that the methods work on register or shared memory variables only. I use the volatile keyword here because I have read in the forums that it may help reduce register usage by forcing variables into registers. The rotateVector method uses 7 registers in my code.
If you’re not making frequent accesses to global memory, you don’t have to worry about occupancy as much (and you shouldn’t be worrying much about it in the first place).
256 threads per block is a fine amount, which on G200 allows you 64 registers per thread.
Hello, I stated that wrong: my kernel actually makes heavy use of global memory; I meant that global memory is not used directly for computation in the quaternion class. I will have 50% occupancy on a G200, as my complete kernel currently uses 25 registers, and I would like to bring the overall usage down; the rotateVector method is a key element in my code. Furthermore, I have to wait for my G200 card until Christmas ;( so currently I am working with an 8800 GTX, which gives me only 33% occupancy.
50% occupancy on G200, ie 512 threads, is enough to hide latency in even the worst case scenario (constant TLB misses). 33% on G80, ie 256 threads, is enough to completely hide latency in common usage (when the TLB cache does its job).
Do not aim for 100%. It is unfortunate that NVIDIA never articulates this in its documentation.
Unfortunately this would mean I have to wait until Christmas to see my code working, which is not a great option, and on the 8800 GTX getting the code down to 16 regs has a great impact on performance. Therefore I would appreciate every hint on reducing the register usage.
What do you mean “see your code working”? It is working already. You might say, “working as fast as it could,” but you won’t see that until you get your G200 anyway.
What do you mean by “great impact” on 8800? (What % speedup.) And is this the identical code with simply fewer registers, or some modified code?
If I comment the rotation vector code out, it gets under 16 registers. I do not know what is optimized out in this situation, but my kernel runtime drops from about 27 ms to under 10 ms. Actually this does not matter much, as that kernel simply is not useful.
Actually this is a good question; I will check it tomorrow. But given the performance drop when I get the kernel under 16 regs, I am not yet at or near the bandwidth limit.
You don’t know what gets optimized out. Could be a lot. (The compiler is aggressive in identifying dead code.)
I’m confident that you’ll get a very small boost, if any, by running more than 256 threads per MP on G80. A boost that will surely be erased on G200.
If you’re not maxing out your bandwidth, it is either because you’re not issuing perfectly coalesced reads (a huge factor, much more important than occupancy) or because you’re using __syncthreads() which hurts calc-memfetch overlap. In the latter case, running two blocks per MP will help.
I think loads from global memory are also optimized away if the values are never used, so check that also. It might be just that you are reading in less data when commenting things out.
Alright, I will check for coalesced reads in any case. Actually I use __syncthreads a lot in my kernel, as I have to do a reduce operation that spans a binary tree over shared memory (similar to the reduction examples in the CUDPP library). I thought that if I could get the kernel down to 16 regs, I could run two blocks per MP; currently this is limited to 1 block on my 8800 GTX due to registers.
Traversing binary trees in shared mem sounds like it could cause massive bank conflicts. This is another very important optimization factor (ie potentially 10x speedup).
It’s much more important than occupancy or running two blocks (potentially 1.5x speedup).
What kind of reduction are you doing on the binary tree? Depending on what kind of operations you are performing, you might want to try to find an old assembly book, or search the web for ‘assembly function optimization branch’. There are lots of neat old assembly tricks that use bit shifting, logic functions and so forth to remove branches from the code, which is a big deal on some architectures. If you’re doing something like “if a > b, c = a, else c = b” you might be able to remove the branch with such an optimization and save yourself some warp divergence.
EDIT: Does anyone know if NVIDIA’s compiler already does these types of optimizations for built-in functions like max() and so forth? Since CUDA performance takes a big hit from divergent warps, perhaps they could include a header with some optimized macros for common functions in the 2.1 release of CUDA. It wouldn’t take but a minute to put together, and it could make a big performance difference for some people who are not used to the architecture.
The statement “if a > b, c = a, else c = b” does not cause divergence, it results in predicated instructions. These are similar to masks and bitshifts in principle (which themselves are similar to divergence, in principle), but the implementation has low overhead.
Christoph John - Out of curiosity, what sort of system does this class belong to? (If I were to guess, you could be doing some collision detection like me)
// tree reduction over shared memory; the active half shrinks each cycle
for (unsigned int offset = blockDim.x / 2; offset > 0; offset >>= 1)
{
    // ensure last summing cycle has been finished for all threads in block
    __syncthreads();
    if (threadIdx.x < offset)
        LocalBlock[threadIdx.x] += LocalBlock[threadIdx.x + offset];
}