Feeding unsigned integers to __ffs and __clz

I love the __ffs and __clz functions. They are ideal for quickly extracting important information from a bit mask. The trouble is that my bit masks are usually unsigned ints, while the arguments to these two functions are declared as int. With the following code I can observe that converting an unsigned int whose value does not fit in a signed int yields a negative number, and that converting that negative number back to unsigned int recovers the original value, which implies the bit pattern is not actually altered by the conversion to signed int.

if (threadIdx.x == 0 && blockIdx.x == 0) {
    unsigned int uTH = 0xfffffff0;
    int iTH = uTH;
    unsigned int nTH = iTH;
    printf("Unsigned = %11u  Signed = %11d   Recovered = %11u\n", uTH, iTH, nTH);
}

(Result: Unsigned = 4294967280 Signed = -16 Recovered = 4294967280)

Is that correct, that I can safely feed my unsigned int bit masks to __ffs and __clz and trust that the original bit patterns are what get interpreted?



Why does the interface use ‘int’, you may ask. That’s because POSIX decrees an ‘int’ argument for ffs(). The device intrinsic __ffs() is simply CUDA’s version of this POSIX function, and when I added __clz() as the symmetrical counterpart to __ffs(), I felt it was best to keep the interfaces for these two functions consistent, so it also got an ‘int’ argument.


Neat. I figured that the spirit of “do atomicAdd for 64-bit signed ints by first converting them to 64-bit unsigned ints, the results are bitwise identical” would also apply here, but it’s good to have the back story, too. I kind of posted this as a public service thread for anyone else who comes along.