 # Floating Point to Integer Representation

I’m simply trying to do some floating-point subtractions and then interpret the result as an integer (not a value-converting cast; I just want to read the same bits back as an integer to see the actual bit pattern).

I’ve tried using a union, but that doesn’t seem to work: I end up with an integer value of 0 (no bits set).

I have since tried to do it with pointer tricks, and I’m seeing weird behavior.

result is an unsigned integer array, and this code runs inside the kernel:

```
unsigned int start = 0;
float my1, my2;

my1 = *( (float *) &start );  /* reinterpret the bits of start as a float */
my2 = *( (float *) &tid );    /* reinterpret the bits of tid as a float  */

result[tid] = *( (int *) &my2 );  /* read my2's bits back as an int */
```

The result in result array is:

```
0, 1, 2, 3, 4, 5, 6
```

Which is as expected.

BUT,

```
unsigned int start = 0;
float my1, my2;
float fresult;

my1 = *( (float *) &start );  /* 0.0f */
my2 = *( (float *) &tid );    /* reinterpret the bits of tid as a float */

fresult = my1 - my2;          /* subtract the two reinterpreted values */

result[tid] = *( (int *) &fresult );  /* read fresult's bits back as an int */
```

gives me all 0s in the result array. I expect to see something like

```
0, 0x80000001, 0x80000002, etc.
```

which is what I see if I do a similar calculation on the CPU using a union.

Any ideas?

Looks like it’s because denormalized floating-point values are flushed to zero on pre-Fermi devices… Guess I understand now.

CUDA offers dedicated device functions for reinterpreting a float as an int and vice versa, and I would recommend using those:

```
int i;
float f;

i = __float_as_int(f);  /* reinterpret the bits of f as an int */
f = __int_as_float(i);  /* reinterpret the bits of i as a float */
```

Alternatively, reinterpret_cast (as known from C++) can be used. I find that a bit more cumbersome, especially when the reinterpretation is used within an expression.