I’m simply trying to do some floating-point subtractions, then interpret the result as an integer (not converting the value, just reinterpreting the same bits so I can see the actual bit pattern).

I’ve tried using a union, but that doesn’t seem to work: I end up with an integer value of 0 (no bits set).

I have since tried to do it with pointer tricks instead, and I’m seeing weird behavior.

`result` is an unsigned integer array (and `tid` is the thread index). This is code from the kernel:

```
unsigned int start = 0;
float my1, my2;
my1 = *( (float*) &start );
my2 = *( (float*) &tid );
result[tid] = *( (int*) &my2 );
```

The values in the `result` array are:

```
0, 1, 2, 3, 4, 5, 6
```

Which is as expected.

BUT,

```
unsigned int start = 0;
float my1, my2;
float fresult;
my1 = *( (float*) &start );
my2 = *( (float*) &tid );
fresult = my1 - my2;
result[tid] = *( (int*) &fresult );
```

gives me all 0’s in `result`. I expect to see something like

```
0, 0x80000001, 0x80000002, etc.
```

which is exactly what I see when I do the same calculation on the CPU using a union.

Any ideas?