How can FP hardware support integer types? (integer support)

It's embarrassing that there's one thing I still can't understand after such a long time:
G80 only has floating-point stream processors, so how can it support 32-bit integer arithmetic? And why couldn't prior floating-point GPUs (e.g. the 6800) support 32-bit integers? I found that AMD's R600 also has units for float<->integer conversion. How does one convert between floating-point and fixed-point values? It seems so hard.
Any hardware insight would be appreciated. Thanks!

P.S. What about supporting strings? On a CPU, strings are made of chars. Does this mean the GPU would have to support chars?

The internal stages of the G80's SPs are shared between integer and floating-point operations. Previous GPUs weren't capable of such flexibility, since integer math support is not part of the DirectX 9 standard.

Thanks. You mean that "because DX9 doesn't require integers, previous GPUs didn't support them." But is it possible that it's the other way around: "because previous GPUs simply couldn't support integers, DX before 10 didn't require them"?

And I'd very much like to know how this int<->float conversion is possible, from a hardware perspective. Thanks!

The history isn't public. O:) But every floating-point operation, including int2float and float2int, ultimately amounts to integer and/or bitwise operations, so in theory exposing those wouldn't be extremely overwhelming for GPU architects.
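To make that concrete, here's a minimal sketch (in C, not actual GPU microcode) of a truncating float2int done purely with integer and bitwise operations on the IEEE 754 bit pattern — the kind of work a shared integer datapath could perform. It handles only normal values whose magnitude fits in an `int32_t`; NaN, infinity, and overflow are ignored for brevity, and the function name is made up for this example.

```c
#include <stdint.h>
#include <string.h>

/* Truncating float -> int32 using only integer/bitwise ops on the
   IEEE 754 single-precision bit pattern (hypothetical illustration). */
static int32_t float2int_bits(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);           /* reinterpret bits, no conversion */

    uint32_t sign = bits >> 31;               /* sign bit */
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127;  /* unbiased exponent */
    uint32_t mant = (bits & 0x7FFFFF) | 0x800000;          /* restore implicit 1 */

    if (exp < 0)                              /* |f| < 1 truncates to 0 */
        return 0;

    /* Mantissa has 23 fraction bits: shift it into integer position. */
    int32_t mag = (exp <= 23) ? (int32_t)(mant >> (23 - exp))
                              : (int32_t)(mant << (exp - 23));
    return sign ? -mag : mag;
}
```

The reverse direction (int2float) is the same idea run backwards: find the highest set bit to get the exponent, shift the value into the 23-bit mantissa field, and re-apply the bias — again nothing but shifts, masks, and adds.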