It's embarrassing, but there's one thing I still can't understand after such a long time:
G80 only has floating-point stream processors, so how can it support 32-bit integer arithmetic? And why couldn't earlier floating-point GPUs (e.g. the 6800) support 32-bit integers? I also found that AMD's R600 has dedicated units for float<->integer conversion. How does conversion between floating-point and fixed-point values actually work? It seems so hard.
Any hardware insight would be appreciated. Thanks!
P.S. What about supporting strings? As on a CPU, strings are made of chars. Does this mean the GPU would also have to support chars?