Performance on 16 bit vs. 32 bit integers? Huge performance drop

Hi all,

I am facing a problem: when I switch to 32-bit integer data (i.e. the int data type), performance drops significantly (~50% slower) compared to the same kernel with 16-bit integer data (i.e. the short data type). I run my application on a GTX 480 card. As far as I know, on compute capability 2.x cards, operations on 8-bit and 16-bit operands are carried out as 32-bit operations at the hardware level. So there should not be a big performance difference when switching from 16-bit to 32-bit integer operands, right?
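
For context, here is a minimal sketch of the kind of element-wise kernel I mean (simplified for illustration, not my actual application code; the operation and names are just placeholders). The point is that the arithmetic is identical whether T is short or int, since sub-32-bit operands go through the 32-bit ALU on 2.x hardware anyway:

    // Hypothetical element-wise kernel; same instruction mix for T = short or T = int.
    template <typename T>
    __global__ void scaleAdd(const T* in, T* out, T scale, T offset, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i] * scale + offset;   // arithmetic is 32-bit either way
    }

    // Launched the same way for both element types, e.g.:
    //   scaleAdd<short><<<(n + 255) / 256, 256>>>(d_in16, d_out16, (short)3, (short)1, n);
    //   scaleAdd<int>  <<<(n + 255) / 256, 256>>>(d_in32, d_out32, 3, 1, n);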

Can anyone enlighten me on this problem? What is the main reason for the performance drop here?

Thank you,
Roto

I’m sorry for the incorrect question. The reason for the performance drop is not the data type of the operands.

P/S: does anyone know how to delete a post? After posting this thread I realized it was a wrong question, but I don’t know how to delete it.

thanks

I’m guessing that you now need twice the memory?
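
To put rough numbers on that, here is a back-of-envelope sketch (assuming a memory-bound element-wise kernel with one load and one store per element, a made-up element count, and the GTX 480's ~177 GB/s peak memory bandwidth). Going from short to int doubles the bytes moved, so a bandwidth-bound kernel takes roughly twice as long even though the arithmetic is unchanged:

    // Rough bandwidth estimate only; element count and access pattern are assumptions.
    #include <cstdio>

    int main()
    {
        const size_t n = 64 * 1024 * 1024;            // hypothetical number of elements
        size_t bytes16 = n * (sizeof(short) * 2);     // one load + one store per element
        size_t bytes32 = n * (sizeof(int)   * 2);

        double bw = 177.4e9;                          // GTX 480 peak bandwidth, bytes/s
        printf("short: %.0f MB moved, ~%.2f ms at peak bandwidth\n",
               bytes16 / 1e6, bytes16 / bw * 1e3);
        printf("int:   %.0f MB moved, ~%.2f ms at peak bandwidth\n",
               bytes32 / 1e6, bytes32 / bw * 1e3);
        return 0;
    }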