64-bit integer division!

Hi everyone. I have two 64-bit integers, each represented as two 32-bit variables (e.g. unsigned int myVar1_hi, myVar1_low). myVar1 is guaranteed to be less than myVar2.

The operation I want to perform is as follows:

double myDouble = myVar1 / myVar2;

What is the best way to go about doing this? I have a feeling that if I convert myVar1 and myVar2 to doubles for the division, too much accuracy will be lost (a double only has a 53-bit mantissa, so the low bits of each 64-bit value get dropped), and my program will produce incorrect results, which is not acceptable. Should I be using unsigned long long instead of two unsigned 32-bit integers for each 64-bit value? I really don’t want to spend time writing my own division function, especially if something already exists for this kind of thing. I’m using a GTX280, which I understand has some double-precision units, and my code doesn’t need to be compatible with previous-generation GPUs.
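For concreteness, the straightforward double-based version I’m worried about would look something like this (myVar2_hi and myVar2_low are just placeholder names for the second value’s halves):

unsigned long long myVar1 = ((unsigned long long)myVar1_hi << 32) | myVar1_low;
unsigned long long myVar2 = ((unsigned long long)myVar2_hi << 32) | myVar2_low;

// Each 64-bit value gets rounded to a 53-bit mantissa here, which is what worries me.
double myDouble = (double)myVar1 / (double)myVar2;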

Thanks!

Are you sure that double roundoff error would be so high that it’d affect your results? That’s awfully strict!

The GPU’s double precision math is full IEEE, so you don’t need to worry about GPU-specific issues.

And since your final result is a double, you’re going to have roundoff anyway, no matter how you compute it: the quotient gets rounded to a 53-bit mantissa either way.

Think of it by asking a related question first: if you needed to do this on the CPU, would you use doubles? If so, use doubles on the GPU as well.

If instead you’d use some infinite-precision library on the CPU, then you do indeed have a little GPU programming strategy question to answer.
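If you want to see how big the roundoff actually is, a quick CPU-side experiment is easy to write. Something along these lines (the two test values are made up, and long double is only a useful reference on platforms where it is wider than double, e.g. x86 with 80-bit extended precision):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    // Arbitrary 64-bit test values with num < den, purely for illustration.
    uint64_t num = 0x123456789ABCDEF0ULL;
    uint64_t den = 0xFEDCBA9876543210ULL;

    double      q     = (double)num / (double)den;            // the approach in question
    long double q_ref = (long double)num / (long double)den;  // wider reference

    printf("double quotient: %.17g\n", q);
    printf("relative error : %Lg\n", ((long double)q - q_ref) / q_ref);
    return 0;
}

The relative error should come out on the order of 1e-16, i.e. a couple of ulps, which is the unavoidable cost of returning a double at all.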

Yeah, now that I think about it more, I believe the best solution is to use a 96- or 128-bit fixed-point result. It’s unfortunate, but I think it’s the only way to keep the errors from accumulating. It looks like I’ve got some work to do! Thanks.
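In case anyone else ends up doing the same thing, what I have in mind is plain shift-and-subtract long division, which produces one extra fraction bit per iteration. A rough, untested sketch in ordinary C (extending it to 96 or 128 fraction bits just means more iterations and a wider accumulator):

#include <stdint.h>

// Computes a Q0.64 fixed-point approximation of num/den, assuming num < den.
// The quotient is frac / 2^64.
uint64_t fixed_point_divide(uint64_t num, uint64_t den)
{
    uint64_t rem  = num;   // running remainder, always kept < den
    uint64_t frac = 0;     // fraction bits accumulated so far

    for (int i = 0; i < 64; ++i) {
        int carry = (int)(rem >> 63);  // bit shifted out of the 64-bit remainder
        rem  <<= 1;
        frac <<= 1;
        if (carry || rem >= den) {     // true 2*rem >= den, so this quotient bit is 1
            rem -= den;                // wraparound subtraction still yields the right remainder
            frac |= 1;
        }
    }
    return frac;
}

Of course, if the final result really is just stored as a double in the end, it collapses back to 53 bits anyway, so the extra fraction bits only pay off while accumulating intermediate results.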