Adaptive arbitrary precision in CUDA?

Hey, has anyone had any success using adaptive arbitrary-precision arithmetic routines in CUDA yet?

I’ve seen that there’s CUMP, but it doesn’t look like the precision is adaptive. I’m looking for routines that can track when an arithmetic operation’s result will not fit in the current mantissa and react accordingly.

Am I just misinterpreting what CUMP does, or is there a whole library I’m just not aware of?

It is not clear to me what you mean by “adaptive arbitrary arithmetic”. What is meant by a “result [that] will not fit in the current mantissa”?

Are you referring to rational arithmetic? It might be easiest to work out which features you are looking for if you could point to an existing CPU-side library that supports them.

What is the use case where this kind of arithmetic would come in handy? What advantages does it have compared to double-double for example?

CUMP is awesome because it supports the three operations I need: addition, subtraction, and multiplication.

But the problem is, the example I found on GitHub uses a static precision (say, 1024 decimal places).

I was just wondering if there was something that could react to the numbers being operated on, so that the precision adapts: if an operation’s result can be represented exactly, it is evaluated exactly; otherwise it is evaluated to some precision specified elsewhere.

I think I want something like MPFR, something with “correct rounding”. I don’t need every operation to have 1024 decimal places of precision. Maybe I’m just not expressing myself clearly T_T

I am not aware of a feature of MPFR where it changes precision dynamically and/or automagically. But I am not overly familiar with it. Could you point at an example of this particular MPFR feature?

What leads you to believe that your application requires an “auto precision” feature? How is your use case currently handled in the absence of such a feature? By what criteria would the precision be adapted automatically? Are you thinking about something like an interval arithmetic package where the precision is increased when interval bounds drift apart by more than a specified amount?

Hmm… I could be wrong about MPFR. I just googled “adaptive arithmetic gmp” and that was the first hit XD

I’m kind of referring to something more like this: http://www.cs.berkeley.edu/~jrs/papers/robustr.pdf

But in a much more general and approachable sense. Maybe the routines in the paper truly are all I need but this stuff seems so scary and hard lol.

I’m really just looking for a way to exactly compute the inverse of a 3x3 matrix and the subsequent product with a 3x1 matrix.

I’ve been thinking about rewriting the inverse routine to express the answers in terms of something like this:

struct almost_rational
{
    float top, bot; // store in terms of numerator and denominator
};

and then overload the appropriate operators, because division is what’s been messing up my accuracy in the past (i.e. something that should come out as 0 in the end instead comes out as something like 0.0000000001 because of the division).
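
The idea would be to defer every actual division until the very end, with overloads something like this (rough, untested sketch of what I mean; no simplification or overflow handling):

almost_rational operator+(almost_rational a, almost_rational b)
{   // a/b + c/d = (a*d + c*b) / (b*d)
    return { a.top * b.bot + b.top * a.bot, a.bot * b.bot };
}

almost_rational operator-(almost_rational a, almost_rational b)
{   // a/b - c/d = (a*d - c*b) / (b*d)
    return { a.top * b.bot - b.top * a.bot, a.bot * b.bot };
}

almost_rational operator*(almost_rational a, almost_rational b)
{   // (a/b) * (c/d) = (a*c) / (b*d)
    return { a.top * b.top, a.bot * b.bot };
}

almost_rational operator/(almost_rational a, almost_rational b)
{   // (a/b) / (c/d) = (a*d) / (b*c) -- still no actual float division
    return { a.top * b.bot, a.bot * b.top };
}

float value(almost_rational r) { return r.top / r.bot; } // divide exactly once, at the end

Of course the numerators and denominators themselves still round in float, so I realize this only postpones the rounding rather than eliminating it.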

It has been a while since I last looked at Jonathan Shewchuk’s thesis. It basically describes a generalization and extension of well-known double-double techniques.

Have you tried starting with double-double to see how far that gets you? There is ready-to-use double-double code available for download on the CUDA registered developer website. It is under a BSD license, so it should fit into just about any project.
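
The core building blocks are just the classical error-free transformations. A minimal sketch (names are mine, not the API of the downloadable package):

#include <cmath>
#include <cstdio>

// Knuth's TwoSum: s + err == a + b exactly, for any two doubles.
void two_sum(double a, double b, double &s, double &err)
{
    s = a + b;
    double bv = s - a;          // portion of b that made it into s
    double av = s - bv;         // portion of a that made it into s
    err = (a - av) + (b - bv);  // whatever rounding discarded
}

// TwoProd via fused multiply-add: p + err == a * b exactly.
void two_prod(double a, double b, double &p, double &err)
{
    p = a * b;
    err = std::fma(a, b, -p);   // exact residual of the product
}

int main(void)
{
    double s, err;
    two_sum(1.0, 1e-30, s, err);
    printf("rounded sum = %g, recovered error = %g\n", s, err);
    return 0;
}

A double-double number is simply a (hi, lo) pair maintained by chains of these transformations; Shewchuk’s expansions generalize the pair to an arbitrary-length list of non-overlapping terms.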

Note that computing via matrix inverses is typically not what you want to do from a numerical robustness standpoint; using a solver is normally the superior approach. Given that the matrix is so tiny, you could easily spell out the discrete steps of a solver based on Gauss elimination with partial pivoting for a 3x3 system as a straight-line piece of code.
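
For illustration, such a solver might look roughly like this (untested sketch; assumes the matrix is non-singular):

#include <cmath>
#include <utility>

// Solve A*x = b for a 3x3 system via Gauss elimination with partial
// pivoting. A and b are modified in place.
void solve3x3(double A[3][3], double b[3], double x[3])
{
    for (int k = 0; k < 3; ++k) {              // forward elimination
        int piv = k;                           // largest-magnitude pivot in column k
        for (int i = k + 1; i < 3; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[piv][k])) piv = i;
        if (piv != k) {                        // swap rows k and piv
            for (int j = k; j < 3; ++j) std::swap(A[k][j], A[piv][j]);
            std::swap(b[k], b[piv]);
        }
        for (int i = k + 1; i < 3; ++i) {      // eliminate below the pivot
            double m = A[i][k] / A[k][k];
            for (int j = k; j < 3; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }
    for (int i = 2; i >= 0; --i) {             // back substitution
        double s = b[i];
        for (int j = i + 1; j < 3; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
}

With fixed trip counts of three, the compiler will typically flatten all of these loops into straight-line code.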

Why is achieving an exact result of zero important here? Having results of epsilon instead of zero is usually well tolerated in practical use cases involving floating-point computation.

Actually njuffa, that’s good advice. I found Rosetta Code, and they have a working example of Gaussian elimination in C, so I’ve been reading up on it. It’s so much simpler than I thought it would be!

Thanks for the help!

And also for letting me know that it’s totally fine to work with an epsilon. The only issue is, because of people like Shewchuk, exact arithmetic almost feels like a standard to me. Won’t all the other programmers laugh at me if I’m using an epsilon?

How accurate a computation needs to be depends very much on context. In many important real-life scenarios (e.g. medical imaging, oil & gas exploration), inputs are sensor data with maybe 10 to 12 bits of resolution, and outputs likewise require only 8 to 12 bits of accuracy (e.g. a color channel intensity or a screen coordinate).

In Shewchuk’s context, extremely accurate results are needed to prevent artifacts in the application, because it matters whether two geometric primitives intersect, touch, or do not intersect. Due to rounding errors, one might erroneously find that two primitives intersect when in fact they don’t, or vice versa.

If your operands are of type float, and your intermediate computation utilizes a double-double based solver, the chances of getting an incorrect result would be extremely slim even if your use case has requirements very similar to Shewchuk’s. For most applications, performing the intermediate computations using any kind of extended precision would be complete overkill.

Sorry for not responding in a more timely fashion! I swear, by the time I’m on my computer, I’m just too tired to post.

Anyway, I’m doing the same type of thing Shewchuk is: I need the equivalent of his orientation routines, only I’m trying to do it faster.

There was some paper (I can find the link if need be) where they treat a tetrahedron as three independent edge vectors; you solve for how those vectors combine to represent a specific point, which tells you whether the point is inside/on the tetrahedron. Something like the sketch below.
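
This is my own reconstruction of the idea (untested sketch; the eps tolerance is the epsilon we discussed above):

#include <cmath>

struct Vec3 { double x, y, z; };

static double det3(Vec3 a, Vec3 b, Vec3 c)
{
    // Scalar triple product a . (b x c).
    return a.x * (b.y * c.z - b.z * c.y)
         - a.y * (b.x * c.z - b.z * c.x)
         + a.z * (b.x * c.y - b.y * c.x);
}

// Returns true if p lies inside (or on) the tetrahedron v0-v1-v2-v3.
// eps absorbs rounding error in the barycentric coordinates.
bool inTetrahedron(Vec3 p, Vec3 v0, Vec3 v1, Vec3 v2, Vec3 v3,
                   double eps = 1e-12)
{
    Vec3 e1 = {v1.x - v0.x, v1.y - v0.y, v1.z - v0.z};
    Vec3 e2 = {v2.x - v0.x, v2.y - v0.y, v2.z - v0.z};
    Vec3 e3 = {v3.x - v0.x, v3.y - v0.y, v3.z - v0.z};
    Vec3 r  = {p.x  - v0.x, p.y  - v0.y, p.z  - v0.z};

    double d = det3(e1, e2, e3);           // 6x the signed tetrahedron volume
    if (std::fabs(d) < eps) return false;  // degenerate tetrahedron

    // Cramer's rule: barycentric coordinates of p with respect to v0.
    double a = det3(r,  e2, e3) / d;
    double b = det3(e1, r,  e3) / d;
    double c = det3(e1, e2, r ) / d;

    return a >= -eps && b >= -eps && c >= -eps && a + b + c <= 1.0 + eps;
}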

I was hoping there’d be some library I could use so that all I would have to worry about is writing the Gaussian solver (btw, that was an amazing suggestion [much easier]), but alas, I guess it’s not meant to be.

I think one thing to do would be to write the code but track some error parameter, epsilon, and then fall back to the Shewchuk tests only should the need present itself (see the sketch below). It doesn’t sound very performant, but I suppose safe is better than fast.
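
Something along these lines, I guess (sketch only; the error-bound constant is a loose placeholder rather than Shewchuk’s carefully derived one, and exact_orient3d is a stub standing in for a Shewchuk-style exact routine):

#include <cmath>

// Stub: substitute a Shewchuk-style exact predicate here.
double exact_orient3d(const double a[3], const double b[3],
                      const double c[3], const double d[3]);

// Filtered orientation test: cheap floating-point evaluation first,
// exact fallback only when the sign of the result is in doubt.
double filtered_orient3d(const double a[3], const double b[3],
                         const double c[3], const double d[3])
{
    double adx = a[0] - d[0], ady = a[1] - d[1], adz = a[2] - d[2];
    double bdx = b[0] - d[0], bdy = b[1] - d[1], bdz = b[2] - d[2];
    double cdx = c[0] - d[0], cdy = c[1] - d[1], cdz = c[2] - d[2];

    double det = adx * (bdy * cdz - bdz * cdy)
               + ady * (bdz * cdx - bdx * cdz)
               + adz * (bdx * cdy - bdy * cdx);

    // Magnitude-based error bound; 1e-14 is a deliberately loose placeholder.
    double mag = std::fabs(adx) * (std::fabs(bdy * cdz) + std::fabs(bdz * cdy))
               + std::fabs(ady) * (std::fabs(bdz * cdx) + std::fabs(bdx * cdz))
               + std::fabs(adz) * (std::fabs(bdx * cdy) + std::fabs(bdy * cdx));
    double errbound = 1e-14 * mag;

    if (det > errbound || -det > errbound)
        return det;                    // fast path: the sign is already certain
    return exact_orient3d(a, b, c, d); // slow path: go exact
}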

But I wanna be fast too! XD

Sorry, now I’m just being childish. Thanks for taking the time to reply to me!