# Fortran Formatting Oddity

I discovered this behavior with every Fortran compiler I have tried (gfortran, Intel Fortran, FTN95, pgfortran), and I don't understand it. I specify that the computation be done in double precision, and the output of the calculation of pi does appear to be double precision: 15 digits of pi are accurate. However, the format statement allows me to print any number of digits beyond 15.

The other odd thing is that there are digits in all of those extra positions. I would expect zeros, since I never computed anything beyond double precision. Does anyone understand these oddities? Below is the test code and output. Thanks, Anthony

```fortran
      DOUBLE PRECISION PI
      PI = 4.0D0 * DATAN ( 1.0D0 )
      WRITE (*,*)
      WRITE (*,'(1X,A,G25.16)') 'PI SET TO =', PI
      WRITE (*,'(1X,A,G35.26)') 'PI SET TO =', PI
      WRITE (*,'(1X,A,G45.36)') 'PI SET TO =', PI
      WRITE (*,'(1X,A,G55.46)') 'PI SET TO =', PI
      END
```

```
 PI SET TO = 3.141592653589793
 PI SET TO = 3.1415926535897931159979635
 PI SET TO = 3.14159265358979311599796346854418516
 PI SET TO = 3.141592653589793115997963468544185161590576172
```
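As a side note, those extra digits are not random: they are the exact decimal expansion of the IEEE-754 double nearest to pi. A quick cross-check, using Python's `decimal` module for illustration (not part of the original Fortran test):

```python
import math
from decimal import Decimal

# Decimal(float) converts the stored binary double to its EXACT decimal value.
exact_pi = Decimal(math.pi)
print(exact_pi)
# 3.141592653589793115997963468544185161590576171875
```

The G45.36 and G55.46 lines above are just this exact value, rounded to the requested number of digits.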

This looks right to me; double precision gets you 8 bytes, which is more than enough. If you reduce the field width, you'll see trailing zeros.

The compiler/language/hardware doesn't know you are printing pi. For any 52-bit mantissa, there is an exact decimal representation of those bits. For instance, the last bit might represent 2**-51, which is exactly this number: 4.440892098500626161694526672363281250e-16. That's more than 15 digits, but the key is that what isn't represented (is the bit at the 2**-52 place a zero or a one?) is what produces the error in your number. Printing and intrinsics always assume the number provided is exact, not that it is the result of rounding, for better or worse.
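That 2**-51 figure can be verified directly; here is a sketch in Python (for illustration, since the thread's code is Fortran):

```python
from decimal import Decimal

# Exact decimal expansion of the binary value 2**-51:
# an exact power of two has a finite (but long) decimal expansion.
ulp = Decimal(2.0 ** -51)
print(ulp)
# 4.44089209850062616169452667236328125E-16
```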

Well, it can be any computation; I just used pi as an example because it's easy to check what it's supposed to be. In the example I posted, you can see it's correct to the 15th digit, but the formatting lets me output beyond what I specified. In my mind, if you write your code in double precision, then the output should be in double precision, but the format specifier lets you go beyond that, to seemingly anything. I can kind of understand why they would do that; it's just easier for them not to have to think about it. But the weirdest thing is that there aren't zeros in those positions. If you compute in double precision, I don't know why there are any numbers beyond it. Maybe I'm not explaining it well.

I do know what you are saying about computers not being able to represent certain numbers; I understand that. I just don't understand how I'm getting digits beyond what I'm computing. That's the best I can explain it.

thanks,

anthony

I think I understand what's going on now. I did some more tests. If you type in a simple double-precision number like 2.1D0, you get out the double-precision version, which is:

2.10000000000000008881784197001252323389053344726562500000000000000
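That long expansion can be double-checked outside Fortran; for illustration, Python's `decimal` module gives the exact value of the double nearest 2.1 (the trailing zeros in the output above are just padding from the format width):

```python
from decimal import Decimal

# The double nearest 2.1, written out exactly -- note it is NOT exactly 2.1.
exact = Decimal(2.1)
print(exact)
# 2.100000000000000088817841970012523233890533447265625
```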

The format specifier is a bit open-ended, but you can set it however you want and see the numbers coming out. I changed the code I posted before a little so I could see the zeros for pi when computing in double precision. I also did a check using single precision, and the right number of significant digits is reported.

Single precision significant digits: rounddown(23*log10(2)) = 6
Double precision significant digits: rounddown(52*log10(2)) = 15
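Those two figures can be reproduced directly; a minimal check in Python, assuming the usual 23 and 52 explicit mantissa bits of IEEE-754 single and double:

```python
import math

# Decimal digits reliably carried by an n-bit binary mantissa: floor(n * log10(2))
single_digits = math.floor(23 * math.log10(2))
double_digits = math.floor(52 * math.log10(2))
print(single_digits, double_digits)
# 6 15
```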

I was originally thinking that if you specified double precision, the output would show the correct number of significant digits, but that's not the case: it just shows whatever you tell it to. If you print enough digits, you can see the limits of the numbers you enter. Doing simple tests with numbers like 2.1 and 2.3 and using a decimal-to-binary converter, I could check what the compiler was doing. It is reporting the right numbers.

One other thing I thought to check is the default (list-directed) formatting, WRITE(*,*). Interestingly, it shows all the significant digits and no more. So that's a nice feature.
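Python's default float printing behaves analogously, for what it's worth: it emits the shortest decimal string that reads back as the identical double. A small illustration:

```python
# repr() picks the fewest digits that round-trip to the same double,
# so you see the significant digits and nothing more.
x = 2.1
s = repr(x)
print(s)               # prints 2.1, not the long exact expansion
assert float(s) == x   # and it round-trips exactly
```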

Hope that makes sense.

Thanks for the help