I can't get enough digits when printing double precision variables in the debugger. Older versions of PGF printed many more digits.
Example:
a is a double precision variable with the value 18886.624629629601.
The GNU debugger (GDB) prints 18886.624629629601 (good).
In the PGF debugger (PGDBG):
pgdbg> print a
18886.625 (correct but not accurate enough)
pgdbg> print a - 18886.625
0 (not accurate enough)
pgdbg> print a - 18886.62
0.0050000000010186341 (not accurate enough; only the first non-zero digit is significant)
pgdbg> printf "%16.10f",tdeb
0.0000000000 (wrong)
pgdbg> printf "%f",tdeb
0.000000 (wrong)
pgdbg> printf "%G",tdeb
0 (wrong)
pgdbg> dread &tdeb
All real values in our programs are double precision variables.
Most of them are computed; some come from input files. When they are initialized with a constant, we put a "d" exponent at the end of the value, like this:
x = 1.0d0
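For reference, here is a minimal sketch of what I mean (the variable name and the exact output formats are just for illustration, not our real code):

program precision_check
  implicit none
  double precision :: a
  ! the "d0" exponent makes the literal a double precision constant
  a = 18886.624629629601d0
  ! the program itself prints all the digits we need
  write (*, '(F22.12)')  a   ! prints something like 18886.624629629601
  write (*, '(ES24.16)') a   ! prints something like 1.8886624629629601E+04
end program precision_check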
The problem is only with the debugger: the programs themselves work and output the correct values with all the required digits.
I remember that the debugger from older versions of PGF (it was console-only on Linux) did a good job.
The GNU debugger also prints the required digits, but unfortunately it can’t read some of the variables (for example Fortran 90 structures) because it doesn’t understand all the symbols in the PGF binary.
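To make it concrete, this is the kind of Fortran 90 structure I mean (a made-up example, not our actual code):

program struct_example
  implicit none
  ! a derived type like this is what GDB fails to display from a PGF binary
  type :: state_t
    double precision :: tdeb
    double precision :: x(3)
  end type state_t
  type(state_t) :: s
  s%tdeb = 18886.624629629601d0
  s%x    = 0.0d0
  write (*, '(F22.12)') s%tdeb
end program struct_example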
This is a real problem for us because we need many more digits to debug our programs.