basic question about precision

I saw in the FAQ that extended precision should be the default for the pgf90 compiler, but I don’t seem to be getting that.
I’m working on a supercomputer (cluster of 64-bit machines) in England (JET, Abingdon) which is using your compiler.
I’m testing the precision by running a program that does the simple division 1.d0/3.d0 (in complex notation)

Compilation is done by
pgf90 -pc 80 -r8 -Kieee prog.f

After running this, I still only get 16 significant digits instead of 32.
Both -pc 80 and -pc 64 give exactly the same result. Variables are declared as DOUBLE PRECISION and COMPLEX*16.

Can somebody help me in solving this (basic) problem?

First, on 64-bit machines the -pc 64 and -pc 80 flags don’t apply, since the
arithmetic is not done on the x87 floating-point stack but in the SSE registers.

COMPLEX*16 is equivalent to a REAL*8 real part and a REAL*8 imaginary part, i.e.
64-bit floating-point arithmetic for each component.

In 64-bit floating-point arithmetic you have 1 sign bit, 11 exponent bits,
and 52 mantissa bits (plus an implied leading 1.xxx).

These are binary digits; it takes roughly 3.3 binary bits for each decimal digit of precision (log2 10 ≈ 3.32).

So, getting about 16 significant decimal digits out of 52 mantissa bits (53 with the implied leading 1) is correct.