program does not compile with real*16

Hi, I need some help.

My code uses real*16, and it does not compile with pgfortran. Is there a workaround?

I wrote a small test program:

      program testKind
      real*16 :: a, b
      a=1.d0; b=2.d0
      write(*,*) a+b
      end

The compiler says:

PGI$ pgf90 -o testKind testKind.f90
PGF90-W-0031-Illegal data type length specifier for real (testKind.f90: 2)
PGF90-W-0031-Illegal data type length specifier for a (testKind.f90: 2)
PGF90-W-0031-Illegal data type length specifier for b (testKind.f90: 2)

thanks

Hi dypang,

We don’t support REAL*16 types. The reason is that 128-bit floating-point arithmetic is not implemented in hardware, assuming you’re using an x86 processor.

There are software libraries that emulate quad precision, at a significant performance cost. A few existing threads on this topic go into more detail:

https://forums.developer.nvidia.com/t/real-16-implementation/129969/1
https://forums.developer.nvidia.com/t/real-16/134046/1
https://www.pgroup.com/userforum/viewtopic.php?p=530
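If you want the code to stay portable across compilers, one hedged sketch (not PGI-specific; which kinds exist varies by compiler, and `SELECTED_REAL_KIND` returns a negative value when no matching kind is available) is to request a kind by decimal precision and fall back to double precision when no quad-precision kind exists:

```fortran
program quad_check
  implicit none
  ! Ask for a real kind with at least 30 significant decimal digits
  ! (quad precision); qp is negative if the compiler has no such kind.
  integer, parameter :: qp = selected_real_kind(30)
  ! Working precision: quad when available, otherwise double.
  integer, parameter :: wp = merge(qp, kind(1.0d0), qp > 0)
  real(wp) :: a, b
  a = 1.0_wp
  b = 2.0_wp
  write(*,*) 'kind in use:', wp, ' a+b =', a + b
end program quad_check
```

With this pattern the same source compiles whether or not the compiler provides a 128-bit real; only the precision of the result changes.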

Hi aglobus, thanks for the explanation!