Fortran compiler: Problem with boz-literal-constants

The 17.4-0 compiler appears to contradict the Fortran 2008 standard, and probably the Fortran 2003 standard as well, in the way it handles binary and hexadecimal constants in a corner case.

The offending situation happens (at least) when:

  1. the boz-literal-constant is a binary or hexadecimal constant, and
  2. the boz constant is the first argument of the INT intrinsic, and
  3. the second argument of the INT intrinsic is kind=selected_int_kind(18) or equivalent (so that, given the sizes of integers supported by the PGI compiler, the result is a 64-bit integer), and
  4. the written boz constant specifies 32 bits (that is, is composed of 8 hexadecimal digits or 32 binary digits), and
  5. the most significant bit of the boz constant is a one.

Under these circumstances, the result value of the INT function should, according to the 2003 standard (section 13.7.53), follow this rule:

“If A is a boz-literal-constant, it is treated as if it were an int-literal-constant with a kind-param that specifies the representation method with the largest decimal exponent range supported by the processor.”

The 2008 standard (section 13.3.3) is clearer:

“When a boz-literal-constant is the argument A of the intrinsic function INT or REAL, if the length of the sequence of bits specified by A is less than the size in bits of a scalar variable of the same type and kind type parameter as the result, the boz-literal-constant is treated as if it were extended to the length equal to the size in bits of the result by padding on the left with zero bits […],”

According to this, the output one would expect from

program minbozint
   implicit none
   integer, parameter :: LONG = selected_int_kind(18)

   integer(kind=LONG) :: Vz, Vo, Vb, LONG_huge

   LONG_huge = huge(LONG_huge)

! Hexadecimal
   Vz = int(z'80000000', kind=LONG)

! Octal
             !10        0
   Vo = int(o'20000000000', kind=LONG)

! Binary
             !31       21        10         0
   Vb = int(b'10000000000000000000000000000000', kind=LONG)

!!!!! Write everything !!!!!

   write(*,'(a, 28x,    a)') "Position", "76543210"
   write(*,'(a,i20,1x,z16)') "Vz   = ", Vz, Vz
   write(*,'(a,i20,1x,z16)') "Vo   = ", Vo, Vo
   write(*,'(a,i20,1x,z16)') "Vb   = ", Vb, Vb
   write(*,'(a,i20,1x,z16)') "huge = ", LONG_huge, LONG_huge
end program minbozint

should be (for a compiler that supports 64-bit integers)

Position                            76543210
Vz   =           2147483648         80000000
Vo   =           2147483648         80000000
Vb   =           2147483648         80000000
huge =  9223372036854775807 7FFFFFFFFFFFFFFF

However, the output produced by the PGI-compiled code is

Position                            76543210
Vz   =          -2147483648 FFFFFFFF80000000
Vo   =           2147483648         80000000
Vb   =          -2147483648 FFFFFFFF80000000
huge =  9223372036854775807 7FFFFFFFFFFFFFFF

If this is a compiler bug, could you please fix it?

Note that I have not fully investigated conditions 1-5 above: e.g., I cannot make sense of the results of REAL(, kind=selected_real_kind(15,300)).

Thanks for the example. I have created TPR 24583 to address this problem.


This should be fixed in release 18.7.