Compile fails at TRANSFER intrinsic when transferring over 2 GiB of data

Dear all,

I am working on an existing program and the data it produces.
To convert data stored in a byte array to the desired derived type, I use the TRANSFER intrinsic.
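For context, a minimal sketch of this pattern (the derived type, field names, and sizes here are illustrative, not taken from the attached test case):

```fortran
program transfer_demo
  use iso_fortran_env, only: int64
  implicit none
  type :: rec_t                       ! hypothetical record type
     integer :: id
     real(8) :: vals(3)
  end type rec_t
  type(rec_t) :: r, r2
  character(len=1), allocatable :: bytes(:)
  integer(int64) :: nbytes

  r%id   = 7
  r%vals = [1.0d0, 2.0d0, 3.0d0]

  nbytes = storage_size(r, kind=int64) / 8   ! record size in bytes
  allocate(bytes(nbytes))
  bytes = transfer(r, bytes)                 ! derived type -> byte array
  r2 = transfer(bytes, r2)                   ! byte array -> derived type
  print *, r2%id, r2%vals
end program transfer_demo
```

The failure reported below occurs when the byte array being transferred exceeds 2 GiB.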

When transferring data larger than 2 GiB, nvfortran does not compile the program and leaves the following message:

=========
nvfortran -O1 -g -Kieee -i4 -mcmodel=medium -Mnofma -Minfo -Mnofpapprox -Mnofprelaxed -traceback test_transfer.F90 -o test_transfer_nvf_ntz64 -D _NTZ=64
NVFORTRAN-S-0151-Empty TYPE, STRUCTURE, UNION, or MAP (test_transfer.F90: 104)
0 inform, 0 warnings, 1 severes, 0 fatal for test_transfer

Does TRANSFER support converting more than 2 GiB of data?

I have attached a test program that tests data sizes below and above 2 GiB, compiled with ifx, gfortran, and nvfortran.

Best,
Ken-Ichi

TEST_TRANSFER.tar.gz (1.4 KB)

Hi Ken-Ichi,

My guess is that this is a similar issue to your previous “storage_size” problem, where some compile-time size value overflows. While I didn’t test it, I suspect that if you used a pointer here, as in the other workaround, which delays the sizing until runtime, it would work around the issue.

I filed TPR #37515 and sent it to engineering for investigation.

-Mat

Dear Mat,

Thank you very much.

I have another issue related to these postings, unformatted I/O.

When I use C_F_POINTER and C_PTR to replace TRANSFER between a byte array and a derived type, the unformatted stream write through a character byte array hangs at 2 GiB, even with the -Mlfs option.
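The pattern being described is roughly the following sketch, which maps a byte buffer onto a derived-type array without copying and then writes it with unformatted stream I/O. The type, sizes, and file name here are illustrative (the buffer is kept small so the sketch runs quickly; the reported hang appears when buff exceeds 2 GiB):

```fortran
program cptr_write
  use iso_c_binding,   only: c_ptr, c_loc, c_f_pointer
  use iso_fortran_env, only: int64
  implicit none
  type :: rec_t                                ! hypothetical record type, 32 bytes
     real(8) :: vals(4)
  end type rec_t
  integer(int64), parameter :: nrec = 1000_int64
  character(len=1), allocatable, target :: buff(:)
  type(rec_t), pointer :: recs(:)
  type(c_ptr) :: p
  integer :: iout

  allocate(buff(nrec * 32_int64))
  buff = achar(0)

  p = c_loc(buff(1))
  call c_f_pointer(p, recs, [nrec])            ! view the bytes as rec_t records
  recs(1)%vals = [1.0d0, 2.0d0, 3.0d0, 4.0d0]  ! writes through to buff

  open(newunit=iout, file='big.dat', form='unformatted', access='stream')
  write(iout) buff                             ! the statement that hangs > 2 GiB
  close(iout)
end program cptr_write
```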

I will submit the test case later.
Best,
Ken-Ichi

Dear Mat,

Here is the test program for writing big data.

It tests writing 8 GiB of data.

TEST.sh runs make for

big_fail_gft
big_fail_ifx
big_fail_nvf
big_fail_wa_ok_gft
big_fail_wa_ok_ifx
big_fail_wa_ok_nvf
big_ok_gft
big_ok_ifx
big_ok_nvf

using gfortran (_gft), ifx (_ifx), and nvfortran (_nvf) as suffixes.

big_fail_nvf fails to write 8 GiB of data to a file with the following error:

% ./big_fail_nvf
Compiled with nvfortran
FAIL VERSION, write through c_ptr
size of buff <= a(:) [byte]: 8589934592
FIO/stdio: No such file or directory
FIO-F-/unformatted write/unit=-13/error code returned by host stdio - 2.
File name = 'big.dat', unformatted, stream access record = 0
In source file big.F90, at line number 76

big_fail_wa_ok_nvf is the workaround version of big_fail_nvf, in which the write statement is
replaced by a call to an external subroutine containing the write statement.
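A minimal sketch of that workaround, with illustrative names (do_write is hypothetical, not from the attached archive):

```fortran
! External subroutine holding the write statement that fails
! when issued directly from the main program.
subroutine do_write(iout, n, buff)
  use iso_fortran_env, only: int64
  implicit none
  integer,          intent(in) :: iout
  integer(int64),   intent(in) :: n
  character(len=1), intent(in) :: buff(n)
  write(iout) buff
end subroutine do_write
```

In the main program, `write(iout) buff` is then replaced by something like `call do_write(iout, size(buff, kind=int64), buff)`.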

Test environment:

% cat /etc/redhat-release
Fedora release 39 (Thirty Nine)

% gfortran --version
GNU Fortran (GCC) 13.3.1 20240913 (Red Hat 13.3.1-3)
Copyright (C) 2023 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

% ifx --version
ifx (IFX) 2025.1.1 20250418
Copyright (C) 1985-2025 Intel Corporation. All rights reserved.

% nvfortran --version

nvfortran 25.5-0 64-bit target on x86-64 Linux -tp skylake-avx512
NVIDIA Compilers and Tools
Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Best,
Ken-Ichi
TEST_BIGIO_CPTR.tar.gz (2.0 KB)

Thanks! I see two issues here. First, use “-i8” instead of “-i4”. This changes the default integer kind to integer*8 so the bounds set by c_f_pointer don’t overflow.

The second has to do with the “write(iout) buff”. I originally thought it might be caused by the bounds for “buff” not being set correctly due to -i4, but it still fails with -i8. I also tried explicit bounds, i.e. “buff(1:N)”; while this doesn’t error, the resulting file is empty. I’m not sure what’s wrong, so I filed a problem report, TPR #37525, and sent it to engineering.

-Mat

Dear Mat,

Thank you for testing!

I think this limitation should be removed, even with the “-i4” option,
when “-mcmodel=medium” or “-mcmodel=large” is specified explicitly.

Best,
Ken-Ichi