PGF90-F-0000-Internal compiler error. interf:new_symbol

Hi,
Recently I have been compiling the AM2 model from GFDL with PGI, but I don't understand what the following errors mean. I have tried PGI 6.1.6, 6.2.5, and 7.1.4; all of them report the same error.

error messages:

/usr/local/pgilinux86-64-714/linux86-64/7.1-4/bin/pgf902 /tmp/pgf907C9eXfyuXKqz.ilm -fn /home/taolj/am2/src/atmos_param/sea_esf_rad/ozone.f90 -debug -x 120 0x200 -opt 0 -x 51 0x20 -x 119 0xa10000 -x 122 0x40 -x 123 0x1000 -x 127 4 -x 127 17 -x 19 0x400000 -x 28 0x40000 -x 70 0x8000 -x 122 1 -quad -x 59 4 -x 59 4 -tp core2-64 -x 124 0x1400 -y 15 2 -x 57 0x3b0000 -x 58 0x48000000 -x 49 0x100 -x 120 0x200 -astype 0 -x 124 1 -x 24 1 -x 119 0x4000 -cmdline '+pgf90 /home/taolj/am2/src/atmos_param/sea_esf_rad/ozone.f90 -r8 -Ktrap=fp -pc 64 -g -v -c -I/home/taolj/tools/netcdf3.6/include' -asm /tmp/pgf90tC9e5Z4GyNYG.s
0 inform, 0 warnings, 0 severes, 0 fatal for ozone_mod
0 inform, 0 warnings, 0 severes, 0 fatal for ozone_init
0 inform, 0 warnings, 0 severes, 0 fatal for ozone_driver
0 inform, 0 warnings, 0 severes, 0 fatal for ozone_end
0 inform, 0 warnings, 0 severes, 0 fatal for obtain_input_file_data
0 inform, 0 warnings, 0 severes, 0 fatal for obtain_gfdl_zonal_ozone_data
0 inform, 0 warnings, 0 severes, 0 fatal for obtain_clim_zonal_ozone_data
0 inform, 0 warnings, 0 severes, 0 fatal for geto3_3d
0 inform, 0 warnings, 0 severes, 0 fatal for geto3_2d
0 inform, 0 warnings, 0 severes, 0 fatal for get_clim_ozone
PGF90/x86-64 Linux 7.1-4: compilation successful

/usr/bin/as /tmp/pgf90tC9e5Z4GyNYG.s -I/home/taolj/tools/netcdf3.6/include -o ozone.o
Unlinking /tmp/pgf907C9eXfyuXKqz.ilm
Unlinking /tmp/pgf90tC9e5Z4GyNYG.s
pgf90 -r8 -Ktrap=fp -pc 64 -g -v -c /home/taolj/am2/src/atmos_param/sea_esf_rad/original_fms_rad.f90
/bin/csh

/usr/local/pgilinux86-64-714/linux86-64/7.1-4/bin/pgf901 /home/taolj/am2/src/atmos_param/sea_esf_rad/original_fms_rad.f90 -debug -x 120 0x200 -opt 0 -nohpf -nostatic -x 19 0x400000 -quad -x 59 4 -x 59 4 -x 15 2 -x 49 0x400004 -x 51 0x20 -x 57 0x4c -x 58 0x10000 -x 124 0x1000 -x 57 0xfb0000 -x 58 0x78031040 -x 48 4608 -x 49 0x100 -x 120 0x200 -stdinc /usr/local/pgilinux86-64-714/linux86-64/7.1-4/include:/usr/local/include:/usr/lib/gcc/x86_64-redhat-linux/3.4.6/include:/usr/lib/gcc/x86_64-redhat-linux/3.4.6/include:/usr/include -def unix -def __unix -def __unix__ -def linux -def __linux -def __linux__ -def __NO_MATH_INLINES -def __x86_64__ -def __LONG_MAX__=9223372036854775807L -def '__SIZE_TYPE__=unsigned long int' -def '__PTRDIFF_TYPE__=long int' -def __THROW= -def __extension__= -def __amd64__ -def __SSE__ -def __MMX__ -def __SSE2__ -def __SSE3__ -def __SSSE3__ -freeform -vect 48 -x 124 0x8 -x 49 0x1000 -output /tmp/pgf908h_e0fOu5KWA.ilm
PGF90-F-0000-Internal compiler error. interf:new_symbol, symbol not found 582 (/home/taolj/am2/src/atmos_param/sea_esf_rad/original_fms_rad.f90: 25)
PGF90/x86-64 Linux 7.1-4: compilation aborted
pgf90-Fatal-f901 completed with exit code 1

Unlinking /tmp/pgf908h_e0fOu5KWA.ilm
Unlinking /tmp/pgf908h_e0fOu5KWA.dbg
make: *** [original_fms_rad.o] Error 2
if ( 2 != 0 ) then
unset echo
ERROR: make failed for am2


The code of original_fms_rad.f90 is

1 !FDOC_TAG_GFDL
2 module original_fms_rad_mod
3 !
4 !
5 !
6 !
7 !
8 !
9 !
10 !
11 !
12 !
13 !
14 !
15 !
16 !
17
18
19 !-----------------------------------------------------------------------
20 ! radiation interface module
21 !-----------------------------------------------------------------------
22
23 use mcm_lw_mod, only: mcm_lw_init
24
25 use mcm_sw_driver_mod, only: mcm_sw_driver_init
26
27 use fsrad_mod, only: rdparm_init, fsrad, co2_data
28
29 use clouds_mod, only: clouds, clouds_init, clouds_end
30
31 use diag_manager_mod, only: register_diag_field, send_data
32
33 use time_manager_mod, only: time_type, set_date, set_time, &
34 get_time, operator(+), &
35 operator(-), operator(/=), get_date,&
36 increment_time, operator(<), &
37 operator(>=), operator(>)
38
39
40 use diag_integral_mod, only: diag_integral_field_init, &
41 sum_diag_integral_field
42
43 use fms_mod, only: FATAL, WARNING, NOTE, &
44 close_file, read_data, write_data, &
45 open_namelist_file, &
46 check_nml_error, file_exist, &
47 error_mesg, &
48 mpp_pe, mpp_root_pe, mpp_npes, &
49 write_version_number, stdlog
50
51 use optical_path_mod, only: optical_path_init
52
53 use longwave_tables_mod, only: longwave_tables_init
54
55 !use longwave_aerosol_mod, only: longwave_aerosol_init
56
57 use gas_tf_mod, only: gas_tf_init, gas_tf_end
58
59 use longwave_clouds_mod, only: longwave_clouds_init
60
61 use radiation_diag_mod, only: radiation_diag_init, radiation_diag_end
62
63 use longwave_fluxes_mod, only: longwave_fluxes_init
64
65 use longwave_params_mod, only: longwave_params_init
66
67 use rad_utilities_mod, only: rad_utilities_init, &
68 radiation_control_type, &
69 Rad_control, &
70 ! define_environment, &
71 radiative_gases_type, &
72 cldrad_properties_type, &
73 cld_specification_type, &
74 astronomy_type, &
75 atmos_input_type, &
76 surface_type, &
77 shortwave_control_type, Sw_control, &
78 sw_output_type, lw_output_type, &
79 rad_output_type, &
80 fsrad_output_type, &
81 cloudrad_control_type, &
82 rad_utilities_init, &
83 Cldrad_control
84
85 use esfsw_parameters_mod, only: esfsw_parameters_init
86
87 use ozone_mod, only: ozone_init
88
89 use lw_gases_stdtf_mod, only: lw_gases_stdtf_init
90




I don't know what the error message means. Is there any problem on line 25?

By the way, the complete error log and original_fms_rad.f90 are big files, but I can send them to you if needed.

Thanks.

Hi taolijun

An internal compiler error (ICE) means the compiler has failed in an unexpected way. I'll need the source (or a reproducing example) in order to investigate. I have put a request in to the GFDL administrator to obtain the AM2 source, but it may be a few days.

Thanks,
Mat

Hi taolijun,

I was able to download the AM2 “Memphis” source as well as recreate the error. I have created a technical problem report (TPR#16177) and sent it to our engineers for further investigation.

Thank you for the report,
Mat

But, how can I deal with the problem?

Hi taolijun,

I've sent a request to our engineers for a workaround.

Note that I mistyped the TPR number. It should be TPR#16117, not 16177.

  • Mat

Hi taolijun,

From our engineers:

The ICE is occurring for the USE statement:

use fsrad_mod, only: rdparm_init, fsrad, co2_data

fsrad_mod is produced by …am2/src/atmos_param/fsrad/fsrad.f90.
The first line of am2/src/atmos_param/fsrad/fsrad.f90 is just a semicolon, i.e., the first two lines of fsrad.f90 are

;
Module FSrad_Mod

For some reason, the presence of the ‘;’ causes a corrupt .mod file for fsrad_mod.

Try deleting the first line of fsrad.f90 and recompiling it.
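If it is useful, a one-liner along these lines does the deletion safely (a sketch only; fsrad_demo.f90 here is a stand-in for the real file, and the sed expression assumes GNU sed). It removes line 1 only when that line is exactly a lone semicolon:

```shell
# Create a stand-in file whose first line is a stray ';', as in fsrad.f90.
printf ';\nModule FSrad_Mod\n' > fsrad_demo.f90
# Delete line 1 only if it consists of just a semicolon (GNU sed in-place edit).
sed -i '1{/^;$/d}' fsrad_demo.f90
# The module statement is now the first line.
head -n 1 fsrad_demo.f90
```

After editing the real fsrad.f90, recompile it so the corrupt fsrad_mod .mod file is regenerated before rebuilding original_fms_rad.f90.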

Hope this helps,
Mat

Hi,

I happened to hit the same ICE when compiling the AM2 model with PGI 7.2-5. The ICE was resolved by correcting fsrad.f90. However, I have now run into another problem. The error messages are:

mpp.o: In function `mpp_mod_system_clock_mpi_':
mpp.F90:(.text+0x44): undefined reference to `mpi_wtime_'
mpp.F90:(.text+0x51): undefined reference to `mpi_wtick_'
mpp.F90:(.text+0x7b): undefined reference to `mpi_wtime_'
mpp.o: In function `mpp_mod_mpp_error_basic_':
mpp.F90:(.text+0x615): undefined reference to `mpi_abort_'
mpp.o: In function `mpp_mod_get_peset_':
mpp.F90:(.text+0xe52): undefined reference to `mpi_group_incl_'
mpp.F90:(.text+0xea7): undefined reference to `mpi_comm_create_'
mpp.o: In function `mpp_mod_mpp_sync_':
mpp.F90:(.text+0x13d7): undefined reference to `mpi_barrier_'
mpp.o: In function `mpp_mod_mpp_sync_self_':
mpp.F90:(.text+0x1503): undefined reference to `mpi_wait_'
mpp.o: In function `mpp_mod_mpp_init_':
mpp.F90:(.text+0x4d27): undefined reference to `mpi_initialized_'
mpp.F90:(.text+0x4d72): undefined reference to `mpi_init_'
mpp.F90:(.text+0x4da0): undefined reference to `mpi_comm_rank_'
mpp.F90:(.text+0x4db4): undefined reference to `mpi_comm_size_'
mpp.F90:(.text+0x506c): undefined reference to `mpi_comm_group_'
mpp.o: In function `mpp_mod_mpp_exit_':
mpp.F90:(.text+0x6d9f): undefined reference to `mpi_finalize_'
mpp.o: In function `mpp_mod_mpp_transmit_real8_':
mpp.F90:(.text+0x7260): undefined reference to `mpi_wait_'
mpp.F90:(.text+0x72a8): undefined reference to `mpi_isend_'
mpp.F90:(.text+0x7421): undefined reference to `mpi_recv_'
mpp.F90:(.text+0x74a3): undefined reference to `mpi_recv_'
mpp.o: In function `mpp_mod_mpp_broadcast_real8_':
mpp.F90:(.text+0x7a42): undefined reference to `mpi_bcast_'
mpp.o: In function `mpp_mod_mpp_transmit_cmplx8_':
mpp.F90:(.text+0x8de0): undefined reference to `mpi_wait_'
mpp.F90:(.text+0x8e28): undefined reference to `mpi_isend_'
mpp.F90:(.text+0x8fb1): undefined reference to `mpi_recv_'
mpp.F90:(.text+0x9033): undefined reference to `mpi_recv_'
mpp.o: In function `mpp_mod_mpp_broadcast_cmplx8_':
mpp.F90:(.text+0x95d2): undefined reference to `mpi_bcast_'
mpp.o: In function `mpp_mod_mpp_transmit_real4_':
mpp.F90:(.text+0xa9c0): undefined reference to `mpi_wait_'
mpp.F90:(.text+0xaa08): undefined reference to `mpi_isend_'
mpp.F90:(.text+0xab81): undefined reference to `mpi_recv_'
mpp.F90:(.text+0xac03): undefined reference to `mpi_recv_'
mpp.o: In function `mpp_mod_mpp_broadcast_real4_':
mpp.F90:(.text+0xb1a2): undefined reference to `mpi_bcast_'
mpp.o: In function `mpp_mod_mpp_transmit_cmplx4_':
mpp.F90:(.text+0xc540): undefined reference to `mpi_wait_'
mpp.F90:(.text+0xc588): undefined reference to `mpi_isend_'
mpp.F90:(.text+0xc707): undefined reference to `mpi_recv_'
mpp.F90:(.text+0xc788): undefined reference to `mpi_recv_'
mpp.o: In function `mpp_mod_mpp_broadcast_cmplx4_':
mpp.F90:(.text+0xcd22): undefined reference to `mpi_bcast_'
mpp.o: In function `mpp_mod_mpp_transmit_int8_':
mpp.F90:(.text+0xe0c0): undefined reference to `mpi_wait_'
mpp.F90:(.text+0xe108): undefined reference to `mpi_isend_'
mpp.F90:(.text+0xe281): undefined reference to `mpi_recv_'
mpp.F90:(.text+0xe303): undefined reference to `mpi_recv_'
mpp.o: In function `mpp_mod_mpp_broadcast_int8_':
mpp.F90:(.text+0xe8a2): undefined reference to `mpi_bcast_'
mpp.o: In function `mpp_mod_mpp_transmit_int4_':
mpp.F90:(.text+0xfc40): undefined reference to `mpi_wait_'
mpp.F90:(.text+0xfc88): undefined reference to `mpi_isend_'
mpp.F90:(.text+0xfe01): undefined reference to `mpi_recv_'
mpp.F90:(.text+0xfe83): undefined reference to `mpi_recv_'
mpp.o: In function `mpp_mod_mpp_broadcast_int4_':
mpp.F90:(.text+0x10422): undefined reference to `mpi_bcast_'
mpp.o: In function `mpp_mod_mpp_transmit_logical8_':
mpp.F90:(.text+0x117c0): undefined reference to `mpi_wait_'
mpp.F90:(.text+0x11808): undefined reference to `mpi_isend_'
mpp.F90:(.text+0x11981): undefined reference to `mpi_recv_'
mpp.F90:(.text+0x11a03): undefined reference to `mpi_recv_'
mpp.o: In function `mpp_mod_mpp_broadcast_logical8_':
mpp.F90:(.text+0x11fa2): undefined reference to `mpi_bcast_'
mpp.o: In function `mpp_mod_mpp_transmit_logical4_':
mpp.F90:(.text+0x13340): undefined reference to `mpi_wait_'
mpp.F90:(.text+0x13388): undefined reference to `mpi_isend_'
mpp.F90:(.text+0x13501): undefined reference to `mpi_recv_'
mpp.F90:(.text+0x13583): undefined reference to `mpi_recv_'
mpp.o: In function `mpp_mod_mpp_broadcast_logical4_':
mpp.F90:(.text+0x13b22): undefined reference to `mpi_bcast_'
mpp.o: In function `mpp_mod_mpp_max_real8_':
mpp.F90:(.text+0x14d55): undefined reference to `mpi_allreduce_'
mpp.o: In function `mpp_mod_mpp_max_real4_':
mpp.F90:(.text+0x14e95): undefined reference to `mpi_allreduce_'
mpp.o: In function `mpp_mod_mpp_max_int8_':
mpp.F90:(.text+0x14fd5): undefined reference to `mpi_allreduce_'
mpp.o: In function `mpp_mod_mpp_max_int4_':
mpp.F90:(.text+0x15115): undefined reference to `mpi_allreduce_'
mpp.o: In function `mpp_mod_mpp_min_real8_':
mpp.F90:(.text+0x15255): undefined reference to `mpi_allreduce_'
mpp.o:mpp.F90:(.text+0x15395): more undefined references to `mpi_allreduce_' follow
mod_comm.o: In function `mod_comm_mp_exit_':
mod_comm.F90:(.text+0xd29): undefined reference to `mpi_finalize_'
mod_comm.o: In function `mod_comm_mpi_start_':
mod_comm.F90:(.text+0xd85): undefined reference to `mpi_initialized_'
mod_comm.F90:(.text+0xd98): undefined reference to `mpi_init_'
mod_comm.F90:(.text+0xdaa): undefined reference to `mpi_comm_rank_'
mod_comm.F90:(.text+0xdbc): undefined reference to `mpi_comm_size_'
mod_comm.F90:(.text+0xdce): undefined reference to `mpi_comm_dup_'
mod_comm.o: In function `mod_comm_mp_reduce_sum_':
mod_comm.F90:(.text+0x30d1): undefined reference to `mpi_allreduce_'
mod_comm.o: In function `mod_comm_mp_reduce_max_':
mod_comm.F90:(.text+0x3187): undefined reference to `mpi_allreduce_'
mod_comm.o: In function `mod_comm_mp_minmax_':
mod_comm.F90:(.text+0x322f): undefined reference to `mpi_allreduce_'
mod_comm.F90:(.text+0x3261): undefined reference to `mpi_allreduce_'
mod_comm.o: In function `mod_comm_mp_barrier_':
mod_comm.F90:(.text+0x4d4b): undefined reference to `mpi_barrier_'
mod_comm.o: In function `mod_comm_win_finalize_':
mod_comm.F90:(.text+0x589f): undefined reference to `mpi_waitall_'
mod_comm.o: In function `mod_comm_ga_put4d_r8_':
mod_comm.F90:(.text+0x5be5): undefined reference to `mpi_isend_'
mod_comm.o: In function `mod_comm_ga_recvinit_r8_':
mod_comm.F90:(.text+0x5c84): undefined reference to `mpi_irecv_'
mod_comm.o: In function `mod_comm_ga_get4d_r8_':
mod_comm.F90:(.text+0x5d5d): undefined reference to `mpi_wait_'
mod_comm.o: In function `mod_comm_ga_put4d_r4_':
mod_comm.F90:(.text+0x61e5): undefined reference to `mpi_isend_'
mod_comm.o: In function `mod_comm_ga_recvinit_r4_':
mod_comm.F90:(.text+0x627a): undefined reference to `mpi_irecv_'
mod_comm.o: In function `mod_comm_ga_get4d_r4_':
mod_comm.F90:(.text+0x635d): undefined reference to `mpi_wait_'
mod_comm.o: In function `mod_comm_ga_put4d_i4_':
mod_comm.F90:(.text+0x67e5): undefined reference to `mpi_isend_'
mod_comm.o: In function `mod_comm_ga_recvinit_i4_':
mod_comm.F90:(.text+0x687a): undefined reference to `mpi_irecv_'
mod_comm.o: In function `mod_comm_ga_get4d_i4_':
mod_comm.F90:(.text+0x695d): undefined reference to `mpi_wait_'
mod_comm.o: In function `mod_comm_ga_broadcast_r8_':
mod_comm.F90:(.text+0x6b2f): undefined reference to `mpi_bcast_'
mod_comm.o: In function `mod_comm_ga_broadcast_r4_':
mod_comm.F90:(.text+0x6b6f): undefined reference to `mpi_bcast_'
mod_comm.o: In function `mod_comm_ga_broadcast_i4_':
mod_comm.F90:(.text+0x6baf): undefined reference to `mpi_bcast_'
mod_comm.o: In function `mod_comm_ga_alltoall_r8_':
mod_comm.F90:(.text+0x6c04): undefined reference to `mpi_allgather_'
mod_comm.o: In function `mod_comm_ga_alltoall_r4_':
mod_comm.F90:(.text+0x6c54): undefined reference to `mpi_allgather_'
mod_comm.o: In function `mod_comm_ga_alltoall_i4_':
mod_comm.F90:(.text+0x6ca4): undefined reference to `mpi_allgather_'
make: *** [fms.x] Error 2
if ( 2 != 0 ) then
unset echo
ERROR: make failed for am2

I don't quite understand what these "undefined reference" errors mean. Could you please help me solve this problem? Thanks!

-Eddy

Hi Eddy,

These messages mean that the linker can't find these symbols. In this case the symbols live in an MPI library, so either you forgot to include an MPI library on your link line, or the MPI library you're using was built with a different compiler (such as gfortran).

What does your link line look like?

  • Mat

Thanks for your reply, Mat.
My system administrator built MPICH under the following directory: /usr/local/pgi725/linux86-64/7.2/mpi/mpich, so I think it should have been built for the PGI compiler. How, then, do I "include an MPI library"? Before compiling, I set "setenv LD_LIBRARY_PATH /usr/local/pgi725/linux86-64/7.2/mpi/mpich/lib"; does that take effect?

-Eddy

Hi Eddy,

You need to add the MPICH library to the link line. This can be done a couple different ways:

  1. Add -lmpich to the link line (and possibly -lmpichf90 if you need the Fortran 90 interfaces).
  2. Use the "mpif90" driver that accompanies your MPICH library; the driver should automatically add the correct libraries.
  3. If you're using the MPICH libraries packaged with the compilers, add the flag "-Mmpi=mpich" and the correct libraries will be added.

Hope this helps,
Mat

Thanks, Mat.
I've tried (1) and (3), but unfortunately they didn't work. I am not sure how to do (2), so I haven't tried it.
For (1), I appended -lmpich (or -lmpichf90) to the LIBS = … line in the Makefile; the compiler then complained "-lmpich not found".
For (3), I added -Mmpi=mpich to the FFLAGS = … line in the Makefile; nothing changed, and I still got those "undefined reference to …" errors.
Any problem here? Thanks!

  • Eddy

Hi Eddy,

Sorry, I should mention that for #1 you also need to add the path to the library (i.e. “-L /usr/local/pgi725/linux86-64/7.2/mpi/mpich/lib”).
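Putting #1 together, the Makefile change might look like the following sketch (MPI_DIR reflects the path you posted; the variable name itself is illustrative and should be adapted to your Makefile):

```make
# Link against the MPICH that ships with this PGI install.
MPI_DIR = /usr/local/pgi725/linux86-64/7.2/mpi/mpich
LIBS   += -L$(MPI_DIR)/lib -lmpich
```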

As for #2, set your PATH environment variable to include the directory "/usr/local/pgi725/linux86-64/7.2/mpi/mpich/bin", then use mpif90 instead of pgf90 (or pgf95). mpif90 is just a wrapper script which sets up the correct MPI environment and then calls pgf90.

For #3, this should work since you're using the MPICH that comes with the compilers. Can you please post your link command with the verbose flag (i.e., just add "-v" to your link flags)? Not the errors, just the line above the errors which starts with "ld" and shows the objects and libraries being linked.

Thanks,
Mat

Many thanks, Mat.
Now way #1 works: it compiles and generates the executable. Way #2 also works fine.
By the way, another minor problem is that sometimes the compiler complains "Can't find include file mpif.h" (or some other .h file). Usually I find the path of the file and link it (ln -s) manually into the directory of the source .f90 code. Is there an easier way to fix this?

#3, with “LDFLAGS = -v” shows:

pgf90 update_fv_phys.o cumulus_closure_k.o mo_chemdr.o atmos_soa.o rad_output_file.o sw_core.o specified_clouds_W.o init_dry_atm.o atmosphere.o fms_io.o edt.o time_interp_external.o climap_albedo.o mo_usrrxt.o atmos_tracer_driver.o ocean_model.o coupler_types.o cloud_interpolator.o diag_util.o land_properties.o ocean_albedo.o xgrid.o rivers.o time_interp.o atmos_dust.o random_numbers.o memuse.o mo_jpl.o memutils.o horiz_interp_bicubic.o gaussian_topog.o donner_utilities_k.o numerics.o diag_axis.o diag_manager.o strat_chem_driver.o mo_read_sim_chm.o cloud_spec.o mo_setinv.o tracer_2d.o shortwave.o horiz_interp_conserve.o ice_model.o quicksort.o diag_cloud_rad.o mpp_parameter.o axis_utils.o surface_flux.o atmos_ocean_fluxes.o platform.o donner_meso_k.o amip_interp.o mpp_domains.o land_types.o longwave_driver.o diffusivity.o drifters.o rdparm.o tridiagonal.o mo_exp_slv.o diag_cloud.o mpp_io.o sat_vapor_pres_k.o sat_vapor_pres_params.o standalone_clouds.o vegetation.o fv_physics.o monin_obukhov.o dyn_core.o coupler_main.o sea_esf_rad.o fm_util.o fft.o ozone.o moist_conv.o fv_pack.o diag_data.o lhsw_driver.o get_cal_time.o vert_diff_driver.o constants.o my25_turb.o radiation_driver.o shallow_conv.o vert_turb_driver.o column_diagnostics.o donner_cloud_model_k.o nsclock.o mcm_lw.o atmos_tracer_utilities.o physics_driver.o donner_deep_clouds_W.o fft99.o atmos_nudge.o moist_processes.o donner_lscloud_k.o shortwave_driver.o diag_integral.o ocean_rough.o cu_mo_trans.o diag_clouds_W.o mo_photo.o gmean.o pmaxmin.o mpp_data.o rad_diag.o zetac_clouds_W.o radiation_diag.o mo_imp_slv.o tp_core.o moz.subs.o init_sw_ic.o data_override.o mo_setsox.o soil.o donner_types.o tracer_manager.o atmos_sea_salt.o timingModule.o mg_drag.o mcm_swnew.o strat_cloud.o mo_chem_utls.o isccp_clouds.o mo_rodas_slv.o longwave_fluxes.o hconst.o mcm_swtbls.o shallow_physics.o aerosol.o threadloc.o astronomy.o donner_cape_k.o vert_diff.o mo_chemini.o atmos_convection_tracer.o betaDistribution.o 
cloud_generator.o station_data.o fs_profile.o pv_module.o cloud_rad.o aerosolrad_package.o ice_albedo.o cloudrad_package.o mpp_pset.o fms.o longwave.o mcm_sw_driver.o diag_output.o ecmfft.o drifters_comm.o optical_path.o bulkphys_rad.o rad_utilities.o microphys_rad.o m_tracname.o cg_drag.o fv_dynamics.o par_vecsum.o time_manager.o rh_based_clouds.o cloudrad_diagnostics.o sealw99.o vert_advection.o longwave_tables.o atmos_ch3i.o fv_arrays.o fill_module.o atmos_model.o stable_bl_turb.o donner_deep.o clouds.o donner_rad_k.o horiz_interp_spherical.o original_fms_rad.o damping_driver.o fv_restart.o strat_clouds_W.o topography.o mgrp_prscr_clds.o land_model.o monin_obukhov_kernel.o strat_chem_model.o co2int.o horiz_interp.o esfsw_driver.o moz.mods.o esfsw_parameters.o gas_tf.o dry_adj.o fv_diagnostics.o topo_drag.o ocean_tpm.o atmos_sulfur_hex.o upper.o drifters_input.o entrain.o set_eta.o horiz_interp_bilinear.o cloud_zonal.o drifters_io.o moz.mat.o shr_kind_mod.o radiative_gases.o pft_module.o longwave_clouds.o donner_deep_k.o longwave_params.o mapz_module.o flux_exchange.o tropchem_driver.o microphys_cloud.o sat_vapor_pres.o mo_hook.o rh_clouds.o field_manager.o interpolator.o atmos_radon.o lscale_cond.o cloud_obs.o age_of_air.o aer_ccn_act.o fsrad.o atmos_co2.o atmos_carbon_aerosol.o ras.o lw_gases_stdtf.o mpp.o mod_comm.o atmos_sulfate.o co2_data.o getmax.o MersenneTwister.o horiz_interp_type.o drifters_core.o -o fms.x -L/usr/local/lib -lnetcdf -v
This is tau, opacity or omega; Using /sb4/edyi/softs/cshrcOnT

/usr/bin/ld /usr/lib64/crt1.o /usr/lib64/crti.o /usr/local/pgi725/linux86-64/7.2-5/lib/trace_init.o /usr/lib/gcc/x86_64-redhat-linux/4.1.2/crtbegin.o /usr/local/pgi725/linux86-64/7.2-5/lib/f90main.o -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 /usr/local/pgi725/linux86-64/7.2-5/lib/pgi.ld -L/usr/local/lib -L/usr/local/pgi725/linux86-64/7.2-5/lib -L/usr/lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 update_fv_phys.o cumulus_closure_k.o mo_chemdr.o atmos_soa.o rad_output_file.o sw_core.o specified_clouds_W.o init_dry_atm.o atmosphere.o fms_io.o edt.o time_interp_external.o climap_albedo.o mo_usrrxt.o atmos_tracer_driver.o ocean_model.o coupler_types.o cloud_interpolator.o diag_util.o land_properties.o ocean_albedo.o xgrid.o rivers.o time_interp.o atmos_dust.o random_numbers.o memuse.o mo_jpl.o memutils.o horiz_interp_bicubic.o gaussian_topog.o donner_utilities_k.o numerics.o diag_axis.o diag_manager.o strat_chem_driver.o mo_read_sim_chm.o cloud_spec.o mo_setinv.o tracer_2d.o shortwave.o horiz_interp_conserve.o ice_model.o quicksort.o diag_cloud_rad.o mpp_parameter.o axis_utils.o surface_flux.o atmos_ocean_fluxes.o platform.o donner_meso_k.o amip_interp.o mpp_domains.o land_types.o longwave_driver.o diffusivity.o drifters.o rdparm.o tridiagonal.o mo_exp_slv.o diag_cloud.o mpp_io.o sat_vapor_pres_k.o sat_vapor_pres_params.o standalone_clouds.o vegetation.o fv_physics.o monin_obukhov.o dyn_core.o coupler_main.o sea_esf_rad.o fm_util.o fft.o ozone.o moist_conv.o fv_pack.o diag_data.o lhsw_driver.o get_cal_time.o vert_diff_driver.o constants.o my25_turb.o radiation_driver.o shallow_conv.o vert_turb_driver.o column_diagnostics.o donner_cloud_model_k.o nsclock.o mcm_lw.o atmos_tracer_utilities.o physics_driver.o donner_deep_clouds_W.o fft99.o atmos_nudge.o moist_processes.o donner_lscloud_k.o shortwave_driver.o diag_integral.o ocean_rough.o cu_mo_trans.o diag_clouds_W.o mo_photo.o gmean.o pmaxmin.o mpp_data.o rad_diag.o zetac_clouds_W.o radiation_diag.o 
mo_imp_slv.o tp_core.o moz.subs.o init_sw_ic.o data_override.o mo_setsox.o soil.o donner_types.o tracer_manager.o atmos_sea_salt.o timingModule.o mg_drag.o mcm_swnew.o strat_cloud.o mo_chem_utls.o isccp_clouds.o mo_rodas_slv.o longwave_fluxes.o hconst.o mcm_swtbls.o shallow_physics.o aerosol.o threadloc.o astronomy.o donner_cape_k.o vert_diff.o mo_chemini.o atmos_convection_tracer.o betaDistribution.o cloud_generator.o station_data.o fs_profile.o pv_module.o cloud_rad.o aerosolrad_package.o ice_albedo.o cloudrad_package.o mpp_pset.o fms.o longwave.o mcm_sw_driver.o diag_output.o ecmfft.o drifters_comm.o optical_path.o bulkphys_rad.o rad_utilities.o microphys_rad.o m_tracname.o cg_drag.o fv_dynamics.o par_vecsum.o time_manager.o rh_based_clouds.o cloudrad_diagnostics.o sealw99.o vert_advection.o longwave_tables.o atmos_ch3i.o fv_arrays.o fill_module.o atmos_model.o stable_bl_turb.o donner_deep.o clouds.o donner_rad_k.o horiz_interp_spherical.o original_fms_rad.o damping_driver.o fv_restart.o strat_clouds_W.o topography.o mgrp_prscr_clds.o land_model.o monin_obukhov_kernel.o strat_chem_model.o co2int.o horiz_interp.o esfsw_driver.o moz.mods.o esfsw_parameters.o gas_tf.o dry_adj.o fv_diagnostics.o topo_drag.o ocean_tpm.o atmos_sulfur_hex.o upper.o drifters_input.o entrain.o set_eta.o horiz_interp_bilinear.o cloud_zonal.o drifters_io.o moz.mat.o shr_kind_mod.o radiative_gases.o pft_module.o longwave_clouds.o donner_deep_k.o longwave_params.o mapz_module.o flux_exchange.o tropchem_driver.o microphys_cloud.o sat_vapor_pres.o mo_hook.o rh_clouds.o field_manager.o interpolator.o atmos_radon.o lscale_cond.o cloud_obs.o age_of_air.o aer_ccn_act.o fsrad.o atmos_co2.o atmos_carbon_aerosol.o ras.o lw_gases_stdtf.o mpp.o mod_comm.o atmos_sulfate.o co2_data.o getmax.o MersenneTwister.o horiz_interp_type.o drifters_core.o -lnetcdf -rpath /usr/local/pgi725/linux86-64/7.2-5/lib -o fms.x -lpgf90 -lpgf90_rpm1 -lpgf902 -lpgf90rtl -lpgftnrtl -lnspgc -lpgc -lrt -lpthread -lm -lgcc -lc 
-lgcc /usr/lib/gcc/x86_64-redhat-linux/4.1.2/crtend.o /usr/lib64/crtn.o

Hi Eddy,

Now way #1 works: it compiles and generates the executable. Way #2 also works fine.

Great.

By the way, another minor problem is that sometimes the compiler complains "Can't find include file mpif.h" (or some other .h file). Usually I find the path of the file and link it (ln -s) manually into the directory of the source .f90 code. Is there an easier way to fix this?

Add "-I" to your compilation, use the mpif90 driver, or add "-Mmpi=mpich". "-I" tells the compiler where the header files are located when they are not in the same directory as the source file. mpif90 and "-Mmpi" add the "-I" for you.
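For the mpif.h case specifically, the equivalent Makefile change might be (again assuming the MPICH path from earlier in the thread; adjust to your site's layout):

```make
# Let the compiler find mpif.h without symlinking it next to the source.
MPI_DIR = /usr/local/pgi725/linux86-64/7.2/mpi/mpich
FFLAGS += -I$(MPI_DIR)/include
```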

#3, with “LDFLAGS = -v” shows:

The "-Mmpi=mpich" flag needs to be added to both the compilation and the link. "LDFLAGS = -v -Mmpi=mpich" should work.

  • Mat

The "-Mmpi=mpich" flag needs to be added to both the compilation and the link. "LDFLAGS = -v -Mmpi=mpich" should work.

Sure, it works. Way #3 is OK now too.
Thank you very much for your help about this!

  • Eddy

Well, although the compilation is successful, there is a run-time error related to MPI.

==================
mpirun -np 1 fms.x
PGFIO-F-235/formatted write/unit=11/edit descriptor does not match item type.
File name = logfile.0000.out formatted, sequential access record = 29
In source file /home/user01/memphis_am2/am2/src/shared/mpp/mpp.F90, at line number 88

But line 88 of mpp.F90 is simply a comment line. I think this problem is related to a 'write' statement in mpp.F90, but the only two 'write' statements in this program seem to be fine (see below). So I am not sure whether this is due to the compiler or to the program itself. Thanks!

====The two write statements====

if( pe.EQ.root )write( stdout(),'(/a,i8,f13.6,f8.2)' )'MPP_TRANSMIT length, time, bw(Mb/s)=', l, dt, l*8e-6/dt

if( pe.EQ.root )write( stdout(),'(a,2i4,f9.1,i8,f13.6,f8.2/)' ) &
'mpp_sum: pe, npes, sum(pe+1), length, time, bw(Mb/s)=', pe, npes, a(1), n, dt, n*8e-6/dt

Hi Eddy,

I don't see anything obvious. Maybe the line is being truncated because it's over 72 columns? Try adding "-Mextend" to your compilation to extend the limit to 132 columns, or break the first write into two lines.

  • Mat

I suppose this problem may have nothing to do with the 'write' statements mentioned above, because when I remove those two lines from mpp.F90, recompile, and run, the same error still comes up. Sadly, the compiler does not report correctly which line is at fault. Is it possible that 'line 88' refers to a subroutine? I found that the file name "logfile.0000.out" is used in a subroutine called from mpp.F90.

!-------------- mpp.F90 code ---------------------------------------------
!                 Communication for message-passing codes
!
! AUTHOR: V. Balaji (V.Balaji@noaa.gov)
!         SGI/GFDL Princeton University
!
! This program is free software; you can redistribute it and/or modify
! it under the terms of the GNU General Public License as published by
! the Free Software Foundation; either version 2 of the License, or
! (at your option) any later version.
!
! This program is distributed in the hope that it will be useful,
! but WITHOUT ANY WARRANTY; without even the implied warranty of
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
! GNU General Public License for more details.
!
! For the full text of the GNU General Public License,
! write to: Free Software Foundation, Inc.,
!           675 Mass Ave, Cambridge, MA 02139, USA.  
!-----------------------------------------------------------------------
module mpp_mod
!a generalized communication package for use with shmem and MPI
!will add: co_array_fortran, MPI2
!Balaji (V.Balaji@noaa.gov) 11 May 1998

! <CONTACT EMAIL="V.Balaji@noaa.gov">
!   V. Balaji
! </CONTACT>

! <HISTORY SRC="http://www.gfdl.noaa.gov/fms-cgi-bin/cvsweb.cgi/FMS/"/>
! <RCSLOG SRC="http://www.gfdl.noaa.gov/~vb/changes_mpp.html"/>

! <OVERVIEW>
!   <TT>mpp_mod</TT>, is a set of simple calls to provide a uniform interface
!   to different message-passing libraries. It currently can be
!   implemented either in the SGI/Cray native SHMEM library or in the MPI
!   standard. Other libraries (e.g MPI-2, Co-Array Fortran) can be
!   incorporated as the need arises.
! </OVERVIEW>

! <DESCRIPTION>
!   The data transfer between a processor and its own memory is based
!   on <TT>load</TT> and <TT>store</TT> operations upon
!   memory. Shared-memory systems (including distributed shared memory
!   systems) have a single address space and any processor can acquire any
!   data within the memory by <TT>load</TT> and
!   <TT>store</TT>. The situation is different for distributed
!   parallel systems. Specialized MPP systems such as the T3E can simulate
!   shared-memory by direct data acquisition from remote memory. But if
!   the parallel code is distributed across a cluster, or across the Net,
!   messages must be sent and received using the protocols for
!   long-distance communication, such as TCP/IP. This requires a
!   ``handshaking'' between nodes of the distributed system. One can think
!   of the two different methods as involving <TT>put</TT>s or
!   <TT>get</TT>s (e.g the SHMEM library), or in the case of
!   negotiated communication (e.g MPI), <TT>send</TT>s and
!   <TT>recv</TT>s.
!   
!   The difference between SHMEM and MPI is that SHMEM uses one-sided
!   communication, which can have very low-latency high-bandwidth
!   implementations on tightly coupled systems. MPI is a standard
!   developed for distributed computing across loosely-coupled systems,
!   and therefore incurs a software penalty for negotiating the
!   communication. It is however an open industry standard whereas SHMEM
!   is a proprietary interface. Besides, the <TT>put</TT>s or
!   <TT>get</TT>s on which it is based cannot currently be implemented in
!   a cluster environment (there are recent announcements from Compaq that
!   occasion hope).
!   
!   The message-passing requirements of climate and weather codes can be
!   reduced to a fairly simple minimal set, which is easily implemented in
!   any message-passing API. <TT>mpp_mod</TT> provides this API.
!
!    Features of <TT>mpp_mod</TT> include:
!   
!    1) Simple, minimal API, with free access to underlying API for
!       more complicated stuff.<BR/>
!    2) Design toward typical use in climate/weather CFD codes.<BR/>
!    3) Performance to be not significantly lower than any native API.
!   
!   This module is used to develop higher-level calls for <LINK 
!   SRC="mpp_domains.html">domain decomposition</LINK> and <LINK
!   SRC="mpp_io.html">parallel I/O</LINK>.
!   
!   Parallel computing is initially daunting, but it soon becomes
!   second nature, much the way many of us can now write vector code
!   without much effort. The key insight required while reading and
!   writing parallel code is in arriving at a mental grasp of several
!   independent parallel execution streams through the same code (the SPMD
!   model). Each variable you examine may have different values for each
!   stream, the processor ID being an obvious example. Subroutines and
!   function calls are particularly subtle, since it is not always obvious
!   from looking at a call what synchronization between execution streams
!   it implies. An example of erroneous code would be a global barrier
!   call (see <LINK SRC="#mpp_sync">mpp_sync</LINK> below) placed
!   within a code block that not all PEs will execute, e.g:
!   
!   <PRE>
!   if( pe.EQ.0 )call mpp_sync()
!   </PRE>
!   
!   Here only PE 0 reaches the barrier, where it will wait
!   indefinitely. While this is a particularly egregious example to
!   illustrate the coding flaw, more subtle versions of the same are
!   among the most common errors in parallel code.
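!   
!   A safe version of the above (a sketch; <TT>do_serial_work</TT> is a
!   hypothetical placeholder for whatever task PE 0 alone must perform)
!   keeps the barrier outside the conditional, so that every PE reaches it:
!   
!   <PRE>
!   if( pe.EQ.0 )call do_serial_work()   !hypothetical serial task on PE 0
!   call mpp_sync()                      !all PEs reach the barrier
!   </PRE>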
!   
!   It is therefore important to be conscious of the context of a
!   subroutine or function call, and the implied synchronization. There
!   are certain calls here (e.g. <TT>mpp_declare_pelist, mpp_init,
!   mpp_malloc, mpp_set_stack_size</TT>) which must be called by all
!   PEs. Others are collective across a subset of PEs (here called a
!   <TT>pelist</TT>) and must be called by every PE in that
!   <TT>pelist</TT> (e.g. <TT>mpp_max, mpp_sum, mpp_sync</TT>). Still
!   others imply no synchronization at all. I will make every effort to
!   highlight the context of each call in the MPP modules, so that the
!   implicit synchronization is spelled out.  
!   
!   For performance it is necessary to keep synchronization as limited
!   as the algorithm being implemented will allow. For instance, a single
!   message between two PEs should only imply synchronization across the
!   PEs in question. A <I>global</I> synchronization (or <I>barrier</I>)
!   is likely to be slow, and is best avoided. But codes first
!   parallelized on a Cray T3E tend to have many global syncs, as very
!   fast barriers were implemented there in hardware.
!   
!   Another reason to use pelists is to run a single program in MPMD
!   mode, where different PE subsets work on different portions of the
!   code. A typical example is to assign an ocean model and atmosphere
!   model to different PE subsets, and couple them concurrently instead of
!   running them serially. The MPP module provides the notion of a
!   <I>current pelist</I>, which is set when a group of PEs branch off
!   into a subset. Subsequent calls that omit the <TT>pelist</TT> optional
!   argument (seen below in many of the individual calls) assume that the
!   implied synchronization is across the current pelist. The calls
!   <TT>mpp_root_pe</TT> and <TT>mpp_npes</TT> also return the values
!   appropriate to the current pelist. The <TT>mpp_set_current_pelist</TT>
!   call is provided to set the current pelist.
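!   
!   As an illustrative sketch (the routine names are those documented in
!   this module; the pelist contents are arbitrary), a coupled MPMD setup
!   on four PEs might look like:
!   
!   <PRE>
!   integer :: ocean_pes(2) = (/ 0, 1 /), atmos_pes(2) = (/ 2, 3 /)
!   call mpp_declare_pelist( ocean_pes )     !must be called by all PEs
!   call mpp_declare_pelist( atmos_pes )     !must be called by all PEs
!   if( ANY(mpp_pe().EQ.ocean_pes) )then
!       call mpp_set_current_pelist( ocean_pes )
!       !ocean code: collectives now synchronize only the ocean PEs
!   else
!       call mpp_set_current_pelist( atmos_pes )
!       !atmosphere code: collectives synchronize only the atmosphere PEs
!   end if
!   </PRE>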

! </DESCRIPTION>
! <PUBLIC>
!  F90 is a strictly-typed language, and the syntax pass of the
!  compiler requires matching of type, kind and rank (TKR). Most calls
!  listed here use a generic type, shown here as <TT>MPP_TYPE_</TT>. This
!  is resolved in the pre-processor stage to any of a variety of
!  types. In general the MPP operations work on 4-byte and 8-byte
!  variants of <TT>integer, real, complex, logical</TT> variables, of
!  rank 0 to 5, leading to 48 specific module procedures under the same
!  generic interface. Any of the variables below shown as
!  <TT>MPP_TYPE_</TT> is treated in this way.
! </PUBLIC>

#include <fms_platform.h>

#if defined(use_libSMA) && defined(sgi_mipspro)
  use shmem_interface
#endif

#if defined(use_libMPI) && defined(sgi_mipspro)
  use mpi
#endif

  use mpp_parameter_mod, only : MPP_VERBOSE, MPP_DEBUG, ALL_PES, ANY_PE, NULL_PE
  use mpp_parameter_mod, only : NOTE, WARNING, FATAL, MPP_CLOCK_DETAILED, MPP_CLOCK_SYNC
  use mpp_parameter_mod, only : CLOCK_COMPONENT, CLOCK_SUBCOMPONENT, CLOCK_MODULE_DRIVER
  use mpp_parameter_mod, only : CLOCK_MODULE, CLOCK_ROUTINE, CLOCK_LOOP, CLOCK_INFRA
  use mpp_parameter_mod, only : MAX_EVENTS, MAX_BINS, MAX_EVENT_TYPES, PESET_MAX, MAX_CLOCKS
  use mpp_parameter_mod, only : MAXPES, EVENT_WAIT, EVENT_ALLREDUCE, EVENT_BROADCAST
  use mpp_parameter_mod, only : EVENT_RECV, EVENT_SEND, MPP_READY, MPP_WAIT
  use mpp_parameter_mod, only : mpp_parameter_version=>version, mpp_parameter_tagname=>tagname
  use mpp_data_mod,      only : stat, mpp_stack, ptr_stack, status, ptr_status, sync, ptr_sync
  use mpp_data_mod,      only : mpp_from_pe, ptr_from, remote_data_loc, ptr_remote
  use mpp_data_mod,      only : mpp_data_version=>version, mpp_data_tagname=>tagname

implicit none
private

#if defined(use_libSMA) || defined(use_GSM)
#include <mpp/shmem.fh>
#endif

#if defined(use_libMPI) && !defined(sgi_mipspro)
#include <mpif.h>   
!sgi_mipspro gets this from 'use mpi'
#endif

  !--- public parameters  -----------------------------------------------
  public :: MPP_VERBOSE, MPP_DEBUG, ALL_PES, ANY_PE, NULL_PE, NOTE, WARNING, FATAL
  public :: MPP_CLOCK_SYNC, MPP_CLOCK_DETAILED, CLOCK_COMPONENT, CLOCK_SUBCOMPONENT
  public :: CLOCK_MODULE_DRIVER, CLOCK_MODULE, CLOCK_ROUTINE, CLOCK_LOOP, CLOCK_INFRA
  public :: MAXPES

  !--- public data from mpp_data_mod ------------------------------
  public :: request

  !--- public interface from mpp_util.h ------------------------------
  public :: stdin, stdout, stderr, stdlog, lowercase, uppercase, mpp_error, mpp_error_state
  public :: mpp_set_warn_level, mpp_sync, mpp_sync_self, mpp_set_stack_size, mpp_pe
  public :: mpp_node, mpp_npes, mpp_root_pe, mpp_set_root_pe, mpp_declare_pelist
  public :: mpp_get_current_pelist, mpp_set_current_pelist, mpp_clock_begin, mpp_clock_end
  public :: mpp_clock_id, mpp_clock_set_grain, mpp_record_timing_data

  !--- public interface from mpp_comm.h ------------------------------
  public :: mpp_chksum, mpp_max, mpp_min, mpp_sum, mpp_transmit, mpp_send, mpp_recv
  public :: mpp_broadcast, mpp_malloc, mpp_init, mpp_exit
#ifdef use_MPI_GSM
  public :: mpp_gsm_malloc, mpp_gsm_free
#endif

  !*********************************************************************
  !
  !    public data type
  !
  !*********************************************************************
  !peset holds communicators as SHMEM-compatible triads (start, log2(stride), num)
  type :: communicator
     private
     character(len=32) :: name
     integer, pointer  :: list(:) =>NULL()
     integer           :: count
     integer           :: start, log2stride ! dummy variables when libMPI is defined.
     integer           :: id, group         ! MPI communicator and group id for this PE set.
                                            ! dummy variables when libSMA is defined.
  end type communicator

  type :: event
     private
     character(len=16)                         :: name
     integer(LONG_KIND), dimension(MAX_EVENTS) :: ticks, bytes
     integer                                   :: calls
  end type event

  !a clock contains an array of event profiles for a region
  type :: clock
     private
     character(len=32)    :: name
     integer(LONG_KIND)   :: tick
     integer(LONG_KIND)   :: total_ticks
     integer              :: peset_num
     logical              :: sync_on_begin, detailed
     integer              :: grain
     type(event), pointer :: events(:) =>NULL() !if needed, allocate to MAX_EVENT_TYPES
  end type clock

  type :: Clock_Data_Summary
     private
     character(len=16)  :: name
     real(DOUBLE_KIND)  :: msg_size_sums(MAX_BINS)
     real(DOUBLE_KIND)  :: msg_time_sums(MAX_BINS)
     real(DOUBLE_KIND)  :: total_data
     real(DOUBLE_KIND)  :: total_time
     integer(LONG_KIND) :: msg_size_cnts(MAX_BINS)
     integer(LONG_KIND) :: total_cnts
  end type Clock_Data_Summary

  type :: Summary_Struct
     private
     character(len=16)         :: name
     type (Clock_Data_Summary) :: event(MAX_EVENT_TYPES)
  end type Summary_Struct

!***********************************************************************
!
!     public interface from mpp_util.h
!
!***********************************************************************
  ! <INTERFACE NAME="mpp_error">
  !  <OVERVIEW>
  !    Error handler.
  !  </OVERVIEW>
  !  <DESCRIPTION>
  !    It is strongly recommended that all error exits pass through
  !    <TT>mpp_error</TT> to assure the program fails cleanly. An individual
  !    PE encountering a <TT>STOP</TT> statement, for instance, can cause the
  !    program to hang. The use of the <TT>STOP</TT> statement is strongly
  !    discouraged.
  !    
  !    Calling mpp_error with no arguments produces an immediate error
  !    exit, i.e:
  !    <PRE>
  !    call mpp_error
  !    call mpp_error(FATAL)
  !    </PRE>
  !    are equivalent.
  !    
  !    The argument order
  !    <PRE>
  !    call mpp_error( routine, errormsg, errortype )
  !    </PRE>
  !    is also provided to support legacy code. In this version of the
  !    call, none of the arguments may be omitted.
  !    
  !    The behaviour of <TT>mpp_error</TT> for a <TT>WARNING</TT> can be
  !    controlled with an additional call <TT>mpp_set_warn_level</TT>.
  !    <PRE>
  !    call mpp_set_warn_level(ERROR)
  !    </PRE>
  !    causes <TT>mpp_error</TT> to treat <TT>WARNING</TT>
  !    exactly like <TT>FATAL</TT>.
  !    <PRE>
  !    call mpp_set_warn_level(WARNING)
  !    </PRE>
  !    resets to the default behaviour described above.
  !    
  !    <TT>mpp_error</TT> also has an internal error state which
  !    maintains knowledge of whether a warning has been issued. This can be
  !    used at startup in a subroutine that checks if the model has been
  !    properly configured. You can generate a series of warnings using
  !    <TT>mpp_error</TT>, and then check at the end whether any warnings have
  !    been issued using the function <TT>mpp_error_state()</TT>. If the value of
  !    this is <TT>WARNING</TT>, at least one warning has been issued, and
  !    the user can take appropriate action:
  !    
  !    <PRE>
  !    if( ... )call mpp_error( WARNING, '...' )
  !    if( ... )call mpp_error( WARNING, '...' )
  !    if( ... )call mpp_error( WARNING, '...' )
  !    ...
  !    if( mpp_error_state().EQ.WARNING )call mpp_error( FATAL, '...' )
  !    </PRE>
  !  </DESCRIPTION>
  !  <TEMPLATE>
  !    call mpp_error( errortype, routine, errormsg )
  !  </TEMPLATE>
  !  <IN NAME="errortype">
  !    One of <TT>NOTE</TT>, <TT>WARNING</TT> or <TT>FATAL</TT> 
  !    (these definitions are acquired by use association).
  !    <TT>NOTE</TT> writes <TT>errormsg</TT> to <TT>STDOUT</TT>. 
  !    <TT>WARNING</TT> writes <TT>errormsg</TT> to <TT>STDERR</TT>.
  !    <TT>FATAL</TT> writes <TT>errormsg</TT> to <TT>STDERR</TT>,
  !    and induces a clean error exit with a call stack traceback.
  !  </IN>
  ! </INTERFACE>
  interface mpp_error
     module procedure mpp_error_basic
     module procedure mpp_error_mesg
     module procedure mpp_error_noargs
  end interface

!***********************************************************************
!
!    public interface from mpp_comm.h
!
!***********************************************************************
#ifdef use_libSMA
  !currently SMA contains no generic shmem_wait for different integer kinds:
  !I have inserted one here
  interface shmem_integer_wait
     module procedure shmem_int4_wait_local
     module procedure shmem_int8_wait_local
  end interface
#endif

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  !                                                                             !
  !       ROUTINES TO INITIALIZE/FINALIZE MPP MODULE: mpp_init, mpp_exit        !
  !                                                                             !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  ! <SUBROUTINE NAME="mpp_init">
  !  <OVERVIEW>
  !   Initialize <TT>mpp_mod</TT>.
  !  </OVERVIEW>
  !  <DESCRIPTION>
  !   Called to initialize the <TT>mpp_mod</TT> package. It is recommended
  !   that this call be the first executed line in your program. It sets the
  !   number of PEs assigned to this run (acquired from the command line, or
  !   through the environment variable <TT>NPES</TT>), and associates an ID
  !   number to each PE. These can be accessed by calling <LINK
  !   SRC="#mpp_npes"><TT>mpp_npes</TT></LINK> and <LINK
  !   SRC="#mpp_pe"><TT>mpp_pe</TT></LINK>.
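  !
  !   A minimal usage sketch (the module and routine names are as
  !   documented here):
  !   <PRE>
  !   program main
  !     use mpp_mod, only: mpp_init, mpp_pe, mpp_npes, mpp_exit
  !     call mpp_init()
  !     print *, 'PE ', mpp_pe(), ' of ', mpp_npes()
  !     call mpp_exit()
  !   end program
  !   </PRE>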
  !  </DESCRIPTION>
  !  <TEMPLATE>
  !   call mpp_init( flags )
  !  </TEMPLATE>
  !  <IN NAME="flags" TYPE="integer">
  !   <TT>flags</TT> can be set to <TT>MPP_VERBOSE</TT> to
  !   have <TT>mpp_mod</TT> keep you informed of what it's up to.
  !  </IN>
  ! </SUBROUTINE>

  ! <SUBROUTINE NAME="mpp_exit">
  !  <OVERVIEW>
  !   Exit <TT>mpp_mod</TT>.
  !  </OVERVIEW>
  !  <DESCRIPTION>
  !   Called at the end of the run, or to re-initialize <TT>mpp_mod</TT>,
  !   should you require that for some odd reason.
  !
  !   This call implies synchronization across all PEs.
  !  </DESCRIPTION>
  !  <TEMPLATE>
  !   call mpp_exit()
  !  </TEMPLATE>
  ! </SUBROUTINE>

  !#######################################################################
  ! <SUBROUTINE NAME="mpp_malloc">
  !  <OVERVIEW>
  !    Symmetric memory allocation.
  !  </OVERVIEW>
  !  <DESCRIPTION>
  !    This routine is used on SGI systems when <TT>mpp_mod</TT> is
  !    invoked in the SHMEM library. It ensures that dynamically allocated
  !    memory can be used with <TT>shmem_get</TT> and
  !    <TT>shmem_put</TT>. This is called <I>symmetric
  !    allocation</I> and is described in the
  !    <TT>intro_shmem</TT> man page. <TT>ptr</TT> is a <I>Cray
  !    pointer</I> (see the section on <LINK
  !    SRC="#PORTABILITY">portability</LINK>).  The operation can be expensive
  !    (since it requires a global barrier). We therefore attempt to re-use
  !    existing allocation whenever possible. Therefore <TT>len</TT>
  !    and <TT>ptr</TT> must have the <TT>SAVE</TT> attribute
  !    in the calling routine, and retain the information about the last call
  !    to <TT>mpp_malloc</TT>. Additional memory is symmetrically
  !    allocated if and only if <TT>newlen</TT> exceeds
  !    <TT>len</TT>.
  !
  !    This is never required on Cray PVP or MPP systems. While the T3E
  !    manpages do talk about symmetric allocation, <TT>mpp_mod</TT>
  !    is coded to remove this restriction.
  !
  !    It is never required if <TT>mpp_mod</TT> is invoked in MPI.
  !
  !   This call implies synchronization across all PEs.
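  !
  !    A sketch of the intended calling pattern (note the <TT>SAVE</TT>
  !    attribute, which preserves <TT>ptr</TT> and <TT>len</TT> between
  !    calls; <TT>newlen</TT> is whatever length the caller currently needs):
  !    <PRE>
  !    real    :: dummy(1)
  !    pointer( ptr, dummy )                 !Cray pointer
  !    integer :: len = 0                    !0 means unallocated
  !    save ptr, len
  !    ...
  !    call mpp_malloc( ptr, newlen, len )   !reallocates only if newlen > len
  !    </PRE>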
  !  </DESCRIPTION>
  !  <TEMPLATE>
  !   call mpp_malloc( ptr, newlen, len )
  !  </TEMPLATE>
  !  <IN NAME="ptr">
  !     a Cray pointer; points to a dummy argument in this routine.
  !  </IN>
  !  <IN NAME="newlen" TYPE="integer">
  !     the required allocation length for the pointer ptr
  !  </IN>
  !  <IN NAME="len" TYPE="integer">
  !     the current allocation (0 if unallocated).
  !  </IN>
  ! </SUBROUTINE>

  !#####################################################################

  ! <SUBROUTINE NAME="mpp_set_stack_size">
  !  <OVERVIEW>
  !    Allocate module internal workspace.
  !  </OVERVIEW>
  !  <DESCRIPTION>
  !    <TT>mpp_mod</TT> maintains a private internal array called
  !    <TT>mpp_stack</TT> for private workspace. This call sets the length,
  !    in words, of this array. 
  !
  !    The <TT>mpp_init</TT> call sets this
  !    workspace length to a default of 32768, and this call may be used if a
  !    longer workspace is needed.
  !    
  !    This call implies synchronization across all PEs.
  !    
  !    This workspace is symmetrically allocated, as required for
  !    efficient communication on SGI and Cray MPP systems. Since symmetric
  !    allocation must be performed by <I>all</I> PEs in a job, this call
  !    must also be called by all PEs, using the same value of
  !    <TT>n</TT>. Calling <TT>mpp_set_stack_size</TT> from a subset of PEs,
  !    or with unequal argument <TT>n</TT>, may cause the program to hang.
  !    
  !    If any MPP call using <TT>mpp_stack</TT> overflows the declared
  !    stack array, the program will abort with a message specifying the
  !    stack length that is required. Many users wonder why, if the required
  !    stack length can be computed, it cannot also be specified at that
  !    point. This cannot be automated because there is no way for the
  !    program to know if all PEs are present at that call, and with equal
  !    values of <TT>n</TT>. The program must be rerun by the user with the
  !    correct argument to <TT>mpp_set_stack_size</TT>, called at an
  !    appropriate point in the code where all PEs are known to be present.
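  !    
  !    For example, to enlarge the workspace beyond the 32768-word default
  !    (the value here is illustrative), every PE would execute:
  !    <PRE>
  !    call mpp_set_stack_size(1000000)   !same n on all PEs
  !    </PRE>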
  !  </DESCRIPTION>
  !  <TEMPLATE>
  !    call mpp_set_stack_size(n)
  !  </TEMPLATE>
  !  <IN NAME="n" TYPE="integer"></IN>
  ! </SUBROUTINE>

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  !                                                                             !
  !            GLOBAL REDUCTION ROUTINES: mpp_max, mpp_sum, mpp_min             !
  !                                                                             !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  ! <INTERFACE NAME="mpp_max">
  !  <OVERVIEW>
  !    Reduction operations.
  !  </OVERVIEW>
  !  <DESCRIPTION>
  !    Find the maximum of scalar <TT>a</TT> over the PEs in <TT>pelist</TT>.
  !    The result is automatically broadcast to all participating PEs.
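  !
  !    For example (a sketch, assuming the current pelist spans all PEs in
  !    the job, with IDs 0 through npes-1):
  !    <PRE>
  !    real :: a
  !    a = real( mpp_pe() )
  !    call mpp_max( a )   !on return, a = real(mpp_npes()-1) on every PE
  !    </PRE>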
  !  </DESCRIPTION>
  !  <TEMPLATE>
  !    call  mpp_max( a, pelist )
  !  </TEMPLATE>
  !  <IN NAME="a">
  !    <TT>real</TT> or <TT>integer</TT>, of 4-byte or 8-byte kind.
  !  </IN>
  !  <IN NAME="pelist">
  !    If <TT>pelist</TT> is omitted, the context is assumed to be the
  !    current pelist. This call implies synchronization across the PEs in
  !    <TT>pelist</TT>, or the current pelist if <TT>pelist</TT> is absent.
  !  </IN>
  ! </INTERFACE>

  interface mpp_max
     module procedure mpp_max_real8
#ifndef no_8byte_integers
     module procedure mpp_max_int8
#endif
#ifndef no_4byte_reals
     module procedure mpp_max_real4
#endif
     module procedure mpp_max_int4
  end interface

  interface mpp_min
     module procedure mpp_min_real8
#ifndef no_8byte_integers
     module procedure mpp_min_int8
#endif
#ifndef no_4byte_reals
     module procedure mpp_min_real4
#endif
     module procedure mpp_min_int4
  end interface


  ! <INTERFACE NAME="mpp_sum">
  !  <OVERVIEW>
  !    Reduction operation.
  !  </OVERVIEW>
  !  <DESCRIPTION>
  !    <TT>MPP_TYPE_</TT> corresponds to any 4-byte and 8-byte variant of
  !    <TT>integer, real, complex</TT> variables, of rank 0 or 1. A
  !    contiguous block from a multi-dimensional array may be passed by its
  !    starting address and its length, as in <TT>f77</TT>.
  !
  !    Library reduction operators are not required or guaranteed to be
  !    bit-reproducible. In any case, changing the processor count changes
  !    the data layout, and thus very likely the order of operations. For
  !    bit-reproducible sums of distributed arrays, consider using the
  !    <TT>mpp_global_sum</TT> routine provided by the <LINK
  !    SRC="mpp_domains.html"><TT>mpp_domains</TT></LINK> module.
  !
  !    The <TT>bit_reproducible</TT> flag provided in earlier versions of
  !    this routine has been removed.
  !
  !    If <TT>pelist</TT> is omitted, the context is assumed to be the
  !    current pelist. This call implies synchronization across the PEs in
  !    <TT>pelist</TT>, or the current pelist if <TT>pelist</TT> is absent.
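  !
  !    As a sketch, summing a rank-1 array element-wise across the current
  !    pelist:
  !    <PRE>
  !    real :: a(n)
  !    call mpp_sum( a, n )   !on return, every PE holds the sum over PEs
  !    </PRE>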
  !  </DESCRIPTION>
  !  <TEMPLATE>
  !    call mpp_sum( a, length, pelist )
  !  </TEMPLATE>
  !  <IN NAME="length"></IN>
  !  <IN NAME="pelist"></IN>
  !  <INOUT NAME="a"></INOUT>
  ! </INTERFACE>

  interface mpp_sum
#ifndef no_8byte_integers
     module procedure mpp_sum_int8
     module procedure mpp_sum_int8_scalar
     module procedure mpp_sum_int8_2d
     module procedure mpp_sum_int8_3d
     module procedure mpp_sum_int8_4d
     module procedure mpp_sum_int8_5d
#endif
     module procedure mpp_sum_real8
     module procedure mpp_sum_real8_scalar
     module procedure mpp_sum_real8_2d
     module procedure mpp_sum_real8_3d
     module procedure mpp_sum_real8_4d
     module procedure mpp_sum_real8_5d
     module procedure mpp_sum_cmplx8
     module procedure mpp_sum_cmplx8_scalar
     module procedure mpp_sum_cmplx8_2d
     module procedure mpp_sum_cmplx8_3d
     module procedure mpp_sum_cmplx8_4d
     module procedure mpp_sum_cmplx8_5d
     module procedure mpp_sum_int4
     module procedure mpp_sum_int4_scalar
     module procedure mpp_sum_int4_2d
     module procedure mpp_sum_int4_3d
     module procedure mpp_sum_int4_4d
     module procedure mpp_sum_int4_5d
#ifndef no_4byte_reals
     module procedure mpp_sum_real4
     module procedure mpp_sum_real4_scalar
     module procedure mpp_sum_real4_2d
     module procedure mpp_sum_real4_3d
     module procedure mpp_sum_real4_4d
     module procedure mpp_sum_real4_5d
     module procedure mpp_sum_cmplx4
     module procedure mpp_sum_cmplx4_scalar
     module procedure mpp_sum_cmplx4_2d
     module procedure mpp_sum_cmplx4_3d
     module procedure mpp_sum_cmplx4_4d
     module procedure mpp_sum_cmplx4_5d
#endif
  end interface

  !#####################################################################

  ! <INTERFACE NAME="mpp_transmit">
  !  <OVERVIEW>
  !    Basic message-passing call.
  !  </OVERVIEW>
  !  <DESCRIPTION>
  !    <TT>MPP_TYPE_</TT> corresponds to any 4-byte and 8-byte variant of
  !    <TT>integer, real, complex, logical</TT> variables, of rank 0 or 1. A
  !    contiguous block from a multi-dimensional array may be passed by its
  !    starting address and its length, as in <TT>f77</TT>.
  !    
  !    <TT>mpp_transmit</TT> is currently implemented as asynchronous
  !    outward transmission and synchronous inward transmission. This follows
  !    the behaviour of <TT>shmem_put</TT> and <TT>shmem_get</TT>. In MPI, it
  !    is implemented as <TT>mpi_isend</TT> and <TT>mpi_recv</TT>. For most
  !    applications, transmissions occur in pairs, and are here accomplished
  !    in a single call.
  !    
  !    The special PE designations <TT>NULL_PE</TT>,
  !    <TT>ANY_PE</TT> and <TT>ALL_PES</TT> are provided by use
  !    association.
  !    
  !    <TT>NULL_PE</TT>: is used to disable one of the pair of
  !    transmissions.<BR/>
  !    <TT>ANY_PE</TT>: is used for an unspecified remote
  !    destination. (Please note that <TT>put_pe=ANY_PE</TT> has no meaning
  !    in the MPI context, though it is available in the SHMEM invocation. If
  !    portability is a concern, it is best avoided).<BR/>
  !    <TT>ALL_PES</TT>: is used for broadcast operations.
  !    
  !    It is recommended that <LINK
  !    SRC="#mpp_broadcast"><TT>mpp_broadcast</TT></LINK> be used for
  !    broadcasts.
  !    
  !    The following example illustrates the use of
  !    <TT>NULL_PE</TT> and <TT>ALL_PES</TT>:
  !    
  !    <PRE>
  !    real, dimension(n) :: a
  !    if( pe.EQ.0 )then
  !        do p = 1,npes-1
  !           call mpp_transmit( a, n, p, a, n, NULL_PE )
  !        end do
  !    else
  !        call mpp_transmit( a, n, NULL_PE, a, n, 0 )
  !    end if
  !    
  !    call mpp_transmit( a, n, ALL_PES, a, n, 0 )
  !    </PRE>
  !    
  !    The do loop and the broadcast operation above are equivalent.
  !    
  !    Two overloaded calls <TT>mpp_send</TT> and
  !     <TT>mpp_recv</TT> have also been
  !    provided. <TT>mpp_send</TT> calls <TT>mpp_transmit</TT>
  !    with <TT>get_pe=NULL_PE</TT>. <TT>mpp_recv</TT> calls
  !    <TT>mpp_transmit</TT> with <TT>put_pe=NULL_PE</TT>. Thus
  !    the do loop above could be written more succinctly:
  !    
  !    <PRE>
  !    if( pe.EQ.0 )then
  !        do p = 1,npes-1
  !           call mpp_send( a, n, p )
  !        end do
  !    else
  !        call mpp_recv( a, n, 0 )
  !    end if
  !    </PRE>
  !  </DESCRIPTION>
  !  <TEMPLATE>
  !    call mpp_transmit( put_data, put_len, put_pe, get_data, get_len, get_pe )
  !  </TEMPLATE>
  ! </INTERFACE>
  interface mpp_transmit
     module procedure mpp_transmit_real8
     module procedure mpp_transmit_real8_scalar
     module procedure mpp_transmit_real8_2d
     module procedure mpp_transmit_real8_3d
     module procedure mpp_transmit_real8_4d
     module procedure mpp_transmit_real8_5d
     module procedure mpp_transmit_cmplx8
     module procedure mpp_transmit_cmplx8_scalar
     module procedure mpp_transmit_cmplx8_2d
     module procedure mpp_transmit_cmplx8_3d
     module procedure mpp_transmit_cmplx8_4d
     module procedure mpp_transmit_cmplx8_5d
#ifndef no_8byte_integers
     module procedure mpp_transmit_int8
     module procedure mpp_transmit_int8_scalar
     module procedure mpp_transmit_int8_2d
     module procedure mpp_transmit_int8_3d
     module procedure mpp_transmit_int8_4d
     module procedure mpp_transmit_int8_5d
     module procedure mpp_transmit_logical8
     module procedure mpp_transmit_logical8_scalar
     module procedure mpp_transmit_logical8_2d
     module procedure mpp_transmit_logical8_3d
     module procedure mpp_transmit_logical8_4d
     module procedure mpp_transmit_logical8_5d
#endif
#ifndef no_4byte_reals
     module procedure mpp_transmit_real4
     module procedure mpp_transmit_real4_scalar
     module procedure mpp_transmit_real4_2d
     module procedure mpp_transmit_real4_3d
     module procedure mpp_transmit_real4_4d
     module procedure mpp_transmit_real4_5d
     module procedure mpp_transmit_cmplx4
     module procedure mpp_transmit_cmplx4_scalar
     module procedure mpp_transmit_cmplx4_2d
     module procedure mpp_transmit_cmplx4_3d
     module procedure mpp_transmit_cmplx4_4d
     module procedure mpp_transmit_cmplx4_5d
#endif
     module procedure mpp_transmit_int4
     module procedure mpp_transmit_int4_scalar
     module procedure mpp_transmit_int4_2d
     module procedure mpp_transmit_int4_3d
     module procedure mpp_transmit_int4_4d
     module procedure mpp_transmit_int4_5d
     module procedure mpp_transmit_logical4
     module procedure mpp_transmit_logical4_scalar
     module procedure mpp_transmit_logical4_2d
     module procedure mpp_transmit_logical4_3d
     module procedure mpp_transmit_logical4_4d
     module procedure mpp_transmit_logical4_5d
  end interface
  interface mpp_recv
     module procedure mpp_recv_real8
     module procedure mpp_recv_real8_scalar
     module procedure mpp_recv_real8_2d
     module procedure mpp_recv_real8_3d
     module procedure mpp_recv_real8_4d
     module procedure mpp_recv_real8_5d
     module procedure mpp_recv_cmplx8
     module procedure mpp_recv_cmplx8_scalar
     module procedure mpp_recv_cmplx8_2d
     module procedure mpp_recv_cmplx8_3d
     module procedure mpp_recv_cmplx8_4d
     module procedure mpp_recv_cmplx8_5d
#ifndef no_8byte_integers
     module procedure mpp_recv_int8
     module procedure mpp_recv_int8_scalar
     module procedure mpp_recv_int8_2d
     module procedure mpp_recv_int8_3d
     module procedure mpp_recv_int8_4d
     module procedure mpp_recv_int8_5d
     module procedure mpp_recv_logical8
     module procedure mpp_recv_logical8_scalar
     module procedure mpp_recv_logical8_2d
     module procedure mpp_recv_logical8_3d
     module procedure mpp_recv_logical8_4d
     module procedure mpp_recv_logical8_5d
#endif
#ifndef no_4byte_reals
     module procedure mpp_recv_real4
     module procedure mpp_recv_real4_scalar
     module procedure mpp_recv_real4_2d
     module procedure mpp_recv_real4_3d
     module procedure mpp_recv_real4_4d
     module procedure mpp_recv_real4_5d
     module procedure mpp_recv_cmplx4
     module procedure mpp_recv_cmplx4_scalar
     module procedure mpp_recv_cmplx4_2d
     module procedure mpp_recv_cmplx4_3d
     module procedure mpp_recv_cmplx4_4d
     module procedure mpp_recv_cmplx4_5d
#endif
     module procedure mpp_recv_int4
     module procedure mpp_recv_int4_scalar
     module procedure mpp_recv_int4_2d
     module procedure mpp_recv_int4_3d
     module procedure mpp_recv_int4_4d
     module procedure mpp_recv_int4_5d
     module procedure mpp_recv_logical4
     module procedure mpp_recv_logical4_scalar
     module procedure mpp_recv_logical4_2d
     module procedure mpp_recv_logical4_3d
     module procedure mpp_recv_logical4_4d
     module procedure mpp_recv_logical4_5d
  end interface
  interface mpp_send
     module procedure mpp_send_real8
     module procedure mpp_send_real8_scalar
     module procedure mpp_send_real8_2d
     module procedure mpp_send_real8_3d
     module procedure mpp_send_real8_4d
     module procedure mpp_send_real8_5d
     module procedure mpp_send_cmplx8
     module procedure mpp_send_cmplx8_scalar
     module procedure mpp_send_cmplx8_2d
     module procedure mpp_send_cmplx8_3d
     module procedure mpp_send_cmplx8_4d
     module procedure mpp_send_cmplx8_5d
#ifndef no_8byte_integers
     module procedure mpp_send_int8
     module procedure mpp_send_int8_scalar
     module procedure mpp_send_int8_2d
     module procedure mpp_send_int8_3d
     module procedure mpp_send_int8_4d
     module procedure mpp_send_int8_5d
     module procedure mpp_send_logical8
     module procedure mpp_send_logical8_scalar
     module procedure mpp_send_logical8_2d
     module procedure mpp_send_logical8_3d
     module procedure mpp_send_logical8_4d
     module procedure mpp_send_logical8_5d
#endif
#ifndef no_4byte_reals
     module procedure mpp_send_real4
     module procedure mpp_send_real4_scalar
     module procedure mpp_send_real4_2d
     module procedure mpp_send_real4_3d
     module procedure mpp_send_real4_4d
     module procedure mpp_send_real4_5d
     module procedure mpp_send_cmplx4
     module procedure mpp_send_cmplx4_scalar
     module procedure mpp_send_cmplx4_2d
     module procedure mpp_send_cmplx4_3d
     module procedure mpp_send_cmplx4_4d
     module procedure mpp_send_cmplx4_5d
#endif
     module procedure mpp_send_int4
     module procedure mpp_send_int4_scalar
     module procedure mpp_send_int4_2d
     module procedure mpp_send_int4_3d
     module procedure mpp_send_int4_4d
     module procedure mpp_send_int4_5d
     module procedure mpp_send_logical4
     module procedure mpp_send_logical4_scalar
     module procedure mpp_send_logical4_2d
     module procedure mpp_send_logical4_3d
     module procedure mpp_send_logical4_4d
     module procedure mpp_send_logical4_5d
  end interface

  ! <INTERFACE NAME="mpp_broadcast">

  !   <OVERVIEW>
  !     Parallel broadcasts.
  !   </OVERVIEW>
  !   <DESCRIPTION>
  !     The <TT>mpp_broadcast</TT> call has been added because the original
  !     syntax (using <TT>ALL_PES</TT> in <TT>mpp_transmit</TT>) did not
  !     support a broadcast across a pelist.
  !
  !     <TT>MPP_TYPE_</TT> corresponds to any 4-byte and 8-byte variant of
  !     <TT>integer, real, complex, logical</TT> variables, of rank 0 or 1. A
  !     contiguous block from a multi-dimensional array may be passed by its
  !     starting address and its length, as in <TT>f77</TT>.
  !
  !     Global broadcasts through the <TT>ALL_PES</TT> argument to <LINK
  !     SRC="#mpp_transmit"><TT>mpp_transmit</TT></LINK> are still provided for
  !     backward-compatibility.
  !
  !     If <TT>pelist</TT> is omitted, the context is assumed to be the
  !     current pelist. <TT>from_pe</TT> must belong to the current
  !     pelist. This call implies synchronization across the PEs in
  !     <TT>pelist</TT>, or the current pelist if <TT>pelist</TT> is absent.
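  !
  !     As a sketch, broadcasting an array from the root PE of the current
  !     pelist:
  !     <PRE>
  !     real :: a(n)
  !     call mpp_broadcast( a, n, mpp_root_pe() )
  !     </PRE>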
  !   </DESCRIPTION>
  !   <TEMPLATE>
  !     call mpp_broadcast( data, length, from_pe, pelist )
  !   </TEMPLATE>
  !   <IN NAME="length"> </IN>
  !   <IN NAME="from_pe"> </IN>
  !   <IN NAME="pelist"> </IN>
  !   <INOUT NAME="data(*)"> </INOUT>
  ! </INTERFACE>
  interface mpp_broadcast
     module procedure mpp_broadcast_real8
     module procedure mpp_broadcast_real8_scalar
     module procedure mpp_broadcast_real8_2d
     module procedure mpp_broadcast_real8_3d
     module procedure mpp_broadcast_real8_4d
     module procedure mpp_broadcast_real8_5d
     module procedure mpp_broadcast_cmplx8
     module procedure mpp_broadcast_cmplx8_scalar
     module procedure mpp_broadcast_cmplx8_2d
     module procedure mpp_broadcast_cmplx8_3d
     module procedure mpp_broadcast_cmplx8_4d
     module procedure mpp_broadcast_cmplx8_5d
#ifndef no_8byte_integers
     module procedure mpp_broadcast_int8
     module procedure mpp_broadcast_int8_scalar
     module procedure mpp_broadcast_int8_2d
     module procedure mpp_broadcast_int8_3d
     module procedure mpp_broadcast_int8_4d
     module procedure mpp_broadcast_int8_5d
     module procedure mpp_broadcast_logical8
     module procedure mpp_broadcast_logical8_scalar
     module procedure mpp_broadcast_logical8_2d
     module procedure mpp_broadcast_logical8_3d
     module procedure mpp_broadcast_logical8_4d
     module procedure mpp_broadcast_logical8_5d
#endif
#ifndef no_4byte_reals
     module procedure mpp_broadcast_real4
     module procedure mpp_broadcast_real4_scalar
     module procedure mpp_broadcast_real4_2d
     module procedure mpp_broadcast_real4_3d
     module procedure mpp_broadcast_real4_4d
     module procedure mpp_broadcast_real4_5d
     module procedure mpp_broadcast_cmplx4
     module procedure mpp_broadcast_cmplx4_scalar
     module procedure mpp_broadcast_cmplx4_2d
     module procedure mpp_broadcast_cmplx4_3d
     module procedure mpp_broadcast_cmplx4_4d
     module procedure mpp_broadcast_cmplx4_5d
#endif
     module procedure mpp_broadcast_int4
     module procedure mpp_broadcast_int4_scalar
     module procedure mpp_broadcast_int4_2d
     module procedure mpp_broadcast_int4_3d
     module procedure mpp_broadcast_int4_4d
     module procedure mpp_broadcast_int4_5d
     module procedure mpp_broadcast_logical4
     module procedure mpp_broadcast_logical4_scalar
     module procedure mpp_broadcast_logical4_2d
     module procedure mpp_broadcast_logical4_3d
     module procedure mpp_broadcast_logical4_4d
     module procedure mpp_broadcast_logical4_5d
  end interface

  !#####################################################################
  ! <INTERFACE NAME="mpp_chksum">

  !   <OVERVIEW>
  !     Parallel checksums.
  !   </OVERVIEW>
  !   <DESCRIPTION>
  !     <TT>mpp_chksum</TT> is a parallel checksum routine that returns an
  !     identical answer for the same array irrespective of how it has been
  !     partitioned across processors. <TT>LONG_KIND</TT> is the <TT>KIND</TT>
  !     parameter corresponding to long integers (see discussion on
  !     OS-dependent preprocessor directives) defined in
  !     the header file <TT>fms_platform.h</TT>. <TT>MPP_TYPE_</TT> corresponds to any
  !     4-byte and 8-byte variant of <TT>integer, real, complex, logical</TT>
  !     variables, of rank 0 to 5.
  !
  !     Integer checksums on FP data use the F90 <TT>TRANSFER()</TT>
  !     intrinsic.
  !
  !     The <LINK SRC="http://www.gfdl.noaa.gov/fms-cgi-bin/cvsweb.cgi/FMS/shared/chksum/chksum.html">serial checksum module</LINK> is superseded
  !     by this function, and is no longer being actively maintained. This
  !     provides identical results on a single-processor job, and to perform
  !     serial checksums on a single processor of a parallel job, you only
  !     need to use the optional <TT>pelist</TT> argument.
  !     <PRE>
  !     use mpp_mod
  !     integer :: pe, chksum
  !     real :: a(:)
  !     pe = mpp_pe()
  !     chksum = mpp_chksum( a, (/pe/) )
  !     </PRE>
  !
  !     The additional functionality of <TT>mpp_chksum</TT> over
  !     serial checksums is to compute the checksum across the PEs in
  !     <TT>pelist</TT>. The answer is guaranteed to be the same for
  !     the same distributed array irrespective of how it has been
  !     partitioned.
  !
  !     If <TT>pelist</TT> is omitted, the context is assumed to be the
  !     current pelist. This call implies synchronization across the PEs in
  !     <TT>pelist</TT>, or the current pelist if <TT>pelist</TT> is absent.
  !   </DESCRIPTION>
  !   <TEMPLATE>
  !     mpp_chksum( var, pelist )
  !   </TEMPLATE>
  !   <IN NAME="pelist" TYPE="integer" DIM="(:)"> </IN>
  !   <IN NAME="var" TYPE="MPP_TYPE_"> </IN>
  ! </INTERFACE>
  interface mpp_chksum
#ifndef no_8byte_integers
     module procedure mpp_chksum_i8_1d
     module procedure mpp_chksum_i8_2d
     module procedure mpp_chksum_i8_3d
     module procedure mpp_chksum_i8_4d
#endif
     module procedure mpp_chksum_i4_1d
     module procedure mpp_chksum_i4_2d
     module procedure mpp_chksum_i4_3d
     module procedure mpp_chksum_i4_4d
     module procedure mpp_chksum_r8_0d
     module procedure mpp_chksum_r8_1d
     module procedure mpp_chksum_r8_2d
     module procedure mpp_chksum_r8_3d
     module procedure mpp_chksum_r8_4d
     module procedure mpp_chksum_r8_5d
     module procedure mpp_chksum_c8_0d
     module procedure mpp_chksum_c8_1d
     module procedure mpp_chksum_c8_2d
     module procedure mpp_chksum_c8_3d
     module procedure mpp_chksum_c8_4d
     module procedure mpp_chksum_c8_5d
#ifndef no_4byte_reals
     module procedure mpp_chksum_r4_0d
     module procedure mpp_chksum_r4_1d
     module procedure mpp_chksum_r4_2d
     module procedure mpp_chksum_r4_3d
     module procedure mpp_chksum_r4_4d
     module procedure mpp_chksum_r4_5d
     module procedure mpp_chksum_c4_0d
     module procedure mpp_chksum_c4_1d
     module procedure mpp_chksum_c4_2d
     module procedure mpp_chksum_c4_3d
     module procedure mpp_chksum_c4_4d
     module procedure mpp_chksum_c4_5d
#endif
  end interface

!***********************************************************************
!
!            module variables 
!
!***********************************************************************
  type(communicator),save :: peset(0:PESET_MAX) !0 is a dummy used to hold single-PE "self" communicator
  logical              :: module_is_initialized = .false.
  logical              :: debug = .false.
  integer              :: npes=1, root_pe=0, pe=0
  integer(LONG_KIND)   :: tick, ticks_per_sec, max_ticks, start_tick, end_tick, tick0=0
  integer              :: mpp_comm_private
  logical              :: first_call_system_clock_mpi=.TRUE.
  real(DOUBLE_KIND)    :: mpi_count0=0  ! used to prevent integer overflow
  real(DOUBLE_KIND)    :: mpi_tick_rate=0.d0  ! clock rate for mpi_wtick()
  logical              :: mpp_record_timing_data=.TRUE.
  type(clock),save     :: clocks(MAX_CLOCKS)
  integer              :: log_unit, etc_unit
  character(len=32)    :: configfile='logfile'
  integer              :: peset_num=0, current_peset_num=0
  integer              :: world_peset_num                  !the world communicator
  integer              :: error
  integer              :: clock_num=0, num_clock_ids=0,current_clock=0, previous_clock(MAX_CLOCKS)=0
  real                 :: tick_rate
  integer, allocatable :: request(:)
  character(len=32)    :: etcfile='._mpp.nonrootpe.stdout'
#ifdef SGICRAY
  integer :: in_unit=100, out_unit=101, err_unit=102 !see intro_io(3F): to see why these values are used rather than 5,6,0
#else
  integer :: in_unit=5, out_unit=6, err_unit=0
#endif

  !--- variables used in mpp_util.h
  type(Summary_Struct) :: clock_summary(MAX_CLOCKS)
  logical              :: warnings_are_fatal = .FALSE.
  integer              :: error_state=0
  integer              :: clock_grain=CLOCK_LOOP-1

  !--- variables used in mpp_comm.h
#ifdef use_libMPI
#ifdef _CRAYT3E
  !BWA: mpif.h on t3e currently does not contain MPI_INTEGER8 datatype
  !(O2k and t90 do)
  !(t3e: fixed on 3.3 I believe)
  integer, parameter :: MPI_INTEGER8=MPI_INTEGER
#endif
#endif use_libMPI

  integer            :: clock0    !measures total runtime from mpp_init to mpp_exit
  integer            :: mpp_stack_size=0, mpp_stack_hwm=0
  integer            :: tag=1
  logical            :: verbose=.FALSE.
#ifdef _CRAY
  integer(LONG_KIND) :: word(1)
#endif
#if defined(sgi_mipspro) || defined(__ia64)
  integer(INT_KIND)  :: word(1)
#endif

  character(len=128), public :: version= &
       '$Id mpp.F90 $'
  character(len=128), public :: tagname= &
       '$Name: latest $'

  contains
#include <system_clock.h>
#include <mpp_util.inc>
#include <mpp_comm.inc>

  end module mpp_mod


#ifdef test_mpp
#ifdef SYSTEM_CLOCK
#undef SYSTEM_CLOCK
#endif

program test   !test various aspects of mpp_mod
#include <fms_platform.h>

#ifdef sgi_mipspro
  use shmem_interface
#endif

  use mpp_mod, only : mpp_init, mpp_exit, mpp_pe, mpp_npes, mpp_root_pe, stdout
  use mpp_mod, only : mpp_clock_id, mpp_clock_begin, mpp_clock_end, mpp_sync, mpp_malloc
  use mpp_mod, only : mpp_declare_pelist, mpp_set_current_pelist, mpp_set_stack_size
  use mpp_mod, only : mpp_broadcast, mpp_transmit, mpp_sum, mpp_max, mpp_chksum, ALL_PES
#ifdef use_MPI_GSM
  use mpp_mod, only : mpp_gsm_malloc, mpp_gsm_free
#endif

  implicit none

  integer, parameter              :: n=1048576
  real, allocatable, dimension(:) :: a, b, c
#ifdef use_MPI_GSM
  real                            :: d(n)
  pointer (locd, d)
#else
  real, allocatable, dimension(:) :: d
  integer(LONG_KIND) :: locd
#endif
  integer                         :: tick, tick0, ticks_per_sec, id
  integer                         :: pe, npes, root, i, j, k, l, m, n2, istat
  real                            :: dt


  call mpp_init()

  call mpp_set_stack_size(3145746)
  pe = mpp_pe()
  npes = mpp_npes()
  root = mpp_root_pe()
  call SYSTEM_CLOCK( count_rate=ticks_per_sec )
  allocate( a(n), b(n) )
  id = mpp_clock_id( 'Random number' )
  call mpp_clock_begin(id)
  call random_number(a)
  call mpp_clock_end  (id)
  !---------------------------------------------------------------------!
  !   time transmit, compare against shmem_put and get                  !
  !---------------------------------------------------------------------!
  if( pe.EQ.root )then
     print *, 'Time mpp_transmit for various lengths...'
#ifdef SGICRAY
     print *, 'For comparison, times for shmem_get and shmem_put are also provided.'
#endif
     print *
  end if
  id = mpp_clock_id( 'mpp_transmit' )
  call mpp_clock_begin(id)
  !timing is done for cyclical pass (more useful than ping-pong etc)
  l = n
  do while( l.GT.0 )
     !--- mpp_transmit -------------------------------------------------
     call mpp_sync()
     call SYSTEM_CLOCK(tick0)
     do i = 1,npes
        call mpp_transmit( put_data=a(1), plen=l, to_pe=modulo(pe+npes-i,npes), &
                           get_data=b(1), glen=l, from_pe=modulo(pe+i,npes) )
        !          call mpp_sync_self( (/modulo(pe+npes-i,npes)/) )
     end do
     call mpp_sync()
     call SYSTEM_CLOCK(tick)
     dt = real(tick-tick0)/(npes*ticks_per_sec)
     dt = max( dt, epsilon(dt) )
     if( pe.EQ.root )write( stdout(),'(/a,i8,f13.6,f8.2)' )'MPP_TRANSMIT length, time, bw(Mb/s)=', l, dt, l*8e-6/dt

!#ifdef SGICRAY
!     !--- shmem_put ----------------------------------------------------
!     call mpp_sync()
!     call SYSTEM_CLOCK(tick0)
!     do i = 1,npes
!       call shmem_real_put( b, a, l, modulo(pe+1,npes) )
!     end do
!     call mpp_sync()
!     call SYSTEM_CLOCK(tick)
!     dt = real(tick-tick0)/(npes*ticks_per_sec)
!     dt = max( dt, epsilon(dt) )
!     if( pe.EQ.root )write( stdout(),'( a,i8,f13.6,f8.2)' )'SHMEM_PUT    length, time, bw(Mb/s)=', l, dt, l*8e-6/dt
!     !--- shmem_get ----------------------------------------------------
!     call mpp_sync()
!     call SYSTEM_CLOCK(tick0)
!     do i = 1,npes
!        call shmem_real_get( b, a, l, modulo(pe+1,npes) )
!     end do
!     call SYSTEM_CLOCK(tick)
!     dt = real(tick-tick0)/(npes*ticks_per_sec)
!     dt = max( dt, epsilon(dt) )
!     if( pe.EQ.root )write( stdout(),'( a,i8,f13.6,f8.2)' )'SHMEM_GET    length, time, bw(Mb/s)=', l, dt, l*8e-6/dt
!#endif
     l = l/2
  end do
  !---------------------------------------------------------------------!
  !                   test mpp_sum                                      !
  !---------------------------------------------------------------------!
  if( pe.EQ.root )then
     print '(/a)', 'Time mpp_sum...'
  end if
  a = real(pe+1)
  call mpp_sync()
  call SYSTEM_CLOCK(tick0)
  call mpp_sum(a(1:1000),1000)
  call SYSTEM_CLOCK(tick)
  dt = real(tick-tick0)/ticks_per_sec
  dt = max( dt, epsilon(dt) )
  if( pe.EQ.root )write( stdout(),'(a,2i4,f9.1,i8,f13.6,f8.2/)' ) &
       'mpp_sum: pe, npes, sum(pe+1), length, time, bw(Mb/s)=', pe, npes, a(1), n, dt, n*8e-6/dt
  call mpp_clock_end(id)
  !---------------------------------------------------------------------!
  !                   test mpp_max                                      !
  !---------------------------------------------------------------------!
  if( pe.EQ.root )then
     print *
     print *, 'Test mpp_max...'
  end if
  a = real(pe+1)
  print *, 'pe,     pe+1 =', pe, a(1)
  call mpp_max( a(1) )
  print *, 'pe, max(pe+1)=', pe, a(1)
  !pelist check
  call mpp_sync()
  call flush(stdout(),istat)
  if( npes.GE.2 )then
     if( pe.EQ.root )print *, 'Test of pelists: bcast, sum and max using PEs 0...npes-2 (excluding last PE)'
     call mpp_declare_pelist( (/(i,i=0,npes-2)/) )
     a = real(pe+1)
     if( pe.NE.npes-1 )call mpp_broadcast( a, n, npes-2, (/(i,i=0,npes-2)/) )
     print *, 'bcast(npes-1) from 0 to npes-2=', pe, a(1)
     a = real(pe+1)
     if( pe.NE.npes-1 )then
        call mpp_set_current_pelist( (/(i,i=0,npes-2)/) )
        id = mpp_clock_id( 'Partial mpp_sum' )
        call mpp_clock_begin(id)
        call mpp_sum( a(1:1000), 1000, (/(i,i=0,npes-2)/) )
        call mpp_clock_end  (id)
     end if
     if( pe.EQ.root )print *, 'sum(pe+1) from 0 to npes-2=', a(1)
     a = real(pe+1)
     if( pe.NE.npes-1 )call mpp_max( a(1), (/(i,i=0,npes-2)/) )
     if( pe.EQ.root )print *, 'max(pe+1) from 0 to npes-2=', a(1)
  end if
  call mpp_set_current_pelist()
  
#ifdef use_CRI_pointers
  !---------------------------------------------------------------------!
  !                   test mpp_chksum                                   !
  !---------------------------------------------------------------------!
  if( modulo(n,npes).EQ.0 )then  !only set up for even division
     n2 = 1024
     a = 0.d0
     if( pe.EQ.root )call random_number(a(1:n2))
!    if( pe.EQ.root )call random_number(a)
     call mpp_sync()
     call mpp_transmit( put_data=a(1), plen=n2, to_pe=ALL_PES, &
                        get_data=a(1), glen=n2, from_pe=root )
!    call mpp_transmit( put_data=a(1), plen=n, to_pe=ALL_PES, &
!                       get_data=a(1), glen=n, from_pe=root )
     m= n2/npes
!    m= n/npes
     allocate( c(m) )
     c = a(pe*m+1:pe*m+m)
     
     if( pe.EQ.root )then
        print *
        print *, 'Test mpp_chksum...'
        print *, 'This test shows that a whole array and a distributed array give identical checksums.'
     end if
     print *, 'chksum(a(1:1024))=', mpp_chksum(a(1:n2),(/pe/))
     print *, 'chksum(c(1:1024))=', mpp_chksum(c)
!    print *, 'chksum(a)=', mpp_chksum(a,(/pe/))
!    print *, 'chksum(c)=', mpp_chksum(c)
  end if
!test of pointer sharing
#ifdef use_MPI_GSM
      call mpp_gsm_malloc( locd, sizeof(d) )
#else
  if( pe.EQ.root )then
      allocate( d(n) )
      locd = LOC(d)
  end if
  call mpp_broadcast(locd,root)
#endif
  if( pe.EQ.root )then
      call random_number(d)
  end if
  call mpp_sync()
  call test_shared_pointers(locd,n)

#ifdef use_MPI_GSM
  call mpp_gsm_free( locd )
#else
  if( pe.EQ.root )then
      deallocate( d )
  end if
#endif
#endif
  call mpp_exit()

contains

  subroutine test_shared_pointers(locd,n)
    integer(LONG_KIND), intent(in) :: locd
    integer :: n
    real :: dd(n)
    pointer( p, dd )

    p = locd
    print *, 'TEST_SHARED_POINTERS: pe, locd=', pe, locd
    print *, 'TEST_SHARED_POINTERS: pe, chksum(d)=', pe, mpp_chksum(dd,(/pe/))
    return
  end subroutine test_shared_pointers
end program test
  
#endif test_mpp
  • Eddy

Hi Eddy,

Searching the web I found this post about the issue: https://fms.gfdl.noaa.gov/pipermail/oar.gfdl.fms-atmos/2008-February/000126.html

The problem seems to be that AM2 uses an I/O statement within a WRITE, which is explicitly against the Fortran standard. Some compilers have added extensions to allow this, but we have not.
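In other words, the compiler chokes on code along these lines (a hypothetical sketch, not the actual AM2 source): a function that performs its own I/O being referenced in a WRITE's output list, so that one I/O statement executes while another is still active:

```fortran
! Hypothetical sketch of the rejected "I/O within a WRITE" pattern.
program recursive_io_demo
  implicit none
  integer :: n
  write(*,*) 'value = ', noisy()   ! nonstandard: noisy() does a WRITE of
                                   ! its own while this WRITE is active
  ! Standard-conforming rewrite: evaluate the function first, then print.
  n = noisy()
  write(*,*) 'value = ', n
contains
  integer function noisy()
    write(*,*) 'inside noisy'      ! the nested I/O statement
    noisy = 42
  end function noisy
end program recursive_io_demo
```

Moving the function reference out of the I/O statement, as in the rewrite above, is the usual fix.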

  • Mat

Hi Mat,

Thanks! It’s my fault; I had come across that group discussion before and should have found this post myself. But another problem has emerged, as follows:

==================

unlimit stacksize
mpirun -np 1 fms.x
0: ALLOCATE: 18446744073709551615 bytes requested; not enough memory

My platform has 32 GB of memory, which should be enough. How can I keep the model from requesting such a large amount of memory?

  • Eddy