Defined Type memory requirements

I have a defined type that, when allocated, uses a much larger amount of memory than I expected. An example of this is as follows:

TYPE some
   integer, dimension(:), allocatable :: TempData
   integer :: Counter1
END TYPE some

TYPE(some), dimension(:), allocatable :: NewData

allocate (NewData(7277640))

When executing this, the allocated memory is between 580 MB and 600 MB. I expected some overhead, but not nearly that much.
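One way to confirm the per-element footprint, if the compiler supports the Fortran 2008 storage_size intrinsic, is something like this minimal sketch (the program name is just illustrative):

program check_size
   implicit none
   type some
      integer, dimension(:), allocatable :: TempData
      integer :: Counter1
   end type some
   type(some) :: element
   ! storage_size returns the size in bits of one element of this type,
   ! which includes the hidden array descriptor for TempData
   print *, 'bytes per element:', storage_size(element)/8
end program check_size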

I also tested using a different technique, as follows:

integer, dimension(:,:), allocatable :: TempData
integer, dimension(:), allocatable :: Counter1

allocate (TempData(7277640,1))
allocate (Counter1(7277640))

This allocates closer to 60 MB for the data. That is about a 10-to-1 overhead ratio with the defined type.
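Doing the arithmetic on the first case: 580–600 MB spread over 7,277,640 elements is roughly 80 bytes per element, versus the 8 bytes per element (one data word plus one counter) that this second technique needs.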

Just as a sanity check, I also tried the following:

TYPE some
   integer :: TempData
   integer :: Counter1
END TYPE some

TYPE(some), dimension(:), allocatable :: NewData

allocate (NewData(7277640))

This produces a memory allocation of around 60 MB as well.
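That is consistent with the raw data size: 7,277,640 elements × 8 bytes each (two integer*4 components) is about 58 MB.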

Is this a known issue, or is there really that much overhead associated with having an allocatable array as part of a defined type?

Hi Ronald,

Yes, derived types with allocatable components do carry quite a bit of record-keeping overhead: each element stores a full array descriptor for its allocatable component, even before that component is allocated. There’s also a bit of alignment padding, which helps performance.
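To put rough numbers on it (the exact layout is implementation-dependent, so treat these as estimates rather than exact figures): if the descriptor for an allocatable component takes 16 integer*4 words (64 bytes), then each element of NewData is 64 bytes of descriptor plus 4 bytes for Counter1, padded for alignment to about 80 bytes. 7,277,640 × 80 bytes is roughly 582 MB, which lines up with what you measured.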

Is this a major problem for you? If so, I can certainly send a request to our engineers to see if they can get the size down, but it may be a trade-off of size versus performance.

Thanks,
Mat

Mat,

This is a contributing factor to a memory issue on 32-bit O/S platforms.

We have a situation where, during a portion of the run time, our memory requirements are near or exceeding 4 GB. When this occurs, an error of “no memory available” is returned by an allocation call elsewhere in the code.

If we run on a 64-bit O/S then there is no issue, because the address space is significantly greater there. Many of our customers may not have a 64-bit O/S available to them, so we are trying to stay under 4 GB.

I have not had a chance to see if there are any areas of the code that may be over-allocating, or that have duplicate allocations introduced during coding. I have also not determined whether there are arrays that could be allocated later in the process, which would reduce the peak memory requirement.

We are taking two existing codes and merging them together, while providing backward compatibility and additional capability. We are also merging FORTRAN 77, Fortran 90, and C code together into shared objects.

Performance is also something we need to watch, so I’m not sure that reducing the peak memory at the sacrifice of performance would be a good trade-off for us either.

I may look at a different approach using Cray pointers, or some other way to collect the data. I was just rather surprised to see that the pointer (the descriptor) takes 16 integer*4 locations when looking at it in the debugger. This is one of the arrays that will vary directly with user input. One of the current larger ones has over 7.2 million pieces of data, and we are told this may double over the next year.
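For example, something along these lines might work for us (just a sketch; the names and the offset scheme are illustrative, not our actual code):

integer, dimension(:), allocatable :: FlatData  ! all entries packed end-to-end
integer, dimension(:), allocatable :: Offset    ! Offset(i) = start of entry i in FlatData
integer, dimension(:), allocatable :: Counter1

allocate (Offset(7277641))    ! one extra entry so Offset(i+1)-Offset(i) gives each length
allocate (Counter1(7277640))
! once the per-entry sizes are known:
! allocate (FlatData(Offset(7277641)-1))
! entry i's data is then FlatData(Offset(i) : Offset(i+1)-1)

That would keep the total descriptor count at three, instead of one per entry.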

If there is a better way to do this, or something simple that might reduce the memory requirement, then that would be great.

Thanks,

Ron C.

Hi Ronald,

I sent a note to our chief compiler architect to see what can be done.

Thanks,
Mat