ARSC T3D Users' Newsletter 51, September 8, 1995

T3D Class at ARSC: Introduction to Parallel Processing and the Cray T3D

For CRI users traveling to the Alaska CUG from the Lower 48, there are still vacancies in this class. If you are interested in attending, please contact Mike Ess quickly.

This will be an introductory class on the T3D taught at the Butrovich Building, on the University of Alaska Fairbanks main campus from September 18th to 20th. Each day of classes will begin at 9:00 AM and last until about 4:00 PM. Each user will receive a copy of the CRI training manual Cray T3D Applications Programming and supplemental handouts. The instructor will be Mike Ess, ARSC T3D consultant.

This is the first time the class has been taught in the Butrovich Building and the schedule below is tentatively based on current room and lab availability:

Introduction to Parallel Programming and the T3D
Sept. 18 9:00 - 11:00 lecture 1 Motivation for MPP and background
11:00 - 12:00 lab 1 Performance speeds on uniprocessor jobs, T3D and Y-MP
1:00 - 3:00 lecture 2 Programming models and PVM
3:00 - 4:00 lab 2 Summation of integers with PVM
Sept. 19 9:00 - 10:30 lecture 3 Hardware overview
10:45 - 12:00 lecture 4 Software overview
1:00 - 3:00 lecture 5 Fortran, C and HPF
Sept. 20 9:00 - 10:00 lab 3 CRAFT Fortran
10:00 - 12:00 lab 4 Timing communications, shmem, PVM, CRAFT and HPF
1:00 - 2:00 lecture 6 Programming tools, performance, machine differences
2:00 - 3:00 lecture 7 NQS, Totalview, Apprentice, I/O
3:00 - 4:00 lab 5 NQS scripts, Totalview, Apprentice, IOzone

Any questions about this class should be directed to Mike Ess at:
  or  907-474-5405

Dynamic Memory on the T3D

In the past two weeks, two users have run into the inflexible wall of limited memory on the T3D. One user was using the 'automatic array' feature of cf77. With this feature, a user can dynamically allocate an array during a subroutine call. A typical example is:

example 1

  program main
  n = 1000000
  call sub( n )
  end

  subroutine sub( n )
c the automatic array a is sized at run time from the argument n
  real a( n )
  end
During the call to the subroutine 'sub', the array of one million reals is allocated on entry and released on return. The extra storage is obtained with a call to the heap manager.
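The same heap manager can be called explicitly. As a sketch only (it assumes the Cray heap routines hpalloc and hpdeallc, with a final argument of 0 requesting an error code in ierr instead of an abort on failure), the automatic array above is roughly equivalent to:

  program heap
  pointer ( ptr, a )
  real a( 1 )
  n = 1000000
c ask the heap manager for n words; the final 0 means return an
c error code in ierr rather than aborting on failure
  call hpalloc( ptr, n, ierr, 0 )
  if ( ierr .eq. 0 ) then
c    ... use a(1) through a(n) as the subroutine would ...
     call hpdeallc( ptr, ierr, 0 )
  else
     print *, 'hpalloc failed, error code: ', ierr
  endif
  end

The difference is that here the failure is visible as an error code, where the automatic array version simply aborts when the heap is exhausted.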

Another example of dynamically allocated space is opening a file. An individual file I/O operation does not usually cause a physical disk operation; instead, each open file has buffers associated with it, and changes accumulate in those buffers until there are enough to justify a physical disk operation. Here is a typical example:

example 2

  program io
  open( 10, access='direct', form='unformatted', recl=4096 )
  end
From the description of direct unformatted I/O in the man page for the assign command, we know that this open requires 4 buffers of length 4096 bytes, or 16384 bytes of heap space for this one file. And of course, these buffers are not allocated until the open is executed.

On the T3D, when a running job tries to allocate more memory than is available, something has got to give. Usually the process on the PE that reaches this error condition first aborts, and the user gets a register dump and an mppcore file. Sometimes there is a descriptive error message, but for these two users last week there was none.

Recompiling for a debug run and running under Totalview provided a traceback of where the programs had aborted. A routine called 'alloc' in the traceback was the tip-off that there were memory problems. For the user with the automatic arrays, the solution was to print out how much storage was being requested, see that it was more than physically available, and then break the problem into pieces that fit in the available memory.

For the user whose I/O buffers put him over the limit, it was possible to use the hpalloc routine and its returned error codes to determine how much memory was available. With this number, and the knowledge that a direct access file requires 4 times the record length in buffer space, the solution was something like:

  call asnunit( 10, '-u 2', ier )
  print *, ier
  open( 10, access='direct', recl=nrecl, iostat=ios )
  print *, ios
The asnunit subroutine is used here to open the file with only 2 buffers instead of the default of 4.
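One way to use hpalloc's error code to estimate the available memory, as described above, is to probe with successively smaller requests until one succeeds. This is only a sketch under the assumption that hpalloc returns a nonzero error code, rather than aborting, when its fourth argument is 0; the result is a lower bound, not the exact free memory:

  program probe
  pointer ( ptr, w )
  real w( 1 )
  nwords = 8000000
c halve the request until the heap manager can satisfy it
 10   call hpalloc( ptr, nwords, ierr, 0 )
  if ( ierr .ne. 0 .and. nwords .gt. 1 ) then
     nwords = nwords / 2
     goto 10
  endif
  print *, 'largest successful request, in words: ', nwords
  if ( ierr .eq. 0 ) call hpdeallc( ptr, ierr, 0 )
  end

Dividing the bound by 4 times the record length then gives an estimate of how many direct access files, or buffers per file, will fit.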

List of Differences Between T3D and Y-MP

The current list of differences between the T3D and the Y-MP is:
  1. Data type sizes are not the same (Newsletter #5)
  2. Uninitialized variables are different (Newsletter #6)
  3. The effect of the -a static compiler switch (Newsletter #7)
  4. There is no GETENV on the T3D (Newsletter #8)
  5. Missing routine SMACH on T3D (Newsletter #9)
  6. Different Arithmetics (Newsletter #9)
  7. Different clock granularities for gettimeofday (Newsletter #11)
  8. Restrictions on record length for direct I/O files (Newsletter #19)
  9. Implied DO loop is not "vectorized" on the T3D (Newsletter #20)
  10. Missing Linpack and Eispack routines in libsci (Newsletter #25)
  11. F90 manual for Y-MP, no manual for T3D (Newsletter #31)
  12. RANF() and its manpage differ between machines (Newsletter #37)
  13. CRAY2IEG is available only on the Y-MP (Newsletter #40)
  14. Missing sort routines on the T3D (Newsletter #41)
I encourage users to e-mail in differences that they have found, so we all can benefit from each other's experience.
Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.