ARSC T3D Users' Newsletter 19, January 20, 1995

ARSC T3D Upgrades

The next month or two will be a busy time for the ARSC T3D. We will be upgrading the following:

  1. 2MW to 8MW per PE, tentatively set for February 7th and 8th
  2. The T3D OS, MAX 1.1 to MAX 1.2, tentatively set for January 31st
  3. The T3D Programming Environment (libraries, tools, and compilers), P.E. 1.1 to P.E. 1.2, sometime in the next two months.
Users will be notified of the exact dates of these upgrades in mailings to the ARSC T3D users' group (i.e., those who receive this newsletter).

I/O to the Same File from Multiple PEs

There are two types of I/O on the T3D, Private I/O and Shared I/O. From the MPP Fortran Manual: "A private READ or WRITE statement is one that, when encountered, is executed in its entirety by the processor that encounters it. It requires no synchronization across, nor communication with, other processors. It is executed without regard for the activity of the other processors. ... ". Because a private READ or WRITE involves no coordination among processors, multiple PEs writing to the same file can cause problems.

There is, however, one case in which multiple PEs can read and write the same file, with a few restrictions, while still using Private I/O. This is when:

  1. the file is assigned for direct I/O
  2. the record length is a multiple of a 512-word block
  3. no two PEs write to the same record at the same time
In Fortran, this looks something like:

  call asnunit( iun, '-a /tmp/ess/file', ier )
  open( iun, form = 'unformatted', access = 'direct', recl=LR )
  write( iun, rec = i ) array
where LR is a multiple of 4096 and i is the number of a record in the file. The restriction that the record length be a multiple of 4096 bytes (a 512-word block of 8-byte words) differs from the Y-MP, where the record length is arbitrary.
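The key point is that each PE touches only its own fixed-length record, so writes from different PEs never overlap in the file. That access pattern can be illustrated on any POSIX system; the sketch below is ordinary Python, not T3D code, and the file name and four-"PE" count are purely for illustration:

```python
import os

LR = 4096   # record length in bytes: one 512-word (4096-byte) block

def write_record(fd, rec, payload):
    """Write one fixed-length record at its own offset (1-based rec).

    Each writer pads its data to exactly LR bytes and writes at
    offset (rec-1)*LR, so distinct records never share byte ranges.
    """
    assert len(payload) <= LR
    os.pwrite(fd, payload.ljust(LR, b"\0"), (rec - 1) * LR)

def read_record(fd, rec):
    """Read back one fixed-length record by its 1-based number."""
    return os.pread(fd, LR, (rec - 1) * LR)

fd = os.open("/tmp/recfile", os.O_RDWR | os.O_CREAT, 0o600)
for pe in range(4):                  # stand-ins for four PEs
    write_record(fd, pe + 1, b"data from PE %d" % pe)
print(read_record(fd, 3)[:14])       # prints b'data from PE 2'
os.close(fd)
```

Because every record starts at an offset that is a multiple of the record length, the writers need no locking as long as no two of them target the same record, which is exactly restriction 3 above.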

Shared I/O is not supported in the current release, but it will be supported when we move to the 1.2 Programming Environment. We'll revisit these direct I/O files after ARSC upgrades to the new Programming Environment.

Next T3D Class at ARSC

Below is the announcement of the next T3D class at ARSC:

Title: Applications Programming on the CRAY T3D
Dates: February 8 - 10, 1995
Time: 9:00 AM - 5:00 PM
Location: University of Alaska Fairbanks main campus, room TBA
Instructor: Mike Ess, Parallel Applications Specialist

Course Description: To satisfy increasing computational demands, computers of the future must have multiple processors executing the same program. The Cray T3D is a step in this direction. The Cray T3D, an MPP or Massively Parallel Processor, consists of 128 processors attached to the Cray Y-MP.

This class will cover the characterization and history of MPPs. With this background, students will see how the T3D approaches the problem of executing a program in parallel. The class will cover the three programming paradigms for extracting parallelism:

  1. Data sharing, as with Fortran 90
  2. Work sharing, as with CRAFT Fortran
  3. Message passing, as implemented with PVM or shmem
The primary goal is to provide practical experience in getting codes up and running efficiently on the T3D. Examples used in the class can be used as models for application programs on the T3D. Also covered will be:
  1. Performance measurement and tools
  2. Debugging techniques and tools
This class will include directed lab sessions, and attendees will have an opportunity to have their applications examined with the instructor.

Intended Audience: Researchers who will be developing programs to run on the T3D, and current users of the T3D who want a comprehensive, up-to-date survey of programming on the T3D.

Prerequisites: Applicants should have a denali userid or be in the process of applying for one. Applicants should be familiar with programming in Fortran or C on a UNIX system.

Application Procedure: There is no charge for attendance, but enrollment will be limited to 15. In the event of greater demand, applicants will be selected by ARSC staff based on qualifications, need, and order of application. The class may be cancelled if there are fewer than 5 applicants.

To apply, send e-mail to with the following information:

  • course name
  • your name
  • UA status (e.g., undergrad, grad, Asst. Prof.)
  • institution/dept.
  • phone
  • advisor (if you are a student)
  • denali userid
  • preferred e-mail address
  • describe programming experience
  • describe need for this class

CUG Articles on the T3D

ARSC has the CUG (Cray User Group) proceedings for the last two meetings:
  • Spring 1994 in San Diego
  • Fall 1994 in Tours, France
There are several articles about the T3D in both proceedings, and I am willing to mail hard copies of them to ARSC T3D group members who request them. To receive hard copies, please e-mail me your mailing address (a phone number won't hurt either) and the titles of the articles you want.

I have at least skimmed all of the articles and have found the six articles marked in bold type below to be the most informative. The complete list of articles concerning the T3D from the two proceedings is:

Spring 1994

  • Cray T3D Project Update Steve Reinhardt (CRI)
  • Porting Third-Party Application Packages to the Cray MPP: Experience at PSC Frank C. Wimberley, Suscheel Chitre, Carlos Gonzales, Michael H. Lambert Nicholas Nystrom, Alex Ropelewski, William Young (PITTSCC)
  • Providing Breakthrough Gains: Cray Research MPP for Commercial Applications Denton A. Olson (CRI)
  • A PVM Implementation of a Conjugate Gradient Solution Algorithm for Ground-Water Flow Modeling Dennis Morrow, John Thorp, Bill Holter (CRI and NASA/Goddard)
  • Heterogeneous Computing Using the Cray Y-MP and T3D Bob Carruthers (CRI)
  • The MPP Apprentice Performance Tool: Delivering the Performance of the Cray T3D Winifred Williams, Timothy Hoel, Douglas Pase (CRI)
  • Fortran I/O Libraries on T3D Suzanne LaCroix (CRI)
  • T3D SN6004 is Well, Alive and Computing Martine Gigandet, Monique Patron, Francois Robin (CEA-CEL)
  • System Administration Tasks and Operational Tools for the Cray T3D System Susan J. Crawford (CRI)
  • The Performance of Synchronization and Broadcast Communication on the Cray T3D Computer System F. Ray Barriuso (CRI)
  • High Performance Programming Using Explicit Shared Memory Model on the Cray T3D Subhash Saini, Horst Simon(CSC-NASA/Ames) and Charles Grassl (CRI)
  • Architecture and Performance for the Cray T3D Charles M. Grassl (CRI)

Fall 1994

  • Quantum Molecular Dynamics on Massively Parallel Computers A. Canning, A. DeVita, G. Galli, F. Gygi, F. Mauri, R. Car
  • Moving CFD to the T3D: A Progress Report Stephen R. Behling
  • Porting of a Quantum Chemistry FCI Algorithm to the Cray T3D Elda Rossi, Roberto Ansaloni, Stefano Evangelisti
  • Solving Symmetric Eigenvalue Problems on Distributive Memory Machines David C. O'Neal, Raghurama Reddy
  • 1994-1995 Applications Focus Sara Graffunder
  • The Solution of Large Unstructured Sparse Systems of Equations on the T3D with Application to Finite Element Modeling Tom Cwik, Cinzia Zuffada, Vahraz Jamnejad, Don Katz
  • Message Passing on the Cray T3D R. Paul Marcelin
  • Accepting the T3D David O. Rich, Stephen C. Pope, Jerry G. DeLapp
  • The MPI Message Passing Standard on the Cray T3D Lyndon J. Clark
  • Fortran 90 and T3D Optimizations Greg Fischer
  • A Numerical Simulation of Groundwater Flow and Contaminant Transport on the Cray T3D and C90 Supercomputers S.F. Ashby, W.J Brosl, R.D. Falgout, S.G. Smith, A.F.B. Tompson, T.J. Williams
  • Parallel Computational Fluid Dynamics on the Cray T3D Using Block Structured Meshes Mark L. Sawley and Jon K. Tegner
  • Electron-Molecule Collisions on the Cray T3D Carl Winstead, Howard P. Pritchard, Chuo-Han Lee, Vincent McKoy
  • Status of CFDLIB Performance Tests on the T3D Nely T. Padial, Bryan A. Kashiwa, Douglas B. Kothe
  • Porting Fortran Programs to the T3D Using the CRAFT Model - An Incremental Approach T. Ming Jiang
  • The Cray T3D at EPFL: First Activities Marie-Christine Sawley
  • Cray T3D System Common Conversion Questions and Problems Philip G. Garnatz
  • MPP Operating Systems - A Cray Research Operating System Perspective Jim Harrell
  • An Update on System Administration Tasks, Operational Tools and Release Plans for the Cray T3D System. Susan J. Crawford
  • Using High Performance FORTRAN on the Cray T3D John M. Levesque
  • First Experiments with CRAFT Programming Model on the T3D at CEA-CEL-V Cecile Chaigneau, Thierry Nkaoua, Monique Patron

Optimization on the T3D

CRI has produced a collection of papers on optimization for the T3D. I will e-mail any of them to T3D users who request them. I have also placed them on denali in the directory /usr/local/examples/mpp/papers. Here is a short list of the papers:

                 Single-PE Optimization Techniques for the CRAY T3D System
                 Programming for Performance in CRAFT on the CRAY T3D System
                 Data Layout and Its Effect on Single-processor Performance on CRAY T3D Systems
  io_opt.ascii   Input/Output (I/O) Optimization for CRAY T3D
  pvm_opt.ascii  Parallel Virtual Machine (PVM) Optimization
  shmem.ascii    Shared Memory Gets and Puts on the CRAY T3D


List of Differences Between T3D and Y-MP

The current list of differences between the T3D and the Y-MP is:
  1. Data type sizes are not the same (Newsletter #5)
  2. Uninitialized variables are different (Newsletter #6)
  3. The effect of the -a static compiler switch (Newsletter #7)
  4. There is no GETENV on the T3D (Newsletter #8)
  5. Missing routine SMACH on T3D (Newsletter #9)
  6. Different Arithmetics (Newsletter #9)
  7. Different clock granularities for gettimeofday (Newsletter #11)
  8. Restrictions on record length for direct I/O files (Newsletter #19)
I encourage users to e-mail in differences that they have found, so we all can benefit from each other's experience.
Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives: Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.