ARSC T3E Users' Newsletter 180, Oct 22, 1999

CUG T3E Workshop Review

[[ Thanks to Guy for this review of the CUG T3E Workshop, which was held 7th-8th October, in Princeton, New Jersey. ]]

The goal of the CUG T3E Workshop was to discuss the past successes of the T3E, in terms of the parallel applications developed for it, and the future of the T3E in general. The meeting was attended by a mixture of users and site administrators from the U.S. and Europe, which resulted in a healthy discussion of how best to utilize the T3E and of its future; parts of that discussion are summarized below.

On Thursday, speakers concentrated on applications and how these had been developed to get the most from the T3E.

Thomas Clune from NASA Goddard described how to get 100+ Gflops from a variety of codes within NASA, and Jeff Brooks from CRI summarized experiences as a whole in a talk entitled "Scalability 101". Both presented very interesting performance figures for applications on 1000+ nodes of a T3E; these varied from 600+ Gflops for real applications to one case of 1180 Gflops on a 1200-Gflop system. (The latter case was entirely cache-resident, so it might fail the real-application test, but it is always useful to know what is needed to get the ultimate performance from a system, as this guides designers of future systems and applications.) Mike Merril, DOD, asked users how far they should be willing to go to get performance and offered some simple and safe optimization tips which would not require entire codes to be rewritten.

In the afternoon, James Lebak, from the MIT Lincoln Lab, described a project on how to make the T3E mobile and replace dedicated systems which are currently part of the AEGIS shipboard Signal Processing system. Returning to firmer ground and more established computational areas, speakers covered how they had developed optimized parallel codes for spectral atmosphere models and protein folding, looking at the very large and the very small.

The afternoon closed with two overviews: future T3E developments and overall Cray plans from William White, CRI, and a description of the SNIA from Steve Reinhardt, SGI. Both speakers were subjected to heavy questioning by the meeting attendees.

On Friday, matters turned to software to ease the cost of programming complex parallel systems with the desired algorithms.

Bob Numrich, CRI, presented a tutorial on Co-Arrays in Fortran, and Bill Carlson, CCS, presented a description of UPC, an extension to C. There was much interplay between the two speakers in this session, and some C and Fortran differences were discussed, with the audience joining in with much constructive debate on the future of high-level parallel languages and what is needed to arrive at a better basis for developing parallel code.

Doug Miles, PGI, described how several sites were using HPF to ease the burden of developing parallel code on the T3E. Yang Wang, Pittsburgh Supercomputing Center, followed with a description of how to get one Teraflop out of the T3E. Note, however, that this paper had the largest list of authors of any paper at the conference, so we assume the feat isn't as easy as the presentation made it appear and took some teamwork. (This work was awarded the Bell Prize at SC98.) The morning session closed with a description of the use of the T3E for plasma simulations by Zhihong Lin of Princeton. A common feature of all the talks was that there is no "silver bullet"; the conclusion was that really good parallel applications need good programmers and lots of effort.

On Friday afternoon, various computer centers described how they supported users' endeavors through the configuration of their systems. Tina Butler, NERSC, gave two presentations covering key issues such as I/O and scheduling, along with the challenge of determining whether a change was actually an improvement. I described how ARSC uses a combination of this newsletter, personal contact, manual scheduling, and other less subtle forms of persuasion to promote the use of tools. Paul Burton described why the UK Meteorological Office needed two T3E systems and how the mixture of operational and research work was managed on them.

In conclusion, there is still much life left in the T3E as both centers and users continue to master the challenges of parallel computing. It is an interesting time for a variety of reasons, and out of such times often come the best futures. This meeting was a very useful moment to take stock of the T3E and how it has been used, and provided much input to Cray to guide future plans.

Future of Fortran

> Date: Thu, 30 Sep 1999 05:19:52 -0400
> Subject: "The Future of Fortran"
> From: Michael Metcalf <>
> To: comp-fortran-90 <>
> Reply-To: Michael Metcalf <>
> For my sins, I was recently asked to give a talk on the future of
> Fortran.  For anyone who might be interested, the gzipped Postscript
> version of my slides is available by ftp from in the
> directory dist as the file
> Comments welcome.
> Mike Metcalf

NAS MG Implementations?

One of our readers has implemented the NAS MG benchmark code in ZPL. If you've parallelized this code using any method, please let us know. We might devote an issue to different approaches and their performance on the T3E. To get the serial and/or parallel MPI implementations, go to:

and download the "NPB- NAS Parallel Benchmarks."

MG or "Multi-Grid" Methods

It is sometimes easy to forget that there are alternatives to throwing teraflops of computing power at problems, that the human brain is still a wonderful tool, and that often a little thought can be a greater help than brute force.

One mathematical technique that offers both faster and better-quality solutions is the multigrid method.

These methods are used to solve partial differential equations, and do so in linear time and space: in theory, solution times should scale nearly linearly with the number of unknowns. This is a considerable improvement over many conventional techniques. A good source of information on this technique is to be found at:

This site contains a wide range of items including:
  • tutorials, in particular a basic introduction to the method and a parallel tutorial.
  • papers, including important references, latest breaking research, and a bibliography.
  • newsletters with the latest multigrid news; you can subscribe on-line, if you so wish.
  • actual multigrid codes as examples of the algorithm and its implementation.
  • other methods which are associated with multigrid such as domain decomposition.

The "" site is maintained by Craig Douglas and thanks should be given to him for putting together such a collection of items and keeping it current.
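To make the "near-linear scaling" claim concrete, here is a minimal one-dimensional multigrid V-cycle in Python. This is only a sketch of the shape of the algorithm; the model problem (-u'' = f with zero Dirichlet boundaries), the weighted-Jacobi smoother, and the sweep counts are our own illustrative choices, not code from the site above:

```python
# Minimal 1-D multigrid V-cycle for -u'' = f, u(0) = u(1) = 0.
# Grid size, smoother, and sweep counts are illustrative choices.
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0/3.0):
    """Weighted-Jacobi relaxation sweeps on the interior points."""
    for _ in range(sweeps):
        u[1:-1] += w * (0.5 * (u[:-2] + u[2:] + h*h*f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def restrict(r):
    """Full weighting: fine grid (2n+1 points) -> coarse grid (n+1)."""
    return np.concatenate(([0.0],
        0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2], [0.0]))

def prolong(rc):
    """Linear interpolation: coarse grid (n+1 points) -> fine (2n+1)."""
    n = len(rc) - 1
    r = np.zeros(2*n + 1)
    r[::2] = rc                        # coarse points carry over
    r[1::2] = 0.5 * (rc[:-1] + rc[1:]) # in-between points interpolated
    return r

def v_cycle(u, f, h):
    if len(u) <= 3:                    # coarsest grid: solve directly
        u[1] = f[1] * h * h / 2.0
        return u
    u = smooth(u, f, h)                # pre-smooth
    rc = restrict(residual(u, f, h))   # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2.0*h)  # coarse-grid correction
    u += prolong(ec)
    return smooth(u, f, h)             # post-smooth

# Test problem: f = pi^2 sin(pi x), whose exact solution is sin(pi x).
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):                    # a few V-cycles suffice
    u = v_cycle(u, f, h)
err = abs(u - np.sin(np.pi * x)).max()
```

Each V-cycle does O(n) work (the grids shrink geometrically), and the error contracts by a roughly grid-independent factor per cycle, which is where the near-linear overall cost comes from.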

Quick-Tip Q & A

A:{{ Any theories about this?
  {{   chilkoot% 
  {{   chilkoot% ls -l 
  {{   total 32
  {{   -rw-------   1 testacct tstgroup    1864 Oct  7 17:33 myfile
  {{   chilkoot% rm myfile
  {{   cmd-1025 rm: myfile: No such file or directory
  {{   chilkoot% ls -l
  {{   total 32
  {{   -rw-------   1 testacct tstgroup    1864 Oct  7 17:33 myfile
  {{   chilkoot% 
  {{ (Except for the indentation, this session was copied exactly as
  {{ it appeared.) 

Thanks to Barbara Herron, Kurt Carlson, Brad Chamberlain, Gene McGill,
Alan Wallcraft, and Don Morton, all of whom surmised that there were
non-printing character(s) in the file name. Each had a favorite method
for finding them and/or deleting the file.

Here are three ways to show nongraphic characters in a file name:

  od -c

    "od" prints nongraphic characters (including EOL) as octal or
    escape codes.  The output is hard to read, but this is the only
    method that gives proof of trailing white-space characters.

  ls -lb

    shows nongraphic characters as octal codes.

  ls -lq

    replaces nongraphic characters with question marks.  Note that
    spaces will still be hidden.
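The same check can be scripted. Here is a small Python sketch of our own (not a standard tool) that escapes non-printing characters in file names as octal codes, mimicking the "ls -lb" style; note that, as with "ls -lq", spaces count as printable and stay hidden:

```python
# Flag directory entries containing non-printing characters,
# showing them as octal escapes in the style of "ls -lb".
# Scanning "." is an illustrative choice; point it anywhere.
import os

def visible(name):
    """Return a printable form of a file name, with non-printing
    characters escaped as \\NNN octal codes.  Spaces are printable,
    so (as with "ls -lq") trailing blanks stay hidden."""
    return "".join(c if c.isprintable() else "\\%03o" % ord(c)
                   for c in name)

for name in sorted(os.listdir(".")):
    v = visible(name)
    if v != name:
        print(v, "<-- contains non-printing characters")
```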

Removing the file:  

My favorite method uses the question-mark wild-card character, which
matches exactly one character in "rm", "cp", or "mv" commands.  You can
copy the file name from the output of "ls -lq" directly into an
"rm -i" command.

To "rm" file names containing spaces or tabs, use wild-card characters
yourself or delimit the spaces with quotation marks.

Sample Session:

(Note, \010 is the backspace character code.  Sometimes, when the
terminal or telnet session is misconfigured, backspacing can seem to
delete characters but actually leaves them in place, introducing the
backspace code itself into the typed text. This is actually pretty
common.)

  chilkoot% ls -l
  total 32
  -rw-------   1 testacct tstgroup    2168 Oct 21 16:24 myfile
  -rw-------   1 testacct tstgroup       0 Oct 21 16:27 myfile  
  -rw-------   1 testacct tstgroup       0 Oct 21 16:27 myfile
  chilkoot% ls -lb
  total 32
  -rw-------   1 testacct tstgroup    2168 Oct 21 16:24 myfile
  -rw-------   1 testacct tstgroup       0 Oct 21 16:27 myfile  
  -rw-------   1 testacct tstgroup       0 Oct 21 16:27 myvile\010\010\010\010file
  chilkoot% ls -lq
  total 32
  -rw-------   1 testacct tstgroup    2168 Oct 21 16:24 myfile
  -rw-------   1 testacct tstgroup       0 Oct 21 16:27 myfile  
  -rw-------   1 testacct tstgroup       0 Oct 21 16:27 myvile????file
  chilkoot% rm myvile????file
  chilkoot% ls | od -c
  0000000000000   m   y   f   i   l   e  \n   m   y   f   i   l   e          \n
  chilkoot% rm -i "myfile  "
  cmd-3187 rm: remove myfile  ? y
  chilkoot% ls -l
  total 32
  -rw-------   1 testacct tstgroup    2168 Oct 21 16:24 myfile

Expert Method:

  Use "ls -i" and "find" to delete the file by inode number, but be
  careful. A goof with "find ... -exec rm" could easily wipe out all 
  your files.  Sample session:

    chilkoot% ls -1 -ib
          3020 myfile
          5385 myvile\010\010\010\010file
    chilkoot% find . -inum 5385 -exec rm -i {} ";"
    cmd-3187 rm: remove ./myfile? yes
    chilkoot% ls -1 -ib
          3020 myfile

Q: I want "find" to work on files in the current directory only.
   How can I prevent it from descending recursively into
   subdirectories?

[ Answers, questions, and tips graciously accepted. ]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions: Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.