ARSC T3E Users' Newsletter 150, September 4, 1998

Sesquicentennial T3D/T3E Newsletter

Sometimes you have to blow your own horn. This is the 150th issue of this newsletter!

We currently have 330 subscribers; about 260 are non-ARSC, about 230 are outside of Alaska, and about 50 are outside of the U.S. (primarily Europe, the U.K., Scandinavia, and Japan).

Here are some highlights from previous issues:


#1 (Aug. 25, 1994)

  Mike Ess sends T3D Newsletter #1:
  
  Lead article:
    "...the latest release of the T3D software (MAX 1.1.0.1) was
    installed on the T3D at ARSC..."



#17 (Jan. 6, 1995)

  From an article in The Wall Street Journal: 
  
      "... a 256-processor CRAY T3D massively parallel processing
      system and an interim four-processor CRAY Y-MP supercomputer
      system are scheduled to be installed this month at Los Alamos. In
      1995, Los Alamos expects to upgrade the CRAY T3D system to 512
      processors..."



#27 (Mar 17, 1995)

  Report on the Denver CUG:
  
    "It was a tremendous meeting for T3D users as there was a T3D
    conference running concurrently with the CUG meeting. ...there were
    over 27 hours of presentations about the T3D."



#53 (Sept 22, 1995)

  Anticipating the Alaska CUG:

    "There will be a real air cooled T3E machine with a 4 processor
    module installed, on display at the CUG meeting in Alaska."



#68 (Jan 5, 1996)

  Accessing 128 PEs at ARSC:

    "It is not always possible to access 128 PEs in a predictable way
    now that the ARSC T3D is more active... "



#83 (Apr 19, 1996)

  T3E!

    Eagan, Minn. -- The first CRAY T3E scalable parallel system has
    been installed at the Pittsburgh Supercomputing Center (PSC) and is
    already running parallel applications, Cray Research announced
    earlier this week.



#89 (May 31, 1996)

  Farewell to Mike Ess... ARSC seeks MPP specialist... Tom Baring's
  first issue.



#101 (Aug 23, 1996)

  The first "Quick-Tip Q&A":

     Q: How can you delete a file named "-i" ??? 

  

#115 (Mar 7, 1997)

  ARSC welcomes Guy Robinson. The newsletter returns after a 3-month
  vacation and becomes bi-weekly (thank goodness!), with Guy and Tom as
  co-editors, and the big news:

    "ARSC's 88PE, liquid cooled, 128Mb/PE, 300Mhz CRAY T3E has been
    plumbed, electrified, and named. It is:  yukon.arsc.edu. UNICOS/mk
    1.4.1 will be installed next week, and user test codes will start
    running soon after that."

CS Students Study Parallel Programming

Last spring semester, 13 students in the University of Alaska's computer architecture course, CS341/541, were assigned semester projects (worth 30% of their grade) on ARSC's T3E. The class was taught by Professor Peter Knoke, with guest lectures by Guy Robinson of ARSC. Such courses are of benefit to both the students and the entire University research community, as they help train future programmers to make good use of local resources.

The project was to:

  • develop a parallel program for the T3E using MPI
  • measure speedup for that program, and
  • explain the speedup results in computer and software architecture terms.

Most of the programs were developed by students with no prior experience with either the T3E or MPI.

Here are quick descriptions of the projects:

A parallel implementation of a ray-tracing program.

Dart board technique used to estimate PI. Speedup of about 45 with 50 processors. (A rough sketch of this approach appears after this list.)

Exploration of a factoring algorithm (Lenstra) using the T3E. Speedup of about 12 with 24 PEs, but with scaling problems.

Estimating PI by integral approximations. Speedup of 7.5 with 8 PEs.

Integration with the Riemann sum approximation. Speedup of about 11 with 11 PEs. Very good discussion of results.

Pythagorean Cubic problem (x^3 + y^3 = z^3, a search for integer x, y, and z). Speedup of 82 with 90 processors.

Port to CRAY T3E of a basic ray tracer for spheres, with modifications. Speedup of about 8 with 8 PEs.

Parallel programming used to test whether the distribution of numbers produced by a typical random number generator was uniform. The absolute speedup was 13 with 30 PEs. Incremental speedup was also defined and measured.

This project was related to the parallelization of a large-scale computational model of the ionosphere, the UAF Eulerian Polar Ionosphere Model (EPIM), authored by Dr. Maurits of ARSC. The project led to a speedup of 11 with 41 PEs.

The program developed calculates one of three different integrals using the trapezoidal approximation. A speedup of 20 with 20 PEs was achieved.

The project was to code a parallel prime number sieve using the Sieve of Eratosthenes. This program showed almost no speedup with 8 PEs.

The project was to estimate PI using a simulation. A speedup of 49 with 50 PEs was obtained.

A large weather simulation model called "Large Eddy Simulation" was modified to permit parallel processing. A speedup of 36 with 64 PEs was obtained.
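
None of the student codes are reproduced here, but for readers curious
about the dart-board approach mentioned above, here is a minimal,
illustrative MPI sketch in C. It is not one of the class programs (the
file name, dart count, and random-number calls are our own choices); it
throws darts at the unit square on each PE, sums the hits with
MPI_Reduce, and times the run with MPI_Wtime so that a speedup figure
can be computed.

  pi_darts.c  (illustrative sketch only)
  --------------------------------------
  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      long   ndarts = 10000000L;     /* total darts, shared among PEs  */
      long   my_darts, i, hits = 0, total_hits = 0;
      int    rank, npes;
      double x, y, t0, t1, pi;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &npes);

      my_darts = ndarts / npes;      /* each PE throws its own share   */
      srand((unsigned) (12345 + rank));  /* simple per-PE random seed  */
      t0 = MPI_Wtime();

      /* Throw darts at the unit square; count those landing inside
         the quarter circle of radius 1.                               */
      for (i = 0; i < my_darts; i++) {
          x = (double) rand() / RAND_MAX;
          y = (double) rand() / RAND_MAX;
          if (x * x + y * y <= 1.0)
              hits++;
      }

      /* Combine the hit counts from all PEs onto PE 0. */
      MPI_Reduce(&hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0,
                 MPI_COMM_WORLD);
      t1 = MPI_Wtime();

      if (rank == 0) {
          pi = 4.0 * (double) total_hits / ((double) my_darts * npes);
          printf("PEs: %d   PI estimate: %f   elapsed: %f s\n",
                 npes, pi, t1 - t0);
      }

      MPI_Finalize();
      return 0;
  }

Comparing the elapsed time of a 1-PE run with that of an N-PE run over
the same total number of darts gives the kind of speedup figure quoted
in the project summaries.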

Fortran... Fortran... Fortran...

Great sources of information about the latest developments in Fortran are the comp-fortran-90 mailing list for Fortran 90 and the Rice University mailing list for High Performance Fortran.

For instance, a recent mailing by Michael Metcalf to both lists presents a useful summary of the status of the Fortran world. Here are some highlights:


WHERE CAN I OBTAIN A FORTRAN 90 COMPILER?
  23 compilers were described, and instructions were provided
  on obtaining them.

OTHER USEFUL PRODUCTS
  16 products were similarly described.

WHAT BOOKS ARE AVAILABLE?
  59 titles were listed, some with a paragraph description.  The
  language breakdown was:

    English:  32 
    Chinese:  1
    Dutch:    1
    Finnish:  1
    French:   10
    German:   10
    Japanese: 1
    Russian:  2
    Swedish:  1

WHERE CAN I OBTAIN COURSES, COURSE MATERIAL OR CONSULTANCY?
  19 References total
    13 U.S.
    5  Europe
    1  Japan

WHERE CAN I FIND THE FORTRAN AND HPF STANDARDS?
  6 Locations

To join any Mailbase list, see:       http://www.mailbase.ac.uk/

Or, for the high performance Fortran list, follow these instructions:

(hpff@cs.rice.edu is a mailing list for announcements related to High Performance Fortran.)

To (un)subscribe to this list, send mail to hpff-request@cs.rice.edu. Leave the subject line blank, and in the body put the line:  (un)subscribe <email-address>

CUG Origin2000 Workshop


> MEETING ANNOUNCEMENT - CUG Origin2000 Workshop
> 
>          October 11-13, 1998, Denver, Colorado
>                 (check www.cug.org)
> 
> Where and When
> --------------------
> The second Cray User Group Origin2000 Workshop will be held in Denver,
> Colorado beginning on Sunday, October 11 and ending at noon on Tuesday,
> October 13, 1998. If you went to the meeting last Fall, you will not want
> to miss this one. If you have any interest in the Origin2000 at all, you
> need to attend this meeting.
> 
> Registration
> --------------
> The registration form will be available on line at www.cug.org
> 
> Conference Program
> ------------------------
> The meeting will be one track, running two and one half days, high in
> technical content, with much more customer content than our last
> Origin2000 workshop, and will include tutorials each morning.
> 
> For Assistance, Contact
> ---------------------------
> Gary Jensen,
> Local Arrangements Chair
> National Center for Supercomputing Applications
> 5598 Colt Dr.
> Longmont, Colorado 80503 USA
> (1-503) 530-0354
> guido@ncsa.uiuc.edu
> 
> OR
> 
> Bob Winget, Office Manager
> Cray User Group Office
> 2911 Knoll Road
> Shepherdstown, WV 25443 USA
> (1-304) 263-1756
> bwinget@pobox.com
> 

Quick-Tip Q & A


A: {{ ARSC permits the "chaining" of NQS jobs, as long as the 
      new job goes to the end of the queues. This can increase system
      utilization.

      Put another way, at the end of your qsub script, you may include
      a qsub call which submits your next job--provided that jobs in
      all other queues, including lower priority queues, get a chance
      to run first.

      What is a safe method to implement such chaining which is fair to
      other users? }}


First off, we strongly recommend against self-resubmitting, or
recursive, qsub scripts.  In the past we've seen these get into
infinite recursive loops, where the job resubmitted itself over and
over, filled the system logs and bogged things down until someone
noticed and killed the job.

A safe approach is to chain a small number of discrete jobs together.
For instance,

  job_A.script does its normal work, then calls job_B.script
  job_B.script does its normal work, then calls job_C.script
  job_C.script does its normal work, then calls job_D.script
  job_D.script does its normal work, and terminates.


Although this is safe, it can be unfair to other users.  If the jobs
all ran in a high-priority queue, for instance, no other queues would 
have a chance.  

One solution is shown below.  The "qalter -l mpp_p=0" command releases
the job's PEs.  But, unfortunately, "qalter" doesn't trigger NQS to
reevaluate the queues and attempt to use the released PEs.

Thus, the script itself submits a null job, which does trigger a
rescan.  The job then sleeps for 20 seconds, plenty of time for NQS to
notice that PEs are available and start any waiting job that might
fit. It's then okay for the job to submit the next "real" job in 
the chain. 


  job_A.script
  ------------------
  [... normal qsub options and script for job_A...]
  
  # This appears at the very end of the script
  qalter -l mpp_p=0                # Release this job's PEs
  qsub do_nothing.script           # A new submission forces NQS rescan
  sleep 20                         # Let waiting jobs start
  qsub job_B.script                # Submit next job in chain 


Here's the null job.


  do_nothing.script
  ------------------
  #QSUB -q mpp                     # This will run in the single queue

  echo "This script did nothing, and ran at: "  
  date
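
To start the chain, submit only the first script from the command
line; each job in turn submits the next, as shown above:

  yukon$ qsub job_A.script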



Q: The Portland Group High Performance Fortran compiler, pghpf, exists
   on this T3E...  but when I run "man pghpf" I get this:

      yukon$ man pghpf          
      cmd-2494 man: No manual entry for pghpf.

   Isn't there a man page?  What's the story?

[ Answers, questions, and tips graciously accepted. ]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.