ARSC T3D Users' Newsletter 50, September 1, 1995

PATP Meeting at JPL

Below is the final schedule for the PATP conference held last week. JPL is binding each presenter's material for each attendee, and the bound material will be available in the next few weeks. I will pass along some of this information when it is available.

PATP Scientific Conference
Building 167 Conference Room
Jet Propulsion Laboratory
Pasadena, California

Thursday, August 24, 1995

Opening Remarks
8:30: Welcome to JPL - Ed Stone, Director of JPL
8:45: Welcome to the PATP Conference - Carl Kukkonen, JPL
Caltech/JPL
9:00: Visualization of Earth & Planetary Data - Dave Curkendall, JPL
9:30: Ocean Modeling on the Cray T3D - Yi Chao, JPL
10:00: High Performance CFD Applications - Steve Taylor, Caltech
10:30: break
Lawrence Livermore National Laboratory (LLNL)
11:00: The High Performance Parallel Processing Project and MPP Access Program - Alice Koniges, LLNL
11:30: Ab Initio Material Simulations for Massively Parallel Environment - Lin Yang, LLNL
12:00: Structures and Acoustics - Rich Procassini, LLNL
12:30: lunch
Los Alamos National Laboratory (LANL)
1:30: High Performance Parallel Processor Project at LANL - Bruce Wienke, LANL
2:00: Deterministic Neutral Particle Calculations for Well Logging on the T3D - Randy Baker, LANL
2:30: Oil Reservoir Models - Olaf Lubeck, LANL
3:00: break
Swiss Federal Institute of Technology Lausanne (EPFL)
3:30: Modeling Materials with Ab Initio Molecular Dynamics - Roberto Car, EPFL
4:00: Parallel CFD on the Cray T3D: Programming Models, Methods and Applications - Mark Sawley, EPFL
4:30: Parallel Implementation and Interactive Optimization of Video Data Compression Techniques - T. Ebrahimi, EPFL
5:00: Kinetic Modeling of Fusion Relevant Plasmas - Kurt Appert, EPFL
6:00: reception - Ritz Carlton Hotel, Plaza Room

Friday, August 25, 1995

Cray Research, Inc.
8:30: System Architectural Frontiers - Steve Nelson, CRI (Consultant)
Pittsburgh Supercomputing Center (PSC)
9:30: Science on the CRI T3D System: An Overview - Sergiu Sanielevici, PSC
10:30: break
North Carolina State University (NCSU)
11:00: Ab Initio Simulations of Advanced Materials - Jerry Bernholc, NCSU
T3D Research Centers
11:30: Arctic Region Supercomputing Center - Michael Ess, ARSC, Univ. of Alaska at Fairbanks
12:00: Edinburgh Parallel Computing Center - Arthur Trew, EPCC, Univ. of Edinburgh, Scotland

CRI/EPCC MPI for T3D Release 1.2a

There has been a new release of the CRI/EPCC MPI library from the Edinburgh Parallel Computing Center. This release includes several bug fixes, enhancements, and closer conformance with the MPI standard. The changes from the 1.1a release are described in the html files below. I have installed the following files of the 1.2a release on denali:

  /usr/local/examples/mpp/mpi/include/mpi.h
  /usr/local/examples/mpp/mpi/include/mpif.h
  /usr/local/examples/mpp/mpi/lib/libmpi.a
  /usr/local/examples/mpp/mpi/user.ps
  /usr/local/examples/mpp/mpi/bugs.html
  /usr/local/examples/mpp/mpi/news.html
Currently the 1.1a files are in the default locations:

  /usr/include/mpp/mpi.h
  /usr/include/mpp/mpif.h
  /mpp/lib/libmpi.a
In the future, we will replace the 1.1a files with the 1.2a files, and that change will be reported in this newsletter. If you encounter a problem with this new release, please contact Mike Ess.
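
Until then, anyone wanting to try the 1.2a release must point the compiler at the alternate copies explicitly. As a rough sketch (the exact compile line depends on the compiler and makefile in use), one would add /usr/local/examples/mpp/mpi/include to the include search path, typically with a -I option, so that the new mpi.h or mpif.h is picked up, and link /usr/local/examples/mpp/mpi/lib/libmpi.a explicitly instead of the default /mpp/lib/libmpi.a. The small Fortran program below, which is illustrative only and not part of the release, is enough to check that a build against the 1.2a files runs:

c     Minimal MPI test program (a sketch, not part of the release).
c     Each PE reports its rank; run it to confirm that the mpif.h
c     and libmpi.a picked up at compile/link time work together.
      program mpihi
      include 'mpif.h'
      integer ierr, mype, npes
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, mype, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, npes, ierr)
      print *, 'Hello from PE ', mype, ' of ', npes
      call MPI_FINALIZE(ierr)
      end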

Additions and Changes to BENCHLIB

Brent Swartz of the Environmental Protection Agency at the National Environmental Supercomputing Center in Bay City, Michigan, passes on this further information about BENCHLIB:

  > I have ftped BENCHLIB from your site, and thought you might be interested
  > in knowing that the flipper perl script is missing from BENCHLIB (even
  > though Jeff's paper indicates it's in BENCHLIB).  I have obtained it from
  > Jeff so if you'd like it, let me know.  And if you're creating a new tar
  > file, you might as well fix the other problems encountered (you must have
  > seen these too!)
  > 
  > 1.  The makefiles reference asm, not cam, which is not how the assembler is
  >     currently installed.  So all asm references should be changed to cam.
  > 
  > 2.  util's mpp_annex.s contains a bug, which (I believe - it compiles
  >     anyway!) can be corrected by replacing the .psect mpp_annex as shown here:
  > 
  > sequoia% diff mpp_annex.s{,.orig}
  > 13c13
  > <       .psect  mpp_annex_code,cache,code
  > ---
  > >       .psect  mpp_annex,cache,code
  > 
  > Also, it may be worth mentioning to your users that the scalar_fastmath
  > routine's results may differ from the libm routine's results by a maximum
  > of 2 ULPs (i.e. the 2 least significant bits may differ).  The libm
  > routines differ from "reality" by at most 1/2 ULP, and my (non-extensive)
  > tests of the vect_fastmath routines indicate they differ from the libm
  > routines by at most 1 ULP (better precision than scalar_fastmath).
  > 
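
For readers unfamiliar with the term, an ULP is a "unit in the last place", the spacing between adjacent floating-point numbers at a given magnitude. The fragment below is a rough sketch, not part of BENCHLIB, of one way to count the ULPs separating two results, say a libm value and the corresponding fastmath value, by comparing their bit patterns; it assumes both values are positive and that the default REAL and INTEGER occupy the same number of bits.

c     Rough sketch: count the ULPs separating two positive REAL
c     values (e.g. a libm result and a fastmath result) by
c     comparing their bit patterns as integers.  Uses the
c     Fortran 90 TRANSFER intrinsic.
      integer function ulps(a, b)
      real a, b
      integer ia, ib
      ia = transfer(a, ia)
      ib = transfer(b, ib)
      ulps = abs(ia - ib)
      end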
I have remade the tar file on the ARSC ftp server to include these changes and rebuilt the libraries on denali in /usr/local/examples/mpp/lib. Some of the files available on the ARSC ftp server are:

  /pub/submissions/libbnch.tar.Z    - An updated version of benchlib
  /pub/submissions/flipper          - Jeff Brooks' perl script (from Brent Swartz)
  /pub/submissions/t3d_opt.ps.Z     - Jeff Brooks' paper on T3D optimization
  /pub/submissions/cug_slides.ps.Z  - Slides for Jeff Brooks' talk on optimization
  /pub/submissions/ieee.ps.Z        - A description of converting from Cray format
                                      to IEEE format

The 1.2.2 Release of the Programming Environment

One of our users, Dr. Alan Wallcraft, a scientist at the Naval Research Laboratory at Stennis Space Center, Mississippi, sent in this note about Fortran 90 in the new 1.2.2 PE:

  > FYI, f90new on the T3D supports loop unrolling (e.g. -O scalar3,unroll2) and
  > this can make a dramatic difference.  Previously a single-node REAL*4 ocean
  > model benchmark ran at 16.5 Mflops, but with automatic unrolling it runs at
  > 26.6 Mflops.  In REAL*8, f90 is now only slightly slower than cf77 (15.2
  > Mflops, vs 16.0 Mflops).  The remaining difference is probably because cf77
  > used -Wf"-o noieeedivide,unroll2" and noieeedivide is not an f90 option.
  > 
  > A DEC Alpha 4000/710 is clocked at 190 MHz (vs the T3D's 150 MHz), with a
  > large L2 cache, and it runs this benchmark at 46.7 Mflops in 32-bit and
  > 26.1 Mflops in 64-bit.  Thus the DEC workstation is now about 30-40% faster
  > (allowing for clock speeds), which may entirely be due to the larger cache
  > (i.e. Cray's compilers are now as good or better than DEC's).
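
The unrolling option applies to ordinary single-node loops; the fragment below is an illustrative kernel, not Dr. Wallcraft's benchmark, of the kind that automatic unrolling can speed up. Compiling it once with the default options and once with the -O scalar3,unroll2 options quoted above, and timing both, is a quick way to see whether a given code will benefit.

c     Illustrative REAL*4 kernel of the kind that automatic loop
c     unrolling (the -O scalar3,unroll2 options quoted above) can
c     speed up on a single T3D PE.
      subroutine saxpyk(n, a, x, y)
      integer n, i
      real*4 a, x(n), y(n)
      do 10 i = 1, n
         y(i) = y(i) + a * x(i)
 10   continue
      return
      end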

New Libsci Versions

We have updated the original version of libsci.a in the 1.2.2.0 PE to the version in the 1.2.2.2 PE. This new version has corrections to the routines SLARFG and CLARFG in CRI's implementation of LAPACK.
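
SLARFG generates an elementary Householder reflector and is normally called internally by the LAPACK factorization routines (SGEQRF and friends), so most codes pick up the correction without any source changes. For anyone who calls it directly, the calling sequence is sketched below with made-up data:

c     Sketch of a direct call to SLARFG with illustrative data.
c     On entry alpha and x hold the vector to be reflected; on
c     exit alpha holds beta, x holds the reflector vector v, and
c     tau holds the scalar factor of the reflector.
      program tslarf
      integer n
      parameter (n = 4)
      real alpha, tau, x(n-1)
      data x / 3.0, 4.0, 5.0 /
      alpha = 2.0
      call slarfg(n, alpha, x, 1, tau)
      print *, 'beta =', alpha, '  tau =', tau
      end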

List of Differences Between T3D and Y-MP

The current list of differences between the T3D and the Y-MP is:
  1. Data type sizes are not the same (Newsletter #5)
  2. Uninitialized variables are different (Newsletter #6)
  3. The effect of the -a static compiler switch (Newsletter #7)
  4. There is no GETENV on the T3D (Newsletter #8)
  5. Missing routine SMACH on T3D (Newsletter #9)
  6. Different Arithmetics (Newsletter #9)
  7. Different clock granularities for gettimeofday (Newsletter #11)
  8. Restrictions on record length for direct I/O files (Newsletter #19)
  9. Implied DO loop is not "vectorized" on the T3D (Newsletter #20)
  10. Missing Linpack and Eispack routines in libsci (Newsletter #25)
  11. F90 manual for Y-MP, no manual for T3D (Newsletter #31)
  12. RANF() and its manpage differ between machines (Newsletter #37)
  13. CRAY2IEG is available only on the Y-MP (Newsletter #40)
  14. Missing sort routines on the T3D (Newsletter #41)
I encourage users to e-mail in differences that they have found, so we all can benefit from each other's experience.
Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.