ARSC HPC Users' Newsletter 298, August 25, 2004

10th Anniversary Issue (To The Day)

Yep, it's true.

On August 25, 1994, Mike Ess, former ARSC MPP Specialist, wrote and delivered "The ARSC T3D Users' Group Newsletter", Issue #1.

The scope of the newsletter is wider today but the goal is still rapid sharing of information and experiences between users of HPC systems.

We have almost 1,000 subscribers to the Email Edition, only about two-thirds of whom are ARSC staff or users. We have subscribers in Australia, Denmark, France, Germany, India, Italy, Japan, the Netherlands, New Zealand, Norway, Poland, South Korea, Spain, Sweden, the U.K., and across the U.S.

Over the last two months, and excluding all hits from the arsc domain and web crawlers, the Web Edition received an average of 81 hits from 23 unique IP addresses per day.

Thanks to everyone for contributing and reading!

Tenth Anniversary Comment, from Frank Williams

[ Thanks to Frank Williams, the Director of ARSC. ]

Recognizing major milestones such as this 10th anniversary of the ARSC HPC Newsletter is an important way to show appreciation for those who made it possible. In this case, the work of Tom Baring and Guy Robinson is noteworthy for carrying a vast amount of information about ARSC and efficient use of HPC resources into the newsletter for those in the ARSC and wider HPC community to read and use.

However, I'm especially thankful to the users of our systems who give us the reason for publishing the articles and for sustaining the center. Your research continually brings excitement to ARSC through the wonderful new knowledge you are generating and the challenges that running your codes optimally puts on us.

Years of High Performance Scientific Computing

[ Thanks to Richard Barrett, ARSC Research Liaison. ]

The 90's began with a flurry of activity in the world of supercomputer architecture and programming models, presenting the scientist with a dizzying array of choices: SIMD, MIMD, SPMD; CM2(00), CM5, Paragon, T3D, KSR; and for a few (aka Seymour's target audience), shared memory vector processors (in the US, synonymous with "Cray") remained viable.

As the scientific-computing-oriented companies failed to make profits, we saw the last of the true supercomputers, replaced by a fast-moving trend toward commodity-based clusters. This trend was greatly encouraged by the freely available PVM software, which would turn any collection of (Unix-based) workstations into a robust distributed memory parallel processing platform. In addition to being the heart of large-scale clusters, PVM turned student computer labs around the world into inexpensive training grounds for a new generation of scientists.

Perhaps it was a relatively smooth transition after all, with the CM-5 augmenting its custom vector processors, network, and programming environment with Sparc processors, and the T3D putting Alpha processors at the heart of its custom everything else. Some of the ideas from these systems were incorporated into standard high performance computing: CM Fortran was an inspiration for Fortran 90 and HPF, and T3D shmem (puts and gets across shared distributed memory) is still with us and begat MPI-2 one-sided communication. (Unfortunately, the tools area did not transition as systematically, which in my opinion accounts for a large portion of the programmability and usability complaints these days.)

We also discovered (the hard way) what happens when a non-standard paradigm (PVM) is adopted as the message passing method on a supercomputer (the T3D). Luckily, the MPI specification was quickly defined and adopted, enabled by a well-written and supported prototypical implementation, MPICH.

Sandia tried to keep true MPP architectures alive with ASCI Red, an Intel system of roughly 4,500 two-way Pentium processor nodes (hey, isn't this a cluster?) and the first machine to break the teraflop barrier on the Top 500 list. Although the machine was a success, Intel disbanded its supercomputing applications division, and in the US at least, it was cluster everything. At the high end, ASCI Blue Mountain hit the top of the heap with 48 128-way SGI Origin 2000s, connected with a custom HiPPI network and supported by a variety of third-party vendor software and firmware. ASCI Blue Pacific, then White, went online, composed of IBM commodity compute nodes. Around the same time, smaller versions of such architectures were constructed at universities and computing centers around the country. Many variations on the theme of out-of-the-box parts making up Beowulf-like clusters enabled a broad range of computationally intensive scientific pursuits.

While shared memory vector machines continued to be the workhorse for many, in the United States the embargo on Japanese vector computers and Cray's lapse in that market meant that the cluster would shoulder an increasing burden. This provided incentive to the computer science world as it struggled to make clusters more usable.

The convenient high performance shared memory pool was replaced by the global address space, which abstracted the concept of loads and stores in an attempt to simplify increasingly complex distributed memory hierarchies. This did not, however, free the programmer (or user!) from issues related to non-uniform memory access latencies.

These clusters continue to provide excellent environments for getting work done. However, they are (not surprisingly) not appropriate for all types of applications. So it's back to the future with the re-emergence of vector processor-based machines. (With immense gratitude to the developers of the Earth Simulator!) But this time they too are a variation on the existing theme: rather than the shared memory of their predecessors, their multi-processors operate on a global address space built on distributed, hierarchical memory. Further, like the T3D/E, the Cray X1 provides a non-portable programming language (Co-Array Fortran) to support specialized hardware, with the option of portable programming models involving standard languages and MPI. (A portable analogue to Co-Array Fortran, the C language extension UPC, is emerging.)

So over the past 10 years our world has gotten far more complex, though perhaps less chaotic. The breadth of choice has led us to question the wisdom of relying on a single benchmark (Linpack), but it also makes life more complicated, since the burden is now on the individual to determine the applicable metric(s). (Ironic, since the Linpack benchmark was exactly that to its accidental developers, who were investigating dense linear algebra algorithms on high performance computers!) ARSC is only now phasing out its flagship workhorse, the seven-year-old T3E (that's more than a lifetime in human years), and is presenting the user with an even greater breadth of options worth investigating. IBM has invested considerable time and effort into cluster technologies, resulting in our 800-processor machine iceberg, and, as discussed above, Cray is back in the vector processing business and continues on with ARSC in the form of klondike, a 128-node X1. An HP Linux-based cluster is coming on line, perhaps with Grid technology capabilities.

Let's do some science!

--

(This note is one person's view of the world he entered in 1990. Other views, especially from those with experiences prior to then, are encouraged.)

Richard Barrett ARSC Research Liaison richard.barrett@arsc.edu

Giga-FLOPS to Tera-FLOPS in 10 Years

Here are some ARSC milestones to flesh out the "historical" perspective. When available, I've given the relevant Newsletter number. The Web Edition is at:

issue 164.

Keeping Values Consistent Across Batch Script Layers

[ Thanks to Ed Kornkven of ARSC for sharing his solution to a common need. This article is geared toward PBS, but the concept would work for LoadLeveler scripts as well. If you have another solution, please send it in! ]

In a typical batch script it is necessary to specify, in more than one place, the number of processors to be used by the job. As with any code that needs to be maintained, these multiple references to a single value are a potential source of bugs when some, but not all, of the references are changed.

The natural solution to the problem, creating a variable which is defined once and referenced everywhere the original value is needed, won't work in a PBS script because the statement that would initialize the variable cannot occur before the PBS directives, yet we would like to use its value in one of those directives.
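
For illustration, here is a sketch of the approach that fails (not working code from the article): the assignment is an executable statement, so qsub typically stops scanning for directives at that point, and shell variables are not expanded inside PBS directives in any case.

  #!/bin/ksh
  ssps=4                      # executable statement: qsub stops scanning
                              # for directives here
  #PBS -q default             # ...so these lines are treated as ordinary
  #PBS -l mppssp=$ssps        # comments by PBS, and $ssps would not have
  #PBS -l walltime=1:00:00    # been expanded in a directive anyway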

The following is one solution to the problem. It is a ksh script for generating the PBS script. It accepts one command line argument, the number of SSPs to use to run the job. It does minimal error checking but could easily be adapted to MSPs (change the PBS directive from "mppssp" to "mppe").

Usage: simply insert your PBS script between the "cat" command and the "EOF" marker at the end of the ksh script, keeping in mind that:

  1. Variables will be expanded into their values (e.g., the $ssps variable);
  2. If you want a variable name to be copied verbatim, add a "\" before the variable name (see $PBS_O_WORKDIR for an example).

#!/bin/ksh
######################################################################
# E. Kornkven, ARSC

#
# Check for presence of an argument
if [[ "$1" = "" ]]
then
    print "Usage: $0 <number of SSPs>"
    exit 1
else
    ssps=$1
fi

#####
#####    Fill in PBS script after "cat" command    #####
#####

cat << EOF > JOB_$ssps.pbs
#!/bin/ksh

# Queue to use
#PBS -q default
#
# Join stderr and stdout into one output file
#PBS -j oe
#
# Specify number of Processors (SSPs)
#PBS -l mppssp=$ssps
#
# Wall-clock time limit
#PBS -l walltime=1:00:00
#

cd \$PBS_O_WORKDIR
pwd

export TRACEBK=16

date
pat_hwpc -f mpirun -np $ssps ./a.out < input.txt
date

EOF

#####
##### "EOF" in the ksh script marks end 
##### of the generated PBS script
#####
######################################################################
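
To use the generator, save it to a file (the name make_job.ksh below is just for illustration), make it executable, run it with the desired number of SSPs, and submit the PBS script it writes:

  chmod +x make_job.ksh
  ./make_job.ksh 8          # writes JOB_8.pbs containing "#PBS -l mppssp=8"
  qsub JOB_8.pbs            # submit the generated script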

Reminder: Fall Science Seminars, Sept 8-10

As announced in the last issue, ARSC is sponsoring a series of seminars by senior scientists, September 8-10. Schedule available at:

http://www.arsc.edu/news/fallseminars04.html

Quick-Tip Q & A


A:[[ Uh oh... I just deleted some files that I don't even own!  How'd that
  [[ happen?   Why'd it let me do that!  (Oh boy... I think I'm in
  [[ trouble.)
  [[
  [[   % ls -l 
  [[   total 2048
  [[   -rw-------    1 fred     puffball       30 Aug 11 16:57 file.junk
  [[   -rw-------    1 horace   heatrash 21922042 Mar 18 10:01 file.priceless
  [[   -rw-------    1 fred     puffball       30 Aug 11 16:57 file2.junk
  [[   -rw-------    1 horace   heatrash  4808440 Mar 19 11:21 file2.priceless
  [[   % 
  [[   % rm -f file*
  [[   % ls -l
  [[   total 0
  [[   % 


  #
  # Quick Answer: 
  #

  Here's the rest of the story:

    % ls -ld . 
    drwxrwx---    2 bob      somegrp           19 Aug 11 16:57 .
    % grep somegrp /etc/group
    somegrp:*:6826:bob,fred,horace


  #
  # And a real explanation from Greg Newby:
  #

  The "-f" in rm says not to prompt for confirmation if anything out of
  the ordinary happens.  Removing the "-f" probably would have given you
  a prompt like this for the files you don't own:

        rm: remove write-protected regular file `file.junk'? 

  If, in fact, you remove the files (versus, say, they coincidentally
  get deleted by someone else), the only explanation I can think of is
  that you had write access to the *directory* containing the files.

  What deletion really means, in Unix terms, is removing an entry from
  the containing directory.  When a file has no more directories that
  contain it, the reference count goes to zero (it's "1" in your lines
  above - the second column in "ls -l", though different systems provide
  slightly different output).  When the reference count goes to zero,
  the operating system knows that the bytes of a file are available for
  reuse.
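
  For instance, here's a small, hypothetical demonstration of that
  reference count using hard links (exact "ls -l" formatting varies by
  system):

    % touch foo
    % ln foo bar        # a second directory entry for the same data
    % ls -l foo bar     # the link count (second column) is now 2 for both
    % rm foo            # removes one entry; the data is still reachable as bar
    % rm bar            # the count drops to zero and the space can be reused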

  This might explain what happened, but not why someone else's files
  were in your directory.  (Directories like /scratch and /tmp have a
  different enforcement policy, the "sticky bit": even though anyone
  can create files there, you can generally remove only the files you
  own.)

  One scenario would be for you to create a directory and "chmod 777"
  the directory (something that is against ARSC policy and not a good
  idea).  Someone else could then create files in your directory, but
  you would be able to remove them.  Another scenario is that you were
  logged in as root or a similarly privileged user, which can override
  permissions and ownerships.  But if you were, hopefully you wouldn't
  need to ask this question...
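
  Tying the explanation back to the directory listing in the Quick
  Answer, the deletion would have gone roughly like this:

    % ls -ld .                   # fred belongs to somegrp, which can write here
    drwxrwx---    2 bob      somegrp           19 Aug 11 16:57 .
    % rm -f file.priceless       # no prompt (because of -f) and no error;
    %                            # rm only had to remove the directory entry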




Q: This paraphrases a statement issued in about 1959.  What's the 
   actual quote?  Who said it?  (And, full disclosure, we don't have the
   answer...):

     "Coding for parallel processors will put the intellectual 
     challenge back into computer programming."

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.