ARSC T3E Users' Newsletter 179, October 8, 1999

SC99: Last Chance for Advance Registration is Today

From:

http://www.SC99.org/

"To qualify for advance registration discounts, your registration form and payment must be received by 5:00pm Eastern Time, Friday, October 8, 1999.

co-array.org Update

The Co-Array Fortran web site,

http://www.co-array.org/

has expanded a bit since we last referenced it, and is worth a visit. A few items to whet your appetite:
  • A short introduction to Co-Array Fortran
  • A Subset Co-Array Fortran to OpenMP Fortran translator
  • Papers:
    • Writing a Multigrid Solver Using Co-Array Fortran
    • A Parallel 'Class Library' for Co-Array Fortran
    • SPMD OpenMP vs MPI for Ocean Models (uses the CAF2OMP translator)

Be sure to let co-array.org know about work or research you're conducting using Co-Array Fortran.
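
If you haven't seen co-arrays before, here is a minimal sketch of our own
(not taken from the site) showing the basic ideas; the program and variable
names are hypothetical, and the papers above give real examples:

      program caf_sketch
      implicit none
      real    :: x[*]            ! one copy of x exists on every image
      integer :: me, np

      me = this_image()          ! index of this image, 1..num_images()
      np = num_images()          ! total number of images running
      x  = real(me)              ! each image sets its own local copy

      call sync_all()            ! barrier before reading remote copies

      if (me == 1) then          ! image 1 reads the last image's copy
         print *, 'x on image', np, '=', x[np]
      end if

      end program caf_sketch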

Designing and Building Parallel Programs (Online)

If you want to learn more about parallel programming, visit the following link (found at www.co-array.org):

http://www-unix.mcs.anl.gov/dbpp/

You'll find references, programming tools, tutorials, and an online version of Ian Foster's book, "Designing and Building Parallel Programs" (recommended reading in Newsletter #121).

From the introduction:

"Designing and Building Parallel Programs (Online)" integrates four resources concerned with parallel programming and parallel computing:

  • The content of "Designing and Building Parallel Programs"
  • A collection of public-domain Parallel Software Tools
  • A collection of Web Tours providing access to other information on Parallel Computing
  • Various educational resources.

OpenMP / Auto-Tasking

Cray's auto-tasking directives for shared memory systems (i.e., directives beginning with !MIC$) are an outmoded feature of CF90. Cray recommends replacing them with the OpenMP Fortran API directives. This also makes your code portable to the many systems that support OpenMP. Documentation on the OpenMP Fortran API directives is available for both the UNICOS and IRIX implementations. See the "OpenMP Fortran API Multiprocessing Directives" chapter of the "CF90 Commands and Directives" manual or the "MIPSpro 7 Fortran 90 Commands and Directives Reference" manual. These are both available from SGI's techpubs server:

http://techpubs.sgi.com/

The "CF90" manual is also available at:

http://www.arsc.edu:40/
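
As a rough illustration (our own sketch, not taken from the manuals), a loop
that formerly carried a !MIC$ autotasking directive would instead be marked
with the standard OpenMP directive pair; the subroutine and variable names
here are hypothetical:

      subroutine add_arrays(n, a, b)
      implicit none
      integer, intent(in)    :: n
      real,    intent(in)    :: b(n)
      real,    intent(inout) :: a(n)
      integer :: i

!     The !MIC$ directive that autotasked this loop is replaced by:
!$OMP PARALLEL DO PRIVATE(i) SHARED(a, b, n)
      do i = 1, n
         a(i) = a(i) + b(i)
      end do
!$OMP END PARALLEL DO

      end subroutine add_arrays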

SGI users can compile and run OpenMP codes under IRIX. Compile with:

f90 -mp omp_prog.f

ARSC SGI users might visit the "ARSC Host Status" page (click the button at the bottom of "www.arsc.edu") to see which multiprocessor SGI hosts are available. However, you can run multi-threaded OpenMP codes on a single-processor O2 for testing, if you wish. The OMP_NUM_THREADS environment variable determines how many threads you get, regardless of the number of processors.

Users of Cray shared-memory systems (e.g., the J90) compile with:

f90 omp_prog.f

On the Cray SMP systems, the NCPUS environment variable supersedes OMP_NUM_THREADS. Also, if you request more threads than there are physical CPUs, you won't get them. The maximum number of threads is held to the number of CPUs.
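
For example, under csh or tcsh you might set the thread count before running
(a sketch only; the program is assumed to have been compiled as shown above,
producing a.out):

  # IRIX:
  setenv OMP_NUM_THREADS 4
  ./a.out

  # UNICOS SMP systems, where NCPUS takes precedence:
  setenv NCPUS 4
  ./a.out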

Discussion of OMP_NUM_THREADS, and the other OpenMP environment variables, is available in the documents listed above.

Next ARSC MPP Training: December 8th

Eleven ARSC users took the "T3E Basics" course last Wednesday. The course was composed entirely of live demonstrations on the T3E. This was a lot of fun for the instructor, and I hope it worked for the class! You're welcome to see the course outline, at:

http://www.arsc.edu/~baring/Courses/T3EBasics/T3EBasics.html

The next ARSC MPP course is:

"Parallel Computing, Real Applications and Examples"

Date:       December 8, 1999
Time:       2-5pm
Location:   ARSC Butrovich Access Lab, Butrovich Building room 007
Instructor: Guy Robinson, ARSC MPP Specialist

Course Description:

This course builds on the basics introduced in "T3E Basics and Parallel Flyby." You will learn about the relative merits of different parallel programming models. A series of case studies will show how various scientific problems have been sped up through parallel programming. A segment of the course will cover the fundamentals of using the Message Passing Interface (MPI) communication library to distribute work and data across processors.

Intended Audience:

Those who are considering developing parallel codes from first principles, parallelizing existing serial codes, or modifying existing parallel codes. There will be ample opportunity to have your questions about parallelism answered and to determine how you might employ parallel processors to speed up your research.

Registration at:

  • http://www.arsc.edu/user/classes/ClassParallel.html

Quick-Tip Q & A



A:{{ I have about 150 files in a directory and need to "rm" about 125 of
  {{ them.  No combination of wild-card characters will select the
  {{ "to delete" files without including some of the "to retain" files.
  {{ 
  {{ How would you approach this task?



From Brad Chamberlain of the University of Washington:
==================================================================
  I would move the 25 files I wanted to save to a subdirectory, and
  then delete * in the original directory.

  I would also consider using emacs' dired mode (directory edit),
  though I can never remember what all the keys are by heart, and for
  these numbers would just use the above technique.  (If it were more
  like 125/250 I would re-learn the emacs stuff).

  You could also use rm -i which interactively prompts you for "do you
  want to delete this file?" but I would tend not to use this
  personally, as it doesn't allow me to double-check for mistakes as
  well as the previous two approaches.
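
  In sketch form, with hypothetical file and directory names, that first
  approach is:

    mkdir KEEP
    mv project.f input.dat KEEP/   # ...and so on for the 25 keepers
    rm *                           # KEEP is a directory, so it survives
    mv KEEP/* . ; rmdir KEEP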

From Kurt Carlson of ARSC:
==================================================================
Method #1: I'm paranoid and want a backup copy in case I make a mistake:

  mkdir -p /tmp/kcarlson/old
  cp -p *  /tmp/kcarlson/old
  rm -i *
  File aaa. Remove ? (yes/no)[no] :
  [...]

  Advantages: 
    Quick recovery in case of a mistake.
    I'm prompted for each file (default is no); I don't have to edit anything.

  Disadvantages: 
    I have to respond yes 125 times.
    Occupies space on /tmp for some period of time.

Method #2: I'll edit a file to remove the 25 I want to keep:

  ls -1 | nawk '{printf "rm %s\n",$1}' >/tmp/kcarlson/rm
  chmod 700 /tmp/kcarlson/rm
  vi /tmp/kcarlson/rm        # remove stuff want to keep
  /tmp/kcarlson/rm        # execute the remove

  Advantages: 
    I don't have to respond yes 125 times.
    If there are any criteria by name or attributes, I could use

      'ls -l' | sort [-k]

    to order the files.  For example:

      ls -lut | nawk '{printf "rm %s\t# %s %s %s\n",$9,$6,$7,$8}'

    would order the files by last usage time and include the date as a
    comment (to facilitate editing).

  Disadvantages: 
    I have to edit a file.
    No backup (unless added), but time to review before executing.


Bonus Answer:   File Access Time followup:
==================================================================
  It took 3 weeks for someone to catch it, but "ls -lu" DOES list
  access time (as well as sort by it).  Kurt Carlson gets the prize.

  Also, thanks to Richard Griswold who sent the following version of my
  Korn shell function from the last issue:

    "This works under tcsh on Linux. The 'set backslash_quote' allows
    quote escaping to work and, yes, all the backslashes are needed."

set backslash_quote
alias age 'perl -e "printf <<EOS\\\
\\\"\!:*\\\"\\\
-%s-\\\
contents last accessed %f days ago.\\\
contents last modified %f days ago.\\\
inode    last modified %f days ago.\\\
EOS\\\
  ,\\\'-\\\' x length \\\"\!:*\\\"\\\
  ,-A \\\"\!:*\\\"\\\
  ,-M \\\"\!:*\\\"\\\
  ,-C \\\"\!:*\\\"\\\
"'





Q: I'm baffled and starting to lose it.  Any theories about this?

     chilkoot% 
     chilkoot% ls -l 
     total 32
     -rw-------   1 tbtester tstaccts    1864 Oct  7 17:33 myfile
     chilkoot% rm myfile
     cmd-1025 rm: myfile: No such file or directory
     chilkoot% ls -l
     total 32
     -rw-------   1 tbtester tstaccts    1864 Oct  7 17:33 myfile
     chilkoot% 

   (Except for the indentation, this session was copied exactly as
   it appeared.) 

[ Answers, questions, and tips graciously accepted. ]


Current Editors:
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.