ARSC T3E Users' Newsletter 163, January 05, 1999

MPI_Gatherv

This is a quick introduction to the MPI vector gather, or MPI_Gatherv, operation. We'll have more on this in the next issue.

To get started, here's a reminder of the basic gather operation.

MPI_Gather is a single operation which allows you to send equally sized groups of data from an array on all processors to a single processor. The groups of data are (essentially) concatenated in rank order in the destination array on the destination processor.

Forgive the ASCII art, but here's a simple example. The root process gathers the first four elements of the array from all processors to a local array.


  Processors:        0        1        2      
  Data:           xxxx---  yyyy---  zzzz---

  Result on Root:   xxxxyyyyzzzz
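
In C, this gather might look like the following minimal sketch. It assumes three PEs contributing four chars apiece; the buffer names are our own invention:

  #include <stdio.h>
  #include <mpi.h>

  #define NELEMS 4                   /* chars contributed per PE     */
  #define NPES   3                   /* PEs assumed in this sketch   */

  int main(int argc, char **argv)
  {
      char sendbuf[NELEMS];
      char recvbuf[NELEMS * NPES];   /* significant on root only     */
      int rank, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* 'x' on PE 0, 'y' on PE 1, 'z' on PE 2 */
      for (i = 0; i < NELEMS; i++)
          sendbuf[i] = 'x' + rank;

      /* each PE sends NELEMS chars; PE 0 receives them
         concatenated in rank order */
      MPI_Gather(sendbuf, NELEMS, MPI_CHAR,
                 recvbuf, NELEMS, MPI_CHAR, 0, MPI_COMM_WORLD);

      if (rank == 0)
          printf("%.*s\n", NELEMS * NPES, recvbuf);  /* xxxxyyyyzzzz */

      MPI_Finalize();
      return 0;
  }
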
MPI_Gatherv extends the flexibility of the basic gather. It allows the sizes of the source groups to differ from processor to processor and lets their final positions within the destination array vary as well.

Here's a possible result:


  Processors:        0        1        2      
  Data:           x------  yyyy---  zzz----

  Result on Root:   yyyyx--zzz--
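
A sketch of the corresponding call (again with invented names; the counts and displacements below reproduce the picture above):

  #include <stdio.h>
  #include <mpi.h>

  #define RECVLEN 12

  int main(int argc, char **argv)
  {
      char sendbuf[4];
      char recvbuf[RECVLEN];
      int recvcounts[3] = { 1, 4, 3 };   /* group size per PE     */
      int displs[3]     = { 4, 0, 7 };   /* offset of each group  */
      int rank, i;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      for (i = 0; i < 4; i++)            /* 'x', 'y', or 'z'      */
          sendbuf[i] = 'x' + rank;
      for (i = 0; i < RECVLEN; i++)      /* gaps show up as '-'   */
          recvbuf[i] = '-';

      /* PE i sends recvcounts[i] chars; the root places group i
         at offset displs[i] in recvbuf */
      MPI_Gatherv(sendbuf, recvcounts[rank], MPI_CHAR,
                  recvbuf, recvcounts, displs, MPI_CHAR,
                  0, MPI_COMM_WORLD);

      if (rank == 0)
          printf("%.*s\n", RECVLEN, recvbuf);  /* yyyyx--zzz--   */

      MPI_Finalize();
      return 0;
  }

Note that the recvcounts and displs arrays are significant only on the root process.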

Eric Butcher Lecture

Eric Butcher, UAF Assistant Professor of Mechanical Engineering and ARSC Research Associate, is giving a talk on Tuesday, March 9. It is open to the public.

It is titled "Symbolic Bifurcation Boundaries and Time-Dependent Resonance Sets of Time-Periodic Nonlinear Systems" and will be held at 4:00 p.m. in Duckering 333. In this talk, Eric will discuss his use of Mathematica to generate certain symbolic expressions that define nonlinear bifurcation boundaries which have been unobtainable until recently.

QSUB -- Switching Accounts

Users with multiple resource accounts but only one login id need a way to charge each NQS job to the appropriate account. There are two approaches:
  1. Use the QSUB -A option. From "man qsub":
    
         -A      (For systems running UNICOS or IRIX 6.2 or later only)
                 Specifies the UNICOS account name or the IRIX project name
                 under which the request will run.
    
  2. From within the script, use the "newacct" command. This lets you switch to different accounts for different portions of the script, as in the sketch below.
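
Here's a minimal sketch combining the two, using the embedded-directive form of the -A option (the account and program names are made up; see qsub(1) and newacct(1) for details):

  #QSUB -A projecta
  # this request starts out charged to account "projecta"

  cd $HOME/runs
  ./model_a                # charged to projecta

  newacct projectb         # switch accounts mid-script
  ./model_b                # charged to projectb
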

DMF Optimizations Follow-Up

Last week's quick-tip suggested retrieving files in groups. E.g.,

dmget file.A file.B file.C

Some have wondered why there was no mention of dmput.

The system for migrating files (as opposed to retrieving them) is quite complicated. For yukon users, the level of user control is especially limited, as a dmput actually causes the file to be ftp'ed to a file system on chilkoot (ARSC's J90), where it waits for the chilkoot DMF "robot" to decide when to put the file to tape.

The best suggestion is to migrate and retrieve related groups of files together, and to do this as soon as you determine that these operations will be required. This lets the system seek possible optimizations.
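
On the migration side, the same grouping applies. Mirroring last week's tip (file names hypothetical):

dmput file.A file.B file.C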

Quick-Tip Q & A



A:{{ You want to extract clusters of lines from a file based on a start and
     end condition.   [ ... ]   Can you give a Unix command to do this? }}

cat file.txt | perl -e "while (<>) { print if /start/ .. /end/}"


   This command would search file.txt for a line containing the string,
   "start." It would print that line and continue printing lines until
   it found a line containing the string, "end." It would print that
   line, and resume the search for "start".

   To print all comments from a C source file:

     
cat prog.c | perl -e "while (<>) {print if /\/\*/ .. /\*\//}"


   A C programmer who consistently started typedefs in column 1 and
   ended them with the "}" also in column 1 might list them with this
   command:

     
cat *.h | perl -e "while (<>) {print if /^typedef/ .. /^\}/}"

Q: You may have different storage quotas for files residing on disk and
   those that have been migrated by DMF to tape (this is true for ARSC
   users).  How can you determine your current storage volume separated
   into disk and DMF components?

[ Answers, questions, and tips graciously accepted. ]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.