ARSC T3E Users' Newsletter 187, January 21, 2000

ARSC Guidelines Concerning Interactive Parallel Jobs

In the code development phase, users occasionally need to run parallel jobs interactively rather than through the NQS batch system. To promote the development of new parallel codes, as well as interactive debugging and performance tuning, ARSC permits interactive work on yukon.

The following guidelines are designed to support interactive users while maximizing system utilization and the throughput of production batch jobs.

  1. By default, every user has an interactive limit of 8 PEs. (Thus, whenever PEs are available, you can run an interactive job.)
  2. As part of normal operations, and on request, ARSC will dedicate a maximum of 8 PEs to interactive work.

    During such periods, these PEs will be dedicated to interactive work and will not be available to batch users. If two or more interactive users happen to be working at the time, they will share this pool of PEs.

  3. Dedicated interactive sessions will be available between 9am and 4pm, Alaska Time, Monday through Friday.
  4. Users needing dedicated interactive PEs must contact ARSC consulting. E-mail us at: "consult@arsc.edu" or call us at: 907-450-8602.

    In making your request, specify the period of time during which you will be working, and if you finish early, please let us know.

  5. In general, do not make production runs interactively, and move your testing and development work into the batch system as soon as possible. High-priority NQS queues are enabled for 30-minute (or shorter) batch jobs.

Special arrangements are also available for real-time demos, unusual debugging situations, short-term high-priority projects, etc. Contact ARSC consulting.
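
For reference, here's roughly what an interactive run looks like once you are logged in to yukon. This is only a sketch: the program name and PE count are placeholders, and "mpprun" is the usual UNICOS/mk launcher for parallel executables.

  f90 -o myprog myprog.f90     # compile as usual
  mpprun -n 4 ./myprog         # run on 4 PEs, within the 8-PE interactive limit

Keep the PE count within your interactive limit; if you need more PEs, contact consulting about a dedicated pool, as described above.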

A Tip on Using "qsub"

NQS understands the time formats "HH:MM:SS" and "MM:SS", so you don't need to convert your time requests into seconds! For example:

  #QSUB -l mpp_t=30:00            # Request 30 minutes total MPP time
  
  #QSUB -l mpp_t=8:00:00          # Request 8 hours 
  
  #QSUB -l mpp_t=2:15:00          # Request 2 hours and 15 minutes
See "man qsub" for details.

Numerical Simulations in Turbulent Flow Research

[ This is part of our occasional series on the published work of T3E Newsletter Readers. Contributions are always welcome. ]

Two of our readers published some of their research (accomplished using the T3E) in the January edition of "Physics of Fluids." Here's an overview of the project, followed by the citation:

Jim Riley and Steve de Bruyn Kops of the University of Washington are testing new methods to predict turbulent and chemically reactive flows with direct numerical and large-eddy simulations. Turbulent flow research affects studies in a variety of areas, including the ozone layer in the atmosphere, the efficiency of fossil-fuel burning engines, and control of air pollution.

Turbulent flows are extremely difficult to simulate without resorting to modeling because of the massive processing power required to compute such complex events. In fact, Riley and de Bruyn Kops were the first researchers at ARSC to take advantage of all 256 processors on the ARSC Cray T3E, following the February 1999 upgrade.

To study turbulent combustion, de Bruyn Kops and Riley run direct numerical simulations (DNSs) of turbulent flows on massively parallel computers. In DNSs, the exact transport equations are solved numerically, without resorting to modeling. The computer code developed by de Bruyn Kops and Riley employs a pseudo-spectral scheme in which derivatives are computed in Fourier space and multiplication is performed in physical space. This method is both very accurate and computationally efficient.
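
As a rough illustration of the pseudo-spectral idea (a generic sketch, not taken from the paper's code): to evaluate a nonlinear term such as u * du/dx, the derivative is formed in Fourier space and the product in physical space:

  u_hat(k)  =  F[ u(x) ]                  (forward FFT)
  du/dx     =  F^{-1}[ i*k*u_hat(k) ]     (differentiation becomes multiplication by ik)
  u * du/dx =  pointwise product of u and du/dx in physical space

Transforming the product back to Fourier space completes the cycle, with FFTs keeping each step both accurate and inexpensive.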

As demonstrated in a recently published paper in Physics of Fluids, computers such as Yukon are finally powerful enough to simulate simple flows having the same conditions as those studied in wind tunnels.

The study of these "canonical flows," such as the scalar mixing layer discussed in the paper, enables researchers to develop theories and models that can be applied to more complex flows. In the laboratory, however, it is often very difficult to create an experiment that matches the desired conditions exactly, because several parameters cannot always be adjusted independently. It is also not feasible to collect data as a function of three spatial dimensions plus time for large areas of the flow. As de Bruyn Kops and Riley discuss in their paper, both of these problems can be overcome in numerical experiments. If the simulations are closely matched to measured laboratory data, the results can be used with high confidence that they represent true physical phenomena, while providing extremely detailed information about the flow. From this solid base, the simulations can then be extended to include phenomena not present in the laboratory experiments, such as chemical reactions.

The citation is:

S. M. de Bruyn Kops and J. J. Riley, "Re-examining the thermal mixing layer with numerical simulations", Physics of Fluids, v12, n1, Jan. 2000.

The abstract is available at:

http://staff.washington.edu/debk/node5.html

Those with an on-line subscription to Physics of Fluids can find the complete paper at:

http://ojps.aip.org/journals/doc/PHFLE6-home/top.html

AURORA Project Papers

The AURORA project is a special project of the Austrian government that looks at broad issues in HPC, involving applications, programming paradigms, tools, benchmarks, etc. An overview giving more detail is available at:

http://www.vcpc.univie.ac.at/aurora/overview.shtml

along with a breakdown of all projects and goals. An extensive and interesting collection of papers from the past three years covers such work as:
  • Projects on HPF, ranging from applications and tools to compiler issues. There are also several compiler and application results from the HPF+ project, which looked at advanced algorithms in HPF and are worth reading.
  • P3T+, a performance estimator for parallel programs, which provides a variety of performance information at compile time.
  • A number of matrix solvers for parallel systems in different languages, useful both for those writing parallel applications and for those wanting to understand parallelism in general.
  • One paper in particular reviews FFT algorithms for serial systems in detail, and several other papers describe novel parallel FFT algorithms.
  • Applications ranging from advanced HPF codes to innovative graphical/video processing and financial systems.
Papers can be downloaded from the web site; a full list can be found at:

http://www.vcpc.univie.ac.at/aurora/publications/.

Quick-Tip Q & A



A: {{ "mv -i" and "cp -i" only prompt when the move or copy would
   {{ overwrite an existing file.  "rm -i", on the other hand, prompts
   {{ on EVERY file -- as expected!
   {{
   {{ How can I get "mv" and "cp" to ask about every file too, so I 
   {{ can issue wild-card commands like:
   {{
   {{ mv -i *.F90  ../some/directory
   {{
   {{ and then only move some of the files?



Well, you can't change "mv" and "cp," but you can use "find" to create
the desired behavior using the "-ok" option.

From "man find": -ok cmd Like -exec except that the generated command line is printed with a question mark first, and is executed only if the user responds by typing y. Here's the basic command: find . -name "*.F90" -ok mv {} ../some/directory \; And a sample run: $ find . -name "*.f90" -ok mv {} ../some/directory \; < mv ... ./out.f90 >? n < mv ... ./int.f90 >? n < mv ... ./Test2/out.f90 >? n There's a problem here. As the example shows, find doesn't stop at the current directory, but recursively searches all subdirectories. Here are two ways to restrict the search to the current directory: find * \( -type d -prune \) -o \( -name "*.f90" -ok mv {} ../some/directory \; \) find * \( -name "*.f90" -ok mv {} ../some/directory \; \) -o -prune (For more on "-prune," see: /arsc/support/news/t3enews/t3enews181/index.xml#qt Q: Is it a good idea to compress files which are to be DMF migrated?

[ Answers, questions, and tips graciously accepted. ]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.