ARSC T3E Users' Newsletter 198, June 23, 2000

ARSC Brown-Bag Lunches

Newsletter readers, ARSC staff, and ARSC users, whether local or visiting, are invited to ARSC brown-bag lunches.

The next will be at noon on Wednesday, July 12, at the picnic table in front of the Butrovich building. We'll meet every Wednesday through August.

During the summer, ARSC will host many visitors from various backgrounds, and staff and researchers will also be traveling to various conferences and meetings. It has been suggested that one way to exchange these experiences would be to meet each week over lunch and simply talk about what has been happening recently.

This will also be a chance for you to give a casual presentation, ask for suggestions, and bounce ideas off other users. Send us suggestions for conversation topics.

If successful, the Wednesday Brown-Bag Lunches will continue in the winter (but not at the picnic table).

Checking Your Disk Quota and Usage

Two commands are particularly useful in keeping track of your disk quota and usage:
quota -v
For each disk on which you have a quota, reports your quota, usage (in 512-byte blocks), and percent usage. Considers only files under quota management (i.e., it does not distinguish between migrated and disk-resident files).
du -msk <PATH>
For the specified directory, reports storage for both disk-resident and migrated files (in 1-KB, i.e. 1024-byte, blocks).
For example:

 
  YUKON$ quota -v 
  File system: /u1
   User: smith, Id: 2235
                     File blocks (512 bytes)     Inodes
       User Quota:        20480000* (  4.4%)  Unlimited*         
          Warning:        18432000* (  4.9%)       None*         
            Usage:          894720                 3817             
  
  File system: /tmp2
   User: smith, Id: 2235
                  Aggregate blocks (512 bytes)   Inodes
       User Quota:        20480000  (  3.9%)  Unlimited*         
          Warning:        18432000  (  4.4%)       None*         
            Usage:          802688                   31             
  
  File system: /tmp
   User: smith, Id: 2235
                  Aggregate blocks (512 bytes)   Inodes
       User Quota:        20480000* (  0.0%)  Unlimited*         
          Warning:        18432000* (  0.0%)       None*         
            Usage:             256                    9             
  
  
  YUKON$ du -msk ~
  447360     on-line   969056     off-line /u1/uaf/smith
The word "Aggregate" in the quota -v output for /tmp and /tmp2, above, indicates that quota is enforced on the the aggregate of disk resident and migrated files. The absence of the word for the /u1 report implies that the quota is only enforced on disk resident files.

In other words: on /u1, this user can have unlimited storage in migrated files, but only 10 GB on disk. On /tmp or /tmp2, he can have 10 GB of storage, total.
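For reference, the 10-GB figure follows directly from the block counts: 20480000 blocks x 512 bytes/block is 10,485,760,000 bytes, or just under 10 GB. A one-liner like the following (purely illustrative, using the standard bc calculator) does the conversion:

  YUKON$ echo "20480000 * 512 / 1024^3" | bc -l
  9.76562500000000000000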

ARSC Users:

6/28 Talk: Unix Clusters for HPC Research and Education

Don Morton of The University of Montana will present the following talk at 2pm Wednesday, June 28th, in Butrovich 109, on the UAF campus.

Issues in Building Unix Clusters for High Performance Computing Research and Education Activities

Don Morton has spent the last eight years using Linux for research and education activities, with the last six years focused on clusters of Linux workstations. These Linux clusters have been used in a portable fashion to carry out parallel computing activities at small universities in the Lower 48. Work performed on ARSC's MPP systems during the summers has been migrated to the Linux clusters for continued work during the academic year, and work initiated on the Linux clusters has been migrated to ARSC's MPP systems during the summers. Additionally, the clusters have supported user interfaces similar to those found on Cray MPPs, meaning that students may be trained at low cost on small clusters.

This presentation will outline Morton's goals and efforts in using Linux clusters for low-end research and development activities in computational science. The experience gained in configuring and using an existing cluster will be shared, and future plans, including the initiation of an NSF EPSCoR grant, will be presented. The intent is to provide the audience with one perspective on building and using a Unix cluster for parallel programming activities, to highlight areas where others might take different approaches, and to spark a discussion centered on cluster computing.

Don Morton
Department of Computer Science
The University of Montana
Missoula, Montana
morton@cs.umt.edu

6/30 Talk: High Performance Visualization and Hayden Planetarium

Greg Johnson of SDSC will present the following talk at 10am Friday, June 30th, in Butrovich 109, on the UAF campus.

High Performance Visualization and the New Hayden Planetarium

The San Diego Supercomputer Center (SDSC) and the Hayden Planetarium at the American Museum of Natural History are working together to generate scientifically accurate, high-resolution imagery of stellar phenomena, including stars and nebulae. Nebulae consist of clouds of dust and gas, components of which emit, reflect, or obscure light from surrounding stars and fluorescing gas. Lacking well-defined edges, nebulae are difficult to accurately represent using traditional computer graphics techniques that rely on polygons to approximate surface features.

Researchers at SDSC have developed a system called MPIRE (Massively Parallel Interactive Rendering Environment) which uses the CPU and memory capacity of high-performance computers to dynamically render images from multi-gigabyte volume datasets. A recent addition to MPIRE, called the Galaxy Renderer, employs a perspective viewing model to better emphasize the size and relative position of features in gaseous nebulae. The MPIRE Galaxy Renderer has been ported to several shared memory HPC systems including the Cray (formerly Tera) MTA, Sun HPC 10000, and IBM Power3 SP nodes.

This talk will focus on the computational challenges involved in generating content for the digital projection system at Hayden, general algorithmic issues in porting the MPIRE Galaxy Renderer to several unique HPC systems, and performance results. Lastly, the direction of future work with Hayden and its implications for the relationship between high-performance networking and scientific visualization will be discussed.

Quick-Tip Q & A



A:{{ I re-use the same qsub script in different directories, copying 
  {{ it around as I change data sets.
  {{
  {{ The first command in the script is a "cd" right back to the
  {{ directory from which I submit the script (the "cd" is required
  {{ because NQS always starts the job in my home directory).
  {{
  {{ Thus, I must update the "cd" command in my script whenever I copy
  {{ it to a new directory.  It's BAD NEWS when I forget!  Any advice?


  
  Thanks go to two readers for these responses:

  
From Alan Wallcraft:

    > From  man qsub:
    >
    >    When NQS selects a batch request for execution, the following events
    >    occur in the specified order:
    >
    >    9.  NQS adds the following variables to the environment of the
    >        request:
    >
    >        QSUB_WORKDIR       Current directory when the request was
    >                           submitted
    >
    > So change the first command in the script to:
    > 
    > cd $QSUB_WORKDIR


  
From Mark Reed:

    > I always like to start my batch scripts w/ a
    > 
    >        cd $QSUB_WORKDIR
    > 
    > which puts you back in the directory you submitted the script from.
    > This may or may not be what you want, e.g. if you are not in /tmp
    > when you submit it. However, you can use the standard unix parsing
    > tools to transform this variable into something suitable if needed.
    >
    > This is one of several environment variables that NQS sets for you;
    > see man qsub for more details.
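
  Putting the two responses together, a complete script might begin as in
  the following minimal sketch. The queue name, time limit, and executable
  are placeholders rather than anything from the responses above; check
  man qsub for the options your site actually requires:

    # Example NQS script -- the option values below are placeholders.
    #QSUB -q mpp
    #QSUB -lT 3600

    # NQS starts batch jobs in the home directory, so return to the
    # directory from which this script was submitted via qsub.
    cd $QSUB_WORKDIR

    # Run the program using paths relative to the data-set directory.
    ./a.out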




Q: How can I initialize common blocks in Fortran 90?

[ Answers, questions, and tips graciously accepted. ]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.