ARSC T3E Users' Newsletter 177, Sept 10, 1999

Now on the Web: ARSC Host and Queue Status

Instead of logging onto yukon to check your NQS job or read the downtime schedule, visit our home page:

and click the "Host Status" push button at the bottom of the screen. You'll see:
  • which ARSC hosts are up (or down)
  • how many processors are available
  • selected system info
  • selected NQS queue status info
  • downtime schedules

The information is updated every 5 minutes. Thanks go to Shawn Houston, ARSC Webmaster, for putting this together.

Massive ".o" Files and qalter

In the last issue we ran an article titled, "Post-Processing Standard Output Files From NQS Jobs."

As described, when an NQS job exits, the system copies its accumulated standard output and error from temporary system files to the user's account, and renames them to:

  <job-name>.o<nqs-id>        # standard output
  <job-name>.e<nqs-id>        # standard error

Some jobs create massive ".o" files, which can take half an hour or more to copy. Unfortunately, NQS doesn't release a job's PEs until after the files are written, so those PEs remain unavailable to other users--and wasted--for a long time. The good T3E citizen won't let this happen!

Here are two simple solutions:

First Solution:

Call "qalter" from within the script to release the job's PEs before the script exits and the system commences the big file copy. For example, you might add this at the end of your NQS script:

    qalter -l mpp_p=0                # Release all parallel App. PEs
    qsub do_nothing.script           # Force NQS to rescan the queues

Where "do_nothing.script" is another file containing something like this:

    #QSUB -q mpp                     # This will run in the single queue
    echo "This script did nothing, and ran at: " `date`

Second Solution:

If the NQS script runs a program which writes its results to standard output, the script could redirect that output to a file. E.g.,

    mpprun -n 50 ./a.out > ./a.out.results
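
Outside NQS, the effect of this redirection can be sketched with plain shell; the commands below use a stand-in for the real "mpprun -n 50 ./a.out" (which only exists on the T3E):

```shell
# Sketch only: a compound command stands in for "mpprun -n 50 ./a.out".
# Redirecting standard output to a named file keeps the bulky results out
# of the NQS ".o" file, so the exit-time copy stays small and fast.
workdir=`mktemp -d`
cd "$workdir"
{ echo "result line"; echo "diagnostic" >&2; } > a.out.results 2> a.out.errors
cat a.out.results        # the results landed in the named file, not the job log
```

Standard error can be redirected the same way, as shown, if the ".e" file is also large.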

AAAS Arctic Science Conference and CUG Meetings

The 50th Arctic Science Conference of the AAAS
  • Sept 19-22, 1999
  • At Denali National Park and Preserve

    Look for Guy at this conference.

    (As an aside, Guy spent Wednesday in Denali and saw wolf, fox, black bear, brown bear, moose, caribou, Dall sheep, and tourist--last of the season.)

Cray User Group (CUG) T3E Workshop

CUG Annual Meeting

  • May 22-26, 2000
  • Noordwijk aan Zee, The Netherlands

    The call for papers has been issued; the deadline is Dec. 10, 1999. See the CUG web page for details.

ARSC: Programming Environment Clean-Up

Beginning after downtime on Wednesday, Sept. 15, 1999, the only programming environments available on yukon will be PrgEnv (the default) and PrgEnv.old (the previous default).

The following components comprise PrgEnv and PrgEnv.old and will remain available:

CC, CCmathlib, CCtoollib, cam, cf90, craylibs, craytools, cvt, mpt, and scc (several in both current and previous versions).

Correction: qalter -l mpp_p=0

In the article, "Post-Processing Standard Output Files From NQS Jobs" in the last issue, the qalter command given in the sample script was incorrect. It should have been:

  qalter -l mpp_p=0                # Release all 60 APP PEs

Quick-Tip Q & A

A:{{ I moved some files from my home directory to my /tmp directory,
  {{ double checked them with "ls," but the next day they were gone,
  {{ gone, gone.  I didn't delete them and I know that the purger on 
  {{ /tmp only removes files over 10 days old.  What happened?

   The "mv" command does not update file access time.  

   If you move a file that hasn't been accessed in over 10 days to
   /tmp, its access time is still more than 10 days in the past.  It
   will be purged within 24 hours, the next time the purge robot runs.

   NOTE: ARSC's /tmp purge period is 10 days, based on last access 
         time. Other sites differ!
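
The answer can be demonstrated with a few shell commands (GNU "touch" and "stat" are assumed here; the original T3E would have used the UNICOS equivalents):

```shell
# Sketch, assuming GNU coreutils: "mv" preserves a file's access time.
cd "`mktemp -d`"
touch stale.dat
touch -a -t 199908010000 stale.dat   # back-date the access time past the purge window
mkdir tmp
mv stale.dat tmp/                    # the move does not update the access time...
stat -c '%x' tmp/stale.dat           # ...so it still reads as August 1999
```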

Q: The "ls -lc" command shows modification time and sorts by 
   modification time. The "ls -lu" command also shows modification time
   but sorts by access time.

   How can I show file access time?

[ Answers, questions, and tips graciously accepted. ]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions:
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.