ARSC HPC Users' Newsletter 212

ARSC Spring 2001 Training Schedule

 

  • Wednesday, Feb 7, 2001, 2-4pm: Introduction to Unix

  • Wednesday, Feb 14, 2001, 3-4pm: ARSC Tour for New and Prospective Users

  • Wednesday, Feb 21, 2001, 2-4pm: Visualization with MAYA

  • Wednesday, Feb 28, 2001, 2-4pm: User's Introduction to ARSC Supercomputers

  • Wednesday, March 28, 2001, 2-4pm: Parallel Computing Concepts

  • Wednesday, April 11, 2001, 2-4pm: Visualization with Vis5D, Part I

  • Wednesday, April 18, 2001, 2-4pm: Visualization with Vis5D, Part II

All classes are scheduled for Wednesdays in the Butrovich Building at UAF, room 007. For details and registration, visit our Instruction at ARSC page.

Default NCPUS Value on Chilkoot to be Changed

[ Taken from "news NCPUS" on chilkoot: ]

On January 31, 2001, we will switch the default value of NCPUS from 4 to 1. The environment variable NCPUS determines how many SV1 processors will be used to execute a user's program that was compiled for multiple processors or linked with libraries that have been optimized for multiple CPUs (such as libsci or IMSL).

Users may override this default by setting the environment variable explicitly, either in NQS scripts (prior to program execution) or in interactive sessions. Thus, if your code performs well on 4 CPUs, you may continue to run on 4 CPUs. The commands to set this variable explicitly are:

in the C shell:    setenv NCPUS nnn

in the Korn shell: export NCPUS=nnn

where nnn is the desired number of processors.
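
For example, here is a minimal C shell sketch of how the variable might be set in an NQS script or interactive session just before the program runs (the executable name a.out and the value 4 are only illustrative):

    # keep running on 4 CPUs after the default drops to 1
    setenv NCPUS 4
    ./a.out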

Prior to this change, users may have been executing on multiple CPUs unknowingly.

Using multiple CPUs may decrease the wall-clock time of a user's job but will increase the total CPU time compared to a uniprocessor execution of the same problem. This additional CPU time shows up as faster consumption of Service Units (SUs).

After this change, some users may notice that the wall clock time for their program has increased. Of course this could be a result of how busy chilkoot is, but it also could be that a program that previously used multiple processors is now using only one.

Not all programs are amenable to multiprocessing. Users should experiment with their own application on 1, 2 and 4 CPUs, measuring performance using "ja" and "hpm", before using multiple processors.
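
For example, here is a rough sketch of such an experiment in an interactive C shell session (the executable name a.out is made up, and the ja options shown are recalled from UNICOS job accounting -- see "man ja" and "man hpm" on chilkoot for the exact usage):

    setenv NCPUS 1     # repeat the run with NCPUS set to 2 and 4
    ja                 # turn on job accounting for this session
    ./a.out
    ja -st             # print a summary report and turn accounting off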

ARSC User Services is glad to help with these efforts. Direct further questions to ARSC Consulting at consult@arsc.edu or 907-450-8602.

Quick-Tip Q & A


A:[[ I don't understand the difference between the OpenMP constructs:
       !$omp master
       ...
       !$omp end master
   and
       !$omp single
       ...
       !$omp end single
   Aren't they rather redundant?  Why use "single"?
   ]]


The two constructs are similar: either may appear within a parallel
region, and both ensure that the enclosed block of code is executed
by only one of the threads in the team.

There are some significant differences, though.

The "master" construct will be executed by the master thread, while
"single" may be executed by ANY thread in the team (perhaps that which
arrives first, but it's up to the implemention).

Also, there's no implicit barrier at the end of the master construct,
while there is one at the end of single. All threads are required to
reach each single construct, but they're not all required to reach each
master construct.
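
To make the distinction concrete, here is a minimal Fortran sketch
(not taken from any ARSC code; names and messages are arbitrary) that
exercises both constructs inside one parallel region:

    program master_vs_single
       use omp_lib
       implicit none
       integer :: tid

    !$omp parallel private(tid)
       tid = omp_get_thread_num()

       ! Executed only by the master thread (thread 0); the other
       ! threads skip it and continue on -- no barrier at "end master".
    !$omp master
       print *, 'master block run by thread', tid
    !$omp end master

       ! Executed by exactly one thread, whichever the implementation
       ! chooses; all threads wait at the implicit barrier at
       ! "end single" unless a NOWAIT clause is added.
    !$omp single
       print *, 'single block run by thread', tid
    !$omp end single
    !$omp end parallel
    end program master_vs_single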



Q: I'm tired of waiting ages and ages for my code to recompile when
I need to change its parameters.  Is there a way to run the code again
with new array sizes, constants, etc. that's any faster?

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.