ARSC HPC Users' Newsletter 336, March 16, 2006

X1 Cross-Compiler, Skagway, Open to all Klondike Users

All klondike users can now log onto skagway. Skagway is the previously announced 4-CPU Linux system configured as a cross-compiling system and front-end host for klondike.

We encourage all klondike users to at least experiment with shifting the bulk of their X1 interactive, or front-end, tasks to skagway. We expect you will find it a satisfying transition.

Here's most of what skagway offers:

  • Identical programming environment to the X1:
    • "modules" command
    • Cray compilers and libraries
    • Cray man pages
  • 10x faster compilation and linking of X1 codes (see the sample session following this list).
  • Faster code profiling and tracing using pat_build and pat_report.
  • Faster Unix/Linux utilities such as tar and zip.
  • Submission and monitoring of klondike PBS Batch jobs.
  • Linux operating system and utilities
  • Availability of open-source compilers and interpreters such as gcc and python. (gcc can't generate X1 executables, but might be useful for pre-processing of codes.)
  • Availability of EMACS (!!!) and other editors.
  • System availability for programming tasks, even when the X1 itself is down for upgrades or maintenance.
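
To give a flavor of the cross-compiling workflow, here's what a typical skagway session might look like. The modules, compilers, and PBS commands are the same ones you already use on klondike; the file names (model.f90, model.pbs) are just placeholders:

  SKAGWAY$ module list                  # same PrgEnv modules as on klondike
  SKAGWAY$ ftn -c model.f90             # cross-compile for the X1
  SKAGWAY$ ftn -o model model.o         # link an X1 executable
  SKAGWAY$ qsub model.pbs               # submit the batch job; it runs on klondike
  SKAGWAY$ qstat -a                     # monitor the klondike queue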

The "news" items on skagway as well as the following document will help you get started:

http://www.arsc.edu/support/howtos/usingx1xc.html

Using PAT Performance Analysis Tools on Skagway

Skagway not only speeds up compilation and linking, it speeds up pat_build and pat_report. On klondike, the processing of large profile and trace files (".xf" files) can be prohibitively slow. On skagway, we've observed a roughly 4x speedup from pat_report.

The basic procedure is as follows (a command-level sketch appears after this outline):

On skagway:

  1. make
  2. pat_build
On klondike:
  1. copy the instrumented binary and all data files, etc. to $WRKDIR
  2. set PAT environment variables
  3. run
  4. copy the .xf file back to skagway
On skagway:
  1. pat_report
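
In terms of actual commands, and using placeholder names (a.out for the executable, a.out.inst for the instrumented copy, mydata.in for an input file), the outline above looks roughly like this. On klondike, the middle steps would normally go in a PBS batch job, as in the sample qsub script later in this article:

  SKAGWAY$  make
  SKAGWAY$  pat_build a.out a.out.inst       # instrument the executable

  KLONDIKE$ cp a.out.inst mydata.in $WRKDIR/run1/ && cd $WRKDIR/run1
  KLONDIKE$ export PAT_RT_EXPERIMENT=samp_cs_time
  KLONDIKE$ aprun -n 1 ./a.out.inst          # the run writes a .xf file
  KLONDIKE$ cp *.xf $XCROOT/<your skagway directory>/

  SKAGWAY$  pat_report <name_of_xf_file>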

In issues #305 and #311 we provided shell scripts which explain and simplify the use of CrayPAT. These scripts work on either skagway or klondike. Running "patcalltreefunc -h" (from issue #305), for instance, explains how to build and run a profiling experiment and dump a call tree:


KLONDIKE$ ./patcalltreefunc -h
  Syntax: patcalltreefunc [-h] <instrumented_exec_name> <name_of_xf_file>

  Preparation:
   1. Instrument the executable: pat_build <executable_name> <instrumented_exec_name>
   2. In the PBS script (or interactive environment):
      [ksh]  export PAT_RT_EXPERIMENT=samp_cs_time
      [csh]  setenv PAT_RT_EXPERIMENT samp_cs_time
   3. Run the instrumented executable. E.g., mpirun -np 4 ./<instrumented_exec_name>
   4. This produces the .xf file needed by this script 

After performing the "Preparation" steps, run the "patcalltreefunc" script again (without -h) to produce the report. Alternatively, you can consult the man pages for pat, pat_build, pat_run, and pat_report and design profiling/tracing experiments to match your specific needs.
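
For example, with an instrumented executable named d.inst (the name used in the sample qsub script below), the report step on skagway reduces to:

  SKAGWAY$ ./patcalltreefunc ./d.inst <name_of_xf_file>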

Sample skagway qsub script:

Assuming you've built the instrumented binary, here's a qsub script to automate the copying to and from klondike. Remember that the script runs on klondike, even though you submit it from skagway.


##############

#PBS -q default
#PBS -l mppe=1
#PBS -l walltime=30:00
#PBS -S /bin/ksh

set -x

# It is expected that this script exists on $XCHOME and is submitted
# while logged onto skagway.  When a batch script runs, it runs on
# klondike regardless of where it was submitted.  "PBS_O_WORKDIR"
# is set to the path and directory from which it was submitted on 
# skagway. "PBS_O_HOST" is set to the host from which it was submitted.

if [[ -z $PBS_O_WORKDIR ]]; then
  echo "Error: can't run this script interactively. Use qsub."
  exit 1
fi

if [[ $PBS_O_HOST != "skagway.arsc.edu" ]]; then
  echo "Error: this script was not submitted from skagway."
  exit 1
fi

# Grab the last directory name off the path to the directory (on skagway)
# from which this script was submitted.
THISDIR=${PBS_O_WORKDIR##*/}

# Get a unique name for the run directory.
RUNDIR=$(date +%Y%m%d.%H%M)

# Define the needed paths and create the klondike run directory. Uses:
#   $WRKDIR, which is predefined for ARSC users as their working directory,
#   $XCROOT, which is predefined to point to the root of the skagway
#     home directory filesystem.
KLONDIKE_RUNDIR="$WRKDIR/$THISDIR/$RUNDIR"
SKAGWAY_WORKDIR="$XCROOT/$PBS_O_WORKDIR"

mkdir -p $KLONDIKE_RUNDIR
cd $KLONDIKE_RUNDIR

# Copy everything from skagway to klondike needed for the run
cp $SKAGWAY_WORKDIR/d.inst $KLONDIKE_RUNDIR/

# Define the PAT experiment and run the code on klondike
export PAT_RT_EXPERIMENT=samp_cs_time
aprun -n 1 ./d.inst


# Copy the entire run directory back to skagway for processing 
# with pat_report
cp -r -p $KLONDIKE_RUNDIR $SKAGWAY_WORKDIR/ 

##############

After the job ends, you should have a copy of the .xf and all output back on skagway. In this simple example, you'd simply cd to the run directory and run pat_report (or patcalltreefunc) to produce the profile report.

Using this script, you wouldn't need to log onto klondike at all.
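
For instance, if you had submitted the script from a skagway directory named "mytest" and the run directory came out as 20060316.1015 (both names hypothetical), the post-processing would amount to:

  SKAGWAY$ cd ~/mytest/20060316.1015     # the copy made by the final "cp -r" above
  SKAGWAY$ ls *.xf                       # locate the .xf file the run produced
  SKAGWAY$ pat_report <name_of_xf_file> > profile.txt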

Klondike Default PrgEnv, Minor Upgrade to Synchronize with Skagway

Next Wednesday, March 22, we will upgrade the default programming environment on klondike from PE 5.4.0.0 to PE 5.4.0.7. At that point it will match the PrgEnv on the cross-compiling system, skagway.

PrgEnv.new is already synchronized between skagway and klondike. PrgEnv.old (PE 5.3.0.0) doesn't exist on skagway because a Linux cross-compiler version didn't exist for it at the time of its release.

With the temporary exception of PrgEnv.old, it is ARSC's intention to keep our three primary programming environments,

  • PrgEnv.old
  • PrgEnv
  • PrgEnv.new

equivalent on skagway and klondike.
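
Since the module names match, the usual modules commands behave identically on either host. For instance, to see which environment is loaded, or to test against PrgEnv.new before the default changes:

  SKAGWAY$ module list                        # shows the currently loaded PrgEnv
  SKAGWAY$ module switch PrgEnv PrgEnv.new    # swap in the newer environment
  SKAGWAY$ module switch PrgEnv.new PrgEnv    # and swap back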

IDL training at ARSC

ARSC is sponsoring IDL training:

Title: "Introduction to IDL"
Dates: April 26-28, 8:30 - 5:00 every day
Location: ARSC classroom (WRRB 009)

This course is prepared and presented by trainers from the IDL vendor, Research Systems Inc. (RSI), and is intended for new and beginning IDL users. This training is open to all ARSC users and other members of the UAF community.

Beginning with the basic concepts of variables and line plotting, the course takes students through file manipulation, programming methods, interactive data visualization and analysis techniques. Users are introduced to the IDL Development Environment (IDLDE) and IDL's advanced mathematical and image processing capabilities.

For more information and to register, see:

http://www.arsc.edu/support/training/IDLtraining2006.html

Space still available: Distributed MATLAB Symposium, Next Week

Next week at ARSC! Distributed MATLAB Symposium.

Registration and info:

http://www.arsc.edu/news/MATLAB_2006.html

Quick-Tip Q & A


A:[[ On some systems, when I use vi/vim to open a file and scroll down to
  [[ a particular line and then exit, vi/vim remembers the position within
  [[ the file. The next time I open the file it returns me to that
  [[ position.
  [[ 
  [[ I can't get this feature working everywhere. Do you know how one
  [[ enables this?


  [ Editor's Response ]

  When you exit from vim, it saves the last cursor position in a file
  called .viminfo, which is located in your home directory.  Some Linux
  distributions include a customized system-wide vimrc (e.g., /etc/vimrc)
  which moves the cursor to the most recent jumplist position listed in
  the .viminfo file when a file is opened.  Below is an excerpt from the
  /etc/vimrc for SuSE Linux, which places the cursor at the last known
  position within the file:

  if has("autocmd")
    " When editing a file, always jump to the last known cursor position.
    " Don't do it when the position is invalid or when inside an event handler
    " (happens when dropping a file on gvim).
    autocmd BufReadPost *
      \ if line("'\"") > 0 && line("'\"") <= line("$") |
      \   exe "normal g`\"" |
      \ endif
  endif " has("autocmd")


  If this section of code is added to your local ~/.vimrc file, vim
  should use the last jumplist position listed for the file.

  NOTE: vim must be compiled with jumplist enabled to use this
  functionality.  Running "vim --version" will show which options
  were enabled at compile time.
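
  For example, a quick check (a "+" in front of a feature means it was
  compiled in; a "-" means it was not):

  % vim --version | egrep 'viminfo|jumplist'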


 
Q: I've got a misbehaving file with lots of hidden and weird characters. 
    Here's some sample output from "od -c" :

  %    od -c file.txt
  0000000  \r   t   h   i   s       i   s       a       l   i   t   t   l
  0000020   e       d   e   m   o       o   f       t   h   e     311    
  0000040   f   e   a   t   u   r   e   s       o   f  \r   t   h   e    
  0000060   a   u   t   o   c   o   r   e   c   t     311       f   u   n
  0000100   c   t   i   o   n       o   f       t   h   i   s       w   o
  0000120   r   d       p   r   o   c   e   s   s   o   r     252   ,    
  0000140   h   e   r   e   .      \r  \r  \n

   From the quick-tip in issue 287 I know how to get rid of these
   characters, but first I need to know what they are.  There must be
   some pipeline of Unix utilities that would convert the "od -c" output
   into a simple list of the characters that appear in the file. Any 
   help?

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.