ARSC system news for all systems
News Items
"PrgEnv"
Last Updated: Wed, 22 Oct 2008 -Machines: pacman
Programming Environments on pacman
====================================
Compiler and MPI Library versions on pacman are controlled via
the modules package. New accounts load the "PrgEnv-pgi" module by
default. This module adds the PGI compilers and the OpenMPI stack
to the PATH.
Should you experience problems with a compiler or library, in
many cases a newer programming environment may be available.
Below is a description of available Programming Environments:
Module Name Description
=============== ==============================================
PrgEnv-pgi Programming environment using PGI
compilers and MPI stack (default version).
PrgEnv-gcc Programming environment using GNU compilers
and MPI stack.
For a list of the latest available Programming Environments, run:
pacman1 748% module avail PrgEnv-pgi
------------------- /usr/local/pkg/modulefiles -------------------
PrgEnv-pgi/10.5 PrgEnv-pgi/11.2
PrgEnv-pgi/9.0.4(default)
If no version is specified when the module is loaded, the "default"
version will be selected.
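To select a specific version rather than the default, name the version
explicitly when loading the module. A sketch using the versions listed
above (your prompt will differ):
pacman1 % module load PrgEnv-pgi/10.5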
Programming Environment Changes
================================
The following is a table of recent additions and changes to the
Programming Environment on pacman.
Updates on 1/9/2013
====================
Default Module Updates
-----------------------
The default modules for the following packages will be updated on 1/9/2013.
module name new default previous default
=================== ================= ================
abaqus 6.11 6.10
comsol 4.2a 4.3a
grads 1.9b4 2.0.2
idl 8.2 6.4
matlab R2011b R2010a
ncl 6.0.0 5.1.1
nco 4.1.0 3.9.9
OpenFoam 2.1.0 1.7.1
petsc 3.3-p3.pgi.opt 3.1-p2.pgi.debug
pgi 12.5 9.0.4
PrgEnv-pgi 12.5 9.0.4
python 2.7.2 2.6.5
r 2.15.2 2.11.1
totalview 8.10.0-0 8.8.0-1
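If your work depends on one of the previous defaults, you can continue to
load it by naming the version explicitly. For example, using the previous
matlab default from the table above:
pacman1 % module load matlab/R2010a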
Retired Modules
----------------
The following module files will be retired on 1/9/2013.
* PrgEnv-gnu/prep0
* PrgEnv-gnu/prep1
* PrgEnv-gnu/prep2
* PrgEnv-gnu/prep3
* PrgEnv-pgi/prep0
* PrgEnv-pgi/prep1
* PrgEnv-pgi/prep2
* PrgEnv-pgi/prep3
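If your scripts or startup files load one of these retired modules by name,
switch to a current module before 1/9/2013. A minimal sketch using the
module names above:
pacman1 % module unload PrgEnv-pgi/prep3
pacman1 % module load PrgEnv-pgi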
"PrgEnv"
Last Updated: Mon, 02 Jul 2012 -Machines: fish
Programming Environment on Fish
========================================
Compiler and MPI Library versions on fish are controlled via
the modules package. All accounts load the "PrgEnv-pgi" module by
default. This module adds the PGI compilers to the PATH.
Should you experience problems with a compiler or library, in
many cases a newer programming environment may be available.
Below is a description of available Programming Environments:
Module Name Description
=============== ==============================================
PrgEnv-pgi Programming environment using PGI
compilers and MPI stack (default version).
PrgEnv-cray Programming environment using Cray compilers
and MPI stack.
PrgEnv-gcc Programming environment using GNU compilers
and MPI stack.
Additionally, multiple compiler versions may be available.
Module Name Description
=============== ==============================================
pgi The PGI compiler and related tools
cce The Cray Compiler Environment and related tools
gcc The GNU Compiler Collection and related tools.
To list the available versions of a package, use the "module avail pkg" command:
% module avail pgi
-------------------------- /opt/modulefiles --------------------------
pgi/12.3.0(default) pgi/12.4.0
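To change to a different compiler version within a session, "module switch"
can be used with any version reported by "module avail". For example, using
the versions shown above:
% module switch pgi pgi/12.4.0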
Programming Environment Changes
================================
The following is a table of recent additions and changes to the
Programming Environment on fish.
Date Module Name Description
---------- --------------------- -----------------------------------
"center"
Last Updated: Tue, 04 Sep 2012 -Machines: pacman
$CENTER directory
========================================
ARSC has deployed a new center-wide scratch file system which will serve
as a replacement for $WORKDIR, /datadir and $LUSTRE on pacman. Each
pacman user has been assigned a directory in this new file system which
can be referred to with the environment variable $CENTER. We ask that
you begin migrating any data you have in either $WORKDIR or $LUSTRE to
$CENTER as soon as is practical.
Once you have completed the migration of your data, we ask that you run
the command "retire_WORKDIR" to indicate that you have completed the
migration. Similarly you may run "retire_LUSTRE" to indicate you have
completed the migration of data in $LUSTRE.
Due to technical considerations with $WORKDIR and /datadir, we ask that
you avoid removing significant amounts of data, instead please run
either "retire_WORKDIR" or "retire_LUSTRE" to indicate you have migrated
data you would like to keep.
$WORKDIR and $LUSTRE directories will be removed for any account which
has no data present in either of those directories beginning on 9/4.
If you have questions about this change, please contact the ARSC Help Desk
(consult@arsc.edu)
Frequently Asked Questions
1) Q- Why are $WORKDIR and $LUSTRE being retired?
A- Both file systems are on aging hardware that is either no longer
supported by the vendor or will be off of vendor support within the
next six months. Bringing in a new file system allows us to
provide a file system available from all systems.
2) Q- My $WORKDIR or $LUSTRE directory is missing, what happened?
A- Accounts with no data saved in either $WORKDIR or $LUSTRE had those
respective directories removed on 9/4 as part of the retirement of
those file systems. You should instead use $CENTER for your
work.
3) Q- I have migrated my data from $WORKDIR to $CENTER. Should I remove
files from $WORKDIR?
A- We recommend that you indicate you have migrated your files by running
the "retire_WORKDIR" command.
4) Q- Can I use $CENTER from any node on pacman?
A- Yes, the new $CENTER file system is connected via infiniband to all
of the node types on pacman.
5) Q- I have files on $WORKDIR, is it OK if I use the "mv" command to move
them over to $CENTER?
A- We discourage the use of the "mv" command when going between different
file systems (e.g. $CENTER and $WORKDIR). It's safer to use the "cp" command.
Once you have migrated your data from $WORKDIR you can use the
"retire_WORKDIR" command to indicate that your migration is complete.
6) Q- I would like to copy all of the contents from $WORKDIR to $CENTER.
What's the best way to do this?
A- We recommend this be done using the "rsync" command. The following
command will copy the contents from $WORKDIR to $CENTER recursively:
% rsync -P -a -v -r $WORKDIR/ $CENTER/
If the rsync command fails, it can safely be restarted to sync the
files that have not been transferred yet.
7) Q- I would like to copy most of the contents from $WORKDIR to $CENTER.
What's the best way to do this?
A- We recommend this be done using the "rsync" command. The following
command will copy the contents from $WORKDIR to $CENTER recursively
and excludes particular directories:
% rsync -P -a -v -r --exclude=dir1 --exclude=dir2 $WORKDIR/ $CENTER/
This rsync command can also be safely restarted if the transfer fails
before completion.
8) Q- I have a /datadir directory on pacman, where can I migrate the contents?
A- Each project with a /datadir directory has been assigned a directory in
/center/d. This directory will be available on pacman, fish and
linux workstations.
9) Q- What are the retirement dates for $WORKDIR and $LUSTRE?
A- $LUSTRE will be retired on 10/12/2012. After retirement, $LUSTRE
will remain available on pacman10, pacman11, and pacman12 to
facilitate data migration.
$WORKDIR and /datadir will be retired on 11/13/2012.
10) Q- What is the final date that $WORKDIR and $LUSTRE will be available
from pacman10, pacman11, and pacman12?
A- $WORKDIR and $LUSTRE will remain available on pacman10, pacman11
and pacman12 until January 9, 2013. ARSC began proactively
migrating remaining accounts from $WORKDIR and $LUSTRE to $CENTER
on 12/7/2012.
"compilers"
Last Updated: Wed, 21 Jun 2006 -Machines: linuxws
Compilers
========================================
The ARSC Linux Workstations have two suites of compilers available:
* The GNU Compiler suite version 4.0 including:
- gcc C compiler
- g++ C++ compiler
- gfortran Fortran 95 compiler
* The Portland Group (PGI) compiler suite version 6.1 including:
- pgcc C compiler
- pgCC C++ compiler
- pgf90 Fortran 90 compiler
- pgf95 Fortran 95 compiler
The PGI compilers require several environment variables to be set:
For ksh/bash users:
===================
export PGI=/usr/local/pkg/pgi/pgi-6.1
pgibase=${PGI}/linux86-64/6.1
export PATH=$PATH:${pgibase}/bin
if [ -z "$MANPATH" ]; then
export MANPATH=${pgibase}/man
else
export MANPATH=${pgibase}/man:$MANPATH
fi
if [ -z "$LD_LIBRARY_PATH" ]; then
export LD_LIBRARY_PATH=${pgibase}/lib
else
export LD_LIBRARY_PATH=${pgibase}/lib:$LD_LIBRARY_PATH
fi
unset pgibase
For csh/tcsh users:
===================
setenv PGI /usr/local/pkg/pgi/pgi-6.1/
set pgibase=${PGI}/linux86-64/6.1
setenv PATH ${PATH}:${pgibase}/bin
if ( ! ${?MANPATH} ) then
setenv MANPATH ${pgibase}/man
else
setenv MANPATH ${pgibase}/man:${MANPATH}
endif
if ( ! ${?LD_LIBRARY_PATH} ) then
setenv LD_LIBRARY_PATH ${pgibase}/lib
else
setenv LD_LIBRARY_PATH ${pgibase}/lib:${LD_LIBRARY_PATH}
endif
unset pgibase
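After sourcing the appropriate block above, you can verify that the PGI
compilers are on your PATH, for example:
% which pgf90
% pgcc -V
The "-V" flag prints the PGI compiler version and confirms the environment
is set up correctly.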
"largefile"
Last Updated: Wed, 15 Dec 2010 -Machines: bigdipper
Large File Support
========================================
Some commands on bigdipper do not have support for files larger than 2 GB.
See "man largefile" for a list of commands which have large file support.
In particular, the "bzip2" utility is a 32-bit executable on bigdipper and
therefore may truncate large files.
If you need to manipulate files larger than 2 GB, it is recommended that an
HPC system or Linux Workstation be used.
If you have questions, please contact User Support.
"login"
Last Updated: Mon, 07 Nov 2011 -Machines: linuxws pacman bigdipper
New User Login Process for ARSC Systems
========================================
ARSC is in the process of converting to a new user authentication
approach allowing login to ARSC Academic systems using University of
Alaska (UA) passwords. The UA password is the password currently
entered to access Blackboard, EDIR, and UA google mail.
This new login option is available to ARSC users with current login
accounts within the University of Alaska system.
NOTE: at this time you must use your existing ARSC username if it does
not match your UA username.
Follow these steps to login to the ARSC Linux Workstations, Pacman
and Bigdipper using your UA password:
1) Connect to the ARSC system like you normally would:
mysystem % ssh username@pacman7.arsc.edu
2) When prompted for "Enter_PASSCODE:" leave the value blank and
press the "Return" (or "Enter") key on your keyboard.
3) A new "Password:" prompt will appear. Enter your University
of Alaska password at this prompt.
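A complete login session might look like the following sketch (prompts are
illustrative):
mysystem % ssh username@pacman7.arsc.edu
Enter_PASSCODE:    (press Return, leaving the value blank)
Password:          (enter your UA password)
pacman7 %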
Frequently Asked Questions:
1) Q: Do I need to use my ARSC username or my UA username?
A: If your ARSC username is different than your UA username, enter
your ARSC username when you establish an ssh or scp connection.
2) Q: What if I'm not sure what my UA password is?
A: UA passwords are used to access UA google email, Blackboard, and
other services on the UA campuses. If you don't remember your
password, reset it using ELMO: https://elmo.alaska.edu
3) Q: Can I use this new authentication mechanism to login to the ARSC
Linux Workstations in the Duckering Building or West Ridge Research
Building?
A: Yes, when prompted for your Passcode, press the "Return" or
"Enter" key then enter your UA password at the "Password" prompt.
4) Q: I am not directly affiliated with the University of Alaska, can I
use this new authentication scheme?
A: Persons not directly affiliated with the University of Alaska were
sent a message describing the Guest Account process. If you are
not affiliated with the University of Alaska and didn't receive that
message, please contact the ARSC Help Desk.
5) Q: Can I access Bigdipper using my UA password?
A: As of November 25, 2011, UA passwords now work on Bigdipper.
6) Q: Can I continue to use ssh public keys to access ARSC systems?
A: Yes, provided the public keys comply with ARSC's public key
policies:
http://www.arsc.edu/arsc/support/policy/secpolicy/index.xml#acad_ssh_public_key
7) Q: Will SecurID logins continue to be supported?
A: No. This change is intended to phase out the use of SecurIDs.
Please return your SecurID to the ARSC Help Desk once you are
comfortable using your UA password to access ARSC resources.
"modules"
Last Updated: Mon, 28 Dec 2009 -Machines: linuxws
Using the Modules Package
=========================
The modules package is used to prepare the environment for various
applications before they are run. Loading a module will set the
environment variables required for a program to execute properly.
Conversely, unloading a module will unset all environment variables
that had been previously set. This functionality is ideal for
switching between different versions of the same application, keeping
differences in file paths transparent to the user.
The following modules commands are available:
module avail - list all available modules
module load <pkg> - load a module file from environment
module unload <pkg> - unload a module file from environment
module list - display modules currently loaded
module switch <old> <new> - replace module <old> with module <new>
module purge - unload all modules
Before the modules package can be used in a script, its init file may
need to be sourced.
To do this using tcsh or csh, type:
source /usr/local/pkg/modules/init/<shell>
To do this using bash, ksh, or sh, type:
. /usr/local/pkg/modules/init/<shell>
For either case, replace <shell> with the shell you are using.
If your shell is bash, for example:
. /usr/local/pkg/modules/init/bash
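For example, a minimal bash script that sources the init file before using
module commands (replace <pkg> with an actual module name):
#!/bin/bash
. /usr/local/pkg/modules/init/bash
module load <pkg>
module list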
Known Issues:
=============
2009-09-24 Accounts using bash that were created before 9/24/2009
are missing the default ~/.bashrc file. This may cause
the module command to be unavailable in some instances.
Should you experience this issue run the following:
# copy the template .bashrc to your account.
[ ! -f ~/.bashrc ] && cp /etc/skel/.bashrc ~
If you continue to experience issues, please contact the
ARSC Help Desk.
"modules"
Last Updated: Sun, 06 Jun 2010 -Machines: fish
Using the Modules Package
=========================
The modules package is used to prepare the environment for various
applications before they are run. Loading a module will set the
environment variables required for a program to execute properly.
Conversely, unloading a module will unset all environment variables
that had been previously set. This functionality is ideal for
switching between different versions of the same application, keeping
differences in file paths transparent to the user.
Sourcing the Module Init Files
---------------------------------------------------------------------
For some jobs, it may be necessary to source these files, as they may
not be automatically sourced as with login shells. Before the modules
package can be used, its init file must first be sourced.
To do this using tcsh or csh, type:
source /opt/modules/default/init/tcsh
To do this using bash, type:
source /opt/modules/default/init/bash
To do this using ksh, type:
source /opt/modules/default/init/ksh
Once the modules init file has been sourced, the following commands
become available:
Command Purpose
---------------------------------------------------------------------
module avail - list all available modules
module load <pkg> - load a module file from environment
module unload <pkg> - unload a module file from environment
module list - display modules currently loaded
module switch <old> <new> - replace module <old> with module <new>
module purge - unload all modules (not recommended on fish)
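For example, a fish job script could source the bash init file and then swap
the default PGI programming environment for the Cray environment (module
names are those described in the fish "PrgEnv" news item):
source /opt/modules/default/init/bash
module switch PrgEnv-pgi PrgEnv-cray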
"modules"
Last Updated: Sun, 06 Jun 2010 -Machines: pacman
Using the Modules Package
=========================
The modules package is used to prepare the environment for various
applications before they are run. Loading a module will set the
environment variables required for a program to execute properly.
Conversely, unloading a module will unset all environment variables
that had been previously set. This functionality is ideal for
switching between different versions of the same application, keeping
differences in file paths transparent to the user.
Sourcing the Module Init Files
---------------------------------------------------------------------
For some jobs, it may be necessary to source these files, as they may
not be automatically sourced as with login shells. Before the modules
package can be used, its init file must first be sourced.
To do this using tcsh or csh, type:
source /etc/profile.d/modules.csh
To do this using bash, ksh, or sh, type:
. /etc/profile.d/modules.sh
Once the modules init file has been sourced, the following commands
become available:
Command Purpose
---------------------------------------------------------------------
module avail - list all available modules
module load <pkg> - load a module file from environment
module unload <pkg> - unload a module file from environment
module list - display modules currently loaded
module switch <old> <new> - replace module <old> with module <new>
module purge - unload all modules
"projects"
Last Updated: Sun, 06 Jun 2010 -Machines: pacman
Instructions for Users with Multiple Projects
=============================================
This news item is intended for users that are members of more than one
project. Users in a single project will automatically have use
charged against the allocation for their primary group (i.e. project).
Users in more than one project can select an alternate project to
charge use to by using the "-W group_list" PBS option. If the
"-W group_list" option is not specified the account number will default
to your primary group (i.e. project).
Below is an example "-W group_list" statement.
e.g.
#PBS -W group_list=proja
The "-W group_list" option can also be used on the command line.
e.g.
pacman1 % qsub -Wgroup_list=proja script.bat
Each project has a corresponding UNIX group, therefore the groups
command will show all projects and other groups of which you are a
member.
e.g.
pacman1 % groups
proja projb
In this case use would be charged to proja by default, but could be
charged to projb by setting "-W group_list=projb" in the PBS script.
If you have questions about this news item, please contact the ARSC
help desk (consult@arsc.edu).
'show_usage' Available
=======================
Project utilization information is available via the 'show_usage' command.
'show_usage' will display the remaining allocation for each project of which
you are a member.
pacman1 % show_usage
ARSC - Subproject Usage Information (in CPU Hours)
As of 04:6:27 hours ADT 6 Jun 2010
For Fiscal Year 2010 (01 October 2009 - 30 September 2010)
Percentage of Fiscal Year Remaining: 32.05%
Hours Hours Hours Percent Background
System Subproject Allocated Used Remaining Remaining Hours Used
========== ============== ========== ========== ========== ========= ==========
pacman proja 50000.00 41552.59 8447.41 16.89% 0.00
pacman projb 50000.00 52482.78 -2482.78 -4.97% 63422.03
Projects with no remaining allocation may continue to run jobs in the
"background" queue.
"pubkeys"
Last Updated: Fri, 11 Jun 2010 -Machines: bigdipper
Setting Up SSH Public Key Authentication On Linux/UNIX Systems
==============================================================
SSH public key authentication is available on ARSC Academic systems
as an alternative to SecurID authentication. This method of authentication
allows you to log into ARSC Academic systems (e.g. pacman, midnight,
bigdipper) using a password, removing the need for a hardware
authentication mechanism. The following guide describes the procedure for
enabling SSH public key authentication for your bigdipper account.
Linux and Mac Systems Instructions
==================================
Step #1 - Generate an SSH Key Pair on Your Local System
Note: If you have existing SSH keys on your system, you may want to back
them up before generating a new key pair.
The SSH installation on your local system should have come with an
executable named "ssh-keygen". Use this command to generate an SSH
public/private key pair:
$ ssh-keygen
This program will prompt you for the location to save the key. The rest
of this guide will assume you chose the default location,
$HOME/.ssh/id_rsa.
You will then be prompted to enter a password. Please choose a long
password with multiple character classes (e.g., lowercase letters,
uppercase letters, numbers, and/or symbols). After you set your password,
the program will write two files to the location you specified:
Private Key: $HOME/.ssh/id_rsa
Public Key: $HOME/.ssh/id_rsa.pub
Do not share your private key. Take precautions to make sure others
cannot access your private key.
Step #2 - Transfer Your Public Key to Bigdipper.
ARSC has developed a tool, "ssh-keymanage", to help you comply with our
security policies while adding your SSH public keys to bigdipper. When a
public key is added to your account on bigdipper, it must be associated with
a particular system that is allowed to authenticate with that key. This
is accomplished via SSH's "from=" clause, which is tied to a public key
when it is inserted into SSH's authorized_keys file.
The basic usage for adding a public key to bigdipper with the ssh-keymanage
tool is:
ssh-keymanage --add <keyfile> --host <hostname>
This usage assumes that you have already transferred the public key you
generated in Step #1 to bigdipper. You will also need to know your local
system's full hostname (e.g., "sysname.uaf.edu").
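For example, assuming the key pair was generated in the default location,
the key could be transferred and then added as follows (hostnames and file
names are illustrative):
$ scp ~/.ssh/id_rsa.pub username@bigdipper.arsc.edu:mykey.pub
$ ssh username@bigdipper.arsc.edu
bigdipper % ssh-keymanage --add mykey.pub --host sysname.uaf.edu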
Step #3 - Enable SSH Public Key Authentication on Your Local System
Bigdipper is already configured to allow SSH public key authentication on the
server side, but you will need to make sure the SSH client on your local
machine is configured to allow SSH public key authentication. There are
several ways to do this, including:
a) Adding an option to your SSH command when you connect to bigdipper:
ssh -o PubkeyAuthentication=yes username@bigdipper.arsc.edu
b) Adding the following to your $HOME/.ssh/config file as a long-term
solution:
Host bigdipper
PubkeyAuthentication yes
Hostname bigdipper.arsc.edu
Windows Instructions
====================
Step #1 - Generate an SSH Key Pair on Your Local System
Note: If you have existing SSH keys on your system, you may want to back
them up before generating a new key pair.
You will need to use PuTTY's "puttygen.exe" program to generate a key
pair. If you installed the HPCMP Kerberos Kit in the default location,
you can run this program by clicking Start -> Run and entering the
following into the "Open" text box:
"C:\Program Files\HPCMP Kerberos\puttygen.exe"
Next, click the "Generate" button in this program. This will prompt you
to move the mouse around to generate randomness to create a unique key
pair. This may take you a few minutes. Once this process is complete,
you will be shown the public key for your SSH key pair.
Please enter a password for your key pair by filling out both the "Key
passphrase" and "Confirm passphrase" text boxes. Choose a long password
with multiple character classes (e.g., lowercase letters, uppercase
letters, numbers, and/or symbols).
Then, click the "Save private key" button. You are free to choose the
file name and location of the private key. This guide will assume you
saved the private key as "private.ppk" in your "My Documents" folder. Do
not share your private key. Take precautions to make sure others cannot
access your private key. Proceed to Step #2, but do not close the "PuTTY
Key Generator" yet.
Step #2 - Transfer Your Public Key to Bigdipper
Log into bigdipper with your SecurID card and open your favorite text
editor. Copy the text in the "Public key for pasting into OpenSSH
authorized_keys file" text area on the "PuTTY Key Generator" window.
Paste this text into the text editor on bigdipper and save this to a
temporary file. This guide will assume you named the file "pubkey".
ARSC has developed a tool, "ssh-keymanage", to help you comply with our
security policies while adding your SSH public keys to bigdipper. When a
public key is added to your account on bigdipper, it must be associated with
a particular system that is allowed to authenticate with that key. This
is accomplished via SSH's "from=" clause, which is tied to a public key
when it is inserted into SSH's authorized_keys file.
The basic usage for adding a public key to bigdipper with the ssh-keymanage
tool is:
ssh-keymanage --add <keyfile> --host <hostname>
You will need to know your local system's full hostname (e.g.,
"sysname.uaf.edu"). For example:
ssh-keymanage --add pubkey --host sysname.uaf.edu
This command will report whether the key was successfully added. Once the
public key has been added, type "exit" to close PuTTY.
Step #3 - Add Your Private Key to PuTTY
Launch PuTTY again. Click the + sign next to "SSH", under the
"Connection" category. Click the "Auth" section under the SSH
subcategory. Click the "Browse..." button under "Private key file for
authentication" and select your private key file, "private.ppk". Go back
to the "Session" category and enter bigdipper.arsc.edu under "Host Name".
If you do not want to enter your private key every time you run PuTTY, you
may wish to save your session settings by entering a name under "Saved
Sessions" (e.g., "Bigdipper (pubkey auth)") and clicking "Save". The next
time you run PuTTY, you can reload these settings by selecting your saved
session and clicking "Load".
Finally, click "Open". Instead of being prompted for a SecurID passcode,
you should be prompted for the password you set on your key pair in
Step #1. Enter your key pair password. You should now be logged into
bigdipper.
"pubkeys"
Last Updated: Fri, 11 Jun 2010 -Machines: pacman
Setting Up SSH Public Key Authentication On Linux/UNIX Systems
==============================================================
SSH public key authentication is available on ARSC Academic systems
as an alternative to SecurID authentication. This method of authentication
allows you to log into ARSC Academic systems (e.g. pacman, midnight,
bigdipper) using a password, removing the need for a hardware
authentication mechanism. The following guide describes the procedure for
enabling SSH public key authentication for your pacman account.
Linux and Mac Systems Instructions
==================================
Step #1 - Generate an SSH Key Pair on Your Local System
Note: If you have existing SSH keys on your system, you may want to back
them up before generating a new key pair.
The SSH installation on your local system should have come with an
executable named "ssh-keygen". Use this command to generate an SSH
public/private key pair:
$ ssh-keygen
This program will prompt you for the location to save the key. The rest
of this guide will assume you chose the default location,
$HOME/.ssh/id_rsa.
You will then be prompted to enter a password. Please choose a long
password with multiple character classes (e.g., lowercase letters,
uppercase letters, numbers, and/or symbols). After you set your password,
the program will write two files to the location you specified:
Private Key: $HOME/.ssh/id_rsa
Public Key: $HOME/.ssh/id_rsa.pub
Do not share your private key. Take precautions to make sure others
cannot access your private key.
Step #2 - Transfer Your Public Key to Pacman, Midnight, etc.
ARSC has developed a tool, "ssh-keymanage", to help you comply with our
security policies while adding your SSH public keys to pacman. When a
public key is added to your account on pacman, it must be associated with
a particular system that is allowed to authenticate with that key. This
is accomplished via SSH's "from=" clause, which is tied to a public key
when it is inserted into SSH's authorized_keys file.
The basic usage for adding a public key to pacman with the ssh-keymanage
tool is:
ssh-keymanage --add <keyfile> --host <hostname>
This usage assumes that you have already transferred the public key you
generated in Step #1 to pacman. You will also need to know your local
system's full hostname (e.g., "sysname.uaf.edu").
Alternatively, the following command can be used to transfer and add your
key to pacman all at once:
cat ~/.ssh/id_rsa.pub | ssh -tt username@pacman.arsc.edu ssh-keymanage --add /dev/stdin --host sysname.uaf.edu
Step #3 - Enable SSH Public Key Authentication on Your Local System
Pacman is already configured to allow SSH public key authentication on the
server side, but you will need to make sure the SSH client on your local
machine is configured to allow SSH public key authentication. There are
several ways to do this, including:
a) Adding an option to your SSH command when you connect to pacman:
ssh -o PubkeyAuthentication=yes username@pacman.arsc.edu
b) Adding the following to your $HOME/.ssh/config file as a long-term
solution:
Host pacman
PubkeyAuthentication yes
Hostname pacman.arsc.edu
Windows Instructions
====================
Step #1 - Generate an SSH Key Pair on Your Local System
Note: If you have existing SSH keys on your system, you may want to back
them up before generating a new key pair.
You will need to use PuTTY's "puttygen.exe" program to generate a key
pair. If you installed the HPCMP Kerberos Kit in the default location,
you can run this program by clicking Start -> Run and entering the
following into the "Open" text box:
"C:\Program Files\HPCMP Kerberos\puttygen.exe"
Next, click the "Generate" button in this program. This will prompt you
to move the mouse around to generate randomness to create a unique key
pair. This may take you a few minutes. Once this process is complete,
you will be shown the public key for your SSH key pair.
Please enter a password for your key pair by filling out both the "Key
passphrase" and "Confirm passphrase" text boxes. Choose a long password
with multiple character classes (e.g., lowercase letters, uppercase
letters, numbers, and/or symbols).
Then, click the "Save private key" button. You are free to choose the
file name and location of the private key. This guide will assume you
saved the private key as "private.ppk" in your "My Documents" folder. Do
not share your private key. Take precautions to make sure others cannot
access your private key. Proceed to Step #2, but do not close the "PuTTY
Key Generator" yet.
Step #2 - Transfer Your Public Key to Pacman or Midnight
Log into pacman with your SecurID card and open your favorite text
editor. Copy the text in the "Public key for pasting into OpenSSH
authorized_keys file" text area on the "PuTTY Key Generator" window.
Paste this text into the text editor on pacman and save this to a
temporary file. This guide will assume you named the file "pubkey".
ARSC has developed a tool, "ssh-keymanage", to help you comply with our
security policies while adding your SSH public keys to pacman. When a
public key is added to your account on pacman, it must be associated with
a particular system that is allowed to authenticate with that key. This
is accomplished via SSH's "from=" clause, which is tied to a public key
when it is inserted into SSH's authorized_keys file.
The basic usage for adding a public key to pacman with the ssh-keymanage
tool is:
ssh-keymanage --add <keyfile> --host <hostname>
You will need to know your local system's full hostname (e.g.,
"sysname.uaf.edu"). For example:
ssh-keymanage --add pubkey --host sysname.uaf.edu
This command will report whether the key was successfully added. Once the
public key has been added, type "exit" to close PuTTY.
Step #3 - Add Your Private Key to PuTTY
Launch PuTTY again. Click the + sign next to "SSH", under the
"Connection" category. Click the "Auth" section under the SSH
subcategory. Click the "Browse..." button under "Private key file for
authentication" and select your private key file, "private.ppk". Go back
to the "Session" category and enter pacman.arsc.edu under "Host Name".
If you do not want to enter your private key every time you run PuTTY, you
may wish to save your session settings by entering a name under "Saved
Sessions" (e.g., "Pacman (pubkey auth)") and clicking "Save". The next
time you run PuTTY, you can reload these settings by selecting your saved
session and clicking "Load".
Finally, click "Open". Instead of being prompted for a SecurID passcode,
you should be prompted for the password you set on your key pair in
Step #1. Enter your key pair password. You should now be logged into
pacman.
"pubkeys"
Last Updated: Sun, 19 Dec 2010 -Machines: linuxws
Setting Up SSH Public Key Authentication on Linux/UNIX Systems
===============================================================
SSH public key authentication is available on ARSC Academic systems
as an alternative to SecurID authentication. This method of authentication
allows you to log into ARSC Academic systems (e.g. pacman, midnight,
bigdipper) using a password, removing the need for a hardware
authentication mechanism. The following guide describes the procedure for
enabling SSH public key authentication for your workstation account.
Linux and Mac Systems Instructions
==================================
Step #1 - Generate an SSH Key Pair on Your Local System
Note: If you have existing SSH keys on your system, you may want to back
them up before generating a new key pair.
The SSH installation on your local system should have come with an
executable named "ssh-keygen". Use this command to generate an SSH
public/private key pair:
$ ssh-keygen
This program will prompt you for the location to save the key. The rest
of this guide will assume you chose the default location,
$HOME/.ssh/id_rsa.
You will then be prompted to enter a password. Please choose a long
password with multiple character classes (e.g., lowercase letters,
uppercase letters, numbers, and/or symbols). After you set your password,
the program will write two files to the location you specified:
Private Key: $HOME/.ssh/id_rsa
Public Key: $HOME/.ssh/id_rsa.pub
Do not share your private key. Take precautions to make sure others
cannot access your private key.
Step #2 - Transfer Your Public Key to Pacman, Midnight, Linux Workstations, etc.
ARSC has developed a tool, "ssh-keymanage", to help you comply with our
security policies while adding your SSH public keys to linux workstations.
When a public key is added to your account on the workstations, it must be
associated with a particular system that is allowed to authenticate with that
key. This is accomplished via SSH's "from=" clause, which is tied to a public
key when it is inserted into SSH's authorized_keys file.
The basic usage for adding a public key to a workstation with the
ssh-keymanage tool is:
ssh-keymanage --add <keyfile> --host <hostname>
This usage assumes that you have already transferred the public key you
generated in Step #1 to the workstation. You will also need to know your local
system's full hostname (e.g., "sysname.uaf.edu").
Alternatively, the following command can be used to transfer and add your
key to a workstation all at once:
cat ~/.ssh/id_rsa.pub | ssh -tt username@mallard.arsc.edu ssh-keymanage --add /dev/stdin --host sysname.uaf.edu
Step #3 - Enable SSH Public Key Authentication on Your Local System
Workstations are already configured to allow SSH public key authentication on
the server side, but you will need to make sure the SSH client on your local
machine is configured to allow SSH public key authentication. There are
several ways to do this, including:
a) Adding an option to your SSH command when you connect to a workstation:
ssh -o PubkeyAuthentication=yes username@mallard.arsc.edu
b) Adding the following to your $HOME/.ssh/config file as a long-term
solution:
Host mallard.arsc.edu
PubkeyAuthentication yes
Windows Instructions
====================
Step #1 - Generate an SSH Key Pair on Your Local System
Note: If you have existing SSH keys on your system, you may want to back
them up before generating a new key pair.
You will need to use PuTTY's "puttygen.exe" program to generate a key
pair. If you installed the HPCMP Kerberos Kit in the default location,
you can run this program by clicking Start -> Run and entering the
following into the "Open" text box:
"C:\Program Files\HPCMP Kerberos\puttygen.exe"
Next, click the "Generate" button in this program. This will prompt you
to move the mouse around to generate randomness to create a unique key
pair. This may take you a few minutes. Once this process is complete,
you will be shown the public key for your SSH key pair.
Please enter a password for your key pair by filling out both the "Key
passphrase" and "Confirm passphrase" text boxes. Choose a long password
with multiple character classes (e.g., lowercase letters, uppercase
letters, numbers, and/or symbols).
Then, click the "Save private key" button. You are free to choose the
file name and location of the private key. This guide will assume you
saved the private key as "private.ppk" in your "My Documents" folder. Do
not share your private key. Take precautions to make sure others cannot
access your private key. Proceed to Step #2, but do not close the "PuTTY
Key Generator" yet.
Step #2 - Transfer Your Public Key to the workstations
Log into a workstation with your SecurID card and open your favorite text
editor. Copy the text in the "Public key for pasting into OpenSSH
authorized_keys file" text area on the "PuTTY Key Generator" window.
Paste this text into the text editor on a workstation and save this to a
temporary file. This guide will assume you named the file "pubkey".
ARSC has developed a tool, "ssh-keymanage", to help you comply with our
security policies while adding your SSH public keys to a workstation. When a
public key is added to your account on the workstation, it must be associated
with a particular system that is allowed to authenticate with that key. This
is accomplished via SSH's "from=" clause, which is tied to a public key
when it is inserted into SSH's authorized_keys file.
The basic usage for adding a public key to the workstation with the
ssh-keymanage tool is:
ssh-keymanage --add <keyfile> --host <hostname>
You will need to know your local system's full hostname (e.g.,
"sysname.uaf.edu"). For example:
ssh-keymanage --add pubkey --host sysname.uaf.edu
This command will report whether the key was successfully added. Once the
public key has been added, type "exit" to close PuTTY.
Step #3 - Add Your Private Key to PuTTY
Launch PuTTY again. Click the + sign next to "SSH", under the
"Connection" category. Click the "Auth" section under the SSH
subcategory. Click the "Browse..." button under "Private key file for
authentication" and select your private key file, "private.ppk". Go back
to the "Session" category and enter workstation name under "Host Name".
If you do not want to enter your private key every time you run PuTTY, you
may wish to save your session settings by entering a name under "Saved
Sessions" (e.g., "Workstation (pubkey auth)") and clicking "Save". The next
time you run PuTTY, you can reload these settings by selecting your saved
session and clicking "Load".
Finally, click "Open". Instead of being prompted for a SecurID passcode,
you should be prompted for the password you set on your key pair in
Step #1. Enter your key pair password. You should now be logged into
the workstation.
"queues"
Last Updated: Wed, 17 Dec 2008 -Machines: fish
Fish Queues
========================================
The queue configuration is as described below. It is subject to
review and further updates.
Login Nodes Use:
=================
Login nodes are a shared resource and are not intended for
computationally or memory intensive work. Processes using more
than 30 minutes of CPU time on login nodes may be killed by ARSC
without warning. Please use compute nodes for computationally or
memory intensive work.
Queues:
===============
Specify one of the following queues in your Torque/Moab qsub script
(e.g., "#PBS -q standard"):
Queue Name Purpose of queue
------------- ------------------------------
standard Runs on 12 core nodes without GPUs
standard_long Runs longer jobs on 12 core nodes without GPUs.
gpu Runs on 16 core nodes with 1 NVIDIA X2090 GPU per node.
gpu_long Runs longer jobs on 16 core nodes with 1 NVIDIA X2090
GPU per node.
debug Quick turn around debug queue. Runs on GPU nodes.
debug_cpu Quick turn around debug queue. Runs on 12 core nodes.
transfer For data transfer to and from $ARCHIVE.
NOTE: transfer queue is not yet functional.
See 'qstat -q' for a complete list of system queues. Note, some
queues are not available for general use.
Maximum Walltimes:
===================
The maximum allowed walltime for a job is dependent on the number of
processors requested. The table below describes maximum walltimes for
each queue.
Queue Min Max Max
Nodes Nodes Walltime Notes
--------------- ----- ----- --------- ------------
standard 1 32 24:00:00
standard_long 1 2 168:00:00 12 nodes are available to this queue.
gpu 1 32 24:00:00
gpu_long 1 2 168:00:00 12 nodes are available to this queue.
debug 1 2 1:00:00 Runs on GPU nodes
debug_cpu 1 2 1:00:00 Runs on 12 core nodes (no GPU)
transfer 1 1 24:00:00 Not currently functioning correctly.
NOTES:
* August 11, 2012 - transfer queue is not yet functional.
* October, 16 2012 - debug queues and long queues were added to fish.
PBS Commands:
=============
Below is a list of common PBS commands. Additional information is
available in the man pages for each command.
Command Purpose
-------------- -----------------------------------------
qsub submit jobs to a queue
qdel delete a job from the queue
qsig send a signal to a running job
Running a Job:
==============
To run a batch job, create a qsub script which, in addition to
running your commands, specifies the processor resources and time
required. Submit the job to PBS with the following command. (For
more PBS directives, type "man qsub".)
qsub <script file>
Sample PBS scripts:
--------------
## Beginning of MPI Example Script ############
#!/bin/bash
#PBS -q standard
#PBS -l walltime=24:00:00
#PBS -l nodes=4:ppn=12
#PBS -j oe
cd $PBS_O_WORKDIR
NP=$(( $PBS_NUM_NODES * $PBS_NUM_PPN ))
aprun -n $NP ./myprog
## Beginning of OpenMP Example Script ############
#!/bin/bash
#PBS -q standard
#PBS -l nodes=1:ppn=12
#PBS -l walltime=8:00:00
#PBS -j oe
cd $PBS_O_WORKDIR
# match the thread count to the 12 cores per node requested above
export OMP_NUM_THREADS=12
aprun -d $OMP_NUM_THREADS ./myprog
#### End of Sample Script ##################
NOTE: jobs using the "standard" and "gpu" queues must run compute and memory
intensive applications using the "aprun" or "ccmrun" command. Jobs failing
to use "aprun" or "ccmrun" may be killed without warning.
Resource Limits:
==================
The only resource limits users should specify are walltime and the nodes/ppn
limits. The "nodes" statement requests a job be allocated a number
of chunks with the given "ppn" size.
Tracking Your Job:
==================
To see which jobs are queued and/or running, execute this
command:
qstat -a
Current Queue Limits:
=====================
Queue limits are subject to change and this news item is not always
updated immediately. For a current list of all queues, execute:
qstat -Q
For all limits on a particular queue:
qstat -Q -f <queue-name>
Maintenance
============
Scheduled maintenance activities on Fish use the Reservation
functionality of Torque/Moab to reserve all available nodes on the system.
This reservation keeps Torque/Moab from scheduling jobs which would still
be running during maintenance. This allows the queues to be left running
until maintenance. Because walltime is used to determine whether or not a
job will complete prior to maintenance, using a shorter walltime in your
job script may allow your job to begin running sooner.
e.g.
If maintenance begins at 10AM and it is currently 8AM, jobs specifying
walltimes of 2 hours or less will start if there are available nodes.
CPU Usage
==========
Only one job may run per node for most queues on fish (i.e. jobs may
not share nodes).
If your job uses fewer processors than are available on a node, the job
will be charged for all processors on the node unless you use the
"shared" queue.
Utilization for all other queues is charged for the entire node regardless
of the number of tasks using that node:
* standard - 12 CPU hours per node per hour
* standard_long - 12 CPU hours per node per hour
* gpu - 16 CPU hours per node per hour
* gpu_long - 16 CPU hours per node per hour
* debug - 16 CPU hours per node per hour
* debug_cpu - 12 CPU hours per node per hour
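For example, a job that uses 4 nodes in the gpu queue for 2 hours of walltime
is charged 4 nodes x 16 CPU hours per node per hour x 2 hours = 128 CPU hours,
regardless of how many tasks actually run on each node.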
"queues"
Last Updated: Wed, 17 Dec 2008 -Machines: pacman
Pacman Queues
========================================
The queue configuration is as described below. It is subject to
review and further updates.
Login Nodes Use:
=================
The pacman1 and pacman2 login nodes are a shared resource and are
not intended for computationally or memory intensive work. Processes
using more than 30 minutes of CPU time on login nodes may be killed
by ARSC without warning. Please use compute nodes or pacman3 through
pacman9 for computationally or memory intensive work.
Queues:
===============
Specify one of the following queues in your Torque/Moab qsub script
(e.g., "#PBS -q standard"):
Queue Name Purpose of queue
------------- ------------------------------
standard General use routing queue, routes to standard_16 queue.
standard_4 General use by all allocated users. Uses 4-core nodes.
standard_12 General use by all allocated users. Uses 12-core nodes.
standard_16 General use by all allocated users. Uses 16-core nodes.
bigmem Usable by all allocated users requiring large memory
resources. Jobs that do not require very large memory
should consider the standard queues.
Uses 32-core large memory nodes.
gpu Usable by all allocated users requiring gpu computing
resources.
debug Quick turnaround queue for debugging work. Uses 12-core
and 16-core nodes.
background For projects with little or no remaining allocation.
This queue has the lowest priority, however projects
running jobs in this queue do not have allocation
deducted. The number of running jobs or processors
available to this queue may be altered based on system load.
Uses 16-core nodes.
shared Queue which allows more than one job to be placed on a
node. Jobs will be charged for the portion of the
cores used by the job. MPI, OpenMP and memory intensive
serial work should consider using the standard queue
instead. Uses 4-core nodes.
transfer For data transfer to and from $ARCHIVE. Be sure to
bring all $ARCHIVE files online using batch_stage
prior to the file copy.
See 'qstat -q' for a complete list of system queues. Note, some
queues are not available for general use.
Maximum Walltimes:
===================
The maximum allowed walltime for a job is dependent on the number of
processors requested. The table below describes maximum walltimes for
each queue.
Queue Min Max Max
Nodes Nodes Walltime Notes
--------------- ----- ----- --------- ------------
standard_4 1 128 240:00:00 10-day max walltime.
standard_12 1 6 240:00:00 10-day max walltime.
standard_16 1 32 48:00:00
debug 1 6 01:00:00 Only runs on 12 & 16 core nodes.
shared 1 1 48:00:00
transfer 1 1 60:00:00
bigmem 1 4 240:00:00
gpu 1 2 48:00:00
background 1 11 08:00:00 Only runs on 16 core nodes.
NOTES:
* Oct 1, 2012 - Max walltime for transfer increased to 60 hours.
* Sept 18, 2012 - Removed references to $WORKDIR and $LUSTRE
* March 2, 2012 - standard_4 was added to the available queues.
The $LUSTRE filesystem should be used with the
standard_4 queue. Accessing files in $WORKDIR
from the standard_4 queue may result in significant
performance degradation.
* March 14, 2012 - shared queue was moved from 12 core nodes to 4
core nodes.
PBS Commands:
=============
Below is a list of common PBS commands. Additional information is
available in the man pages for each command.
Command Purpose
-------------- -----------------------------------------
qsub submit jobs to a queue
qdel delete a job from the queue
qsig send a signal to a running job
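For example, to delete a queued or running job (the job ID shown is
illustrative; "qstat -a" reports your job IDs):
pacman1 % qdel 123456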
Running a Job:
==============
To run a batch job, create a qsub script which, in addition to
running your commands, specifies the processor resources and time
required. Submit the job to PBS with the following command. (For
more PBS directives, type "man qsub".)
qsub <script file>
Sample PBS scripts:
--------------
## Beginning of MPI Example Script ############
#!/bin/bash
#PBS -q standard_12
#PBS -l walltime=96:00:00
#PBS -l nodes=4:ppn=12
#PBS -j oe
cd $PBS_O_WORKDIR
mpirun ./myprog
## Beginning of OpenMP Example Script ############
#!/bin/bash
#PBS -q standard_16
#PBS -l nodes=1:ppn=16
#PBS -l walltime=8:00:00
#PBS -j oe
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=16
./myprog
#### End of Sample Script ##################
Resource Limits:
==================
The only resource limits users should specify are walltime and the nodes/ppn
limits. The "nodes" statement requests a job be allocated a number
of chunks with the given "ppn" size.
Tracking Your Job:
==================
To see which jobs are queued and/or running, execute this
command:
qstat -a
Current Queue Limits:
=====================
Queue limits are subject to change and this news item is not always
updated immediately. For a current list of all queues, execute:
qstat -Q
For all limits on a particular queue:
qstat -Q -f <queue-name>
Maintenance
============
Scheduled maintenance activities on Pacman use the Reservation
functionality of Torque/Moab to reserve all available nodes on the system.
This reservation keeps Torque/Moab from scheduling jobs which would still
be running during maintenance. This allows the queues to be left running
until maintenance. Because walltime is used to determine whether or not a
job will complete prior to maintenance, using a shorter walltime in your
job script may allow your job to begin running sooner.
e.g.
If maintenance begins at 10AM and it is currently 8AM, jobs specifying
walltimes of 2 hours or less will start if there are available nodes.
CPU Usage
==========
Only one job may run per node for most queues on pacman (i.e. jobs may
not share nodes).
If your job uses fewer processors than are available on a node, the job
will be charged for all processors on the node unless you use the
"shared" queue.
Utilization for all other queues is charged for the entire node regardless
of the number of tasks using that node:
* standard_4 - 4 CPU hours per node per hour
* standard_12 - 12 CPU hours per node per hour
* standard_16, debug, background - 16 CPU hours per node per hour
* gpu - 8 CPU hours per node per hour
* bigmem - 32 CPU hours per node per hour
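For example, a 2-node job in the standard_16 queue that runs for 3 hours of
walltime is charged 2 nodes x 16 CPU hours per node per hour x 3 hours =
96 CPU hours.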
"samples_home"
Last Updated: Wed, 31 Mar 2010 -Machines: pacman
Sample Code Repository
========================
Filename: INDEX.txt
Description: This file contains the name, location, and brief
explanation of "samples" included in this Sample
Code Repository. There are several subdirectories within
this code repository containing frequently-used procedures,
routines, scripts, and code used on this allocated system,
pacman. This sample code repository can be accessed from
pacman by changing directories to
$SAMPLES_HOME, or changing directories to the following
location: /usr/local/pkg/samples.
This particular file can be viewed from the internet at:
http://www.arsc.edu/arsc/support/news/systemnews/index.xml?system=pacman#samples_home
Contents: applications
bio
debugging
jobSubmission
libraries
parallelEnvironment
training
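To browse the repository from a pacman login node, for example (the prompt
is illustrative):
pacman1 % cd $SAMPLES_HOME
pacman1 % ls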
******************************************************************************
Directory: applications
Description: This directory contains sample scripts used to run
applications installed on pacman.
Contents: abaqus
comsol
gaussian_09
matlab_dct
namd
nwchem
tau
vnc
OpenFOAM
******************************************************************************
Directory: bio
Description: This directory contains sample scripts used to run
BioInformatics applications installed on pacman.
Contents: mrbayes
******************************************************************************
Directory: config
Description: This directory contains configuration files for applications
which require some customization to run on pacman.
Contents: cesm_1_0_4
******************************************************************************
Directory: debugging
Description: This directory contains basic information on how to start up
and use the available debuggers on pacman.
Contents: core_files
*****************************************************************************
Directory: jobSubmission
Description: This directory contains sample PBS batch scripts
and helpful commands for monitoring job progress.
Examples include options for submitting a job, such as
declaring which group membership you belong to
(for allocation accounting), how to request a particular
software license, etc.
Contents: MPI_OpenMP_scripts
MPI_scripts
OpenMP_scripts
Rsync_scripts
*****************************************************************************
Directory: parallelEnvironment
Description: This directory contains sample code and scripts containing
compiler options for common parallel programming practices
including code profiling.
Contents: hello_world_mpi
*****************************************************************************
Directory: training
Description: This directory contains sample exercises from ARSC
training.
Contents: introToLinux
introToPacman
*****************************************************************************
"samples_home"
Last Updated: Wed, 31 Mar 2010 -Machines: fish
Sample Code Repository
========================
Filename: INDEX.txt
Description: This file contains the name, location, and brief
explanation of "samples" included in this Sample
Code Repository. There are several subdirectories within
this code repository containing frequently-used procedures,
routines, scripts, and code used on this allocated system,
fish. This sample code repository can be accessed from
fish by changing directories to
$SAMPLES_HOME, or changing directories to the following
location: /usr/local/pkg/samples.
This particular file can be viewed from the internet at:
http://www.arsc.edu/support/news/systemnews/fishnews.xml#samples_home
Contents: applications
jobSubmission
libraries
*****************************************************************************
Directory: applications
Description: This directory contains sample PBS batch scripts for
applications installed on fish.
Contents: abaqus
*****************************************************************************
Directory: jobSubmission
Description: This directory contains sample PBS batch scripts
and helpful commands for monitoring job progress.
Examples include options for submitting a job, such as
declaring which group membership you belong to
(for allocation accounting), how to request a particular
software license, etc.
Contents: MPI_OpenMP_scripts
MPI_scripts
OpenMP_scripts
*****************************************************************************
Directory: libraries
Description: This directory contains examples of common libraries and
programming paradigms.
Contents: cuda
openacc
scalapack
"software"
Last Updated: Sun, 07 Nov 2010 -Machines: pacman
Pacman Software
========================================
jdk: java development kit 1.7.0_10 (2013-01-08)
A new version of the jdk is now available via modules:
module load jdk/1.7.0_10
matlab: matlab version R2012b (2012-12-26)
Matlab R2012b is now available for UAF users via modules:
module load matlab/R2012b
matlab: matlab version R2012a (2012-12-07)
Matlab R2012a is now available for UAF users via modules:
module load matlab/R2012a
comsol: comsol-4.3a (2012-12-07)
The newest comsol release is now available for UAF users
via modules:
module load comsol/4.3a
idl/envi: idl-8.2 and envi 5.0 (2012-10-31)
IDL version 8.2 and ENVI version 5.0 are now available
on pacman via modules:
module load idl/8.2
CUBIT: cubit-13.0 (2012-09-25)
Cubit version 13.0 is available on pacman in
/usr/local/pkg/cubit/cubit-13.0
R: R-2.15.0 and R-2.15.1 (2012-08-03)
The two newest releases of R are now available.
Load the module to access your preferred version:
module load r/2.15.0
module load r/2.15.1
pgi: pgi-12.5 (2012-05-23)
The newest release of the PGI compiler is now available.
Load the module to access the newest version:
module load PrgEnv-pgi/12.5
totalview: totalview-8.10.0-0 (2012-05-23)
The newest release of the Totalview Debugger is now available with
CUDA support. Load the module to access the newest version:
module load totalview/8.10.0-0
tau: tau-2.2.21.gnu and tau-2.2.21.pgi (2012-05-14)
Version 2.2.21 of TAU has been installed on pacman.
See $SAMPLES_HOME/applications/tau for an example application using
tau.
module load PrgEnv-gnu tau/2.2.21.gnu jdk
or
module load PrgEnv-pgi tau/2.2.21.pgi jdk
python: python-2.7.2 (2012-03-05)
Version 2.7.2 of python is now available via modules
module load python/2.7.2
idl/envi: idl-8.1 (2012-02-10)
Idl/envi version 8.1 is now available via modules
module load idl/8.1
openmpi: openmpi-1.4.3 (2012-01-14)
Openmpi version 1.4.3 is now available via modules for
three different compiler versions:
module load openmpi-gnu/1.4.3
module load openmpi-gnu-4.5/1.4.3
module load openmpi-pgi/1.4.3
comsol: comsol-4.2a.update2 (2012-01-13)
The newest comsol release is now available via modules
module load comsol/4.2a.update2
matlab: matlab version R2011b (2011-12-06)
Matlab R2011b is now available on pacman via modules:
module load matlab/R2011b
R: r-2.13.2 (2011-11-28)
A newer version of R is now available on pacman via
the r/2.13.2 module.
gaussian: gaussian-09.C.01.SSE4a (2011-11-11)
The newest gaussian release is now available for use by
UAF faculty, staff, and students. See $SAMPLES_HOME/applications
for an example job submission script and input file.
comsol: comsol-4.2a (2011-11-07)
The newest comsol release is now available via modules
module load comsol/4.2a
automake: automake-1.11.1 (2011-10-27)
Automake version 1.11.1 has been installed in the following location:
/usr/local/pkg/automake/automake-1.11.1/bin
autoconf: autoconf-2.68 (2011-10-27)
Autoconf version 2.68 has been installed in the following location:
/usr/local/pkg/autoconf/autoconf-2.68/bin
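These installs are announced by path rather than by module; a minimal
sketch for picking them up in a bash-style shell (adjust for csh/tcsh)
is to prepend the install directories to PATH:
export PATH=/usr/local/pkg/autoconf/autoconf-2.68/bin:$PATH
export PATH=/usr/local/pkg/automake/automake-1.11.1/bin:$PATH
autoconf --version    # should report 2.68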
petsc: petsc-3.2-p3 (2011-10-26)
The latest version of PetSC is available for PGI and GNU compilers
module load petsc/3.2-p3.pgi.opt - PGI Version
module load petsc/3.2-p3.gnu.opt - GNU Version
visit: visit 2.3.2 (2011-10-17)
Visit 2.3.2 is now available on pacman via modules:
module load visit/2.3.2
ncl: ncl 6.0.0 (2011-10-17)
NCL 6.0.0 is now available on pacman via modules:
module load ncl/6.0.0
ncview: ncview 2.1.1 (2011-09-19)
NCView 2.1.1 is now available on pacman. Due to a bug in the
previous version, the default has been updated to ncview-2.1.1.
The previous version is still available via the ncview/2.0 module.
paraview: paraview 3.10.1 (2011-08-23)
Paraview version 3.10.1 is available on pacman and supports
loading multiple files at once:
module load paraview/3.10.1
nwchem: nwchem 6.0 (2011-08-19)
NWChem is now available on pacman. An example job script is
available in:
$SAMPLES_HOME/applications/nwchem
octave: octave version 3.4.2 (2011-08-19)
A new release of octave has been installed on pacman. This
version includes netcdf support via the octcdf package.
module load octave/3.4.2
abaqus: abaqus version 6.11-1 (2011-07-18)
The newest version of abaqus is available via modules:
module load abaqus/6.11
totalview: totalview version 8.9.1-0 (2011-06-28)
The newest version of totalview is available via modules:
module load totalview/8.9.1-0
matlab: matlab version R2011a (2011-06-28)
Matlab R2011a is now available on pacman via modules:
module load matlab/R2011a
mexnc: Mexnc/Mexcdf version r3487 (2011-02-21)
mexnc is now available for matlab-R2010a via the
following:
module load matlab/R2010a
matlab
addpath /usr/local/pkg/mexnc/mexnc-r3487/mexcdf/mexnc
osg: OpenSceneGraph-2.8.3 (2010-12-28)
osg is now available on pacman via the osg/2.8.3 module.
R: r-2.11.1 (2010-11-30)
R is now available on pacman via the r/2.11.1 module.
scalapack: scalapack-1.8.0 (2010-11-07)
ScaLAPACK is now available on pacman for GNU and PGI compilers.
PGI: /usr/local/pkg/scalapack/scalapack-1.8.0.pgi/lib
GNU: /usr/local/pkg/scalapack/scalapack-1.8.0.gnu/lib
blacs: blacs (2010-11-07)
BLACS is now available on pacman for GNU and PGI compilers.
PGI: /usr/local/pkg/blacs/blacs-1.1.3.pgi/lib
GNU: /usr/local/pkg/blacs/blacs-1.1.3.gnu/lib
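A sketch of linking a program against these ScaLAPACK and BLACS
installs with the PGI MPI wrapper. The archive names (-lscalapack,
-lblacs) are assumptions; check the lib directories above for the
exact library names installed.
module load PrgEnv-pgi
mpif90 -o solver solver.f90 \
  -L/usr/local/pkg/scalapack/scalapack-1.8.0.pgi/lib -lscalapack \
  -L/usr/local/pkg/blacs/blacs-1.1.3.pgi/lib -lblacs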
silo: silo-4.8 (2010-11-07)
Silo version 4.8 is now available on pacman in the following directory:
/usr/local/pkg/silo/silo-4.8.gnu
gnuplot: gnuplot-4.4.1 (2010-11-07)
Gnuplot version 4.4.1 is now available via the gnuplot/4.4.1 module.
"software"
Last Updated: Wed, 31 Oct 2012 -Machines: fish
Fish Software
========================================
abaqus: abaqus version 6.11 (2012-12-26)
Version 6.11 of abaqus is available via modules:
module load abaqus/6.11
matlab: matlab version R2012b (2012-12-26)
Matlab R2012b is now available to UAF users via modules:
module load matlab/R2012b
matlab: matlab version R2012a (2012-12-07)
Matlab R2012a is now available to UAF users via modules:
module load matlab/R2012a
comsol: comsol version 4.3a (2012-11-30)
This version of comsol is now available to UAF users via modules:
module load comsol/4.3a
idl/envi: idl-8.2 and envi 5.0 (2012-10-31)
IDL version 8.2 and ENVI version 5.0 are now available
on fish via modules:
module load idl/8.2
"software"
Last Updated: Tue, 17 Feb 2009 -Machines: linuxws
Software on Linuxws
========================================
matlab: matlab-R2012a (2012-12-07)
Matlab version R2012a is now available to UAF users via modules:
module load matlab/R2012a
comsol: comsol-4.3a (2012-12-07)
The newest comsol release is now available to UAF users via modules:
module load comsol/4.3a
R: R-2.15.0 and R-2.15.1 (2012-08-03)
The two newest releases of R are now available.
Load the module to access your preferred version:
module load r/2.15.0
module load r/2.15.1
totalview: totalview-8.10.0-0 (2012-05-23)
The newest release of the Totalview Debugger is now available with
CUDA support. Load the module to access the newest version:
module load totalview/8.10.0-0
python: python-2.7.2 (2012-04-20)
Version 2.7.2 of python is now available via modules
module load python/2.7.2
idl/envi: idl-8.1 (2012-02-10)
Idl/envi version 8.1 is now available via modules
module load idl/8.1
comsol: comsol-4.2a.update2 (2012-01-13)
The newest comsol release is now available via modules
module load comsol/4.2a.update2
totalview: totalview.8.9.1-0 (2011-12-08)
Totalview Debugger version 8.9.1-0 is now available on the linux
workstations. Launch the software by first loading the module:
module load totalview/8.9.1-0
matlab: matlab-R2011b (2011-12-06)
Matlab version R2011b is now available to UAF users and
can be loaded via modules:
module load matlab/R2011b
R: R version 2.13.2 (2011-11-28)
A newer version is now available via modules:
module load r/2.13.2
comsol: comsol-4.2a (2011-11-02)
The newest comsol release is now available via modules
module load comsol/4.2a
automake: automake-1.11.1 (2011-11-01)
Automake version 1.11.1 has been installed in the following location:
/usr/local/pkg/automake/automake-1.11.1/bin
autoconf: autoconf-2.68 (2011-11-01)
Autoconf version 2.68 has been installed in the following location:
/usr/local/pkg/autoconf/autoconf-2.68/bin
visit: visit 2.3.2 (2011-10-17)
Visit 2.3.2 is now available on the Linux Workstations
via modules:
module load visit/2.3.2
ncl: ncl 6.0.0 (2011-10-17)
NCL 6.0.0 is now available on the Linux Workstations
via modules:
module load ncl/6.0.0
ncview: ncview 2.0 (2011-07-19)
Ncview version 2.0 is now available via modules
module load ncview/2.0
abaqus: abaqus 6.11 (2011-07-19)
The newest abaqus release is now available via modules
module load abaqus/6.11
comsol: comsol-4.2 (2011-06-08)
The new comsol release is now available via modules
module load comsol/4.2
matlab: matlab-7.10.0 (R2010a) (2010-10-20)
Matlab 7.10.0 is now available to UAF users. A module is
available for this software and can be loaded with:
module load matlab-7.10.0
After loading the module, type 'matlab' at the prompt
to open a new matlab session.
mexnc: mexnc-r3240 (aka mexcdf) (2010-10-07)
Mexnc/mexcdf is now available. To use this software
with matlab, first load the matlab-7.8.0 module
then enter the following at the matlab prompt:
addpath /usr/local/pkg/mexnc/mexcdf-r3240/mexnc
comsol: comsol-4.0a (2010-09-20)
The newest version of comsol is now available in
/usr/local/pkg/comsol/comsol-4.0a.
idl: idl-7.1 (2010-05-17)
idl 7.1 is now available on the linux workstations. A
module is available as idl-7.1 for use.
gsl: gsl-1.13 GNU Scientific Library 1.13 (2010-01-14)
The newest version of GSL is now available. This version
was compiled using the GNU compiler and is now available
on the Workstations in the following location:
/usr/local/pkg/gsl/gsl-1.13/
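A sketch of compiling a C program against this GSL install with the
GNU compiler; the include/lib subdirectory layout shown is the usual
GSL layout and is an assumption here.
gcc -o integrate integrate.c \
  -I/usr/local/pkg/gsl/gsl-1.13/include \
  -L/usr/local/pkg/gsl/gsl-1.13/lib -lgsl -lgslcblas -lm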
pgi: pgi-9.0.4 (2010-01-11)
The pgi 9.0.4 compiler is now available via the
pgi-9.0.4 module.
paraview: paraview-3.6.2 (2010-01-11)
Paraview 3.6.2 is now available via the paraview-3.6.2
module.
vapor: vapor-1.5.0 (2009-10-21)
Vapor 1.5.0 is now available via the vapor-1.5.0 module.
paraview: paraview-3.6.1 (2009-10-20)
Paraview 3.6.1 is now available via the paraview-3.6.1
module.
idv: idv-2.7u2 (2009-10-20)
idv 2.7 update 2 is now available via the idv-2.7u2 module.
subversion: subversion-1.6.3 (2009-09-03)
The latest version of subversion is available via the
subversion-1.6.3 module.
google-earth: google-earth-5.0 (2009-08-31)
Google Earth 5.0 is now available on the Workstations in the
following location:
/usr/local/pkg/google-earth/google-earth-5.0
ncl: ncl-5.1.1 (2009-08-05)
The latest version of ncl is now available via the "ncl-5.1.1"
module.
git: git-1.6.1.3 (2009-07-28)
git 1.6.1.3 is now available. This package is available
by loading the "git-1.6.1.3" module.
abaqus: Abaqus 6.9 (2009-07-09)
Abaqus 6.9 is now available. This package is available
by loading the "abaqus-6.9" module.
matlab: matlab-7.8.0 (2009-06-19)
Matlab 7.8.0 is now available. A module is
available for this software and can be loaded with:
module load matlab-7.8.0
After loading the module, type 'matlab' at the prompt
to open a new matlab session.
cmake: cmake-2.6.4
The latest version of cmake is available via the cmake-2.6.4
module.
blender: blender-2.48a
Blender 2.48a is now available on the Workstations. It is
available in a module, as blender-2.48a
avizo: avizo-6.0 (2009-04-30)
Avizo 6.0 is now available in the following directory:
/usr/local/pkg/avizo/avizo60
visit: visit-1.11.2 (2009-04-29)
VisIt 1.11.2 is now available via the "visit-1.11.2" module.
ncl: ncl-5.1.0 (2009-04-29)
The latest version of ncl is now available via the "ncl-5.1.0"
module.
paraview: paraview-3.4.0 (2009-03-10)
Paraview 3.4.0 is now available. It can be accessed with the
paraview-3.4.0 module file and is located in the
/usr/local/pkg/paraview/paraview-3.4.0 directory.
visit: visit-1.11.1 (2009-03-02)
VisIt is now available in /usr/local/pkg/visit/visit-1.11.1
Module files are available as both "visit" and "visit-1.11.1"
comsol: comsol-3.5a (2009-02-06)
The newest version of comsol is now available in
/usr/local/pkg/comsol/comsol-3.5a. This version appears
to resolve the previous errors when starting the software.
matlab: matlab-7.7.0 (2009-02-05)
The latest version of Matlab is available for
use by loading the matlab-7.7.0 module.
matlab: matlab-7.6.0 (2008-07-24)
The latest version of Matlab is available for
use by loading the matlab-7.6.0 module.
paraview: paraview-3.0.2 (2007-09-19)
Paraview version 3.0.2 has been installed into
/usr/local/pkg/paraview/paraview-3.0.2. It is
available via a module (paraview-3.0.2).
acml: acml-3.6.0 & acml-4.0.0 AMD Core Math Library (2007-09-14)
The ACML has been installed and is available in
/usr/local/pkg/acml. The following versions were installed:
acml-3.6.0.gcc
acml-3.6.0.pgi
acml-4.0.0.gcc
These libraries are available as of Sep 14th, 2007 and
the current link was set to point to acml-3.6.0.gcc.
idv: idv-2.2: Integrated Data Viewer (2007-07-28)
The new version of idv(2.2) has been installed
in /usr/local/pkg/idv/idv-2.2 and will be made
the default version on July 12th, 2007.
"storage"
Last Updated: Sun, 06 Jun 2010 -Machines: pacman
Pacman Storage
========================================
The environment variables listed below represent paths. They are
expanded to their actual value by the shell, and can be used in
commands (i.e. ls $ARCHIVE). From the command prompt, the expanded
path and the variable are usually interchangeable. However, in non-shell
settings like ftp, you will need to use the actual path,
not the variable.
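For example, to see the actual path behind a variable before using it
in a non-shell setting, echo the variable (the path in the comment is
illustrative):
e.g.
echo $ARCHIVE     # prints a path similar to /archive/u1/uaf/username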
In the listing below, $USER is an environment variable holding
your ARSC username.
Filesystem            Purpose                    Default Allowed Use
-------------------   ------------------------   -------------------
$HOME                 dotfiles, sm. files        4 GB
  /u1/uaf/$USER
$CENTER               do work here               750 GB
  /center/w/$USER
$ARCHIVE              long-term remote storage   no quota
  /archive/$HOME
-- $HOME: Home directories are intended primarily for basic account
info (e.g. dotfiles). Please use $CENTER (your /center/w/$USER
directory) for compiles, inputs, outputs, etc. Files in the
$HOME are backed up periodically. Quotas are enabled on this
filesystem. Use the command "du -sk $HOME" to show your current
$HOME use.
-- $ARCHIVE: Long-term backed up storage is only available in
your $ARCHIVE directory. As this is an NFS-mounted file system
from bigdipper, files will be temporarily unavailable when
bigdipper goes down for maintenance. I/O performance in this
directory will be much slower. Compiles in $ARCHIVE are not
recommended. $ARCHIVE is not available from compute nodes.
-- $CENTER: Short term, not backed up, purged file system. This is
a large fast local disk. The $CENTER file system is available
to all nodes on pacman and is also available on fish.arsc.edu.
This is the recommended location for input, output, and
temporary files. The $ARCHIVE file system is available for
long term storage.
NOTE: Usage of $CENTER and $HOME can be monitored with the
"show_storage" command.
Long Term Storage Use
======================
batch_stage
------------
Files saved in $ARCHIVE can potentially be offline (i.e. not
on disk). When accessing multiple files in $ARCHIVE, the
"batch_stage" can significantly speed the process of retrieving
files from tape.
e.g.
cd $ARCHIVE/somedirectory
find . -type f | batch_stage -i
See "man batch_stage" for additional examples.
/usr/bin/rcp
-------------
While $ARCHIVE is available as an NFS file system, higher
transfer rates can be obtained by using the "rcp" command for
large transfers to and from $ARCHIVE.
The non-kerberosized version of rsh may be used to transfer files to
$ARCHIVE using the "bigdip-s" hostname.
e.g.
/usr/bin/rcp results.tar "bigdip-s:$ARCHIVE"
NOTE: The full path to rcp (i.e. /usr/bin/rcp) must be used.
$CENTER file purging
---------------------
File purging on the $CENTER directory is not currently enabled. Data
not in use should be migrated to $ARCHIVE. Use may be monitored with the
"show_storage" command.
See http://www.arsc.edu/support/howtos/storage for more information
on storage best practices at ARSC.
"storage"
Last Updated: Mon, 19 Jun 2006 -Machines: linuxws
Linux Workstation Storage
========================================
The environment variables listed below represent paths. They are
expanded to their actual value by the shell, and can be used in commands
(i.e. ls $ARCHIVE). From the command prompt the value and the variable
are usually interchangeable. However, in non-shell settings like ftp
you will need to use the actual path, not the variable.
In the listing below, $USER is an environment variable holding your
ARSC username.
Filesystem      Purpose                  Purged   Backed Up   Quota
-------------   ----------------------   ------   ---------   -----
$HOME           shared filesystem        No       Yes         4 GB
$WRKDIR         temp filesystem          Yes      No          None (1)
$WORKDIR
$SCRATCH
$ARCHIVE        long term storage        No       Yes         None
$ARCHIVE_HOME
NOTES:
(1) Use is limited by the available space on the disk.
Environment Variable Definitions
=================================
Variable         Definition
--------------   ---------------------
$HOME            /u1/uaf/$USER or
                 /u2/wes/$USER or
                 /u2/red/$USER
$WRKDIR          /scratch/$USER
$WORKDIR         /scratch/$USER
$SCRATCH         /scratch/$USER
$ARCHIVE         /archive/$HOME
$ARCHIVE_HOME    /archive/$HOME
-- Home directories are intended primarily for basic account info
(e.g. dotfiles). Please use $WRKDIR (your /scratch/$USER directory)
for compiles, inputs, outputs, etc.
* The 'quota' command will show quota information for your $HOME
directory.
-- The $WRKDIR or $SCRATCH directories are local to each machine. (On the
Linux Workstations these two variables both point to the same
location.) When moving to another machine you will also need to move
your files. This file system is not backed up; files not accessed in
over 30 days are purged (deleted).
-- Your $SCRATCH directory is not created by default. If one does
not exist on the machine you are using, type 'mkdir $SCRATCH' to
create one.
-- Purging: Files not accessed in over 30 days in $WRKDIR ($SCRATCH)
directories are purged, and these directories are not backed up.
Please store what you want to keep in $ARCHIVE.
-- Long-term backed up storage is only available in your $ARCHIVE
directory. No other directories are backed up. As this is an
NFS-mounted filesystem from bigdipper, files will be temporarily
unavailable when bigdipper is taken down for maintenance. I/O
performance in this directory may be much slower. Compiles and runs in
$ARCHIVE are not recommended.
See http://www.arsc.edu/support/howtos/storage.html for more information
on storage policies at ARSC.
"storage"
Last Updated: Sun, 06 Jun 2010 -Machines: fish
Fish Storage
========================================
The environment variables listed below represent paths. They are
expanded to their actual value by the shell, and can be used in
commands (i.e. ls $ARCHIVE). From the command prompt, the expanded
path and the variable are usually interchangeable. However, in non-shell
settings like ftp, you will need to use the actual path,
not the variable.
In the listing below, $USER is an environment variable holding
your ARSC username.
Filesystem            Purpose                    Default Allowed Use
-------------------   ------------------------   -------------------
$HOME                 dotfiles, sm. files        8 GB
  /u1/uaf/$USER
$CENTER               do work here               750 GB
  /center/w/$USER
$ARCHIVE              long-term remote storage   no quota
  /archive/$HOME
-- $HOME: Home directories are intended primarily for basic account
info (e.g. dotfiles). Please use $CENTER (your /center/w/$USER
directory) for compiles, inputs, outputs, etc. Files in the
$HOME are backed up periodically. Quotas are enabled on this
filesystem. Use the command "show_storage" to show your current
$HOME use.
-- $ARCHIVE: Long-term backed up storage is only available in
your $ARCHIVE directory. As this is an NFS-mounted file system
from bigdipper, files will be temporarily unavailable when
bigdipper goes down for maintenance. I/O performance in this
directory will be much slower. Compiles in $ARCHIVE are not
recommended. $ARCHIVE is not available from compute nodes.
-- $CENTER: Short term, not backed up, purged file system. This is
a large fast local disk. The $CENTER file system is available
to all nodes on fish. This is the recommended location
for input, output, and temporary files. The $ARCHIVE
file system is available for long term storage.
NOTE: Usage of $CENTER and $HOME can be monitored with the
"show_storage" command.
Long Term Storage Use
======================
batch_stage
------------
Files saved in $ARCHIVE can potentially be offline (i.e. not
on disk). When accessing multiple files in $ARCHIVE, the
"batch_stage" can significantly speed the process of retrieving
files from tape.
e.g.
cd $ARCHIVE/somedirectory
find . -type f | batch_stage -i
See "man batch_stage" for additional examples.
/usr/bin/rcp
-------------
While $ARCHIVE is available as an NFS file system, higher
transfer rates can be obtained by using the "rcp" command for
large transfers to and from $ARCHIVE.
The non-kerberosized version of rsh may be used to transfer files to
$ARCHIVE using the "bigdip-s" hostname.
e.g.
/usr/bin/rcp results.tar "bigdip-s:$ARCHIVE"
NOTE: The full path to rcp (i.e. /usr/bin/rcp) must be used to
make transfers without a ticket.
$CENTER file purging
---------------------
File purging on the $CENTER directory is not currently enabled. Data
not in use should be migrated to $ARCHIVE. Use may be monitored with
the "show_storage" command.
See http://www.arsc.edu/support/howtos/storage for more information
on storage best practices at ARSC.
"totalview"
Last Updated: Fri, 25 Mar 2011 -Machines: fish
Totalview on Fish
========================================
Totalview is available on fish and can be used to debug MPI, OpenMP and
serial applications. Generally debugging should occur on compute nodes
through the use of an interactive PBS job. Totalview may be run on login
nodes to inspect core files.
The instructions below are prefaced by a prompt corresponding to a system
name where the command should be run.
* fish % corresponds to a fish login node (e.g. fish1 or fish2 ).
* fish-compute% corresponds to a fish compute node.
* local% corresponds to the name of your local workstation.
I. MPI Code Compilation
MPI applications should be compiled with "-g" in order to get the best
possible debugging information.
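A sketch, assuming the Cray compiler wrappers (ftn/cc) are used with
the currently loaded PrgEnv module:
fish % ftn -g -o mpi_app mpi_app.f90     # Fortran
fish % cc -g -o mpi_app mpi_app.c        # C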
II. Starting an interactive job with X11 forwarding enabled.
A) Log into fish1 or fish2 with X11 forwarding enabled.
local% ssh -X -Y username@fish1.arsc.edu
B) Start an interactive PBS job requesting the number of processors
required for your job.
* The "standard" queue may be used to debug application
requiring up to 12 MPI tasks per node.
pacman % qsub -q standard -l nodes=2:ppn=12 -X -I
The "-X" qsub option enables X11 forwarding and the "-I"
option requests that the job be interactive.
When there are a sufficient number of nodes available, torque
will start the job.
III. Running totalview.
For MPI applications, start the application using "aprun"
fish-compute % module load xt-totalview
# serial example
fish-compute % totalview ./a.out
# MPI Example
fish-compute % totalview aprun -a -n 24 ./a.out
# Command line args after "-a" are passed to the command being run.
# In this case "aprun".
Additional hints:
1) Code should be compiled with -g. This makes it possible for
totalview to refer back to the source code. Code compiled without
-g will appear as assembly and you will not have meaningful access
to variable values.
2) You can view core files with totalview by passing the executable
and core file to totalview. A core file from an MPI application
can be viewed without using aprun.
fish % totalview ./a.out core.1234
"totalview"
Last Updated: Fri, 25 Mar 2011 -Machines: pacman
Totalview on Pacman
========================================
Totalview is available on pacman and can be used to debug MPI, OpenMP and
serial applications. Generally debugging should occur on compute nodes
through the use of an interactive PBS job. Totalview may be run on login
nodes to debug short serial applications or to inspect core files.
The instructions below are prefaced by a prompt corresponding to a system
name where the command should be run.
* pacman % corresponds to a pacman login node (e.g. pacman1 or pacman2 ).
* pacman-compute% corresponds to a pacman compute node.
* local% corresponds to the name of your local workstation.
I. MPI Code Compilation
MPI applications must be compiled using PrgEnv-pgi/11.2
or newer or PrgEnv-gnu/4.5.1 or newer. Older PrgEnv modules use
an MPI stack which doesn't work properly with totalview. If your
code requires an older version of the PGI or GNU compiler and MPI
support, please contact the help desk for assistance.
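A sketch of compiling with debugging symbols under one of the
supported programming environments; the mpif90/mpicc wrapper names
assume the OpenMPI stack loaded by the PrgEnv module:
pacman % module load PrgEnv-pgi/11.2
pacman % mpif90 -g -o mpi_app mpi_app.f90     # Fortran
pacman % mpicc -g -o mpi_app mpi_app.c        # C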
II. Starting an interactive job with X11 forwarding enabled.
A) Log into pacman1 or pacman2 with X11 forwarding enabled.
local% ssh -X -Y username@pacman.arsc.edu
B) Start an interactive PBS job requesting the number of processors
required for your job.
* The "standard_16" queue may be used to debug application
requiring up to 16 MPI tasks per node.
pacman % qsub -q standard_16 -l nodes=2:ppn=16 -X -I
* The "bigmem" queue may be used to debug applications requiring
up to 32 MPI tasks per node.
pacman % qsub -q bigmem -l nodes=2:ppn=32 -X -I
The "-X" qsub option enables X11 forwarding and the "-I"
option requests that the job be interactive.
When there are a sufficient number of nodes available, torque
will start the job.
III. Running totalview.
A) For MPI applications, load the totalview module and start
totalview with the executable as its argument.
pacman-compute % module load totalview
pacman-compute % totalview ./a.out
Totalview starts the executable (a.out) under its control; the
MPI launch settings are configured in the next step.
B) In the Totalview "Setup Parameters" window, choose the following:
a) Click on the "Parallel" tab
b) Choose "OpenMPI" for the "Parallel System:"
c) Select the total number of tasks to be run
d) Select the total number of nodes (this value should equal the
number of nodes requested in step II.B).
e) Click "Okay" and begin the Totalview debugging session.
Additional hints:
1) Code should be compiled with -g. This makes it possible for
totalview to refer back to the source code. Code compiled without
-g will appear as assembly and you will not have meaningful access
to variable values.
2) You can view core files with totalview by passing the executable
and core file to totalview. A core file from an MPI application
can be viewed without using mpirun.
pacman % totalview ./a.out core.1234
For more information, see
http://www.roguewave.com/products/totalview-family/totalview.aspx