ARSC system news for all systems


News Items

"CENTER Old File Removal"

Last Updated: Tue, 17 Dec 2013 -
Machines: linuxws pacman fish
CENTER Old File Removal 
========================================
ARSC has launched the automatic deletion of old files
residing on the $CENTER filesystem.  The automatic tool will run
weekly and will target files older than 30 days. 

To identify which of your files are eligible for
deletion, try running the following command: 
lfs find $CENTER -type f -atime +30
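
To get a quick count of how many of your files currently match, the same
command can be piped through "wc" (a simple extension of the example above,
not part of the original announcement):

lfs find $CENTER -type f -atime +30 | wc -l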

Remember, there are NO backups for data in $CENTER.  Once the data
is deleted, the data is gone.

Note: Modification of file timestamp information, data, or metadata
for the sole purpose of bypassing the automated file removal tool
is prohibited.

The policy regarding the deletion of old files is available on the
ARSC website: http://www.arsc.edu/arsc/support/policy/#removal

Users are encouraged to move important but infrequently used
data to the intermediate and long term $ARCHIVE storage
filesystem. Recommendations for optimizing $ARCHIVE file
storage and retrieval are available on the ARSC website:
http://www.arsc.edu/arsc/knowledge-base/long-term-storage-best-pr/index.xml
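
As a minimal sketch of such a move (the directory name "myproject" is a
placeholder; bundling many small files into one tar file is generally
friendlier to long-term storage than copying them individually):

tar -cf $ARCHIVE/myproject.tar -C $CENTER myproject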

Please contact the ARSC Help Desk with questions regarding the
automated deletion of old files in $CENTER.

"LDAP Passwords"

Last Updated: Mon, 20 May 2013 -
Machines: linuxws pacman bigdipper fish
    
How to update your LDAP password 
========================================

User authentication and login to ARSC systems use University of Alaska
(UA) passwords and follow the LDAP protocol to connect to the University's
Enterprise Directory.  Because of this, users must change their passwords
using the UA Enterprise tools.

If you see the following message while logging into an ARSC system, please
change your password at https://elmo.alaska.edu:

  Password: 
  You are required to change your LDAP password immediately.
  Enter login(LDAP) password:

Attempts to change your password on ARSC systems will fail.

Please contact the ARSC Help Desk if you are unable to log into
https://elmo.alaska.edu to change your login password.

  

"LSI HW Support Expired"

Last Updated: Wed, 11 Dec 2013 -
Machines: lsi
LSI Hardware is Now Off Vendor Support
========================================
The hardware vendor support contract for the LSI hardware has expired.
Existing LSI hardware can no longer be repaired under contract
or warranty. If a hardware failure occurs, all or part of the LSI
infrastructure may cease to operate.

All compute hardware still under vendor support is being migrated to
the Arctic Region Supercomputing Center (ARSC) pacman.arsc.edu system
located on the UAF campus.

All storage hardware will continue to support existing LSI file share
services and the LSI compute portal. However, all data is "AT RISK"
of being lost in the event of a catastrophic hardware failure.

All backup policies will continue to be honored until a hardware
failure occurs. Users are strongly encouraged to maintain a backup
of their information and data in case of a catastrophic failure.

"LSI Login Node Retired"

Last Updated: Wed, 11 Dec 2013 -
Machines: lsi
LSI Login Node, Anyu, Retired
========================================
The compute node "anyu.inbre.alaska.edu" is retired. During the
last maintenance update, the hardware failed to boot properly.
There are insufficient replacement parts to safely maintain the
node for supporting user activity. User data which resided on "anyu"
is available upon request.

"LSI Portal Migration"

Last Updated: Wed, 11 Dec 2013 -
Machines: lsi
LSI Compute Portal Migration
========================================
The LSI Compute Portal available at
https://biotech.inbre.alaska.edu/portal will begin its migration to
a new host in 2014.

Following the January 2014 scheduled LSI System downtime, a clone of
the LSI Compute Portal will be available at https://biotech.arsc.edu.
This clone will enable bioinformatics users to submit jobs to the
ARSC pacman system hosting LSI compute hardware and a 128 core
bioinformatics node with 2 TB of RAM. Jobs submitted to the LSI
Compute Portal will run on the LSI hardware via the "bio" queue in
pacman's batch scheduling environment.

The original LSI Compute Portal
(https://biotech.inbre.alaska.edu/portal) will remain in operation
through December 2014 and will continue to submit jobs to the remaining
LSI compute hardware which will remain separate from the ARSC pacman
system.

"Login Cluster Retirement"

Last Updated: Wed, 11 Dec 2013 -
Machines: lsi
LSI "tuxedo" System to be Retired
========================================

During the May 2014 UAF Fire Alarm and Safety Test Downtime, the
"tuxedo.inbre.alaska.edu" LSI login cluster will be retired. Replacing
the service provided by the tuxedo login cluster will be "pacman.arsc.edu",
the Penguin Computing Cluster hosted by the Arctic Region Supercomputing
Center on the UAF campus.

The pacman system is a 2816 core system offering both interactive logins
and batch job submissions from the cluster itself and the LSI Compute
Portal.  The pacman system supports long runtime batch jobs and a 128 core
node with 2 TB of RAM for bioinformatics applications.

For information on how to access pacman and the 128 core node dedicated
to the use of bioinformatics applications, please contact ARSC User
Support at consult@arsc.edu.

"New Default PrgEnv-pgi"

Last Updated: Wed, 26 Jun 2013 -
Machines: pacman
    
Updated Default PrgEnv-pgi module to 13.4
========================================
In response to several observed cases in which the PGI 12.10 compiler failed
to generate a working executable, we will be moving the pacman default
PrgEnv-pgi module from PrgEnv-pgi/12.10 to PrgEnv-pgi/13.4.

This will affect users who run the "module load PrgEnv-pgi" command
instead of specifying a particular module version, e.g. "module load
PrgEnv-pgi/13.4" for program compilation or in their job submission
scripts.

If you are currently compiling and running successfully with
PrgEnv-pgi/12.10, you are welcome to continue using that version.
Make sure you review your ~/.profile or ~/.cshrc files and explicitly
load the PrgEnv-pgi/12.10 module instead of "module load PrgEnv-pgi".
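
As a minimal sketch, pinning the older version in a bash startup file
(adapt for ~/.cshrc if you use csh or tcsh):

  # ~/.profile -- keep using the 12.10 compiler after the default changes
  module load PrgEnv-pgi/12.10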

If your code is failing to compile or run properly with
PrgEnv-pgi/12.10 (the current system default), we encourage you to
try again using the PrgEnv-pgi/13.4 environment instead. The
"module swap PrgEnv-pgi/12.10 PrgEnv-pgi/13.4" command will switch
versions of the PGI compiler for you.

Please forward any questions regarding this change or any issues with
compiling or running your program on the pacman system to the ARSC
Help Desk.

    
    
  

"PrgEnv"

Last Updated: Wed, 22 Oct 2008 -
Machines: pacman
Programming Environments on pacman
====================================
Compiler and MPI Library versions on pacman are controlled via
the modules package.  New accounts load the "PrgEnv-pgi" module by
default.  This module adds the PGI compilers and the OpenMPI stack 
to the PATH.  

Should you experience problems with a compiler or library, in many
cases a newer programming environment may be available.

Below is a description of available Programming Environments:

Module Name      Description
===============  ==============================================
PrgEnv-pgi       Programming environment using PGI
                 compilers and MPI stack (default version).

PrgEnv-gcc       Programming environment using GNU compilers 
                 and MPI stack.


For a list of the latest available Programming Environments, run:

   pacman1 748% module avail PrgEnv-pgi
   ------------------- /usr/local/pkg/modulefiles -------------------
   PrgEnv-pgi/10.5           PrgEnv-pgi/11.2           
   PrgEnv-pgi/9.0.4(default) 


If no version is specified when the module is loaded, the "default"
version will be selected.
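
For example, to request a specific, non-default version explicitly (the
version numbers are taken from the listing above):

   module load PrgEnv-pgi/11.2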


Programming Environment Changes
================================
The following is a table of recent additions and changes to the
Programming Environment on pacman.

Updates on 1/9/2013
====================

Default Module Updates
-----------------------
The default modules for the following packages will be updated on 1/9/2013.

  module name          new default        previous default
  ===================  =================  ================
  abaqus               6.11               6.10
  comsol               4.2a               4.3a
  grads                1.9b4              2.0.2
  idl                  8.2                6.4
  matlab               R2011b             R2010a
  ncl                  6.0.0              5.1.1
  nco                  4.1.0              3.9.9
  OpenFoam             2.1.0              1.7.1
  petsc                3.3-p3.pgi.opt     3.1-p2.pgi.debug
  pgi                  12.5               9.0.4
  PrgEnv-pgi           12.5               9.0.4              
  python               2.7.2              2.6.5
  r                    2.15.2             2.11.1
  totalview            8.10.0-0           8.8.0-1
  

Retired Modules
----------------
The following module files will be retired on 1/9/2013.

* PrgEnv-gnu/prep0
* PrgEnv-gnu/prep1
* PrgEnv-gnu/prep2
* PrgEnv-gnu/prep3
* PrgEnv-pgi/prep0
* PrgEnv-pgi/prep1
* PrgEnv-pgi/prep2
* PrgEnv-pgi/prep3

Known Issues:
-------------
* Some users have reported seg faults for applications compiled with
  PrgEnv-pgi/12.5 when CPU affinity is enabled (e.g. --bind-to-core
  or  --mca mpi_paffinity_alone 1).  Applications compiled with 
  PrgEnv-pgi/12.10 do not appear to have this issue.

"PrgEnv"

Last Updated: Mon, 02 Jul 2012 -
Machines: fish
Programming Environment on Fish
========================================
Compiler and MPI Library versions on fish are controlled via
the modules package.  All accounts load the "PrgEnv-pgi" module by
default.  This module adds the PGI compilers to the PATH.  

Should you experience problems with a compiler or library, in many
cases a newer programming environment may be available.

Below is a description of available Programming Environments:

Module Name      Description
===============  ==============================================
PrgEnv-pgi       Programming environment using PGI
                 compilers and MPI stack (default version).

PrgEnv-cray      Programming environment using Cray compilers 
                 and MPI stack.

PrgEnv-gcc       Programming environment using GNU compilers 
                 and MPI stack.

Additionally, multiple compiler versions may be available.

Module Name      Description
===============  ==============================================
pgi              The PGI compiler and related tools

cce              The Cray Compiler Environment and related tools

gcc              The GNU Compiler Collection and related tools.


To list the available versions of a package, use the "module avail <pkg>" command:

% module avail pgi
-------------------------- /opt/modulefiles --------------------------
pgi/12.3.0(default) pgi/12.4.0
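
To move between two listed versions, "module swap" can be used (the version
numbers below are those shown in the listing above; adjust to whatever
"module avail" reports):

% module swap pgi/12.3.0 pgi/12.4.0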


Programming Environment Changes
================================
The following is a table of recent additions and changes to the
Programming Environment on fish.

  Date         Module Name            Description
  ----------   ---------------------  -----------------------------------

"modules"

Last Updated: Sun, 06 Jun 2010 -
Machines: linuxws pacman
Using the Modules Package
=========================

The modules package is used to prepare the environment for various
applications before they are run.  Loading a module will set the
environment variables required for a program to execute properly.
Conversely, unloading a module will unset all environment variables
that had been previously set.  This functionality is ideal for
switching between different versions of the same application, keeping
differences in file paths transparent to the user.


Sourcing the Module Init Files
---------------------------------------------------------------------
For some jobs it may be necessary to source these files manually, as they
are not always sourced automatically the way they are for login shells.
 
Before the modules package can be used, its init file must first be
sourced.

To do this using tcsh or csh, type:

   source /etc/profile.d/modules.csh

To do this using bash, ksh, or sh, type:

   . /etc/profile.d/modules.sh

Once the modules init file has been sourced, the following commands
become available:

Command                     Purpose
---------------------------------------------------------------------
module avail                - list all available modules
module load <pkg>           - load a module file from environment
module unload <pkg>         - unload a module file from environment
module list                 - display modules currently loaded
module switch <old> <new>   - replace module <old> with module <new>
module purge                - unload all modules
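
As a short illustrative sequence (not from the original item), a batch
script running under bash might source the init file and then manage its
own modules:

   # make the module command available in a non-login shell
   . /etc/profile.d/modules.sh
   # load the default programming environment and confirm what is loaded
   module load PrgEnv-pgi
   module list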

"modules"

Last Updated: Sun, 06 Jun 2010 -
Machines: fish
Using the Modules Package
=========================

The modules package is used to prepare the environment for various
applications before they are run.  Loading a module will set the
environment variables required for a program to execute properly.
Conversely, unloading a module will unset all environment variables
that had been previously set.  This functionality is ideal for
switching between different versions of the same application, keeping
differences in file paths transparent to the user.


Sourcing the Module Init Files
---------------------------------------------------------------------
For some jobs it may be necessary to source these files manually, as they
are not always sourced automatically the way they are for login shells.
 
Before the modules package can be used, its init file must first be
sourced.

To do this using tcsh or csh, type:

   source /opt/modules/default/init/tcsh

To do this using bash, type

   source /opt/modules/default/init/bash

To do this using ksh, type:

   source /opt/modules/default/init/ksh

Once the modules init file has been sourced, the following commands
become available:

Command                     Purpose
---------------------------------------------------------------------
module avail                - list all available modules
module load <pkg>           - load a module file from environment
module unload <pkg>         - unload a module file from environment
module list                 - display modules currently loaded
module switch <old> <new>   - replace module <old> with module <new>
module purge                - unload all modules (not recommended on fish)
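
For example, a tcsh user on fish could source the init file and switch
programming environments in one short sequence (an illustrative sketch
using the environments listed in the fish PrgEnv item above):

   source /opt/modules/default/init/tcsh
   module switch PrgEnv-pgi PrgEnv-cray
   module list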

"pubkeys"

Last Updated: Sun, 19 Dec 2010 -
Machines: linuxws
Setting up SSH Public Authentication on Linux/UNIX Systems
===========================================================

SSH public key authentication is available on ARSC Academic systems 
as an alternative to SecurID authentication.  This method of authentication 
allows you to log into ARSC Academic systems (e.g. pacman, midnight,
bigdipper) using a password, removing the need for a hardware 
authentication mechanism.  The following guide describes the procedure for 
enabling SSH public key authentication for your workstation account.

Linux and Mac Systems Instructions
==================================

Step #1 - Generate an SSH Key Pair on Your Local System

Note: If you have existing SSH keys on your system, you may want to back 
them up before generating a new key pair.

The SSH installation on your local system should have come with an 
executable named "ssh-keygen".  Use this command to generate an SSH 
public/private key pair:

  $ ssh-keygen

This program will prompt you for the location to save the key.  The rest 
of this guide will assume you chose the default location, 
$HOME/.ssh/id_rsa.

You will then be prompted to enter a password.  Please choose a long 
password with multiple character classes (e.g., lowercase letters, 
uppercase letters, numbers, and/or symbols).  After you set your password, 
the program will write two files to the location you specified:

  Private Key: $HOME/.ssh/id_rsa
  Public Key: $HOME/.ssh/id_rsa.pub

Do not share your private key.  Take precautions to make sure others 
cannot access your private key.
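
It is also good practice to keep the key files readable only by you.  The
following permissions are a common OpenSSH convention, shown here as a
suggestion rather than an ARSC requirement:

  chmod 700 ~/.ssh
  chmod 600 ~/.ssh/id_rsa
  chmod 644 ~/.ssh/id_rsa.pub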

Step #2 - Transfer Your Public Key to Pacman, Midnight, Linux Workstations, etc.

ARSC has developed a tool, "ssh-keymanage", to help you comply with our 
security policies while adding your SSH public keys to linux workstations.
When a public key is added to your account on the workstations, it must be
associated with a particular system that is allowed to authenticate with that 
key.  This is accomplished via SSH's "from=" clause, which is tied to a public 
key when it is inserted into SSH's authorized_keys file.

The basic usage for adding a public key to a workstation with the 
ssh-keymanage tool is:

  ssh-keymanage --add <keyfile> --host <hostname>

This usage assumes that you have already transferred the public key you 
generated in Step #1 to the workstation.  You will also need to know your local 
system's full hostname (e.g., "sysname.uaf.edu").

Alternatively, the following command can be used to transfer and add your 
key to a workstation all at once:

  cat ~/.ssh/id_rsa.pub | ssh -tt username@mallard.arsc.edu ssh-keymanage --add /dev/stdin --host sysname.uaf.edu

Step #3 - Enable SSH Public Key Authentication on Your Local System

Workstations are already configured to allow SSH public key authentication on 
the server side, but you will need to make sure the SSH client on your local 
machine is configured to allow SSH public key authentication.  There are 
several ways to do this, including:

a) Adding an option to your SSH command when you connect to a workstation:

     ssh -o PubkeyAuthentication=yes username@mallard.arsc.edu

b) Adding the following to your $HOME/.ssh/config file as a long-term 
   solution:

     Host mallard.arsc.edu
     PubkeyAuthentication yes

Windows Instructions
====================

Step #1 - Generate an SSH Key Pair on Your Local System

Note: If you have existing SSH keys on your system, you may want to back 
them up before generating a new key pair.

You will need to use PuTTY's "puttygen.exe" program to generate a key 
pair.  If you installed the HPCMP Kerberos Kit in the default location, 
you can run this program by clicking Start -> Run and entering the 
following into the "Open" text box:

  "C:\Program Files\HPCMP Kerberos\puttygen.exe"

Next, click the "Generate" button in this program.  This will prompt you 
to move the mouse around to generate randomness to create a unique key 
pair.  This may take you a few minutes.  Once this process is complete, 
you will be shown the public key for your SSH key pair.

Please enter a password for your key pair by filling out both the "Key 
passphrase" and "Confirm passphrase" text boxes.  Choose a long password 
with multiple character classes (e.g., lowercase letters, uppercase 
letters, numbers, and/or symbols).

Then, click the "Save private key" button.  You are free to choose the 
file name and location of the private key.  This guide will assume you 
saved the private key as "private.ppk" in your "My Documents" folder.  Do 
not share your private key.  Take precautions to make sure others cannot 
access your private key.  Proceed to Step #2, but do not close the "PuTTY 
Key Generator" yet.

Step #2 - Transfer Your Public Key to the workstations

Log into a workstation with your SecurID card and open your favorite text 
editor.  Copy the text in the "Public key for pasting into OpenSSH 
authorized_keys file" text area on the "PuTTY Key Generator" window.  
Paste this text into the text editor on a workstation and save this to a 
temporary file.  This guide will assume you named the file "pubkey".

ARSC has developed a tool, "ssh-keymanage", to help you comply with our 
security policies while adding your SSH public keys to a workstation.  When a 
public key is added to your account on the workstation, it must be associated 
with a particular system that is allowed to authenticate with that key.  This 
is accomplished via SSH's "from=" clause, which is tied to a public key 
when it is inserted into SSH's authorized_keys file.

The basic usage for adding a public key to the workstation with the 
ssh-keymanage tool is:

  ssh-keymanage --add <keyfile> --host <hostname>

You will need to know your local system's full hostname (e.g., 
"sysname.uaf.edu").  For example:

  ssh-keymanage --add pubkey --host sysname.uaf.edu

This command will report whether the key was successfully added.  Once the 
public key has been added, type "exit" to close PuTTY.

Step #3 - Add Your Private Key to PuTTY

Launch PuTTY again.  Click the + sign next to "SSH", under the 
"Connection" category.  Click the "Auth" section under the SSH 
subcategory.  Click the "Browse..." button under "Private key file for 
authentication" and select your private key file, "private.ppk".  Go back 
to the "Session" category and enter workstation name under "Host Name".

If you do not want to enter your private key every time you run PuTTY, you 
may wish to save your session settings by entering a name under "Saved 
Sessions" (e.g., "Workstation (pubkey auth)") and clicking "Save".  The next 
time you run PuTTY, you can reload these settings by selecting your saved 
session and clicking "Load".

Finally, click "Open".  Instead of being prompted for a SecurID passcode, 
you should be prompted for the password you set on your key pair in 
Step #1.  Enter your key pair password.  You should now be logged into 
the workstation.

"pubkeys"

Last Updated: Fri, 11 Jun 2010 -
Machines: pacman
Setting Up SSH Public Key Authentication On Linux/UNIX Systems
==============================================================

SSH public key authentication is available on ARSC Academic systems 
as an alternative to SecurID authentication.  This method of authentication 
allows you to log into ARSC Academic systems (e.g. pacman, midnight,
bigdipper) using a password, removing the need for a hardware 
authentication mechanism.  The following guide describes the procedure for 
enabling SSH public key authentication for your pacman account.

Linux and Mac Systems Instructions
==================================

Step #1 - Generate an SSH Key Pair on Your Local System

Note: If you have existing SSH keys on your system, you may want to back 
them up before generating a new key pair.

The SSH installation on your local system should have come with an 
executable named "ssh-keygen".  Use this command to generate an SSH 
public/private key pair:

  $ ssh-keygen

This program will prompt you for the location to save the key.  The rest 
of this guide will assume you chose the default location, 
$HOME/.ssh/id_rsa.

You will then be prompted to enter a password.  Please choose a long 
password with multiple character classes (e.g., lowercase letters, 
uppercase letters, numbers, and/or symbols).  After you set your password, 
the program will write two files to the location you specified:

  Private Key: $HOME/.ssh/id_rsa
  Public Key: $HOME/.ssh/id_rsa.pub

Do not share your private key.  Take precautions to make sure others 
cannot access your private key.

Step #2 - Transfer Your Public Key to Pacman, Midnight, etc.

ARSC has developed a tool, "ssh-keymanage", to help you comply with our 
security policies while adding your SSH public keys to pacman.  When a 
public key is added to your account on pacman, it must be associated with 
a particular system that is allowed to authenticate with that key.  This 
is accomplished via SSH's "from=" clause, which is tied to a public key 
when it is inserted into SSH's authorized_keys file.

The basic usage for adding a public key to pacman with the ssh-keymanage 
tool is:

  ssh-keymanage --add <keyfile> --host <hostname>

This usage assumes that you have already transferred the public key you 
generated in Step #1 to pacman.  You will also need to know your local 
system's full hostname (e.g., "sysname.uaf.edu").

Alternatively, the following command can be used to transfer and add your 
key to pacman all at once:

  cat ~/.ssh/id_rsa.pub | ssh -tt username@pacman.arsc.edu ssh-keymanage --add /dev/stdin --host sysname.uaf.edu

Step #3 - Enable SSH Public Key Authentication on Your Local System

Pacman is already configured to allow SSH public key authentication on the 
server side, but you will need to make sure the SSH client on your local 
machine is configured to allow SSH public key authentication.  There are 
several ways to do this, including:

a) Adding an option to your SSH command when you connect to pacman:

     ssh -o PubkeyAuthentication=yes username@pacman.arsc.edu

b) Adding the following to your $HOME/.ssh/config file as a long-term 
   solution:

     Host pacman
     PubkeyAuthentication yes
     Hostname pacman.arsc.edu
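
With a stanza like the one above in place, the shortened command below
should pick up both the full hostname and public key authentication
automatically ("username" is a placeholder):

     ssh username@pacman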

Windows Instructions
====================

Step #1 - Generate an SSH Key Pair on Your Local System

Note: If you have existing SSH keys on your system, you may want to back 
them up before generating a new key pair.

You will need to use PuTTY's "puttygen.exe" program to generate a key 
pair.  If you installed the HPCMP Kerberos Kit in the default location, 
you can run this program by clicking Start -> Run and entering the 
following into the "Open" text box:

  "C:\Program Files\HPCMP Kerberos\puttygen.exe"

Next, click the "Generate" button in this program.  This will prompt you 
to move the mouse around to generate randomness to create a unique key 
pair.  This may take you a few minutes.  Once this process is complete, 
you will be shown the public key for your SSH key pair.

Please enter a password for your key pair by filling out both the "Key 
passphrase" and "Confirm passphrase" text boxes.  Choose a long password 
with multiple character classes (e.g., lowercase letters, uppercase 
letters, numbers, and/or symbols).

Then, click the "Save private key" button.  You are free to choose the 
file name and location of the private key.  This guide will assume you 
saved the private key as "private.ppk" in your "My Documents" folder.  Do 
not share your private key.  Take precautions to make sure others cannot 
access your private key.  Proceed to Step #2, but do not close the "PuTTY 
Key Generator" yet.

Step #2 - Transfer Your Public Key to Pacman or Midnight

Log into pacman with your SecurID card and open your favorite text 
editor.  Copy the text in the "Public key for pasting into OpenSSH 
authorized_keys file" text area on the "PuTTY Key Generator" window.  
Paste this text into the text editor on pacman and save this to a 
temporary file.  This guide will assume you named the file "pubkey".

ARSC has developed a tool, "ssh-keymanage", to help you comply with our 
security policies while adding your SSH public keys to pacman.  When a 
public key is added to your account on pacman, it must be associated with 
a particular system that is allowed to authenticate with that key.  This 
is accomplished via SSH's "from=" clause, which is tied to a public key 
when it is inserted into SSH's authorized_keys file.

The basic usage for adding a public key to pacman with the ssh-keymanage 
tool is:

  ssh-keymanage --add <keyfile> --host <hostname>

You will need to know your local system's full hostname (e.g., 
"sysname.uaf.edu").  For example:

  ssh-keymanage --add pubkey --host sysname.uaf.edu

This command will report whether the key was successfully added.  Once the 
public key has been added, type "exit" to close PuTTY.

Step #3 - Add Your Private Key to PuTTY

Launch PuTTY again.  Click the + sign next to "SSH", under the 
"Connection" category.  Click the "Auth" section under the SSH 
subcategory.  Click the "Browse..." button under "Private key file for 
authentication" and select your private key file, "private.ppk".  Go back 
to the "Session" category and enter pacman.arsc.edu under "Host Name".

If you do not want to enter your private key every time you run PuTTY, you 
may wish to save your session settings by entering a name under "Saved 
Sessions" (e.g., "Pacman (pubkey auth)") and clicking "Save".  The next 
time you run PuTTY, you can reload these settings by selecting your saved 
session and clicking "Load".

Finally, click "Open".  Instead of being prompted for a SecurID passcode, 
you should be prompted for the password you set on your key pair in 
Step #1.  Enter your key pair password.  You should now be logged into 
pacman.

"pubkeys"

Last Updated: Fri, 11 Jun 2010 -
Machines: bigdipper
Setting Up SSH Public Key Authentication On Linux/UNIX Systems
==============================================================

SSH public key authentication is available on ARSC Academic systems 
as an alternative to SecurID authentication.  This method of authentication 
allows you to log into ARSC Academic systems (e.g. pacman, midnight,
bigdipper) using a password, removing the need for a hardware 
authentication mechanism.  The following guide describes the procedure for 
enabling SSH public key authentication for your bigdipper account.

Linux and Mac Systems Instructions
==================================

Step #1 - Generate an SSH Key Pair on Your Local System

Note: If you have existing SSH keys on your system, you may want to back 
them up before generating a new key pair.

The SSH installation on your local system should have come with an 
executable named "ssh-keygen".  Use this command to generate an SSH 
public/private key pair:

  $ ssh-keygen

This program will prompt you for the location to save the key.  The rest 
of this guide will assume you chose the default location, 
$HOME/.ssh/id_rsa.

You will then be prompted to enter a password.  Please choose a long 
password with multiple character classes (e.g., lowercase letters, 
uppercase letters, numbers, and/or symbols).  After you set your password, 
the program will write two files to the location you specified:

  Private Key: $HOME/.ssh/id_rsa
  Public Key: $HOME/.ssh/id_rsa.pub

Do not share your private key.  Take precautions to make sure others 
cannot access your private key.

Step #2 - Transfer Your Public Key to Bigdipper.

ARSC has developed a tool, "ssh-keymanage", to help you comply with our 
security policies while adding your SSH public keys to bigdipper.  When a 
public key is added to your account on bigdipper, it must be associated with 
a particular system that is allowed to authenticate with that key.  This 
is accomplished via SSH's "from=" clause, which is tied to a public key 
when it is inserted into SSH's authorized_keys file.

The basic usage for adding a public key to bigdipper with the ssh-keymanage 
tool is:

  ssh-keymanage --add <keyfile> --host <hostname>

This usage assumes that you have already transferred the public key you 
generated in Step #1 to bigdipper.  You will also need to know your local 
system's full hostname (e.g., "sysname.uaf.edu").
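
As with the pacman and workstation items, the transfer and ssh-keymanage
steps can likely be combined; the following one-line sketch assumes bigdipper
accepts the same usage shown for those systems:

  cat ~/.ssh/id_rsa.pub | ssh -tt username@bigdipper.arsc.edu ssh-keymanage --add /dev/stdin --host sysname.uaf.edu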

Step #3 - Enable SSH Public Key Authentication on Your Local System

Bigdipper is already configured to allow SSH public key authentication on the 
server side, but you will need to make sure the SSH client on your local 
machine is configured to allow SSH public key authentication.  There are 
several ways to do this, including:

a) Adding an option to your SSH command when you connect to bigdipper:

     ssh -o PubkeyAuthentication=yes username@bigdipper.arsc.edu

b) Adding the following to your $HOME/.ssh/config file as a long-term 
   solution:

     Host bigdipper
     PubkeyAuthentication yes
     Hostname bigdipper.arsc.edu

Windows Instructions
====================

Step #1 - Generate an SSH Key Pair on Your Local System

Note: If you have existing SSH keys on your system, you may want to back 
them up before generating a new key pair.

You will need to use PuTTY's "puttygen.exe" program to generate a key 
pair.  If you installed the HPCMP Kerberos Kit in the default location, 
you can run this program by clicking Start -> Run and entering the 
following into the "Open" text box:

  "C:\Program Files\HPCMP Kerberos\puttygen.exe"

Next, click the "Generate" button in this program.  This will prompt you 
to move the mouse around to generate randomness to create a unique key 
pair.  This may take you a few minutes.  Once this process is complete, 
you will be shown the public key for your SSH key pair.

Please enter a password for your key pair by filling out both the "Key 
passphrase" and "Confirm passphrase" text boxes.  Choose a long password 
with multiple character classes (e.g., lowercase letters, uppercase 
letters, numbers, and/or symbols).

Then, click the "Save private key" button.  You are free to choose the 
file name and location of the private key.  This guide will assume you 
saved the private key as "private.ppk" in your "My Documents" folder.  Do 
not share your private key.  Take precautions to make sure others cannot 
access your private key.  Proceed to Step #2, but do not close the "PuTTY 
Key Generator" yet.

Step #2 - Transfer Your Public Key to Bigdipper

Log into bigdipper with your SecurID card and open your favorite text 
editor.  Copy the text in the "Public key for pasting into OpenSSH 
authorized_keys file" text area on the "PuTTY Key Generator" window.  
Paste this text into the text editor on bigdipper and save this to a 
temporary file.  This guide will assume you named the file "pubkey".

ARSC has developed a tool, "ssh-keymanage", to help you comply with our 
security policies while adding your SSH public keys to bigdipper.  When a 
public key is added to your account on bigdipper, it must be associated with 
a particular system that is allowed to authenticate with that key.  This 
is accomplished via SSH's "from=" clause, which is tied to a public key 
when it is inserted into SSH's authorized_keys file.

The basic usage for adding a public key to bigdipper with the ssh-keymanage 
tool is:

  ssh-keymanage --add <keyfile> --host <hostname>

You will need to know your local system's full hostname (e.g., 
"sysname.uaf.edu").  For example:

  ssh-keymanage --add pubkey --host sysname.uaf.edu

This command will report whether the key was successfully added.  Once the 
public key has been added, type "exit" to close PuTTY.

Step #3 - Add Your Private Key to PuTTY

Launch PuTTY again.  Click the + sign next to "SSH", under the 
"Connection" category.  Click the "Auth" section under the SSH 
subcategory.  Click the "Browse..." button under "Private key file for 
authentication" and select your private key file, "private.ppk".  Go back 
to the "Session" category and enter bigdipper.arsc.edu under "Host Name".

If you do not want to enter your private key every time you run PuTTY, you 
may wish to save your session settings by entering a name under "Saved 
Sessions" (e.g., "Bigdipper (pubkey auth)") and clicking "Save".  The next 
time you run PuTTY, you can reload these settings by selecting your saved 
session and clicking "Load".

Finally, click "Open".  Instead of being prompted for a SecurID passcode, 
you should be prompted for the password you set on your key pair in 
Step #1.  Enter your key pair password.  You should now be logged into 
bigdipper.

"queues"

Last Updated: Wed, 17 Dec 2008 -
Machines: pacman
Pacman Queues
========================================

The queue configuration is as described below.  It is subject to
review and further updates.


   Login Nodes Use:
   =================
   The pacman1 and pacman2 login nodes are a shared resource and are 
   not intended for computationally or memory intensive work.  Processes 
   using more than 30 minutes of CPU time on login nodes may be killed 
   by ARSC without warning.  Please use compute nodes or pacman3 through
   pacman9 for computationally or memory intensive work.


   Queues:
   ===============
   Specify one of the following queues in your Torque/Moab qsub script
   (e.g., "#PBS -q standard"):

     Queue Name     Purpose of queue
     -------------  ------------------------------
     standard       General use routing queue, routes to standard_16 queue.
     standard_4     General use by all allocated users. Uses 4-core nodes.
                    
     standard_12    General use by all allocated users. Uses 12-core nodes.
                    
     standard_16    General use by all allocated users. Uses 16-core nodes.
                    
     bigmem         Usable by all allocated users requiring large memory 
                    resources. Jobs that do not require very large memory 
                    should consider the standard queues.  
                    Uses 32-core large memory nodes.
                                       
     debug          Quick turnaround queue for debugging work.  Uses 12-core 
                    and 16-core nodes.
                    
     background     For projects with little or no remaining allocation. 
                    This queue has the lowest priority, however projects
                    running jobs in this queue do not have allocation    
                    deducted. The number of running jobs or processors 
                    available to this queue may be altered based on system load.
                    Uses 16-core nodes.
                    
     shared         Queue which allows more than one job to be placed on a
                    node.  Jobs will be charged for the portion of the 
                    cores used by the job.  MPI, OpenMP and memory intensive
                    serial work should consider using the standard queue 
                    instead.   Uses 4-core nodes.
                      
     transfer       For data transfer to and from $ARCHIVE.  Be sure to 
                    bring all $ARCHIVE files online using batch_stage 
                    prior to the file copy.  

   See 'qstat -q' for a complete list of system queues.  Note, some 
   queues are not available for general use.


   Maximum Walltimes:
   ===================
   The maximum allowed walltime for a job is dependent on the number of 
   processors requested.  The table below describes maximum walltimes for 
   each queue.

   Queue             Min   Max     Max       
                    Nodes Nodes  Walltime Notes
   ---------------  ----- ----- --------- ------------
   standard_4           1   128 240:00:00 10-day max walltime.  
   standard_12          1     6 240:00:00 10-day max walltime.    
   standard_16          1    32  48:00:00 
   debug                1     6  01:00:00 Only runs on 12 & 16 core nodes.
   shared               1     1  48:00:00  
   transfer             1     1  60:00:00
   bigmem               1     4 240:00:00     
   background           1    11  08:00:00 Only runs on 16 core nodes.     


   NOTES:
   * Feb 7, 2013    - The gpu queue and nodes were retired from the compute
                      node pool.  Fish is available for applications requiring
                      GPUs.
   * Oct 1, 2012    - Max walltime for transfer increased to 60 hours.
   * Sept 18, 2012  - Removed references to $WORKDIR and $LUSTRE
   * March 2, 2012  - standard_4 was added to the available queues.
                      The $LUSTRE filesystem should be used with the
                      standard_4 queue.  Accessing files in $WORKDIR
                      from the standard_4 queue may result in significant
                      performance degradation.
   * March 14, 2012 - shared queue was moved from 12 core nodes to 4 
                      core nodes.    
                   

   PBS Commands:
   =============
   Below is a list of common PBS commands.  Additional information is
   available in the man pages for each command.

   Command         Purpose
   --------------  -----------------------------------------
   qsub            submit jobs to a queue
   qdel            delete a job from the queue   
   qsig            send a signal to a running job
   

   Running a Job:
   ==============
   To run a batch job, create a qsub script which, in addition to
   running your commands, specifies the processor resources and time
   required.  Submit the job to PBS with the following command.   (For
   more PBS directives, type "man qsub".)

     qsub <script file>

   Sample PBS scripts:
   --------------
   ## Beginning of MPI Example Script  ############
   #!/bin/bash
   #PBS -q standard_12          
   #PBS -l walltime=96:00:00 
   #PBS -l nodes=4:ppn=12
   #PBS -j oe
               
   cd $PBS_O_WORKDIR

   mpirun ./myprog


   ## Beginning of OpenMP Example Script  ############

   #!/bin/bash
   #PBS -q standard_16
   #PBS -l nodes=1:ppn=16
   #PBS -l walltime=8:00:00
   #PBS -j oe

   cd $PBS_O_WORKDIR
   export OMP_NUM_THREADS=16

   ./myprog    
   #### End of Sample Script  ##################



   Resource Limits:
   ==================
   The only resource limits users should specify are the walltime and the
   nodes/ppn values.  The "nodes" statement requests that a job be allocated
   a number of node chunks, each of the given "ppn" size.
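
   For example, a request for two 16-core nodes and a twelve-hour walltime
   could be written with directives like these (illustrative values,
   consistent with the sample scripts above):

     #PBS -l nodes=2:ppn=16
     #PBS -l walltime=12:00:00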
  

   Tracking Your Job:
   ==================
   To see which jobs are queued and/or running, execute this
   command:

     qstat -a



   Current Queue Limits:
   =====================
   Queue limits are subject to change and this news item is not always
   updated immediately.  For a current list of all queues, execute:

     qstat -Q

   For all limits on a particular queue:

     qstat -Q -f <queue-name>



   Maintenance
   ============
   Scheduled maintenance activities on Pacman use the Reservation 
   functionality of Torque/Moab to reserve all available nodes on the system.  
   This reservation keeps Torque/Moab from scheduling jobs which would still 
   be running during maintenance.  This allows the queues to be left running
   until maintenance.  Because walltime is used to determine whether or not a
   job will complete prior to maintenance, using a shorter walltime in your 
   job script may allow your job to begin running sooner.  

   e.g.
   If maintenance begins at 10AM and it is currently 8AM, jobs specifying
   walltimes of 2 hours or less will start if there are available nodes.


   CPU Usage
   ==========
   Only one job may run per node for most queues on pacman (i.e. jobs may 
   not share nodes). 
 
   If your job uses fewer than the number of available processors on a node, 
   the job will be charged for all processors on the node unless you use the
   "shared" queue.

   Utilization for all other queues is charged for the entire node, regardless
   of the number of tasks using that node (a worked example follows this list):

   * standard_4 - 4 CPU hours per node per hour
   * standard_12 - 12 CPU hours per node per hour
   * standard_16, debug, background - 16 CPU hours per node per hour
   * bigmem - 32 CPU hours per node per hour
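
   As an illustrative calculation (not part of the original notice): a job
   that runs for 8 hours on 4 nodes in the standard_16 queue is charged
   4 nodes x 16 CPU hours per node-hour x 8 hours = 512 CPU hours, even if
   it uses fewer than 16 cores on each node.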

"queues"

Last Updated: Wed, 17 Dec 2008 -
Machines: fish
Fish Queues
========================================

The queue configuration is as described below.  It is subject to
review and further updates.


   Login Nodes Use:
   =================
   Login nodes are a shared resource and are not intended for
   computationally or memory intensive work.  Processes using more
   than 30 minutes of CPU time on login nodes may be killed by ARSC
   without warning.  Please use compute nodes for computationally or
   memory intensive work.


   Queues:
   ===============
   Specify one of the following queues in your Torque/Moab qsub script
   (e.g., "#PBS -q standard"):

     Queue Name     Purpose of queue
     -------------  ------------------------------
     standard       Runs on 12 core nodes without GPUs
     standard_long  Runs longer jobs on 12 core nodes without GPUs.  
     gpu            Runs on 16 core nodes with 1 NVIDIA X2090 GPU per node.
     gpu_long       Runs longer jobs on 16 core nodes with 1 NVIDIA X2090 
                    GPU per node.
     debug          Quick turnaround debug queue.  Runs on GPU nodes.
     debug_cpu      Quick turnaround debug queue.  Runs on 12 core nodes.
     transfer       For data transfer to and from $ARCHIVE.  
                    NOTE: transfer queue is not yet functional.

   See 'qstat -q' for a complete list of system queues.  Note, some 
   queues are not available for general use.


   Maximum Walltimes:
   ===================
   The maximum allowed walltime for a job is dependent on the number of 
   processors requested.  The table below describes maximum walltimes for 
   each queue.

   Queue             Min   Max     Max       
                    Nodes Nodes  Walltime Notes
   ---------------  ----- ----- --------- ------------
   standard             1    32  24:00:00
   standard_long        1     2 168:00:00 12 nodes are available to this queue. 
   gpu                  1    32  24:00:00     
   gpu_long             1     2 168:00:00 12 nodes are available to this queue.
   debug                1     2   1:00:00 Runs on GPU nodes
   debug_cpu            1     2   1:00:00 Runs on 12 core nodes (no GPU)
   transfer             1     1  24:00:00 Not currently functioning correctly.


   NOTES:
   * August 11, 2012 - transfer queue is not yet functional.    
   * October 16, 2012 - debug queues and long queues were added to fish.

   PBS Commands:
   =============
   Below is a list of common PBS commands.  Additional information is
   available in the man pages for each command.

   Command         Purpose
   --------------  -----------------------------------------
   qsub            submit jobs to a queue
   qdel            delete a job from the queue   
   qsig            send a signal to a running job
   

   Running a Job:
   ==============
   To run a batch job, create a qsub script which, in addition to
   running your commands, specifies the processor resources and time
   required.  Submit the job to PBS with the following command.   (For
   more PBS directives, type "man qsub".)

     qsub <script file>

   Sample PBS scripts:
   --------------
   ## Beginning of MPI Example Script  ############
   #!/bin/bash
   #PBS -q standard          
   #PBS -l walltime=24:00:00 
   #PBS -l nodes=4:ppn=12
   #PBS -j oe
               
   cd $PBS_O_WORKDIR

   NP=$(( $PBS_NUM_NODES * $PBS_NUM_PPN ))
   aprun -n $NP ./myprog


   ## Beginning of OpenMP Example Script  ############

   #!/bin/bash
   #PBS -q standard
   #PBS -l nodes=1:ppn=12
   #PBS -l walltime=8:00:00
   #PBS -j oe

   cd $PBS_O_WORKDIR
   export OMP_NUM_THREADS=12

   aprun -d $OMP_NUM_THREADS ./myprog    
   #### End of Sample Script  ##################

   NOTE: jobs using the "standard" and "gpu" queues must run compute and memory 
   intensive applications using the "aprun" or "ccmrun" command.  Jobs failing
   to use "aprun" or "ccmrun" may be killed without warning.

   Resource Limits:
   ==================
   The only resource limits users should specify are the walltime and the
   nodes/ppn values.  The "nodes" statement requests that a job be allocated
   a number of node chunks, each of the given "ppn" size.
  

   Tracking Your Job:
   ==================
   To see which jobs are queued and/or running, execute this
   command:

     qstat -a



   Current Queue Limits:
   =====================
   Queue limits are subject to change and this news item is not always
   updated immediately.  For a current list of all queues, execute:

     qstat -Q

   For all limits on a particular queue:

     qstat -Q -f <queue-name>



   Maintenance
   ============
   Scheduled maintenance activities on Fish use the Reservation 
   functionality of Torque/Moab to reserve all available nodes on the system.  
   This reservation keeps Torque/Moab from scheduling jobs which would still 
   be running during maintenance.  This allows the queues to be left running
   until maintenance.  Because walltime is used to determine whether or not a
   job will complete prior to maintenance, using a shorter walltime in your 
   job script may allow your job to begin running sooner.  

   e.g.
   If maintenance begins at 10AM and it is currently 8AM, jobs specifying
   walltimes of 2 hours or less will start if there are available nodes.


   CPU Usage
   ==========
   Only one job may run per node for most queues on fish (i.e. jobs may 
   not share nodes). 
 
   If your job uses fewer than the number of available processors on a node, 
   the job will be charged for all processors on the node unless you use the
   "shared" queue.

   Utilization for all other queues is charged for the entire node, regardless
   of the number of tasks using that node (a worked example follows this list):

   * standard - 12 CPU hours per node per hour
   * standard_long - 12 CPU hours per node per hour
   * gpu - 16 CPU hours per node per hour
   * gpu_long - 16 CPU hours per node per hour
   * debug - 16 CPU hours per node per hour
   * debug_cpu - 12 CPU hours per node per hour
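
   As an illustrative calculation (not from the original notice): a 24-hour
   job on 2 nodes in the gpu queue is charged 2 nodes x 16 CPU hours per
   node-hour x 24 hours = 768 CPU hours, regardless of how many cores or
   GPUs it actually uses.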

"samples_home"

Last Updated: Wed, 31 Mar 2010 -
Machines: fish
Sample Code Repository
========================

Filename:       INDEX.txt 

Description:    This file contains the name, location, and brief 
                explanation of "samples" included in this Sample 
                Code Repository.  There are several subdirectories within 
                this code repository containing frequently-used procedures, 
                routines, scripts, and code used on this allocated system,
                fish.  This sample code repository can be accessed from 
                fish by changing directories to 
                $SAMPLES_HOME, or changing directories to the following 
                location: fish% /usr/local/pkg/samples.

                This particular file can be viewed from the internet at:

                http://www.arsc.edu/support/news/systemnews/fishnews.xml#samples_home

Contents:       applications
                jobSubmission
                libraries
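
To browse the repository or copy a sample into your own space, a sequence
like the following should work (the subdirectory names are taken from the
listings below; "my_mpi_test" is a placeholder):

  cd $SAMPLES_HOME
  cp -r jobSubmission/MPI_scripts ~/my_mpi_test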

*****************************************************************************
Directory:      applications

Description:    This directory contains sample PBS batch scripts for 
                applications installed on fish.

Contents:       abaqus

*****************************************************************************
Directory:      jobSubmission 

Description:    This directory contains sample PBS batch scripts
                and helpful commands for monitoring job progress.  
                Examples include options used when submitting a job, such as
                declaring which group membership you belong to
                (for allocation accounting), how to request a particular  
                software license, etc.

Contents:       MPI_OpenMP_scripts 
                MPI_scripts 
                OpenMP_scripts
                
*****************************************************************************
Directory:      libraries

Description:    This directory contains examples of common libraries and 
                programming paradigms.

Contents:       cuda  
                openacc
                scalapack

"samples_home"

Last Updated: Wed, 31 Mar 2010 -
Machines: pacman
Sample Code Repository
========================

Filename:       INDEX.txt 

Description:    This file contains the name, location, and brief 
                explanation of "samples" included in this Sample 
                Code Repository.  There are several subdirectories within 
                this code repository containing frequently-used procedures, 
                routines, scripts, and code used on this allocated system,
                pacman.  This sample code repository can be accessed from 
                pacman by changing directories to 
                $SAMPLES_HOME, or changing directories to the following 
                location: pacman% /usr/local/pkg/samples.

                This particular file can be viewed from the internet at:

                http://www.arsc.edu/arsc/support/news/systemnews/index.xml?system=pacman#samples_home

Contents:       applications
                bio
                debugging
                jobSubmission
                libraries
                parallelEnvironment
                training

******************************************************************************
Directory:      applications

Description:    This directory contains sample scripts used to run
                applications installed on pacman.

Contents:       abaqus
                comsol
                gaussian_09
                matlab_dct
                namd
                nwchem
                tau
                vnc
                OpenFOAM

******************************************************************************
Directory:      bio

Description:    This directory contains sample scripts used to run
                BioInformatics applications installed on pacman.

Contents:       mrbayes

******************************************************************************
Directory:      config

Description:    This directory contains configuration files for applications
                which require some customization to run on pacman.

Contents:       cesm_1_0_4
              
******************************************************************************
Directory:      debugging

Description:    This directory contains basic information on how to start up 
                and use the available debuggers on pacman.

Contents:       core_files

*****************************************************************************
Directory:      jobSubmission

Description:    This directory contains sample PBS batch scripts
                and helpful commands for monitoring job progress.  
                Examples include options used when submitting a job, such as
                declaring which group membership you belong to
                (for allocation accounting), how to request a particular  
                software license, etc.

Contents:       MPI_OpenMP_scripts 
                MPI_scripts 
                OpenMP_scripts
                Rsync_scripts

*****************************************************************************
Directory:      parallelEnvironment

Description:    This directory contains sample code and scripts containing 
                compiler options for common parallel programming practices
                including code profiling.  

Contents:       hello_world_mpi

*****************************************************************************
Directory:      training

Description:    This directory contains sample exercises from ARSC 
                training.

Contents:       introToLinux  
                introToPacman

*****************************************************************************

"software"

Last Updated: Wed, 31 Oct 2012 -
Machines: fish
Fish Software
========================================
      python: python version 2.7.2 (2013-02-26)
      This version includes various popular add-ons including 
        numpy, scipy, matplotlib, basemap and more.
             module load python/2.7.2
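
      A quick, illustrative way to confirm the add-on packages are visible
      after loading the module (not part of the original entry):
             module load python/2.7.2
             python -c "import numpy, scipy, matplotlib"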

      abaqus: abaqus version 6.11 (2012-12-26)
      Version 6.11 of abaqus is available via modules:
             module load abaqus/6.11             

      matlab: matlab version R2012b (2012-12-26)
      Matlab R2012b is now available to UAF users via modules:
             module load matlab/R2012b

      matlab: matlab version R2012a (2012-12-07)
      Matlab R2012a is now available to UAF users via modules:
             module load matlab/R2012a

      comsol: comsol version 4.3a (2012-11-30)
      This version of comsol is now available to UAF users via modules:
             module load comsol/4.3a
  
      idl/envi: idl-8.2 and envi 5.0 (2012-10-31)
      IDL version 8.2 and ENVI version 5.0 are now available
      on fish via modules:
             module load idl/8.2

