ARSC system news for fish


Contents for fish

News Items

"CENTER Old File Removal" on fish

Last Updated: Tue, 17 Dec 2013 -
Machines: linuxws pacman fish
CENTER Old File Removal 
ARSC has launched the automatic deletion of old files
residing on the $CENTER filesystem.  The automatic tool will run
weekly and will target files older than 30 days. 

To identify which of your files are eligible for
deletion, try running the following command: 
lfs find $CENTER -type f -atime +30
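
To get a quick count of how many of your files currently match, the
same command can be piped through "wc" (a minimal illustration using
standard tools; it simply counts the lines that "lfs find" prints):

lfs find $CENTER -type f -atime +30 | wc -l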

Remember, there are NO backups for data in $CENTER.  Once the data
is deleted, the data is gone.

Note: Modification of file timestamp information, data, or metadata
for the sole purpose of bypassing the automated file removal tool
is prohibited.

The policy regarding the deletion of old files is available on the
ARSC website.

Users are encouraged to move important but infrequently used
data to the intermediate- and long-term $ARCHIVE storage
filesystem.  Recommendations for optimizing $ARCHIVE file
storage and retrieval are available on the ARSC website.
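
For example, collections of small files can be bundled into a single
archive file before being copied to $ARCHIVE.  This is a minimal,
hypothetical sketch; "myproject" stands in for one of your own
directories:

# bundle the directory into one tar file, then copy the bundle
tar -cf myproject.tar myproject/
cp myproject.tar $ARCHIVE/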

Please contact the ARSC Help Desk with questions regarding the
automated deletion of old files in $CENTER.

"LDAP Passwords" on fish

Last Updated: Mon, 20 May 2013 -
Machines: linuxws pacman bigdipper fish
How to update your LDAP password 

User authentication and login on ARSC systems use University
of Alaska (UA) passwords and follow the LDAP protocol to connect to
the University's Enterprise Directory.  Because of this, users must
change their passwords using the UA Enterprise tools.

While logging into ARSC systems, if you see the following message,
please change your password using the UA Enterprise tools:

  You are required to change your LDAP password immediately.
  Enter login(LDAP) password:

Attempts to change your password on ARSC systems will fail.

Please contact the ARSC Help Desk if you are unable to log in or to
change your login password.


"modules" on fish

Last Updated: Sun, 06 Jun 2010 -
Machines: fish
Using the Modules Package

The modules package is used to prepare the environment for various
applications before they are run.  Loading a module will set the
environment variables required for a program to execute properly.
Conversely, unloading a module will unset all environment variables
that had been previously set.  This functionality is ideal for
switching between different versions of the same application, keeping
differences in file paths transparent to the user.

Sourcing the Module Init Files
Before the modules package can be used, its init file must first be
sourced.  Login shells do this automatically, but for some jobs it may
be necessary to source the init file explicitly, since non-login
shells may not do so.
To do this using tcsh or csh, type:

   source /opt/modules/default/init/tcsh

To do this using bash, type:

   source /opt/modules/default/init/bash

To do this using ksh, type:

   source /opt/modules/default/init/ksh

Once the modules init file has been sourced, the following commands
become available:

Command                     Purpose
module avail                - list all available modules
module load <pkg>           - load a module file into the environment
module unload <pkg>         - unload a module file from the environment
module list                 - display modules currently loaded
module switch <old> <new>   - replace module <old> with module <new>
module purge                - unload all modules (not recommended on fish)
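
As an illustration, a typical sequence might look like the following
(the package names shown here are hypothetical):

   source /opt/modules/default/init/bash
   module avail                          # see what is installed
   module load PrgEnv-pgi                # load a programming environment
   module list                           # confirm what is loaded
   module switch PrgEnv-pgi PrgEnv-cray  # swap one environment for another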

"queues" on fish

Last Updated: Wed, 17 Dec 2008 -
Machines: fish
Fish Queues

The queue configuration is as described below.  It is subject to
review and further updates.

   Login Nodes Use:
   Login nodes are a shared resource and are not intended for
   computationally or memory intensive work.  Processes using more
   than 30 minutes of CPU time on login nodes may be killed by ARSC
   without warning.  Please use compute nodes for computationally or
   memory intensive work.

   Specify one of the following queues in your Torque/Moab qsub script
   (e.g., "#PBS -q standard"):

     Queue Name     Purpose of queue
     -------------  ------------------------------
     standard       Runs on 12 core nodes without GPUs
     standard_long  Runs longer jobs on 12 core nodes without GPUs.  
     gpu            Runs on 16 core nodes with 1 NVIDIA X2090 GPU per node.
     gpu_long       Runs longer jobs on 16 core nodes with 1 NVIDIA X2090
                    GPU per node.
     debug          Quick turnaround debug queue.  Runs on GPU nodes.
     debug_cpu      Quick turnaround debug queue.  Runs on 12 core nodes.
     transfer       For data transfer to and from $ARCHIVE.  
                    NOTE: transfer queue is not yet functional.

   See 'qstat -q' for a complete list of system queues.  Note that some
   queues are not available for general use.

   Maximum Walltimes:
   The maximum allowed walltime for a job is dependent on the number of 
   processors requested.  The table below describes maximum walltimes for 
   each queue.

   Queue             Min   Max     Max       
                    Nodes Nodes  Walltime Notes
   ---------------  ----- ----- --------- ------------
   standard             1    32  24:00:00
   standard_long        1     2 168:00:00 12 nodes are available to this queue. 
   gpu                  1    32  24:00:00     
   gpu_long             1     2 168:00:00 12 nodes are available to this queue.
   debug                1     2   1:00:00 Runs on GPU nodes
   debug_cpu            1     2   1:00:00 Runs on 12 core nodes (no GPU)
   transfer             1     1  24:00:00 Not currently functioning correctly.

   * August 11, 2012 - transfer queue is not yet functional.
   * October 16, 2012 - debug queues and long queues were added to fish.
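
   As an illustration, a job script targeting the gpu queue might begin
   with directives like these (the node count and walltime are
   hypothetical values within the limits above):

     #PBS -q gpu
     #PBS -l nodes=2:ppn=16
     #PBS -l walltime=12:00:00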

   PBS Commands:
   Below is a list of common PBS commands.  Additional information is
   available in the man pages for each command.

   Command         Purpose
   --------------  -----------------------------------------
   qsub            submit jobs to a queue
   qdel            delete a job from the queue   
   qsig            send a signal to a running job
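
   For example (the script name and job id are hypothetical; qsub
   prints the actual id when a job is submitted):

     qsub myjob.pbs          # submit myjob.pbs to the queue
     qdel 12345              # remove job 12345 from the queue
     qsig -s SIGUSR1 12345   # send SIGUSR1 to running job 12345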

   Running a Job:
   To run a batch job, create a qsub script which, in addition to
   running your commands, specifies the processor resources and time
   required.  Submit the job to PBS with the following command.   (For
   more PBS directives, type "man qsub".)

     qsub <script file>

   Sample PBS scripts:
   ## Beginning of MPI Example Script  ############
   #PBS -q standard
   #PBS -l walltime=24:00:00
   #PBS -l nodes=4:ppn=12
   #PBS -j oe

   cd $PBS_O_WORKDIR

   # 4 nodes x 12 processors per node = 48 MPI tasks
   aprun -n 48 ./myprog
   #### End of Sample Script  ##################

   ## Beginning of OpenMP Example Script  ############
   #PBS -q standard
   #PBS -l nodes=1:ppn=12
   #PBS -l walltime=8:00:00
   #PBS -j oe

   cd $PBS_O_WORKDIR

   # match the thread count to the 12 processors requested above
   export OMP_NUM_THREADS=12

   aprun -n 1 -d $OMP_NUM_THREADS ./myprog
   #### End of Sample Script  ##################

   NOTE: jobs using the "standard" and "gpu" queues must run compute and memory 
   intensive applications using the "aprun" or "ccmrun" command.  Jobs failing
   to use "aprun" or "ccmrun" may be killed without warning.

   Resource Limits:
   The only resource limits users should specify are walltime and the
   "nodes" and "ppn" values.  The "nodes" statement requests that a job
   be allocated the given number of chunks, each of the "ppn" size.
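
   For example, the following hypothetical directives request 2 chunks
   of 12 processors each, for up to 12 hours:

     #PBS -l nodes=2:ppn=12
     #PBS -l walltime=12:00:00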

   Tracking Your Job:
   To see which jobs are queued and/or running, execute this command:

     qstat -a

   Current Queue Limits:
   Queue limits are subject to change and this news item is not always
   updated immediately.  For a current list of all queues, execute:

     qstat -Q

   For all limits on a particular queue:

     qstat -Q -f <queue-name>

   Scheduled maintenance activities on Fish use the Reservation 
   functionality of Torque/Moab to reserve all available nodes on the system.  
   This reservation keeps Torque/Moab from scheduling jobs which would still 
   be running during maintenance.  This allows the queues to be left running
   until maintenance.  Because walltime is used to determine whether or not a
   job will complete prior to maintenance, using a shorter walltime in your 
   job script may allow your job to begin running sooner.  

   If maintenance begins at 10AM and it is currently 8AM, jobs specifying
   walltimes of 2 hours or less will start if there are available nodes.

   CPU Usage
   Only one job may run per node for most queues on fish (i.e. jobs may 
   not share nodes). 
   If your job uses fewer than the number of available processors on a
   node, the job will be charged for all processors on the node unless
   you use the "shared" queue.

   Utilization for all other queues is charged for the entire node regardless
   of the number of tasks using that node:

   * standard - 12 CPU hours per node per hour
   * standard_long - 12 CPU hours per node per hour
   * gpu - 16 CPU hours per node per hour
   * gpu_long - 16 CPU hours per node per hour
   * debug - 16 CPU hours per node per hour
   * debug_cpu - 12 CPU hours per node per hour
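
   For example, under these rates a hypothetical 4-node job running for
   10 hours in the standard queue is charged 4 nodes x 12 CPU hours per
   node per hour x 10 hours = 480 CPU hours, regardless of how many
   tasks actually run on each node.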

"samples_home" on fish

Last Updated: Wed, 31 Mar 2010 -
Machines: fish
Sample Code Repository

Filename:       INDEX.txt 

Description:    This file contains the name, location, and brief
                explanation of the "samples" included in this Sample
                Code Repository.  There are several subdirectories within
                this code repository containing frequently-used procedures,
                routines, scripts, and code used on this allocated system,
                fish.  This sample code repository can be accessed from
                fish by changing directories to $SAMPLES_HOME, or to the
                following location: /usr/local/pkg/samples.

                This file can also be viewed on the ARSC website.


Contents:       applications

Directory:      applications

Description:    This directory contains sample PBS batch scripts for 
                applications installed on fish.

Contents:       abaqus

Directory:      jobSubmission 

Description:    This directory contains sample PBS batch scripts
                and helpful commands for monitoring job progress.
                Examples include options used when submitting a job,
                such as declaring which group you belong to (for
                allocation accounting), how to request a particular
                software license, etc.

Contents:       MPI_OpenMP_scripts

Directory:      libraries

Description:    This directory contains examples of common libraries and 
                programming paradigms.

Contents:       cuda  
