Pacman User Guide (Penguin Computing Cluster)

Introduction

The Arctic Region Supercomputing Center (ARSC) operates a Penguin Computing AMD Opteron cluster (pacman) running Red Hat Enterprise Linux and Scyld ClusterWare.

Pacman is a resource dedicated to University of Alaska affiliated academic users performing non-commercial, scientific research of Arctic interest.

"Pacman" Hardware Specifications

The ARSC Penguin Computing cluster consists of the following hardware:

  • 12 Login Nodes
    • 2 Six-core 2.2 GHz AMD Opteron Processors
    • 64 GB of memory per node (approximately 5.3 GB per core)
    • 1 Mellanox Infiniband DDR Network Card
  • 1 Large Memory Login Node
    • 4 Eight-core 2.3 GHz AMD Opteron Processors
    • 256 GB of memory per node (8 GB per core)
    • QLogic QDR Infiniband Network Card
    • 800 GB local disk
    • 140 GB solid state drive
  • 2 Login Nodes with GPUs
    • 2 NVIDIA Tesla M2050 GPUs per node
    • 3 GB GDDR5 memory per GPU
    • 2.4 GHz Intel Xeon E5620 CPUs
  • 256 Four-Core Compute Nodes
    • 2 Dual-core 2.6 GHz AMD Opteron Processors
    • 16 GB of memory per node (4 GB per core)
    • Voltaire DDR Infiniband Network Card
  • 88 Sixteen-Core Compute Nodes
    • 2 Eight-core 2.3 GHz AMD Opteron Processors
    • 64 GB of memory per node (4 GB per core)
    • QLogic QDR Infiniband Network Card
    • 250 GB local disk
  • 20 Twelve-Core Compute Nodes
    • 2 Six-core 2.2 GHz AMD Opteron Processors
    • 32 GB of memory per node (approximately 2.7 GB per core)
    • Mellanox Infiniband DDR Network Card
  • 3 Large Memory Nodes
    • 4 Eight-core 2.3 GHz AMD Opteron Processors
    • 256 GB of memory per node (8 GB per core)
    • QLogic QDR Infiniband Network Card
    • 800 GB local disk
    • 140 GB solid state drive
  • QLogic QDR and Mellanox DDR Infiniband Interconnect
  • 275 TB Lustre file system (available center-wide)

Operating System / Shells

The operating system on pacman is Red Hat Enterprise Linux 6.4.

The following shells are available on pacman:

  • sh (Bourne Shell)
  • ksh (Korn Shell)
  • bash (Bourne-Again Shell) - default
  • csh (C Shell)
  • tcsh (Tenex C Shell)

If you would like to have your default login shell changed, please contact User Support.

System News, Status, and RSS Feeds

System news is available via the news command when logged on to pacman. For example, the command "news queues" gives news about the current queue configuration. System status and public news items are available on the web.

Storage

This system provides access to a number of data storage directories, each easily referenced through environment variables. Please be familiar with the purpose and policies of each storage directory.

Connecting to Pacman

Connections to pacman should be made using an SSH compliant client. Linux and Mac OS X systems normally include a command line "ssh" program. Persons using Windows systems to connect to pacman will need to install an ssh client (e.g. PuTTY). For additional details see the Connecting to ARSC Academic Systems page.

Here is an example connection command for Mac OS X and Linux command line clients:

% ssh -XY arscusername@pacman1.arsc.edu

File transfers to and from pacman should also use the SSH protocol via the "scp" or "sftp" programs. Persons using Windows systems to connect to pacman will need to install an sftp- or scp-compatible Windows client (e.g. FileZilla, WinSCP).
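
For example, a local file can be copied to your pacman home directory with scp (the file name below is only a placeholder):

% scp mydata.tar arscusername@pacman1.arsc.edu: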

Pacman has a number of login nodes available. The nodes "pacman1.arsc.edu" and "pacman2.arsc.edu" are primarily intended for file editing and job submission. Activities requiring significant CPU time or memory should occur on "pacman3.arsc.edu" through "pacman13.arsc.edu"; see the table below for the intended purpose of each login node.

Login Node                                      Intended Purpose
pacman1.arsc.edu                                Compiling and Batch Job Submission
pacman2.arsc.edu                                Compiling and Batch Job Submission
pacman3.arsc.edu through pacman9.arsc.edu       Compute Intensive Interactive Work
pacman10.arsc.edu through pacman12.arsc.edu     Batch Data Transfer Work
pacman13.arsc.edu                               Compute Intensive Interactive Work / 256 GB Memory / 32 Cores

Sample Code Repository ($SAMPLES_HOME)

The $SAMPLES_HOME directory on pacman contains a number of examples including, but not limited to:

  • Torque/Moab scripts for MPI, OpenMP, and Hybrid applications
  • Examples for Abaqus, OpenFOAM, Gaussian, NWChem, and other installed applications
  • Examples using common libraries
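
To explore the repository, list its contents and copy an example into your own directory (the subdirectory name below is a placeholder; use the names that "ls" actually shows):

ls $SAMPLES_HOME
cp -r $SAMPLES_HOME/<example_directory> ~/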

Available Software

Open source and commercial applications have been installed on the system in /usr/local/pkg. In most cases, the most recent versions of these packages are easily accessible via modules, as shown below. Additional packages may be installed upon request.
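
The standard module commands apply; the package name and version below are placeholders, so use "module avail" to see what is actually installed:

module avail
module load <package>/<version>
module list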

User Installed Software

This system provides a repository for third-party software packages which are installed by users but are not supported by ARSC.

Parallel Programming Models

Several types of parallelism can be employed on this system, including distributed-memory (MPI) and shared-memory (OpenMP) programming models, as well as hybrid combinations of the two.
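
For illustration, the two most common compile patterns might look like the following sketch (source file names are placeholders; mpif90 assumes an MPI environment/module is loaded, and -mp is the PGI OpenMP flag):

# Distributed-memory (MPI)
mpif90 mpi_hello.f90 -o mpi_hello.exe

# Shared-memory (OpenMP) with the PGI compiler
pgf90 -mp omp_hello.f90 -o omp_hello.exe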

Programming Environments

Pacman provides multiple compiling environments for different programming languages and compiler suites. The modules package is installed, which allows you to quickly switch between these environments.
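
For example, switching compiler environments is typically a matter of swapping modules (the module names below are placeholders; run "module avail" to see the actual names on pacman):

module list
module swap <current_compiler_module> <desired_compiler_module>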

Compiling and Linking with PGI
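
Basic serial compiles with the PGI compilers follow the usual pattern (the source and executable names below are placeholders):

pgf90 program.f90 -o program.exe
pgcc program.c -o program.exe
pgCC program.cpp -o program.exe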

Libraries

Libraries on pacman are generally available for both the Portland Group (PGI) and GNU compiler suites. The most current versions of libraries and include files are available in the following directories:

  • PGI Compilers
    • Libraries: /usr/local/pgi/lib
    • Include Files: /usr/local/pgi/include
  • GNU Compilers
    • Libraries: /usr/local/lib
    • Include Files: /usr/local/include

The following libraries are currently available on pacman:

ACML - AMD Core Math Library, including BLAS, LAPACK, and FFT routines
Current PGI Version:
  /usr/local/pkg/acml/acml-4.4.0.pgi/pgi64/lib (single threaded)
  /usr/local/pkg/acml/acml-4.4.0.pgi/pgi64_mp/lib (multi-threaded)
Current GNU Version:
  /usr/local/pkg/acml/acml-4.0.0.gnu/gfortran64/lib (single threaded)
  /usr/local/pkg/acml/acml-4.0.0.gnu/gfortran64_mp/lib (multi-threaded)
Alternate Versions: /usr/local/pkg/acml

Example Fortran Compile Statement

pgf90 -c -I/usr/local/pkg/acml/acml-4.4.0.pgi/pgi64/include test.f90

pgf90 -L/usr/local/pkg/acml/acml-4.4.0.pgi/pgi64/lib -lacml -lacml_mv test.o -o test.exe

BLACS - Basic Linear Algebra Communication Subprograms
Current PGI Version: /usr/local/pgi/lib
Current GNU Version: /usr/local/lib
Alternate Versions: /usr/local/pkg/blacs

BLAS - See ACML

FFTW-2 and FFTW-3 - Library for computing the Discrete Fourier Transform
Current PGI Version: /usr/local/pgi/lib
Current GNU Version: /usr/local/lib
Alternate Versions: /usr/local/pkg/fftw
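
A minimal FFTW-3 link sketch with the PGI compilers, assuming the library is installed under the default name libfftw3 in the directory above (the source file name is a placeholder; verify the installed library names with "ls /usr/local/pgi/lib"):

pgcc -I/usr/local/pgi/include fft_test.c -L/usr/local/pgi/lib -lfftw3 -o fft_test.exe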

GSL - GNU Scientific Library for numerical C and C++ programs
Current PGI Version: /usr/local/lib (use the GNU version)
Current GNU Version: /usr/local/lib
Alternate Versions: /usr/local/pkg/gsl
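
A conventional GSL link line, using the GNU compiler since the entry above points at the GNU build (the source file name is a placeholder):

gcc -I/usr/local/include gsl_test.c -L/usr/local/lib -lgsl -lgslcblas -lm -o gsl_test.exe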

HDF-5 - Hierarchical Data Format for transferring graphical and numerical data among computers
Current PGI Version: /usr/local/pgi/lib
Current GNU Version: /usr/local/lib
Alternate Versions: /usr/local/pkg/hdf5

LAPACK - See ACML

NetCDF - Network Common Data Form
Current PGI Version: /usr/local/pgi/lib
Current GNU Version: /usr/local/lib
Alternate Versions: /usr/local/pkg/netcdf
Example Fortran Compile Statement

pgf90 -c -I/usr/local/pgi/include test.f90

pgf90 -L/usr/local/pgi/lib -lnetcdff -lnetcdf test.o -o test.exe

Alternate Fortran Compile Statement

pgf90 test.f90 -I/usr/local/pgi/include -L/usr/local/pgi/lib -lnetcdff -lnetcdf -o test.exe

ScaLAPACK - Scalable LAPACK, a library of high-performance linear algebra routines
Current PGI Version: /usr/local/pgi/lib
Current GNU Version: /usr/local/lib
Alternate Versions: /usr/local/pkg/scalapack

Pre-Processing and Post-Processing on Login Nodes

Several pacman login nodes have been configured with higher CPU and memory limits to allow for pre-processing and post-processing, as well as code development, testing, and debugging activities.

The login nodes pacman3.arsc.edu through pacman9.arsc.edu allow for greater memory and CPU time use. For codes requiring significant memory, please verify that the system load on the chosen login node is light before running applications. The "top" command displays current memory use.

To increase the CPU time limit (in seconds), run:

# bash/ksh (set the limit to 8 hours; the maximum is 259200 seconds, or 3 days)
ulimit -St 28800

# csh/tcsh (set the limit to 8 hours; the maximum is 259200 seconds, or 3 days)
limit cputime 28800

To increase the virtual memory limit (in kilobytes), run:

# bash/ksh (set the limit to 16 GB; the maximum is 33554432 KB, or 32 GB)
ulimit -Sv 16777216

# csh/tcsh (set the limit to 16 GB; the maximum is 33554432 KB, or 32 GB)
limit vmemoryuse 16777216

Use caution when adding limits to ~/.cshrc, ~/.login, ~/.profile, ~/.bash_profile, or ~/.bashrc, as these files also affect the limits applied on compute nodes.

Job Submission and Resource Accounting

Job submission is done through the Torque/Moab scheduler. Resources are allocated in CPU hours and are managed on a per-project basis.
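
As a sketch, a minimal Torque/Moab batch script and submission could look like the following; the job name, queue name, core counts, walltime, and program name are placeholders, so consult "news queues" and your project allocation for appropriate values:

#!/bin/bash
#PBS -N myjob
#PBS -q standard
#PBS -l nodes=1:ppn=16
#PBS -l walltime=8:00:00
#PBS -j oe

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR
mpirun -np 16 ./myprogram.exe

Submit the script with:

qsub myjob.pbs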
