ARSC HPC Users' Newsletter 382, March 07, 2008

Using ezViz for Quick Tsunami Model Verification

[ By: Tom Logan ]

The Arctic Region Supercomputing Center (ARSC) and colleagues from the Northwest Alliance for Computational Science & Engineering (NACSE) have developed a tsunami computational portal. This portal allows approved oceanographic researchers to submit fully configured tsunami propagation runs to the HPC systems at ARSC through a web-based front end currently hosted by NACSE.

Staff at ARSC are responsible for proper integration, maintenance, and robustness of models contributed for portal use. The ability to produce quick visualizations ensures valid integration of models and allows for spot checking of jobs submitted through the portal.

For these tasks, we have found ezViz to be an invaluable tool. ezViz is a visualization package developed by the HPCMP Data Analysis and Assessment Center (DAAC) and is available on many resources within the HPCMP, including midnight and iceberg.

Prior to the use of ezViz, spot checking and validation were accomplished using Matlab. The process involved transferring filter, deformation, and sea level maximum files (5 or more files totaling 20 to 100 MB per run) to a suitable platform (e.g., a Mac laptop), starting up Matlab, reading in these largish files, and, finally, generating output images that could be examined for expected patterns of deformation and sea levels.

Using ezViz, all of this post-processing can be performed on the HPC platform with a minimal number of commands in mere seconds. On midnight, one can use the 'display' command to look at the resulting jpeg files right away - no explicit data transfer is required, and, if one wants to save results, one only needs to transfer small jpeg images (typically less than 100 KB each).

Admittedly, our use of ezViz takes advantage of only the most rudimentary features of the package. For this purpose, that's all we needed and ezViz readily met the requirements.

To create a single image using ezViz, here's what we do:

  1. Create an initialization file; ours is called layer1.ini.
    This file directs ezViz to read a raw binary file named layer1.def of size 870 samples by 300 lines of 4-byte IEEE floating-point data. The output image is a 1280x720 pixel jpeg file that includes a colorbar along the bottom of the image.
  2. Invoke ezViz from the command line:
            % ezVizGeneric layer1.ini
    Using the layer1.ini given above, the output image will be named test.jpg. This step assumes you have ezVizGeneric in your PATH. You can add ezViz to your PATH by loading the "ezViz" module on either midnight or iceberg.
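Incidentally, real model output is not needed just to confirm that an .ini file and ezVizGeneric are wired up correctly: a synthetic grid of the right shape will do. Here is a minimal C sketch that writes such a file; the helper name and the ramp pattern are illustrative, not part of the portal code.

```c
#include <stdio.h>

/* Hypothetical helper: write a synthetic grid of 4-byte floats in the
 * raw layout layer1.ini describes (870 samples by 300 lines), so that
 * ezVizGeneric can be exercised before real model output exists.
 * The diagonal ramp pattern is arbitrary; only the file's shape matters. */
int write_test_grid(const char *path, int samples, int lines)
{
    FILE *f = fopen(path, "wb");
    int i, j;

    if (!f)
        return 1;
    for (j = 0; j < lines; j++) {
        for (i = 0; i < samples; i++) {
            /* values ramp smoothly from 0.0 to about 2.0 */
            float v = (float)i / samples + (float)j / lines;
            if (fwrite(&v, sizeof v, 1, f) != 1) {
                fclose(f);
                return 1;
            }
        }
    }
    fclose(f);
    return 0;
}
```

Pointing ezVizGeneric at the resulting file should produce a smooth gradient image, which is enough to verify the .ini dimensions and data type.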

The beauty of ezViz is that one can readily create Unix scripts to invoke it in a variety of ways. For example, here's the script we use to examine the output of two benchmark runs (used during model modification and validation):

make_images script:

set BENCH1=${ARCHIVE_HOME}/bench1
set BENCH2=${ARCHIVE_HOME}/bench2

cd bench1
ezVizGeneric ../layer1.ini -input layer1.slmax -output bench1_slmax.jpg
ezVizGeneric ../layer1.ini -input tsu_layer1.def -output bench1_def.jpg
ezVizGeneric ../layer2.ini -input layer2.slmax -output bench1_layer2_slmax.jpg
ezVizGeneric ../layer2.ini -input tsu_layer2.def -output bench1_layer2_def.jpg
ezVizGeneric ../mask.ini

~/portal/utils/utils/diff_bin layer1.slmax ${BENCH1}/layer1.slmax 300 870 diff.slmax
~/portal/utils/utils/diff_bin tsu_layer1.def ${BENCH1}/tsu_layer1.def 300 870 diff.def
ezVizGeneric ../layer1.ini -input diff.slmax -output bench1_diff_slmax.jpg
ezVizGeneric ../layer1.ini -input diff.def -output bench1_diff_def.jpg

cd ../bench2
ezVizGeneric ../layer1.ini -input layer1.slmax -output bench2_slmax.jpg
ezVizGeneric ../layer1.ini -input tsu_layer1.def -output bench2_def.jpg
ezVizGeneric ../layer2.ini -input layer2.slmax -output bench2_layer2_slmax.jpg
ezVizGeneric ../layer2.ini -input tsu_layer2.def -output bench2_layer2_def.jpg
ezVizGeneric ../mask.ini

~/portal/utils/utils/diff_bin layer1.slmax ${BENCH2}/layer1.slmax 300 870 diff.slmax
~/portal/utils/utils/diff_bin tsu_layer1.def ${BENCH2}/tsu_layer1.def 300 870 diff.def
ezVizGeneric ../layer1.ini -input diff.slmax -output bench2_diff_slmax.jpg
ezVizGeneric ../layer1.ini -input diff.def -output bench2_diff_def.jpg

The first thing to note is that the layer1.ini file is reused with different input and output files via command-line options; a new .ini file does not need to be created for each image. (All images generated from a given .ini file must be the same size, hence the separate layer1.ini and layer2.ini files.)

The mask images contain integers rather than floats, so the mask.ini file contains "TYPE=int" rather than "TYPE=float".

In this script, we also compare this run's output with a saved "correct" version using the portal tool "diff_bin". Note that we are able to automatically create visualizations of these difference files as well, thus allowing a nearly instantaneous check of differences introduced by recent model modifications.
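The source of the portal's diff_bin utility is not shown here, but the core of such a tool is small. The following C sketch subtracts two raw grids of 4-byte floats and writes the difference so it can be rendered like any other layer; the function name, argument order, and minimal error handling are my assumptions, not diff_bin's actual interface.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical diff_bin-style routine: read two raw binary grids of
 * lines x samples 4-byte floats, subtract them element by element,
 * and write the difference grid, ready for ezVizGeneric to render.
 * Error handling is kept minimal for brevity. */
int diff_bin(const char *file_a, const char *file_b,
             long lines, long samples, const char *out_file)
{
    long n = lines * samples;
    long i;
    float *a = malloc(n * sizeof(float));
    float *b = malloc(n * sizeof(float));
    FILE *fa = fopen(file_a, "rb");
    FILE *fb = fopen(file_b, "rb");
    FILE *fo = fopen(out_file, "wb");

    if (!a || !b || !fa || !fb || !fo)
        return 1;
    if (fread(a, sizeof(float), (size_t)n, fa) != (size_t)n)
        return 1;
    if (fread(b, sizeof(float), (size_t)n, fb) != (size_t)n)
        return 1;
    for (i = 0; i < n; i++)
        a[i] -= b[i];               /* difference grid, computed in place */
    if (fwrite(a, sizeof(float), (size_t)n, fo) != (size_t)n)
        return 1;
    fclose(fa); fclose(fb); fclose(fo);
    free(a); free(b);
    return 0;
}
```

Wrapped in a main() that parses the command line, a routine like this reproduces the diff.slmax and diff.def steps in the script above.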

This image shows the initial surface displacement due to a theoretical offshore earthquake in the Gulf of Alaska.

This image shows the maximum sea level resulting from the initial surface displacement shown above.

As previously stated, ezViz has a plethora of options and capabilities beyond the basics we use here. For instance, the recent transition of the visualization process from iceberg (IBM) to midnight (Sun) was quite simple: only a single line of each .ini file had to be changed, to account for the endianness of the input data.
ezViz supports over 40 different input data formats, not just the simple flat binaries that we use for the portal. Outputs can be in a variety of standard image formats as well as geometric objects. Many different rendering options exist as well (such as volume rendering). Finally, color maps can be fully specified in a variety of formats, or one can simply use the defaults (as we have done in this article).

More information and tutorials on ezVizGeneric, the ezViz API, and other awesome goodies supported by the HPCMP DAAC can be found on the DAAC web site.

Programming Environment Updates on Midnight

There will be several programming environment updates on midnight during the scheduled maintenance on March 12. The following updates will be made to the PathScale and Sun Studio programming environments:

Module Name        Alias to                Notes
PrgEnv.old         PrgEnv.path-2.5         new module
PrgEnv.path.old    PrgEnv.path-2.5         new module
PrgEnv             PrgEnv.path-3.0         was PrgEnv.path-2.5
PrgEnv.path        PrgEnv.path-3.0         was PrgEnv.path-2.5
PrgEnv.path.new    PrgEnv.path-3.1         was PrgEnv.path-3.0
PrgEnv.sun.old     PrgEnv.sun-2006-08      new module
PrgEnv.sun         PrgEnv.sun-2006-12-r2   unchanged
PrgEnv.sun.new     PrgEnv.sun-2007-06      was PrgEnv.sun-2006-12-r2

If you use the default Programming Environment and do not wish to use the newer version of the PathScale compiler, you will need to update your ~/.login (csh/tcsh users) or ~/.profile (ksh/bash users) to load PrgEnv.path-2.5 explicitly:

# instead of: module load PrgEnv
# explicitly load PrgEnv.path-2.5
module load PrgEnv.path-2.5

Gaussian 03.E.01 and NWChem 5.0 Now Available on Midnight

The chemistry packages Gaussian and NWChem are now available on midnight.

Gaussian 03.E.01 has been installed along with TCP Linda, which allows Gaussian to run on multiple nodes. Additionally, GaussView 4.1 has been installed to assist with setting up and launching jobs. Gaussian samples are available in $SAMPLES_HOME/applications/gaussian. Linda-based examples for multinode Gaussian jobs will be available soon.

NWChem version 5.0 is installed. Sample scripts are available in the samples directory $SAMPLES_HOME/applications/nwchem.

Quick-Tip Q & A

A:[[ I have an MPI code and would like to find out how long each 
  [[ MPI_Send call is taking.  The first way I thought of doing this 
  [[ was adding MPI_Wtime() calls before and after the MPI_Send and 
  [[ printing out the difference.  
  [[ Is there a better way to do this?  A method that doesn't involve
  [[ me changing my code a whole lot would be preferable!!

# Editor's solution.

There are probably more graceful ways of doing this; however, I
thought it would be fun to solve it using MPI's profiling
interface (PMPI).  This solution involves implementing an alternate
version of MPI_Send.  Here's one implementation that does timing
using MPI_Wtime():

   mt342 % cat my_mpi_send.c
   #include <mpi.h>
   #include <stdio.h>

   int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest,
                int tag, MPI_Comm comm)
   {
       double ctime;
       int err;
       static int rank;
       static int rank_set;
       /* The next lines ensure that we only call PMPI_Comm_rank on the
          first call to MPI_Send. */
       if ( rank_set == 0 ) {
           PMPI_Comm_rank(comm, &rank);
           rank_set = 1;
       }
       ctime=PMPI_Wtime();         /* start the timer */
       err=PMPI_Send(buf, count, datatype, dest, tag, comm);
       ctime=PMPI_Wtime()-ctime;   /* stop the timer */
       fprintf(stdout, "Send from %d to %d; tag=%d; count=%d; time=%f\n",
               rank, dest, tag, count, ctime);
       return err;
   }

First we need to compile this code into a shared object:


   mt342 % mpicc -shared -fPIC my_mpi_send.c -o my_mpi_send.so

Next compile the MPI application as we normally would.  You need to
make sure not to use static MPI libraries for this technique to work.

   mt342 % mpicc mpi_bandwidth.c -o mpi_bandwidth

Lastly, set the LD_PRELOAD environment variable when invoking mpirun:

  mt342 % mpirun -np 2 LD_PRELOAD=$PWD/my_mpi_send.so ./mpi_bandwidth 100 10000  10000
  Send from 1 to 0; tag=101; count=4; time=0.000010
  Send from 1 to 0; tag=101; count=4; time=0.000004
  10000   1063.829787
  Cleaning up all processes ...

The best part about this is that you don't even need to recompile
to go back to the normal behavior:

   mt342 % mpirun -np 2 ./mpi_bandwidth 100 10000  10000
   10000   1103.752759
   Cleaning up all processes ...

The MPI standard's chapter on its profiling interface provides a
fuller introduction to PMPI.

Q:  I have an executable that uses shared libraries.  Is there a way
    to show which shared library each symbol is provided by?

[[ Answers, Questions, and Tips Graciously Accepted ]]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.