ARSC HPC Users' Newsletter 393, August 22, 2008



Paraview and IDV, Common ARSC Viz Tools

[ By: Patrick Webb ]

Visualization is a great way to analyze a data set and to present that data to the world at large. A wide variety of formats and visualization tools can make the task of visualizing your data daunting. ARSC has among its visualization resources two visualization packages, Paraview and Unidata's Integrated Data Viewer (IDV). These applications provide the ability to quickly visualize data and capture the output as stills or movies, and between them cover many commonly used data formats. Here is a brief summary of both, covering some major file formats they work with, what they do, and a couple of tips.

Paraview is a visualization application developed by Kitware, Los Alamos National Labs and Sandia National Labs. It is available on the ARSC Linux Workstations by typing the command:

   %module load paraview

Paraview makes heavy use of Kitware's VTK file format. It is also capable of handling VRML, DEMs, and raw binary data. For a full list of available data types, take a look at the Paraview FAQ. Paraview is useful for visualizing data that is not geo-located, or does not need to be. Data represented with some sort of geometry works well with Paraview, and data in an xyz format can be quickly converted to a VTK file and visualized with Paraview. Paraview also has several 2D analysis tools, including histograms and line plots.

There are a couple of tricks that can make visualizing your data with Paraview easier.

The Transform filter is very useful for changing the scale, orientation and position of a 3D data set. The Transform filter has three options for translation, rotation and scale. Each has three fields that correspond to the X, Y and Z dimensions. Translate and Rotate take a number that is then added to that dimension's coordinates from your data. Scale takes a multiplier that is applied to the corresponding dimension.

For example, for a model whose position in space is described in UTM coordinates it is useful to scale down the X and Y dimensions.

The Decimate filter is a good way to reduce a very high poly count object to a more manageable size that still maintains the shape of the original object. The "Target Reduction" option is the fraction of polygons to remove (0.9 = 90% reduction in polys). The "Preserve Topology" check box tells the filter to preserve the overall shape as much as it can while reducing the polygon count.

The Integrated Data Viewer (IDV) was developed by Unidata as an open source, cross-platform visualization tool for atmospheric data. It is available on the Linux workstations by typing the following command:

    %module load idv-2.2

The IDV works well with gridded data. It reads NetCDF files that meet the CF conventions, DEMs, and Shapefiles, as well as radar and atmospheric sounding files. The IDV is primarily meant for visualizing atmospheric data from a variety of sources, but it is capable of handling any kind of 2D or 3D gridded data, even comma-separated ASCII files.

Here are two quick tips when using the IDV that can make visualizing your data easier.

Down-sample your data if possible. This saves memory and keeps the IDV from slowing down by running out of memory. Down-sampling can be done in the Field Selector pane of the Dashboard when choosing a field from the data set. The option is in the lower right corner, under the "stride" tab. There you will have the option to choose how many points to skip in the X, Y, and/or Z dimensions.

Adjusting the vertical scale is another easy way to improve the visuals you are getting from your data. The option is accessible from the left hand button bar in the Display window as a tick-marked vertical line button, or from the View menu in the same window. Click "View->Properties" and go to the Vertical Scale tab. Adjust the minimum and maximum to fit your data.

You are encouraged to attend ARSC's IDV training, 9/30 - 10/02, as noted in the next article.


Physics 693 & Fall Training

Classes will begin in a few weeks at UAF, and ARSC staff along with the UAF Physics department will be teaching Physics 693 once again. The Core Skills for Computational Science class serves as an introduction to the basic skills required to operate in a modern high performance computing environment. ARSC users are encouraged to attend any lecture they would find beneficial. Physics 693 lectures are the only formal ARSC user training scheduled for this fall. The course is organized into the following modules:

    Date(s)     Module
    9/4         Introduction to ARSC
    9/9-9/16    Introduction to Unix
    9/18-9/23   Introduction to Fortran (WRRB 010)
    9/25        Introduction to Midnight Sun Cluster
    9/30-10/02  Viz Week 1: Introduction to Integrated Data Viewer
    10/7-10/9   Performance Programming
    10/14-10/16 Viz Week 2: Importing Data and Animation 101
    10/21-10/30 Parallel Shared Memory Programming using OpenMP
    11/6-11/20  Parallel Distributed Memory Programming using MPI
    12/2        Introduction to Pingo Cray XT5 System

Additional topics covered throughout the semester will include debugging with Totalview, Subversion version control system, and validation and verification of scientific models.

With few exceptions, classes occur in the West Ridge Research Building (WRRB) Room 009 on Tuesday and Thursday from 9:15 - 11:15AM.

For a complete schedule visit the Core Skills class website at

Questions may be sent to Tom Logan ( )


2008 Arctic Science Conference

From the web site of the Arctic Division of the American Association for the Advancement of Science comes this announcement of their 2008 Arctic Science Conference which will be held in Fairbanks, September 15-17.

The conference theme is 'Growing Sustainability Science in the North: The Resilience of the People in the Arctic.'

The importance of the resilience of the people of the Arctic is little understood. Among the eight circumpolar nations are three of the four largest in the world in geographic area. The Arctic will become an important route for trade and commerce between Europe and the Eastern Pacific regions during the 21st century. It is necessary to observe and understand change in order to respond and adapt. IPY offers an opportunity to investigate these changes through international collaborative science and engineering research.

For more information visit .


Discovery Lab Study Subjects Needed

ARSC high-performance computer (HPC) specialists are researching the question of how well people accomplish tasks in a virtual reality environment with and without an avatar to represent the user in the virtual space. It is expected that data from this study will help advance and improve user interface design in VR. Test subjects will perform simple tasks using BLUISculpt, a VR drawing program, with different types of avatars in order to gauge their effect.

Following the test, each subject will complete a questionnaire. Total time commitment is about 30 - 45 minutes and participants must be 18 or older. If interested, send an email to and to schedule a session. We sincerely appreciate your assistance in this research, and expect it will be a fun and interesting experience!


Quick-Tip Q & A

A:[[ I was trying to get a general idea how long my job was running by
  [[ issuing a date command before and after the "mpirun" commands in
  [[ my script.
  [[ E.g.
  [[ date
  [[ mpirun ./preprocess
  [[ mpirun ./analyze
  [[ mpirun ./postprocess
  [[ date
  [[ While it's nice to have the start and end time, I would also like
  [[ to know how long all three commands took cumulatively to run,
  [[ so I can specify a more accurate walltime in my PBS script.
  [[ How can I get the time it took to run all three commands?

# Thanks to Greg Newby for sharing this solution.

While within a program we'd use some sort of timing calls, from the
shell the main way of getting date/time information is through the
'date' command.  This command varies considerably across different
systems, though all Unix/Linux systems I know of count time from the
start of the Unix epoch: January 1, 1970.

My suggestion is to get the first date, save the output, and then
subtract it from the second date.  This will yield a difference in
seconds, or do a little more math (or more arguments to 'date') to get
minutes, hours, or days.

Quoting from 'info coreutils date' on ARSC's Midnight system:

   * To convert a date string to the number of seconds since the epoch
     (which is 1970-01-01 00:00:00 UTC), use the `--date' option with
     the `%s' format.  That can be useful in sorting and/or graphing
     and/or comparing data by date.  The following command outputs the
     number of the seconds since the epoch for the time two minutes
     after the epoch:

          date --date='1970-01-01 00:02:00 +0000' +%s

I think you might do well simply to get the date in %s format (seconds
since the epoch) at the start and end, and subtract them.  For your
purposes, how about this?

  # Get the start date in seconds since the epoch:
    STARTDATE=`date +%s`
    mpirun ./preprocess
    mpirun ./analyze
    mpirun ./postprocess
  # Get the end date in seconds since the epoch:
    ENDDATE=`date +%s`
  # Get total run time in seconds:
    TOTALTIME=$(( ENDDATE - STARTDATE ))
    echo "Total time is: ${TOTALTIME}"

# Editors solution based on the date '--date' option that Greg pointed
# out.

This solution saves the start and end dates to a variable in the default
format for date.  After the end date is calculated, the --date option
is used to generate the time since epoch for each date and then the
difference is calculated.

    st=$(date)
    echo $st
    # do mpi stuff
    mpirun ./preprocess
    mpirun ./analyze
    mpirun ./postprocess
    et=$(date)
    echo $et
    total=$(( $(date --date="$et" +%s) - $(date --date="$st" +%s) ));
    echo "walltime: " $total

Note, this example requires bash or ksh and the GNU version of date.

# Bracy Elson offered the following solution.

The user need simply put the commands in their own script and time the
call to that script.  I'd suggest the following:

    % cat do_job.csh
    time mpirun ./preprocess
    time mpirun ./analyze
    time mpirun ./postprocess

Then in the user's batch script would appear:

    time ./do_job.csh

in place of executing the individual commands separately.  In my
solution, I've also put "time" in front of each of the calls to
mpirun.  This way the user gets the benefit of the individual times
along with something quite close to their sum (calling do_job.csh
adds a slight amount of overhead).

Q: [Thanks to Anton Kulchitsky for this question]
The following C program starts with an empty string. It then tries to
append some text to it using the function try_realloc.  That function
simply reallocates memory to increase the size of the target string and
copies the contents of a second string to its end. However, the code
sometimes works and sometimes doesn't. What is wrong and how do I fix it?

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* extends str with the content of addstr, reallocates memory to
   be able to store a longer string */
void try_realloc( char* str, const char* addstr )
{
  size_t len    = strlen( str );
  size_t addlen = strlen( addstr );

  str = realloc( str, ( len + addlen + 1 ) * sizeof( char ) );
  strncat( str, addstr, addlen+1 );
}

int main()
{
  char *str = calloc( 1, sizeof(char) ); /* str = "" */
  try_realloc( str, "first string," );
  printf( "%s\n", str );

  try_realloc( str, " second string." );
  printf( "%s\n", str );

  free( str );
  return 0;
}

[[ Answers, Questions, and Tips Graciously Accepted ]]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions: Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.