ARSC HPC Users' Newsletter 245, May 10, 2002



ARSC Seminar: The SMS Library for Portable Parallelism

Title: The SMS Library for Portable Parallelism
Presenter: Kate Hedstrom, PhD
When: June 6, 2002, 2-5pm
Where: UAF Butrovich Building, room 007
Registration: By email to Kate Hedstrom.

NOAA's Forecast Systems Laboratory has created a portable library to aid in the parallelization of logically rectangular simulations. SMS and its documentation are freely available from:

An introductory talk was given describing what SMS does for you. More information is available in "The Scalable Modeling System: A High-Level Alternative to MPI", issue #219:


In this seminar, you will see SMS in action using some simple examples. We will work through some SMS programs and show how to compile and run them on both the T3E and the IBM. An account on one of these ARSC HPC systems or the ARSC Linux cluster is required.


Unix Tools for Portable Applications, Part I of IV

[ Many thanks to Kate Hedstrom of ARSC for contributing this series of articles. ]

I belong to a team of people using and modifying the Regional Ocean Modeling System (ROMS). ROMS consists of over a hundred files and it uses one or more external libraries. The challenge is to be able to compile and run it on as many different computers as possible. We use "make" for compiling and will introduce it here. In future installments of this series, we will explore gnu autoconf and other ways to generate the Makefiles.

Introduction to Make

Make is a tool which is almost as old as Unix, designed primarily for keeping track of how programs are compiled. That is what we will describe here, although it can be used for other things, including web page maintenance. It is just a matter of telling make what commands need to be run to update your files.

Make gets its instructions from a description file, by default named "Makefile". This file is also called the Makefile, but other files can be used by invoking make with the -f option, e.g.:

  make -f Makefile.yukon

When I first got our ocean model, its Makefile looked something like:

model: main.o init.o plot.o
        f90 -o model main.o init.o plot.o

main.o: main.f
        f90 -c -O main.f

init.o: init.f
        f90 -c -O init.f

plot.o: plot.f
        f90 -c -O plot.f

clean:
        rm *.o core

The default thing to build is "model", the first target. The syntax is

target: dependencies
[TAB]  command
[TAB]  command

The target "model" depends on the object files, main.o and friends. They have to exist and be up to date before model's link command can be run. The other targets tell make how to create the object files. The original version of this Makefile turned off optimization on plot.f due to a compiler bug, but hopefully you won't ever have to worry about that.

Compiling "model" is simple, just type "make". Make will look for the file Makefile, read it, and do whatever is necessary to make "model" up to date. If you edit init.f, that file will be newer than init.o. Make would see that init.o is out of date and run the "f90 -c -O init.f" command. Now init.o is newer than model, so the link command "f90 -o model main.o init.o plot.o" must be executed.

Clean up by typing "make clean". The clean target will be brought up to date. "clean" has no dependencies, so the command (rm *.o core) will always be executed.
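One caveat: if a file named "clean" ever appears in the directory, make will decide the clean target is up to date and never run the rm. Gnu make (and most modern makes) guard against this with the .PHONY special target; a sketch:

```make
# Declare clean as phony so a file called "clean" can't shadow it.
.PHONY: clean
clean:
        rm -f *.o core
```

The -f flag also keeps rm quiet when there is nothing to delete.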


Make supports simple string-substitution macros. Set one with:

MY_MACRO = nothing today
and refer to it with:

$(MY_MACRO)

The convention is to put the macros near the top of your Makefile and to use upper case. Also, use separate macros for the name of your compiler and the flags it needs:

F90 = f90
F90FLAGS = -O3
LIBDIR = /usr/local/lib
LIBS = -L$(LIBDIR) -lmylib

Let's rewrite our Makefile using macros:

# IBM version
F90 = xlf90
F90FLAGS = -O3 -qstrict
LDFLAGS = -bmaxdata:0x40000000

model: main.o init.o plot.o
        $(F90) $(LDFLAGS) -o model main.o init.o plot.o

main.o: main.f
        $(F90) -c $(F90FLAGS) main.f

init.o: init.f
        $(F90) -c $(F90FLAGS) init.f

plot.o: plot.f
        $(F90) -c $(F90FLAGS) plot.f

clean:
        rm *.o core

Now when we change computers, we only have to change the compiler name in one place. Likewise, if we want to try different optimization levels, we only specify that in one place.
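Better still, a macro set in the Makefile can be overridden on the make command line, so a one-off debug build needs no editing at all. A quick demonstration with a throwaway Makefile (assumes gnu make; the Makefile and target names here are made up for illustration):

```shell
# Build a tiny Makefile whose only job is to print F90FLAGS.
tmp=$(mktemp -d)
printf 'F90FLAGS = -O3\nflags:\n\t@echo F90FLAGS=$(F90FLAGS)\n' > "$tmp/Makefile"

out_default=$(make -s -C "$tmp" flags)              # uses the Makefile value
out_override=$(make -s -C "$tmp" flags F90FLAGS=-g) # command line wins
echo "$out_default"
echo "$out_override"
rm -r "$tmp"
```

The command-line assignment takes precedence over the one in the Makefile, so the second run prints F90FLAGS=-g.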

By the way, you can use comments by starting the line with a #.

Implicit Rules

Make has some rules already built in. For Fortran, you might be able to get away with:

OBJS = main.o init.o plot.o

model: $(OBJS)
        $(FC) $(LDFLAGS) -o model $(OBJS)

as your whole Makefile. Make will automatically invoke its default Fortran compiler, possibly f77 or g77, with whatever default compile options it happens to have (FFLAGS). One built-in rule often looks like:

.c.o:
        $(CC) $(CFLAGS) -c $<

which says to compile .c files to .o files using the compiler CC and options CFLAGS. We can write our own suffix rules in this same style. The only thing to watch for is that make by default has a limited set of file extensions that it knows about. Let's write our Makefile using a suffix rule:

# Cray version
F90 = f90
F90FLAGS = -O3

.f.o:
        $(F90) $(F90FLAGS) -c $<

model: main.o init.o plot.o
        $(F90) $(LDFLAGS) -o model main.o init.o plot.o

clean:
        rm *.o core
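As mentioned above, make only knows a limited set of suffixes out of the box, and .f90 is usually not among them. For free-format sources you would declare the new suffix before its rule will fire; a sketch (check your own make's manual):

```make
# Teach make about the .f90 suffix, then give it a rule.
.SUFFIXES: .o .f90

.f90.o:
        $(F90) $(F90FLAGS) -c $<
```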


There may be additional dependencies beyond the source->object ones. In our little example, all our source files include a file called commons.h. If commons.h gets modified to add a new variable, everything must be recompiled. Make won't know that unless you tell it, using the syntax:

# include dependencies
main.o: commons.h
init.o: commons.h
plot.o: commons.h

Fortran 90 introduces module dependencies as well, but we'll save them for another day.

In conclusion, make is a very powerful tool. The book to read is "Managing Projects with make" by Andrew Oram and Steve Talbott, 1991, O'Reilly.

Make has a portable subset of features, with system-dependent extensions. If you want to use extensions, I suggest sticking with those supported by gnu make (gmake), since it is available almost everywhere.

The most common newbie mistake is to forget that the commands after a target *have* to start with a tab.


The Revised C Standard, C99

[ Thanks to Jim Long of ARSC for this contribution. ]

Did you think that C had stopped evolving? Read on...

ANSI C has a new revision, C99, that many are not familiar with, due in large part to the fact that few compilers implement many of its features. This article will highlight some of the more obvious changes relevant for scientific computing.

  1. Some new keywords in C99 include:

    inline
    (hint only, not a requirement)
    long long
    (64-bit int)
    restrict
    (one writer or many readers: if an object accessed with a restricted pointer is modified, then all access is through that pointer, but unmodified objects may be aliased.)
  2. C99 no longer requires that all declarations be made at the beginning of a block. This sample is thus legal under C99:

      #include <stdio.h>
      int main() {
         int i=10;
         i += 2;
         int j;  // declaration after executable statement
                 //  (note C++ style comments)
         for(j=0; j<i; j++)
           printf("j = %d\n", j);
         return 0;
      }

    The declaration of j may even occur in the "for" loop, i.e.

      for(int j=0; j<i; j++),
    similar to C++. Hint: Delay the declaration of a variable until that variable's first assignment to avoid uninitialized variable errors [1].
  3. C99 introduces variable length arrays. When you first learned C, you may have wanted to do something like this:

      #include <stdio.h>
      int main() {
         int i;
         printf("Enter an integer\n");
         scanf("%d", &i);
         int array[i];   // runtime behavior
         return 0;
      }
    Such runtime arrays are OK in C99. Unlike a malloc'ed array, however, variable length arrays cannot be resized while they are in scope. Function prototypes that take a variable length array are declared with a "*" in the array brackets, e.g.:

      void foo(int array[*]);
  4. IEEE 754 floating point arithmetic is available in C99, along with complex numbers [2]. Complex numbers in C90 are handled clumsily as structures, while in C++ they are handled as classes. In C99, however, they are built-in types:

      _Complex float
      _Complex double
      _Complex long double
      _Imaginary float
      _Imaginary double
      _Imaginary long double
    _Complex and _Imaginary are not required in freestanding implementations [3]. An example given in [2] solves the equation x^2 - 4x + 13 = 0 with the following snippet:
      #include <stdio.h>
      #include <math.h>
      #include <complex.h>
      int main() {
         double complex a=1, b=-4, c=13, x1, x2;
         x1 = (-b + csqrt(b*b-4*a*c))/(2*a);
         x2 = (-b - csqrt(b*b-4*a*c))/(2*a);
         printf("x1 = complex(%f,%f)\n", creal(x1), cimag(x1));
         printf("x2 = complex(%f,%f)\n", creal(x2), cimag(x2));
         return 0;
      }
    The output of this program is:
      x1 = complex(2.000000,3.000000)
      x2 = complex(2.000000,-3.000000)

An example of header file complex.h can be seen at [4], where there is a macro to change a declaration like "double complex" into "double _Complex" (order unimportant), along with various versions of math functions to do complex arithmetic and return real and imaginary parts (like creal() and cimag() above).

Other features in C99 not covered in this article include a bool type, flexible array members that allow struct members without bound, static array parameters, and others (see the various references).

Finally, be aware of differences between C99 versus C++98 when using a C++ compiler [5].

The status of C99 features in gcc can be found at [6]. The current availability of C99 at ARSC is as follows. gcc is installed on icehawk (IBM SP cluster) at version 2.95, which has limited support for C99 features. Quest (our linux cluster) and the SGI servers have gcc 2.91, which offers no support.


  [1] Randy Meyers, "The New C: Declarations and Initializations,"
      C/C++ Users Journal, April 2001.
  [2] Harry H. Cheng, "C99 & Numeric Computing," Dr. Dobb's Journal,
      March 2002.





New Cray On-Line Documentation

Cray has replaced dynaweb with a new documentation server, "CrayDoc." CrayDoc is a nice improvement: faster and more powerful. We encourage you to snoop around:

ARSC's third party documentation remains available at:

For the moment, there isn't a link from the CrayDoc welcome page, but our webmaster assures me it's coming soon. Till then, use the above URL. You'll find documentation for:

  • IMSL
  • HDF
  • PGI
  • ImmersaDesk
  • Ferret

ARSC Support for Public Radio and TV

  • ARSC is now the sponsor of "Earth and Sky" on our local NPR station, KUAC. Listen at 8:24 am during Alaska Edition.

  • ARSC, in conjunction with Spenard Builders Supply and the UAF Journalism Department, is also sponsoring a visit by NPR's Corey Flintoff.

    Flintoff will be in Fairbanks as the UAF commencement speaker, and will give a public lecture called "Inside NPR: Deciding what Makes the News" in the UAF Salisbury Theater at 8pm on Friday, May 10.

  • We're also sponsoring NOVA (Tuesday nights at 9) and Nature and American Masters (Sunday nights) on Alaska One.


Next Newsletter, May 31

Both editors will be at CUG in two weeks... looking for good stories! But this'll make the newsletter a week late.


Quick-Tip Q & A


A:[[From time to time I'm instructed to append some path to "PATH", 
  [[ some file to "LM_LICENSE_FILE", or some such thing.  For instance,
  [[ in your last issue, regarding Totalview:
  [[     "To use it, add this: 
  [[       /usr/local/adm/pkg/flexlm/license.dat 
  [[   to the settings of your LM_LICENSE_FILE environment variable."
  [[ How specifically, would you suggest I do that?

  # Many thanks to Rich Griswold for his response:

  How you do this depends on which shell you are using.  For sh, ksh, bash,
  and similar shells, you can do the following:

    if [ ${MY_VAR} ]; then
      MY_VAR=${MY_VAR}:var_value
    else
      MY_VAR=var_value
    fi
    export MY_VAR

  The curly braces around the variable name in the assignment statement
  are there to prevent the shell from interpreting the rest of the line
  as part of the variable name.  For some of these shells, you may be
  able to combine the assignment and export statements.  For example:

    export MY_VAR=${MY_VAR}:var_value

  For csh and tcsh, you can do the following:

    if ( $?MY_VAR ) then
      setenv MY_VAR ${MY_VAR}:var_value
    else
      setenv MY_VAR var_value
    endif

  In general both forms will append a colon and the new value to the
  environment variable if it already exists.  Otherwise, the new value
  will be assigned to a new environment variable.
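  Applied to the Totalview example that prompted the question, the
  sh-family version would be:

```shell
# Append the flexlm license file (path from the question) to
# LM_LICENSE_FILE, creating the variable if it doesn't exist yet.
if [ ${LM_LICENSE_FILE} ]; then
  LM_LICENSE_FILE=${LM_LICENSE_FILE}:/usr/local/adm/pkg/flexlm/license.dat
else
  LM_LICENSE_FILE=/usr/local/adm/pkg/flexlm/license.dat
fi
export LM_LICENSE_FILE
```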

Q: I thought I'd use the Fortran 90 compiler to compile some Fortran 90 
   code. Silly me. This is on the IBM SP, and it complains:

     ICEHAWK1$ xlf90 prog.f90
     xlf90: 1501-218 file prog.f90 contains an incorrect file suffix
     ld: 0711-715 ERROR: File prog.f90 cannot be processed.
         The file must be an object file, an import file, or an archive.
   So, what do I do now?

[[ Answers, Questions, and Tips Graciously Accepted ]]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions:
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.
Back to Top