ARSC HPC Users' Newsletter 248, June 14, 2002


This week, ARSC installed a Cray SX-6. Here's the press release which we sent as a "Bulletin" on Tuesday, in case you missed it:


Here's a photo of "rime" in our machine room...

(Local folks, drop by the machine room viewing window--Butrovich Bldg--to see the real thing. The view is somewhat blocked by other machines, but if you strain, you can catch a glimpse.)

Unix Tools for Portable Applications, Part III of IV

[ Many thanks to Kate Hedstrom of ARSC for contributing this series of articles. ]

Gnu autotools

Last time, we talked about using makemake to automatically generate Makefiles. It is a nice quick solution for relatively simple programs. Once our projects gain complexity, it would be nice to have some help in making them portable. The gnu autotools are used to create a configure script and a Makefile template; the Makefile itself isn't created until the configure script is run on the computer where you will do the compiling. It can then do sanity checking to make sure the compiler really is there and working. This article and the next will be a brief introduction to the gnu autotools. For more information, see the documentation that comes with the autotools and a book called "Gnu autoconf, automake, and libtool" by Gary Vaughan et al., 2000, New Riders.

There are three tools, called autoconf, automake, and libtool. Autoconf generates the configure script from its input file, configure.ac (older versions used the name configure.in). Since much of what autoconf is used for is the generation of Makefiles, automake was created to help it out. Libtool is for creating libraries, both shared and static. These tools and much of their documentation are aimed at C and C++ users. However, they support Fortran to some extent; that is what we will talk about here.

Simple example

Let's revisit that ocean model with three source files and one include file. We need two new files, Makefile.am and configure.ac. First, Makefile.am:

## Process this file with automake to create Makefile.in
bin_PROGRAMS = model
model_SOURCES = commons.h init.f main.f plot.f

Here, bin_PROGRAMS is a special automake variable giving the name of the executable we want to build. The next line lists all the source files, including the include file. The first line is a comment.
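The same pattern extends naturally to packages that build more than one executable. As a sketch (the second program, postproc, and its source file are purely hypothetical):

```makefile
## Hypothetical Makefile.am for a package with two programs.
bin_PROGRAMS = model postproc
model_SOURCES = commons.h init.f main.f plot.f
postproc_SOURCES = postproc.f
```

Each name listed in bin_PROGRAMS gets its own name_SOURCES variable.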

Next, configure.ac:

dnl Process this file with autoconf to produce a configure script.
AC_INIT([model], [1.0], [])
AC_CONFIG_SRCDIR([main.f])
AM_INIT_AUTOMAKE
AC_PROG_F77
AM_CONFIG_HEADER([config.h])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

Again, the first line is a comment.

The AC_INIT line has to be the first macro. Its three arguments are optional; they are the name of the package, the version number, and the email address of the responsible person. The brackets [] are quotes: it is recommended that all arguments be quoted.

AC_CONFIG_SRCDIR is a sanity check to see that main.f really is there.

AM_INIT_AUTOMAKE initializes automake.

AC_PROG_F77 says that we will be using the f77 language and instructs autoconf to search for a working compiler.

AM_CONFIG_HEADER is optional here, but instructs the autotools to put all the C preprocessor defines into a file (config.h).

AC_CONFIG_FILES says that configure will be creating the Makefile from the Makefile.in that automake generates.

AC_OUTPUT has to be last.

You may come across examples that look somewhat different; the recommended style keeps evolving as the autotools themselves evolve. I am trying to demonstrate current versions (autoconf 2.53 and automake 1.6.1).

Running autotools

So, now what? We need to have a development system with autoconf and automake installed. These programs in turn require gnu m4 and Perl. On that system, create the configure script and the Makefile template (Makefile.in) with:

aclocal
autoconf
automake --foreign --add-missing --copy

Then, copy all the files to the system where you will be running the program and type:

./configure
You will now have a ferociously big Makefile with all kinds of stuff in it. This Makefile complies with the gnu Makefile standards, including the "install", "clean", and "distclean" targets. Unfortunately, it does not have our include file dependencies in it. If we were using C or C++, it would ask the compiler to generate the dependencies actually used. For instance, if you have something like:

#ifdef HAVE_FOOBY
#include "fooby.h"
#endif

it will get the fooby.h dependency only if HAVE_FOOBY is defined. It doesn't know how to do that in the Fortran world and therefore doesn't even bother to try. The gnu autotools are very flexible and extensible, however, so there is nothing to stop us from adding whatever we need to the system. We can write our own dependency checker, taking a hint from makemake. I happen to have such a checker, called "sfmakedepend", at:

Install it somewhere like $(HOME)/bin on the platform on which the program will be run. It is written in Perl and requires version 5.6 for the built-in man pages; that version of Perl is in /usr/local/pkg/perl/perl-5.6.1/bin on the ARSC systems.

Add these lines to Makefile.am:

depend:
[TAB]  sfmakedepend $(MDEPFLAGS) $(model_SOURCES)

and rerun automake, autoconf, and configure. Now you can type "make depend" to get the dependencies added to the Makefile. A similar tool can be found at:
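The appended dependencies are ordinary make rules. For the ocean model above, the result might look something like this (a sketch, assuming each source file includes commons.h; the exact output depends on the tool):

```makefile
# Dependency lines appended by "make depend" (illustrative).
init.o: commons.h
main.o: commons.h
plot.o: commons.h
```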

Fortran 90

The autotools maintainers want to support just one Fortran language, not duplicate the macros for F77, F90, F95, F2K, etc. The way to ask it to pick from just f90 and f95 compilers is to say:

AC_PROG_F77([f90 xlf90 pgf90 epcf90 f95 fort xlf95 lf95 g95])

Just ignore that the messages come out referring to Fortran 77.

There is still a need for macros to test things like module file names, whether it is an f95 compiler, and so on. Some of these things have been contributed at various times to the autoconf list, but have not been collected and included in the distribution. There is also at least one person who really wants to duplicate everything for f77 and f90, including things like F90FLAGS, F90LDFLAGS, etc. This is the bleeding edge.


On the ARSC supercomputers, you probably want to override which compiler it uses:

chilkoot or yukon% ./configure F77=f90
icehawk% ./configure F77=xlf

For a real application you will also want to override the FFLAGS, for example:

icehawk% ./configure F77=xlf FFLAGS="-O3 -qstrict"

Remember that make has default macro names for compilers, flags, etc.? The same is true of the autotools. LDFLAGS and LIBS are also meaningful, and both are used in the link command. If you dare, look through the Makefile to see what names it is using. Look through config.h to see which C preprocessor macros are defined. Next time, we'll talk about a program which uses an external library.

ARSC Summer Tours

Every Wednesday, at 1:00pm, through August 28th, you can drop in for a tour of ARSC, and bring the family!

These tours are oriented toward the general public, and are synchronized with tours of the Geophysical Institute (2pm) and International Arctic Research Center (3pm), all on the West end of the UAF campus. See:

OpenMP Workshop at ARSC August 5-7

WOMPAT 2002 Workshop on OpenMP Applications and Tools

August 5 - 7, 2002

Hosted by:

Arctic Region Supercomputing Center University of Alaska Fairbanks

Technical Program:

OpenMP is emerging as an industry standard interface for shared memory programming of parallel computer applications. OpenMP offers a way to write applications with a shared memory programming model portable to a wide range of parallel computers. In addition, a number of research groups are actively developing future enhancements of the language, debugging and performance monitoring tools, optimizing compilers and run-time environments.

The Workshop on OpenMP Applications and Tools (WOMPAT2002) will serve as a forum for users and developers of OpenMP to meet, share ideas and experiences, and to discuss the latest developments in OpenMP and applications.

WOMPAT2002 follows a series of workshops on OpenMP, such as WOMPAT2001, EWOMP2001, and WOMPEI2002. It is part of the cOMPunity initiative whose main objective is the dissemination and exchange of information about OpenMP. See for more details of this activity and contents of past meetings.

WOMPAT2002 is co-sponsored by the OpenMP Architecture Review Board.

The program will feature tutorials on "Advanced OpenMP Programming" and "Dual-level Parallelism using MPI and OpenMP." Dr. Phuong Vu of BP will make an invited presentation on "Parallelization of Seismic Imaging Applications on SMP Clusters."

Registration and Additional Information:

For registration and details see:

For more information on the Arctic Region Supercomputing Center see:

For more information on the University of Alaska Fairbanks see:

Program Committee:

Guy Robinson (Program Chair) Arctic Region Supercomputing Center, University of Alaska Fairbanks

Barbara Chapman University of Houston, Houston

Daniel Duffy Engineer Research and Development Center, Vicksburg

Timothy Mattson Intel Corporation

Sanjiv Shah KAI Software Lab, Intel Corporation

Rudolf Eigenmann School of Electrical and Computer Engineering, Purdue University

ARSC Papers On-Line

New papers by ARSC staff, on our technical papers page:

  • Open Source Security Tools in a High-Performance Environment, Liam Forbes
  • Synthetic Aperture Radar Processing at the Arctic Region Supercomputing Center, Tom Logan
  • SV1ex Memory Upgrade Gives Greatest Boost to User Performance, Tom Baring

Polar Express Card Soon Required to enter Visualization/Access Labs

ARSC's three access labs--Duckering, Elvey, and the Natural Sciences Facility--will be re-"keyed" soon (probably the last week of June). Watch the MOTD for details.

To be set up, you'll need a Polar Express card and must FAX a new Viz Lab Access Agreement to ARSC (not to the Polar Express office).

Lab policies and the agreement form are at:

Quick-Tip Q & A

A:[[ The "goto" in this fortran loop, with the label applied to the enddo 
  [[ statement, acts like "continue" in a C "for" loop.  Everything
  [[ between the "goto" and the end of the loop is skipped for the current
  [[ iteration.
  [[       do n=1,m
  [[       :
  [[       if (i .eq. j) goto 100
  [[       :
  [[ 100   enddo
  [[ Isn't there a cleaner way to do this?   

  Fortran 90/95's "cycle" and "exit" statements are analogous to C's
  "continue" and "break." When labeled, cycle and exit can be applied to
  any enclosing loop.
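For instance, the loop from the question becomes (a sketch using the same variables):

```fortran
      do n = 1, m
!        ... work done every iteration ...
         if (i .eq. j) cycle   ! skip straight to the next iteration
!        ... skipped when i equals j ...
      enddo
```

An "exit" statement in the same position would leave the loop entirely, like C's "break".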

Q: I've gotten used to NQS, and the other batch schedulers on
   HPC systems, and would like to "submit" jobs on an SGI workstation
   in a similar way.  I want jobs to start, and keep running, even after
   I've logged off.  Can I do this?

[[ Answers, Questions, and Tips Graciously Accepted ]]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions:
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.