ARSC HPC Users' Newsletter 314, April 22, 2005

GNU Make, Part 1

[ Thanks to Kate Hedstrom of ARSC ]

In a previous article, I provided an introduction to make, the standard Unix project management tool. At that time I wrote:

Make has a portable subset of features, with system-dependent extensions. If you want to use extensions, I suggest sticking with those supported by gnu make (gmake), since it is available most everywhere.

Over the years, the community has moved from the stance of writing portable Makefiles to a stance of just using a powerful, portable make. The make bible from O'Reilly has come out in a new (third) edition, with the new title of "Managing projects with GNU Make" and a new author, Robert Mecklenburg, 2005, ISBN 0-596-00610-1. If you have been considering learning more about make, and perhaps dusting off your Makefiles, read this book.

Make Rules

The core of make hasn't changed in decades, but concentrating on gmake allows one to make use of its nifty little extras designed by real programmers to help with real projects. The first change that matters to my Makefiles is moving from suffix rules to pattern rules. I've always found the .SUFFIXES list to be odd, especially since .f90 is not in the default list. Good riddance to all of that! For a concrete example, the old way to provide a rule for going from file.f90 to file.o is:

.SUFFIXES: .o .f90 .F .F90
.f90.o:
<TAB>     $(FC) -c $(FFLAGS) $<
while the new way is:

%.o: %.f90
<TAB>     $(FC) -c $(FFLAGS) $<
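
To watch a pattern rule fire without involving a Fortran compiler, here is a throwaway sketch (the file names and the uppercasing rule are invented for illustration; a semicolon recipe sidesteps the literal-tab issue when pasting):

```shell
# A pattern rule that builds any .up file from the matching .txt file.
# $< expands to the prerequisite (hi.txt), $@ to the target (hi.up).
cat > /tmp/pat.mk <<'EOF'
%.up: %.txt ; tr a-z A-Z < $< > $@
EOF
echo hello > /tmp/hi.txt
make -C /tmp -f pat.mk hi.up
cat /tmp/hi.up
```

make matches hi.up against the %.up pattern, finds hi.txt as the prerequisite, and runs the tr command; cat then shows HELLO.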

In fact, going to a uniform make means that we can figure out what symbols are defined and use their standard values - in this case, $(FC) and $(FFLAGS) are the built-in default names. If you have any questions about this, you can always run make with the -p (or --print-data-base) option. This prints out all the rules make knows about, such as:

# default
FC = f77

# default
COMPILE.f = $(FC) $(FFLAGS) $(TARGET_ARCH) -c

Printing the rules database will show variables that make is picking up from the environment, from the Makefile, and from its built-in rules - and which of these sources is providing each value.
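
As a quick sketch (output varies by make version; the grep pattern here just fishes one built-in variable and its origin comment out of the dump):

```shell
# Dump make's database against an empty makefile, so only built-in and
# environment values appear, then pull out the default FC setting.
# On GNU make this prints something like:
#   # default
#   FC = f77
make -p -f /dev/null 2>/dev/null | grep -B1 '^FC = '
```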


In the old days, I only used one kind of assignment: = (equals sign). Gmake has several kinds of assignment (other makes may have them too, but I no longer have to know or care). An example of the power of gnu make comes from my Cray X1 Makefile. There is a routine which runs much more quickly if a short function in another file is inlined. The way to accomplish this is through the -O inlinefrom=file directive to the compiler. I can't add this option to FFLAGS, since the inlined routine won't compile with this directive - it is the only file that needs it. I had:

    FFLAGS = -O 3,aggress -e I -e m
    FFLAGS2 = -O 3,aggress -O inlinefrom=lmd_wscale.f90 -e I -e m

lmd_skpp.o: lmd_skpp.f90
<TAB>     $(FC) -c $(FFLAGS2) $*.f90

The first change I can make to this using gmake style assignments is:

    FFLAGS := -O 3,aggress -e I -e m
    FFLAGS2 := $(FFLAGS) -O inlinefrom=lmd_wscale.f90

The := assignment means to evaluate the right hand side immediately. In this case, there is no reason not to, as long as the second assignment follows the first one (since it's using the value of $(FFLAGS)). For the plain equals, make doesn't evaluate the right-hand side until its second pass through the Makefile. However, gnu make supports an assignment which avoids the need for FFLAGS2 entirely:

    lmd_skpp.o: FFLAGS += -O inlinefrom=lmd_wscale.f90

This appends the inlining directives to FFLAGS for the target lmd_skpp.o only. I think this is pretty cool!
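
A self-contained sketch of a target-specific variable in action (hypothetical makefile; echo stands in for the compiler):

```shell
# The "special" target gets an extra flag appended to FLAGS;
# every other target sees only the global value.
cat > /tmp/tsv.mk <<'EOF'
.PHONY: plain special
FLAGS := -O3
special: FLAGS += -Dinline
plain: ; @echo plain: $(FLAGS)
special: ; @echo special: $(FLAGS)
EOF
make -f /tmp/tsv.mk plain special
# plain: -O3
# special: -O3 -Dinline
```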

Another assignment operator sets the variable only if a value has not already been set elsewhere (the environment, for instance):

    FC ?= mpxlf90_r

If we had used := or =, we would override the value from the environment.
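
All three operators compare neatly in a throwaway makefile (the variable names are hypothetical; note that ?= also declines to set a variable like FC that already has a built-in default value, which is why a fresh name is used here):

```shell
# := expands the right-hand side immediately (X is still empty there);
# = defers expansion until the variable is used;
# ?= assigns only if the variable has no value from anywhere else.
cat > /tmp/assign.mk <<'EOF'
.PHONY: all
A := $(X)-immediate
X = early
B = $(X)-deferred
MYFC ?= mpxlf90_r
all: ; @echo A=$(A) B=$(B) MYFC=$(MYFC)
EOF
make -f /tmp/assign.mk                 # A=-immediate B=early-deferred MYFC=mpxlf90_r
MYFC=gfortran make -f /tmp/assign.mk   # environment wins over ?=: MYFC=gfortran
```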


Using gmake across all platforms solves some portability issues, but not all. For instance, there is an example in chapter one which uses the pr command with some gnu-only options. The IBM has an older pr in /usr/bin and the gnu version in /opt/freeware/bin. The example failed until I changed my path to make the gnu version my default.

The book has a chapter on portability with an emphasis on cygwin, which is great if you use cygwin. There is also an example of extracting the machine-dependent parts of the Makefile into a series of include files:

MACHINE := $(shell uname -sm | sed 's/ /-/g')
include $(MACHINE)

Running uname on the IBM gives differing values for the -m option between iceberg and iceflyer - perhaps uname -s is enough for my needs. Having an automatic way to set the MACHINE type is nice, but our users are using more than one Fortran compiler under Linux, so knowing we're on Linux is only half the battle.
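
To check what that shell substitution yields on the machine at hand, a two-line makefile suffices (the output differs per system, of course):

```shell
# Compute the machine name the same way the include line does,
# and just echo it instead of including a file.
cat > /tmp/mach.mk <<'EOF'
.PHONY: all
MACHINE := $(shell uname -sm | sed 's/ /-/g')
all: ; @echo $(MACHINE)
EOF
make -f /tmp/mach.mk    # e.g. Linux-x86_64 on a Linux/x86_64 box
```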

In the past, I have been using imake to build Makefiles for all the different systems I have access to. Robert Mecklenburg claims that gmake is powerful enough to eliminate the need for imake. I'll let you know if I agree as I read more of this book and continue to update my Makefiles.

Introduction to IDL Class

ARSC spring training will continue on April 26th with an Introduction to IDL given by Sergei Maurits.

    Date:     Tuesday April 26th, 2005
    Time:     1:00 p.m. - 2:00 p.m. 
    Location: West Ridge Research Building (WRRB) Room 010 

For more information on this class, email Tom Logan or call him at 907-450-8624.

IBM: Array Bounds Checking

Out of bounds array operations can be a pain to track down. Fortunately the IBM compilers allow array bounds checking to be enabled for arrays of known size by simply adding a few compiler flags.

XL Fortran

Bounds checking is enabled with the "-C" compiler flag. When an out-of-bounds operation is detected, a trap signal is generated, causing the application to core dump. Since this method produces a core dump, be sure to compile with debugging information (i.e., -g). Below is an example program.

iceberg2 1% cat out_of_bounds.f
! program: out_of_bounds
! example of array bounds trapping with xlf compiler
! compile:
! xlf_r -C -g out_of_bounds.f -o out_of_bounds
      program out_of_bounds
        implicit none
        real a(100)
        integer ii
        call init(a,100)            
        print *, a(10)

!     The loop below will generate a trap when element
!     a(102) is accessed.  
        do ii=2, 110, 2
          a(ii) = 2.0 * a(ii)
        end do

      end program

      subroutine init(array, size)
!     The loop below accesses element 0 of array,
!     which is illegal in this case.
!     The array bounds checking may not catch
!     access out of bounds at the end of the
!     array if the array wasn't statically allocated
!     in the subroutine.
        real array(*)
        integer size
        integer ii

!     The array starts at element 1!
        do ii=0, size
          array(ii) = 0.0
        end do
      end subroutine

iceberg2 2% xlf_r -g -C out_of_bounds.f -o out_of_bounds

Running the program yields the following:

iceberg2 3% ./out_of_bounds
Trace/BPT trap (core dumped)

Using your favorite debugger it doesn't take long to find the two bugs in the code.

Visual Age C/C++

The "-qcheck" flag enables trapping of out-of-bounds array accesses, null pointer dereferences, and integer division by zero. The array bounds checking will detect out-of-bounds operations for arrays of known size (i.e., statically allocated). Below is an example with some array operations which will and will not generate a trap.

iceberg2 7% cat qcheck_demo.c
#include <stdio.h>
#include <stdlib.h>

void initArray(int * array, int size);

int main(int argc, char ** argv)
{
   int ii;
   int * dynArray;
   int statArray[100];

   dynArray = (int *) malloc(sizeof(int)*100);

   /* the following won't generate a trap! */
   initArray(dynArray, 100);

   /* neither will these out of bound accesses in a
      dynamically allocated array...  Though it may
      cause a segmentation fault. */
   dynArray[100] = 5;
   dynArray[200] = 5;

   free(dynArray);

   /* the following will generate a trap. */
   statArray[0] = 10* statArray[100];

   return 0;
}

void initArray(int * array, int size)
{
   /* this function demonstrates an out of bound memory
      access that won't be trapped by -qcheck=all */
   int ii;

   /* this for loop accesses array[size], which is out of
      bounds, but won't generate a trap. */
   for (ii = 0; ii <= size; ii++) {
      array[ii] = 0;
   }
}

iceberg2 8% xlc_r -qcheck=all -g qcheck_demo.c -o qcheck_demo
iceberg2 9% ./qcheck_demo 
Trace/BPT trap (core dumped)

Using totalview we quickly find one of the problematic lines.

iceberg2 10% totalviewcli qcheck_demo core
d1.<> dwhere 
>  0 main             PC=0x10000470, FP=0x2ff22700 [qcheck_demo.c#29]
   1 .__start         PC=0x10000200, FP=0x2ff228e0 [/.../qcheck_demo]
d1.<> dlist 29
  29 >    statArray[0] = 10* statArray[100];
  30
  31      return 0;
  32   }

Though array bounds checking doesn't catch every out-of-bounds operation, it can be a quick way to find a pesky bug. As with other debugging options, it's best not to enable bounds checking on production codes.

If these methods don't help track down the bug, other tools such as Electric Fence and Totalview allow more general memory error checking.

( See HPC User's Newsletter issue 244 for more information on Electric Fence. )

Quick-Tip Q & A

A: [[ Emacs isn't available on the Cray X1.  Thus, I must edit source 
   [[ code on my desktop workstation and move it to the X1 for 
   [[ compilation and testing.  It goes like this:
   [[    ---WORKSTATION---        
   [[       edit
   [[       save 
   [[       forget to move updated file to X1
   [[    ---X1---
   [[       make (Ooops! now I realize I don't have the updated file.)
   [[    ---WORKSTATION---
   [[       ftp file to X1
   [[    ---X1---
   [[       make
   [[       run

# Martin Luthi:

Believe me, I don't evangelize for Emacs! But here we go...

If you have ssh / krsh access to the machine, you can (in your local
Emacs session) load the file with Tramp (Transparent Remote (file)
Access, Multiple Protocol). Loading the file can be as simple as opening

    /remotehost:/path/to/file

(Examples and an overview can be found in the Tramp manual.)

If your username is different:

    /username@remotehost:/path/to/file

In the case of ARSC, you might need to supply the protocol

    /ssh:username@remotehost:/path/to/file

or with Kerberos

    /krlogin:username@remotehost:/path/to/file
There are also options for multi-hop login (I once used that to log in
through a local firewall, then a remote firewall, to the actual
machine... it works!)

The huge benefit of Tramp is that you can use your local, customized,
fast Emacs within your environment, and edit files somewhere on the
globe. Only the initial load, and sometimes saving, take some time.

# Editors answer:

Another option is to automate the file transfer step as part of the make
process, back on the HPC system. Here are two examples, ready for you to
modify as desired. First, a shell script that retrieves the file from
the workstation, and then runs make:

File: ./get_and_make                       

#!/bin/sh
SYNTX="Expected Args: <remote_host_name> <remote_path_and_file>"

HOST=${1:?$SYNTX}
FILE=$(basename ${2:?$SYNTX})
PTH=$(dirname ${2:?$SYNTX})

echo "${0}:Retrieving file \"${FILE}\" from directory \"${PTH}\" on \"${HOST}\""
echo "
  cd ${PTH}
  newer ${FILE}
" | kftp ${HOST}

echo "${0}:Running \"make\""
make

Sample session: 
klondike$ ./get_and_make pike /tmp/userdirectory/d.f90
./get_and_make:Retrieving file "d.f90" from directory "/tmp/userdirectory" on "pike"
GSSAPI authentication succeeded
Name (pike:baring): Verbose mode on.
250 CWD command successful.
local: d.f90 remote: d.f90
229 Entering Extended Passive Mode (

150 Opening BINARY mode data connection for d.f90 (20312 bytes).
226 Transfer complete.
20312 bytes received in 0.00093 seconds (2.1e+04 Kbytes/s)
221 Goodbye.
./get_and_make:Running "make"
ftn -O msp -rmo -eZ -F -U ORIGINAL -O ipa5 -c d.f90
ftn -O msp -o d d.o 


A second approach is to do everything from inside make.  The following
makefile has been modified to use "scp -p" to copy the file from
"workstn" to "klondike." (The "-p" option preserves the modification time
of the file.  Thus, "make" otherwise behaves as usual, only repeating
the compilation and link when the file has been modified.)

File: makefile.retrieve         
EXE     = d
FC  = ftn
FFLAGS  = -O msp -rmo -eZ -F -U ORIGINAL -O ipa5
LDFLAGS = -O msp
LIBS    = 
OBJ     = d.o timers.o 

all: retrieve $(EXE)

retrieve:
        scp -p workstn:/tmp/userdirectory/d.f90 .

$(EXE): $(OBJ)
        $(FC) $(LDFLAGS) -o $@ $(OBJ) $(LIBS)

.SUFFIXES : .o .f .f90

.f90.o:
        $(FC) $(FFLAGS) -c $<

clean:
        rm -f core core.* $(EXE) $(OBJ)

Sample session: 
klondike$ make -f makefile.retrieve
  scp -p workstn:/tmp/userdirectory/d.f90 .
   d.f90                              100%   20KB  19.8KB/s   00:00    

  ftn -O msp -rmo -eZ -F -U ORIGINAL -O ipa5 -c d.f90
  ftn -O msp -o d d.o timers.o

Q:  When my job completes, I want to get the output back to my local
    machine in an automated fashion for post processing, etc.  Is there
    a user friendly and secure way of doing this?

[[ Answers, Questions, and Tips Graciously Accepted ]]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions: Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.