ARSC HPC Users' Newsletter 307, January 14, 2005

Adding Compiler Flag Information to an Executable

[ Thanks to Kate Hedstrom of ARSC ]

A friend asked me the following question:

> I would like to store the compiler $(CFT) and compiler flags $(FFLAGS)
> in a text string that is immediately inserted or defined in the
> "mod_strings.F". Then, I want to use that string for a global
> attribute in the NetCDF output. In this way we will have the
> compiler and flags that ROMS used to create the file.

The question here is how to get the value of a "make" variable into a Fortran variable. One way to do it is with Perl, invoking it from within the Makefile:

(start Makefile)


CFT = f90
FFLAGS = -g -q64

CPP = cpp
CPPFLAGS = -P

strings.f90: strings.F
	$(CPP) $(CPPFLAGS) strings.F > strings.f90
	perl -i -pe "s/MY_FFLAGS/$(FFLAGS)/" strings.f90
	perl -i -pe "s/MY_CFT/$(CFT)/" strings.f90

(end Makefile)

Note that I keep the source code in a .F file and run it through the C preprocessor myself; this is how we handle all our files. The point is that the string is substituted into the generated strings.f90, not into my primary source file. If you're not generating an intermediate file anyway, you'll have to create one just for this, perhaps copying strings.F to strings2.F, modifying strings2.F, and compiling that:


strings.o: strings.F
	cp strings.F strings2.F
	perl -i -pe "s/MY_FFLAGS/$(FFLAGS)/" strings2.F
	perl -i -pe "s/MY_CFT/$(CFT)/" strings2.F
	$(CFT) -c $(FFLAGS) strings2.F
	mv strings2.o strings.o
	rm strings2.F

Also note that the string I'm substituting in the Fortran file is MY_FFLAGS rather than just FFLAGS. There might be a reason to want a literal FFLAGS in your Fortran, perhaps as the name of the attribute being written to the output file, and we don't want to change that! The strings.F file could contain:


      character*80 :: fflags, cft

      cft = 'MY_CFT'
      fflags = 'MY_FFLAGS'

resulting in:


      character*80 :: fflags, cft

      cft = 'f90'
      fflags = '-g -q64'
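
Once the strings hold the real compiler name and flags, they can be written into the NetCDF output as global attributes, which is what the original question was after. Here is a minimal sketch using the Fortran 90 NetCDF interface (nf90_put_att); the routine name, the attribute names, and the assumption that the file identified by ncid is still in define mode are mine for illustration, not something taken from ROMS:


      subroutine write_compiler_info (ncid)
      use netcdf
      implicit none
      integer, intent(in) :: ncid   ! NetCDF file ID, still in define mode
      character*80 :: fflags, cft
      integer :: status

!     These literals are replaced by the Makefile rule shown above.
      cft = 'MY_CFT'
      fflags = 'MY_FFLAGS'

!     Write the strings as global attributes of the output file.
      status = nf90_put_att(ncid, NF90_GLOBAL, 'compiler', trim(cft))
      status = nf90_put_att(ncid, NF90_GLOBAL, 'compiler_flags', trim(fflags))
      end subroutine write_compiler_info

Checking of the returned status is omitted here for brevity; a real code would test it after each call.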

Using LoadLeveler Job Steps

LoadLeveler allows multiple job steps to be specified within a single script. Job steps can be used in many instances where job chaining would traditionally be used (e.g. preprocessing and postprocessing of job files).

We need to introduce two LoadLeveler keywords to take advantage of job steps:

  1. step_name - the name of a job step. This will be used with the dependency keyword.

    E.g. # @ step_name = step1

  2. dependency - used to specify dependencies between job steps based on the return values and other attributes of previously run steps. The LoadLeveler documentation outlines the available logical operators (see the link below).

    E.g. # @ dependency = ( step1 == 0 )

    This dependency will allow the job step to run only if the return value for step1 was 0 (i.e. step1 exited normally without error).

A more in-depth description of these two keywords can be found on IBM's website: http://publib.boulder.ibm.com/doc_link/en_US/a_doc_lib/sp34/LoadL/am2ugmst02.html#ToC_159
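
A dependency need not be a single test. According to the IBM documentation linked above, tests can be combined with logical operators such as && and ||. As an illustration only (the exact syntax should be checked against your LoadLeveler release), a final step could be made to wait for two earlier steps to both exit cleanly:

    # @ step_name  = post
    # @ dependency = ( prep == 0 ) && ( comp1 == 0 )

In the example below this combined form isn't needed, since comp1 already depends on prep, but it is handy when steps do not form a simple chain.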

Let's look at an example which does the following:

  1. Copies files from long term storage ( $ARCHIVE ) to temporary storage ( $WRKDIR ) using the 'data' class.
  2. Runs a job using the 'standard' class.
  3. Copies the results from temporary storage ( $WRKDIR ) to long term storage ( $ARCHIVE ) using the 'data' class.

iceberg1 1% cat steps.cmd
#!/usr/bin/ksh
#
# @ step_name        = prep
# @ environment      = COPY_ALL; $PATH
# @ error            = $(executable).$(jobid).$(stepid).err
# @ output           = $(executable).$(jobid).$(stepid).out
# @ notification     = error
# @ job_type         = serial
# @ class            = data
# @ queue
#
# @ dependency       = ( prep == 0 )
# @ step_name        = comp1
# @ environment      = COPY_ALL; $PATH
# @ error            = $(executable).$(jobid).$(stepid).err
# @ output           = $(executable).$(jobid).$(stepid).out
# @ notification     = error
# @ job_type         = parallel
# @ node_usage       = not_shared
# @ node             = 2
# @ tasks_per_node   = 8   
# @ network.MPI      = sn_all,shared,us
# @ class            = standard 
# @ wall_clock_limit = 4:00:00
# @ queue
#
# @ dependency       = ( comp1 == 0 )
# @ step_name        = post
# @ environment      = COPY_ALL; $PATH
# @ error            = $(executable).$(jobid).$(stepid).err
# @ output           = $(executable).$(jobid).$(stepid).out
# @ notification     = error
# @ job_type         = serial
# @ class            = data
# @ queue

# The environment variable "$LOADL_STEP_NAME" is set by LoadLeveler.
# The value of this variable is identical to the value of the LoadLeveler
# keyword 'step_name' for a particular step.  We use this variable to
# control which portion of the script gets executed for each step.
#

case "$LOADL_STEP_NAME" in
    "prep")
        # Preprocessing: Copy files from long term storage to $WRKDIR.
        #
        cp $ARCHIVE/myjob/* $WRKDIR/myjob
        ;;
    "comp1")
        # Computation: Run the application and create output files.
        #
        cd $WRKDIR/myjob
        ./my-job input_data
        ;;
    "post")
        # Postprocessing: Copy results to $ARCHIVE.
        #
        cp $WRKDIR/myjob/* $ARCHIVE/myresults/
        ;;
esac

## END OF FILE ########

Notice that the preprocessing, computation, and postprocessing are all done by a single script, whereas traditional job chaining would require three separate scripts.

When the script is submitted to LoadLeveler, we see the three steps in the queue, as expected.


iceberg1 2% llsubmit steps.cmd
llsubmit: The job "b1n1.54188" with 3 job steps has been submitted.
iceberg1 3% llq -u username
Id                       Owner      Submitted   ST PRI Class        Running On 
------------------------ ---------- ----------- -- --- ------------ -----------
b1n1.54188.0             username    1/12 15:35 R  50  data         b1n2       
b1n1.54188.1             username    1/12 15:35 NQ 50  standard                
b1n1.54188.2             username    1/12 15:35 NQ 50  data                    

While the first step ( prep ) is running, the second and third steps wait in the NOT QUEUED (NQ) state. If the first step exits successfully, the next step ( comp1 ) will begin running; however, if the first step fails, the next two steps will not run because their dependencies cannot be met.

X1 Kernel & New PrgEnv Updates

On 1/19/2005 at 6 PM Alaska time, the programming environment and kernel on klondike will be updated. The kernel upgrade to UNICOS/mp 2.5 may require that some codes be relinked.

After the upgrade the programming environments will be configured as follows:

  • PrgEnv.new : updated; now points to PE 5.3.0.1 (was PE 5.3.0.0).

PE 5.3.0.0 will continue to be available in PrgEnv.53.first_set.

For more on programming environments and "module" commands read "news prgenv", "man module", or contact consult@arsc.edu.

The UNICOS/mp 2.5 kernel has the following notable changes:

  • Automatic stack trace on abort (rendering the TRACKBK environment variable obsolete).
  • Changes to the computation of memory stack and heap sizes.

    Applications linked with the UNICOS/mp 2.3 libc or older may generate a startup error and abort due to changes in the handling of RSS memory limits. Relinking with the new libraries should eliminate this error.

Quick-Tip Q & A


A: [[ Let's rest our brains for the holiday... ]]
 
   Hopefully everyone is well rested and ready for a new Quick-Tip. 



Q: I am writing some code in C++ which uses a library that was written 
   in C and compiled with a C compiler.   During the linking stage I get a 
   bunch of undefined symbols.

   E.g.
   ld: 0711-317 ERROR: Undefined symbol: .cb_compress(long*,long*,long,long)
   ld: 0711-317 ERROR: Undefined symbol: .cb_revcompl(long*,long*,long,long)
   ld: 0711-317 ERROR: Undefined symbol: .cb_version(char*)
   ld: 0711-317 ERROR: Undefined symbol: .cb_free(void*)

   It works fine when I recompile the library with a C++ compiler, but 
   I really don't want to have two versions of the same library.  What's 
   going on here?  There must be a way to use a C library with C++ without
   recompiling the library.

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions:
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.