ARSC T3E Users' Newsletter 173, July 16, 1999

C Pre-Processor Can Mess Up Fortran 90 Code

Recently, two ARSC users were bitten trying to build well-known applications. In both cases, the applications had been shipped with makefiles (or generated makefiles) which used the standard C pre-processor, cpp, to pre-process Fortran 90 source files.

The problem arose because:

  1. in Fortran 90, the string concatenation operator is "//",
  2. in C++, the comment delimiter is also "//", and
  3. by default, Cray cpp strips all C/C++ comments from the source.
The result: cpp removed actual code from the Fortran source files.

In one of the two cases, an F90 source file contained declarations like this:


      character (name_len) ::
     &  outfilenm,    ! output filename
     &  infilenm,     ! input filename
     &  basenm        ! filename root
and executable statements like this:

      outfilenm = trim(basenm)//'.res'
      infilenm =  trim(basenm)//'.dat'
After default pre-processing by cpp, however, these statements looked like this:

      outfilenm = trim(basenm)
      infilenm =  trim(basenm)
cpp had treated each concatenation, such as "//'.dat'", as a C++ comment and deleted it.

One fix is to give cpp the option "-C", which instructs it to retain comments. For example:


  cpp -C prog.F prog.f  
You can probably append "-C" to the end of the "CPP" definition in your makefile; alternatively, there might be a "CPP_FLAGS" definition in the makefile to which you could add "-C". For example, if the makefile contained this definition:

  Cpp = cpp -P
you could change it to:

  Cpp = cpp -P -C
If possible, a better idea is to use the Fortran 90 compiler to pre-process Fortran 90 code. f90 has three options related to pre-processing:
  • f90 -eP prog.F (pre-processes to the file "prog.i"; does not compile)
  • f90 -eZ prog.F (pre-processes to the file "prog.i", then compiles)
  • f90 -F ... (enables macro expansion throughout the source file; typically, macro expansion occurs only on source pre-processing directive lines)
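
As a quick illustration, consider this hypothetical source file, "demo.F" (the file name and the RESTART macro are invented for this example), which mixes a cpp directive with Fortran concatenation:

      program demo
      implicit none
      character (8) :: suffix
#ifdef RESTART
      suffix = '.rst'
#else
      suffix = '.dat'
#endif
      print*, 'base'//trim(suffix)
      end

Pre-processed by a default "cpp", the print statement would lose everything from the "//" onward; pre-processed with "cpp -C" or "f90 -eZ demo.F", the concatenation should survive intact.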

Read "man cpp" and "man f90" for details.

Using Linux COW's to Learn T3E Programming

[ This article contributed by Dr. Don Morton of the University of Montana (UMT) and ARSC. It is an overview of the talk he gave last week at ARSC. ]

During the 1990's, several events occurred that would ultimately make it possible for research and education activities in high performance computing to use clusters of workstations (COW's) AND supercomputers. First, the evolution of Linux since 1991 has made it possible for anyone to build a low-cost parallel computer. Second, supercomputer manufacturers such as IBM and CRI began to recognize the importance of using commodity processors, and of adopting tools, such as PVM, that were being used in the cluster environments.

The parallel (excuse the pun) evolution in these two areas has covered much common ground. These days, Linux (or other Unix) workstations are often the ideal environment for training students and researchers in applied parallel computing, and for developing and testing parallel codes for ultimate use on machines such as the T3E. These activities typically require a high degree of interactivity between the user and the computer, and this is often not attainable on busy systems such as yukon, particularly in a training situation where numerous users are making simultaneous requests for a limited number of free processors.

Conversely, the presence of parallel clusters at numerous institutions increases the demand for supercomputers. These clusters facilitate the training of students and researchers--people who might not otherwise have easy access to a parallel computing environment--in concepts of parallel computing. By focusing on portable tools such as PVM, MPI and HPF, we produce an increased number of potential supercomputer users. Many of these users, once trained, will enjoy the fact that their codes are ready for supercomputers, and can move on to larger problems.

At UMT, we recently tested this idea that students could be trained in parallel computing on a cluster of Linux workstations, and be ready to run their codes on the T3E with very little extra effort. A graduate course, CS 580 - Parallel Processing, was offered during the Spring 1999 semester. Students were introduced to PVM, MPI, and HPF (Portland Group) with 2-3 weeks devoted to each. They wrote parallel programs for the UMT Linux cluster using each of these parallel programming tools, and tested their performance. By the last month of the course, students had written several parallel programs and had become somewhat confident in their skills. In the final phase of the course, students moved their code to ARSC's T3E. This was motivational for the students and allowed us to determine what kinds of problems new parallel programmers might encounter when migrating from a training environment to a production supercomputing environment.

It was already known that students would face some difficulty in porting their PVM codes to the T3E. As discussed in previous issues of the T3E Newsletter, the "network" version of PVM (that which runs in heterogeneous mode) has some significant differences from the Cray MPP PVM (that which runs on the T3D/E). Cray MPP PVM supports only SPMD programs, does not support dynamic task allocation, and has several Cray-specific PVM function calls that make Cray MPP programming a little easier. Although the network and Cray MPP programming models differ, the knowledgeable programmer can write code that runs with both types of PVM. Students were exposed to these issues, and given examples of their resolution, so actual porting of their codes wasn't too difficult.
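
For instance, one common accommodation is to compile out the task-spawning step on the T3E, where every PE starts automatically, and keep it for network PVM. The sketch below is illustrative only, not code from the course: the program name "spmd", the 32-task group size, and the assumption that the pre-processor defines CRAY on the T3E (it could just as well be supplied with -DCRAY) are all invented for this example.

      program spmd
      implicit none
      include 'fpvm3.h'
      integer mytid, ptid, tids(31), info
      call pvmfmytid(mytid)
#ifndef CRAY
c     Network PVM only: the first task spawns 31 more copies of
c     itself.  Under Cray MPP PVM there is no dynamic spawning;
c     all PEs start together, so this block is compiled out.
      call pvmfparent(ptid)
      if (ptid .lt. 0) then
c        A negative ptid (PvmNoParent) means no task spawned us,
c        so we are the first copy.  The flag 0 is PvmTaskDefault.
         call pvmfspawn('spmd', 0, '*', 31, tids, info)
      endif
#endif
c     ... SPMD work, identical under both flavors of PVM ...
      call pvmfexit(info)
      end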

The porting of MPI and HPF programs was quite simple for the students, since these programs assume SPMD mode and are supported almost identically on Cray MPP and cluster architectures. The significant problems faced by students were those unrelated to parallel programming tools. Experience has taught us that Cray MPP compilers are less forgiving of programmer mistakes than the compilers on many Unix workstations. For example, many Unix workstation compilers automatically initialize variables to zero. One can argue that the Cray MPP forces "tighter" programming, and that this is not necessarily a bad thing.
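
As a contrived sketch of the kind of latent bug involved (illustrative, not taken from the students' code), the accumulator below is never initialized, so the program only appears to work where the compiler happens to zero memory:

      program accum
      implicit none
      integer i
      real total
c     BUG: total is never initialized.  A workstation compiler that
c     zeroes memory makes this appear to work; on the Cray MPP,
c     total starts with whatever value happens to be in that word.
      do i = 1, 10
         total = total + real(i)
      end do
      print*, 'total =', total
      end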

All in all, we were quite happy with the ease with which students were able to migrate to the T3E from a Linux cluster. This exercise further supported the view that clusters are effective platforms for learning and experimenting with issues in parallel computing. Once users have learned to write parallel code and have had an opportunity to address performance issues, they can be in a position to efficiently use the T3E for their high performance computing needs.

Review of New Beowulf Book

How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters. Thomas L. Sterling, John Salmon, Donald J. Becker, Daniel F. Savarese. MIT Press. ISBN 0-262-69218-X

Many will remember that one of the authors, Thomas Sterling of JPL/Caltech, presented a seminar on Beowulf at ARSC on March 5th and 6th, 1998. This book is a concentrated summary, written from scratch, of the information presented at various seminars and tutorials undertaken by the authors. It promises to tell the reader how to make a supercomputer out of PC components for a total cost of under $40k.

The reader is led step-by-step through the processes needed, beginning with a review of Beowulf principles, history, and purpose. The book then covers Linux, choice of processing nodes, networking hardware and software, plus some basic parallel programming principles. The complexity of system management is also discussed to some extent.

Space is also devoted to star applications and to dreadful problems the user may encounter. A very useful closing chapter looks at emerging technology and how Beowulf is likely to evolve in the future.

I recommend this as a good first read for anybody considering putting together a system, either from new nodes or from "remainders."                                           --Guy Robinson

yukon:/usr/local/bin/grmap

ARSC users can now access "grmap" from /usr/local/bin/. This script reformats the output of the "grmview -ld" command into a friendlier display. It shows how well your job is using memory, where it's running, and on how many PEs, among other things. For a longer description of grmap, see:

/arsc/support/news/t3enews/t3enews152/index.xml

Here's an example:


  yukon$ /usr/local/bin/grmap

       UserName Size BasePE      Mem avg:max  Command       
       ======== ==== =========== ============ ==============
   A - sven     48   0   [0x0  ]     126:141  run
   B - ollie    2    48  [0x30 ]     170:170  job   
   C - gunter   26   50  [0x32 ]      62:64   prog
   D - hunter   32   76  [0x4c ]     214:216  proj                          
   E - olga     64   108 [0x6c ]      27:28   junk
   F - hilda    11   172 [0xac ]       4:4    batch 
   G - lars     70   186 [0xba ]      85:146  script                        
  --------------------------------------------------------
   0   <A..A..A..A..A..A..A..A. .A..A..A..A..A..A..A..A.
   16  .A..A..A..A..A..A..A..A. .A..A..A..A..A..A..A..A.
   32  .A..A..A..A..A..A..A..A. .A..A..A..A..A..A..A..A>
   48  <B..B><C..C..C..C..C..C. .C..C..C..C..C..C..C..C.
   64  .C..C..C..C..C..C..C..C. .C..C..C..C><D..D..D..D.
   80  .D..D..D..D..D..D..D..D. .D..D..D..D..D..D..D..D.
   96  .D..D..D..D..D..D..D..D. .D..D..D..D><E..E..E..E.
   112 .E..E..E..E..E..E..E..E. .E..E..E..E..E..E..E..E.
   128 .E..E..E..E..E..E..E..E. .E..E..E..E..E..E..E..E.
   144 .E..E..E..E..E..E..E..E. .E..E..E..E..E..E..E..E.
   160 .E..E..E..E..E..E..E..E. .E..E..E..E><F..F..F..F.
   176 .F..F..F..F..F..F..F>< . . .. ><G..G..G..G..G..G.
   192 .G..G..G..G..G..G..G..G. .G..G..G..G..G..G..G..G.
   208 .G..G..G..G..G..G..G..G. .G..G..G..G..G..G..G..G.
   224 .G..G..G..G..G..G..G..G. .G..G..G..G..G..G..G..G.
   240 .G..G..G..G..G..G..G..G. .G..G..G..G..G..G..G..G>
   256 < .. .. .. >
  --------------------------------------------------------
  APP-PEs tot:used:free:down  MEM tot:used:free  JOBS run:blk
          260: 253:7   :0          64:  22:41           7:0   

Quick-Tip Q & A



A:{{ Is there a trick in vi to retrieve a "!" or ":" command, edit it,
     and re-execute it?  These shell and regular expression commands are
     difficult to enter correctly, and once executed, disappear! }}


    VI's "@" command lets you execute a "named buffer." You can use
    this as an editing facility for ":" and "!" commands.  For
    instance, to edit and execute the command:

      :%s/^  \*\* /\<H1\>/c

    you would type it as text into your document; position the cursor
    on the line; type,

      "nyy

    to yank the command into buffer "n";  and then type

      @n

    to execute it.  If you were unhappy with the result, you'd hit "u"
    to undo the effect of the command, then edit the command, yank it
    again, and re-execute it.  After success, you'd delete the text of
    the command from your document.

    To simplify the process, you can define the following two maps in
    your ~/.exrc file:

      map ^N "nyy
      map ^M @n

    (To enter a control character into the .exrc file, hit CNTL-V
    first.  For instance, CNTL-V CNTL-N enters a single ^N code into
    the file.)

    A flaw with this method is that the command becomes a temporary
    part of your document, and can be acted upon by the command itself
    when you execute it.  For instance, the command:

      !1000j sort

    rearranges 1000 lines of text. If you weren't paying attention,
    those 1000 lines might include the string, "!1000j sort", which, on
    being "sorted" might move off-screen and become a permanent part of
    your document.




Q: I've been handed a large Fortran 77 program, and need to get it
   going.  F90, however, is finicky about type consistency and won't
   compile it.  This example shows one of my problems:


       program test
       implicit none
       integer*4  int4
       integer*8  int8
       integer*8  res8

       int4 = 4
       int8 = 5
       res8 = mod (int8, int4)

       print*, res8

       end

   Here's the error printout from the f90 compiler:

      yukon$ f90 test.f             

               res8 = mod (int8, int4)
                      ^                
        cf90-774 f90: ERROR TEST, File = test.f, Line = 9, Column = 15 
          Improper intrinsic argument type or inconsistent types.

        cf90: Cray CF90 Version 3.2.0.1 (f42p14m32015a47) Fri Jul 16, 1999  09:51:24
        cf90: COMPILE TIME 0.096635 SECONDS
        cf90: MAXIMUM FIELD LENGTH 1374992 DECIMAL WORDS
        cf90: 14 SOURCE LINES
        cf90: 1 ERRORS, 0 WARNINGS, 0 OTHER MESSAGES, 0 ANSI
        cf90: CODE: 0 WORDS, DATA: 0 WORDS
        cf90: "explain cf90-message number" gives more information about each message

      yukon$ explain cf90-774

        Error : Improper intrinsic argument type or inconsistent types.
        The type and/or the kind type of an actual argument is not valid.

   Any suggestions?

[ Answers, questions, and tips graciously accepted. ]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.