ARSC HPC Users' Newsletter 201, August 4, 2000

Newsletter News

With issue #115 (and the arrival of yukon in 1997), the T3D Newsletter evolved into the T3E Newsletter. Today we have a hardware announcement and are releasing the 201st issue, both good reasons to upgrade to the "HPC Users' Newsletter."

The focus of the newsletter will always be on user issues (see the "mission statement" at the top of this newsletter), and we'll keep it informal.

For now, the platform-specific emphasis will remain the T3E, but we'll be including J90 and O3000 articles as well. By necessity, our focus will be on ARSC platforms, but we value our non-ARSC readers highly!

The newsletter is driven by readership: if you have anything to report, send us items, ideas, URLs, and (of course) "Quick-Tips." We don't promise to use everything, but we'll review every submission and suggestion carefully. Details of papers/events/conferences you are involved with, or simply find useful in your work, are always welcome.

We've made another change (besides the obvious formatting tweaks).

Finally entering the '90s, we've adopted a mailing list manager (majordomo) to handle subscriptions and the bi-weekly mailing. You shouldn't notice any changes (except in the mail headers), but please bear with us as we work out the kinks, and let us know of problems.
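
(For readers new to majordomo: commands go in the body of an ordinary e-mail message sent to the list manager's address. A subscription request usually looks like the sketch below; the address and list name shown are placeholders, so contact the editors for the real ones.)

  To:   majordomo@some.host

  subscribe <list-name>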

Origin 3000: Exciting Times Ahead

ARSC looks forward to exciting times with the announcement of a partnership with SGI and ERDC DSRC. Full details at:

http://www.arsc.edu/pubs/bulletins/Partnership.shtml

Plans are also being made for the next ARSC SGI Users Group meeting. UAF folks with SGI-specific topics that they'd like to discuss with other campus users and SGI people should contact Liam Forbes ( lforbes@arsc.edu ) and/or Bob Huebert ( huebert@arsc.edu ). The O3000 is one proposed focus for the next meeting. Information and SGI literature will be available.

Unified Parallel C (UPC)

[[ Thanks to Mohamed Bennani and Tarek El Ghazawi of George Mason University (GMU) for contributing this article. Mohamed spent a couple of weeks at ARSC earlier this summer, testing the UPC compiler installed on yukon.

This Tuesday, Tarek gave a talk here on UPC, the Computational Science Department, and other activities at GMU. Slides from his talk are available by request from Guy Robinson, robinson@arsc.edu . ]]

Unified Parallel C (UPC) is a parallel extension of ANSI C based on the distributed shared memory (DSM) programming model. Through its shared data declarations, UPC lets programmers specify how shared array data are decomposed over threads. Furthermore, UPC allows programmers to exploit memory locality and minimize remote memory accesses by using work-sharing constructs that assign each thread the local part of the shared data.
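
As a quick illustration, consider the fragment below. This is a minimal sketch of our own (the array names and sizes are invented), but the syntax matches the full example at the end of this article: the layout qualifier in a shared declaration sets the block size used to deal array elements out to threads, and the affinity expression in upc_forall runs each iteration on the thread that owns the data it touches.

  #include <upc.h>

  #define N 16

  // Block size 4: with 4 threads, thread 0 owns x[0..3],
  // thread 1 owns x[4..7], and so on (blocks wrap around
  // if there are more blocks than threads).
  shared [4] int x[N];

  // Default block size is 1: elements of y are dealt out
  // round-robin, one element per thread.
  shared int y[N];

  int main(void) {
      int i;

      // The affinity expression &x[i] runs iteration i on the
      // thread that owns x[i], so every access here is local.
      upc_forall (i = 0; i < N; i++; &x[i])
          x[i] = MYTHREAD;

      return 0;
  }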

UPC also lets the programmer intervene in the memory consistency model by choosing the mode under which shared memory accesses happen. This can be done for a single statement, a group of statements, or the whole program, so the programmer decides which parts of the code can benefit from access-reordering optimizations. Thus, UPC maintains the heritage of C in keeping programmers close to, and in control of, the hardware.
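
The classic use is a producer-consumer flag. The fragment below is a sketch of our own; the strict qualifier and the upc_relaxed.h header follow the draft UPC specification, and the variable names are invented:

  #include <stdio.h>
  #include <upc_relaxed.h>   // relaxed consistency is the file default

  strict shared int flag;    // accesses to flag are always ordered
  shared int data;           // accesses to data may be reordered

  int main(void) {
      if (MYTHREAD == 0)
          flag = 0;
      upc_barrier();

      if (MYTHREAD == 0) {
          data = 42;   // relaxed write
          flag = 1;    // strict write: completes only after the
                       // write to data, so it signals "ready"
      } else if (MYTHREAD == 1) {
          while (flag == 0)   // strict read: spin until signaled
              ;
          printf("thread 1 sees data = %d\n", data);  // prints 42
      }
      return 0;
  }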

Among the other advanced features are non-blocking barriers, shared pointers, and shared memory allocation.
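
The fragment below sketches all three together. Again, this is our own example: the upc_all_alloc call and the upc_notify/upc_wait split-phase barrier statements follow the draft UPC library, and the sizes are arbitrary.

  #include <stdio.h>
  #include <upc.h>

  int main(void) {
      shared int *p;   // private pointer to shared data
      int i;

      // Collective allocation: every thread makes the same call
      // and gets a pointer to the same THREADS blocks of 10 ints,
      // distributed one block per thread.
      p = (shared int *) upc_all_alloc(THREADS, 10 * sizeof(int));

      // Each thread fills in only the elements it owns.
      upc_forall (i = 0; i < 10 * THREADS; i++; &p[i])
          p[i] = i;

      // Non-blocking (split-phase) barrier: announce arrival with
      // upc_notify, overlap independent local work, then block in
      // upc_wait until all threads have arrived.
      upc_notify;
      // ... purely local work could go here ...
      upc_wait;

      if (MYTHREAD == 0)
          printf("p[0] = %d\n", p[0]);

      return 0;
  }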

In spite of its power, UPC maintains the simplicity, conciseness, and efficiency of C. A matrix multiplication example, C = A x B, is given at the end of this article. It shows how simple and C-like UPC is, and it demonstrates how memory locality can be exploited. In the example, the A and C matrices are block-distributed row-wise, while the B matrix is distributed by columns. The upc_forall construct specifies that each iteration is executed by the thread that holds the corresponding elements of A locally.

The UPC effort is led by the IDA Center for Computing Sciences in collaboration with the University of California at Berkeley, Lawrence Livermore National Lab, George Mason University, and the Arctic Region Supercomputing Center. Vendors have shown significant interest, and many played an active part in the first UPC developers' workshop, held at IDA last May. Representatives from Compaq, Cray, Sun, SGI, and HP were present, and many have UPC in their plans. There is currently an open-source UPC compiler for the Cray T3E/T3D, and Compaq has recently announced its UPC compiler for Alpha clusters. For more information on UPC or to join the discussion lists, visit:

http://hpc.gmu.edu/~upc

A simple matrix multiplication example:

// UPC matrix multiplication example
// a(N,P) is multiplied by b(P,M); the result is stored in c(N,M).
// a is distributed by rows while b is distributed by columns.
// The upc_forall construct is used to share the work.

#include <stdio.h>
#include <upc.h>
#include <upc_strict.h>

#define N 4
#define P 4
#define M 4

shared [N*P/THREADS] int a[N][P], c[N][M];
shared int b[P][M];

int main(void) {
    int i, j, l;

    upc_barrier();

    // Thread 0 initializes the arrays.
    if (MYTHREAD == 0) {
        for (i = 0; i < N; i++)
            for (j = 0; j < P; j++)
                a[i][j] = i * j;

        for (i = 0; i < P; i++)
            for (j = 0; j < M; j++)
                b[i][j] = i * N + j;
    }

    upc_barrier();

    // All threads perform the matrix multiplication.  The affinity
    // expression &a[i][0] specifies that iteration i is executed by
    // the thread that has element a[i][0] (and hence row i of a)
    // locally.
    upc_forall (i = 0; i < N; i++; &a[i][0]) {
        for (j = 0; j < M; j++) {
            c[i][j] = 0;
            for (l = 0; l < P; l++)
                c[i][j] += a[i][l] * b[l][j];
        }
    }

    // Wait for all threads to finish before printing.
    upc_barrier();

    // Thread 0 displays the results.
    if (MYTHREAD == 0) {
        printf("\n\n");
        for (i = 0; i < N; i++)
            for (j = 0; j < M; j++)
                printf("c[%d][%d] = %d\n", i, j, c[i][j]);
    }

    return 0;
}

To compile this program for 8 threads, type:

upc -O2 -fthreads-8 multiplication.c -o multiplication

To run it, type:

./multiplication

For more information on UPC, please visit the UPC home page:

http://hpc.gmu.edu/~upc

TCP/IP Tuning, Multicast and IP-V6 Training

Training opportunity hosted by ARSC:

NLANR (National Laboratory for Applied Network Research) Onsite Training in TCP/IP Tuning, Multicast and IP-V6


When:       1:00 September 13 through noon September 15, 2000
Where:      Room 109, Butrovich Building, University of Alaska Fairbanks,
            Fairbanks, Alaska
Speakers:   Andrew Adams  (NLANR)
            Michael Lambert (NLANR)
            Phil Dykstra (WareOnEarth Communications)
Registration:
            http://www.arsc.edu/pubs/NLANR/Registration.html

ARSC Summer Intern Presentations, Aug. 9

ARSC's summer interns, from The University of Texas at El Paso and the University of Alaska Anchorage, will present their research next Wednesday.

Time and date:  Wednesday, August 9, 2pm

Location:       109 Butrovich

Titles:
  o Mapping Footprints of Slotted Waveguide Antennas
  o Adventures in Fortran and Fairbanks
  o Rectenna Model Study
  o Digitizing Museum Artifacts
  o Adding Menu Functionality to the "Body Language User Interface" 
Refreshments will be served. For more information on these presentations, and on ARSC internships in general, contact the Program Manager, Betty Studebaker, at studebak@arsc.edu or 907-474-6307.

Quick-Tip Q & A



A:[[ I think someone has been spying on me, and they may have seen my
  [[ SecurID Card PIN.  And besides, I'm tired of those same 4 boring
  [[ digits.  Remind me, how do I create a new PIN?
  
 
  Mere mortals can't change their PINs.  You must call your friendly help
  desk, with card in hand.
  
  The consultant will ask for whatever 6-digit code is currently showing
  on your card, you'll read it off, and they'll enter it into the
  SecurID server software.  90% of the time, the 1st 4 digits of the
  next code will be your new PIN.  The consultant will tell you to
  memorize this number when it appears on your card (but will never
  actually see it).  

  10% of the time, the next code will start with a 0, which makes the
  PIN unacceptable.  You and the consultant can discuss fishing while
  you go through the process again (it only takes a minute or so...).

  Hopefully, you'll like your new PIN.  If not, you can always try
  again.




Q: When I try to compile my old reliable Fortran 90 code on my SGI
   workstation I get a message like:

      sgi%  f90 -o test test.f
        "test.f": Warning: Stack frame size (157784752) larger than 
        system limit (67108864)
    
  and the executable produced is unrunnable.

  What's wrong, and what can I do to get my code to compile?

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.