ARSC HPC Users' Newsletter 232, November 6, 2001

SC2001

This special issue of the newsletter attempts to bring readers who are not attending SC2001 into the conference, and to offer advance orientation to those who are. If you're attending, be sure to drop by the ARSC booth and say "hi" to the editors. For more, see:

http://www.sc2001.org/

Booths you might want to visit:

ARSC:
  • R101     Arctic Region Supercomputing Center
ARSC Partners:
  • R0127    Albuquerque High Performance Computing Center
  • R309     HPCMO and ERDC (at the HPCMO booth)
ARSC Projects:
  • R547     UPC (at the George Washington University booth)
Tutorials:

Presented by Colleagues at ARSC Partner, ERDC:

Mixed-Mode Programming Introduction, presented by Daniel Duffy and Mark R. Fahey. Sunday AM.

BOFs:

Is US climate modeling in trouble?

Thursday (exact time TBD). Guy Robinson will contribute to this BOF, along with others.

Panels:

The Access Grid: Where the Vision Meets Reality

Thursday, 10:30am-noon. Don Morton will be on the panel.

Supercomputing's Best and Worst Ideas

Friday, 10:30am-noon. Guy Robinson will be one of the panelists.

SCglobal events on the Access Grid (View at UAF, SC2001, or any AG node):

Collaborative Course in Computational Parallel Programming

Tuesday 13 November at 1730 MST (1530 AKST).

ARSC is part of a collaborative HPC course in computational parallel programming with the University of Montana and the Albuquerque High Performance Computing Center at the University of New Mexico.

Students are now halfway through this course (at UAF, it is the 3-credit course, PHYS693). It is the first collaborative course taught over the AG.

The three sites will be linked on the AG with SC2001 in Denver. ARSC participants will include Frank Williams, Guy Robinson, and Roger Edberg, in Denver, and Jim Long and other PHYS693 students, in Fairbanks.

Solar Terrestrial Physics

Thursday 15 November at 1030 MST (0830 AKST).

Sergei Maurits of ARSC is the first speaker in this session on the AG. The session will be led by a group at the University of Manchester (UK).

Fairbanksans staying home during SC2001 are welcome to attend these sessions over the Access Grid in UAF's Butrovich Building, room 109.

See the SCGLOBAL web site for more:

http://www-fp.mcs.anl.gov/scglobal/

Next Newsletter

Next issue, #233, November 30th.

Happy Thanksgiving everyone!

Quick-Tip Q & A



A:[[ I'm writing a few specialized MPI functions for eventual distribution 
  [[ as a small library.  How can I keep the underlying MPI sends/recvs
  [[ from conflicting with MPI calls elsewhere in the calling program?
  [[ Any other gotchas to worry about?
  [[
  [[ I'd appreciate it if you could just point me in the right
  [[ direction...


# 
# Many thanks to our two respondents:
# 

# 
# From John B. Pormann
# 
I think the quickest approach is to have the caller pass in an initial
communicator, then use MPI_Comm_dup to duplicate it:


        #include <mpi.h>

        int foo( MPI_Comm initcomm, int arg1, int arg2 ) {
                MPI_Comm newcomm;
                int ierr;

                /* dup the communicator so we isolate */
                /* all library messages               */
                ierr = MPI_Comm_dup( initcomm, &newcomm );
                if( ierr != MPI_SUCCESS ) {
                        return( ierr );
                }

                /* now use newcomm as your communicator */
                /* MPI_Isend( ..., newcomm, &sendreq ); */
                /* MPI_Recv( ..., newcomm, &mpistat );  */

                /* now free it up, since MPI_Comm's are */
                /* a 'valuable commodity'               */
                ierr = MPI_Comm_free( &newcomm );
                if( ierr != MPI_SUCCESS ) {
                        return( ierr );
                }

                return( 0 );
        }

You could just do a MPI_Comm_dup of MPI_COMM_WORLD, but that would
assume that your library must *always* be called by all tasks.  By using
an initial communicator argument, you could allow the user to have only
half the nodes call your library -- assuming they set up the proper
MPI_Group and MPI_Comm objects (note that you don't need to pass the
MPI_Group as an argument, since you can always get it from the
MPI_Comm_group function).
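
# 
# Editors' illustration (not part of either reply): a minimal,
# hypothetical caller-side sketch of the idea above.  It uses
# MPI_Comm_group, MPI_Group_incl, and MPI_Comm_create to build a
# communicator containing only the even-numbered tasks of
# MPI_COMM_WORLD, then hands that communicator to the example
# routine foo() shown above.  Names such as halfcomm are our own.
# 

        #include <stdlib.h>
        #include <mpi.h>

        int foo( MPI_Comm initcomm, int arg1, int arg2 );   /* see above */

        int main( int argc, char **argv ) {
                int i, size, nhalf, *ranks;
                MPI_Group worldgrp, halfgrp;
                MPI_Comm halfcomm;

                MPI_Init( &argc, &argv );
                MPI_Comm_size( MPI_COMM_WORLD, &size );

                /* list the even-numbered ranks and form a group */
                nhalf = (size + 1) / 2;
                ranks = (int *) malloc( nhalf * sizeof(int) );
                for( i = 0; i < nhalf; i++ ) {
                        ranks[i] = 2 * i;
                }
                MPI_Comm_group( MPI_COMM_WORLD, &worldgrp );
                MPI_Group_incl( worldgrp, nhalf, ranks, &halfgrp );

                /* collective over MPI_COMM_WORLD; tasks outside the */
                /* group receive halfcomm == MPI_COMM_NULL           */
                MPI_Comm_create( MPI_COMM_WORLD, halfgrp, &halfcomm );

                if( halfcomm != MPI_COMM_NULL ) {
                        /* only the even-numbered tasks call the library */
                        foo( halfcomm, 1, 2 );
                        MPI_Comm_free( &halfcomm );
                }

                MPI_Group_free( &halfgrp );
                MPI_Group_free( &worldgrp );
                free( ranks );
                MPI_Finalize();
                return( 0 );
        }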

# 
# From Brad Chamberlain
# 
I believe that the right direction is to create your own communicators
so that when performing communication you can use a reference to your
own communicator structure rather than MPI_COMM_WORLD.  This should keep
your messages "in their own world" and prevent them from conflicting
with other MPI messages being sent around.

# 
# Editor's note: 
# 
# Be aware that duplicating communicators can be expensive, as it
# entails communication among the processes involved.
# 
# This is not to discourage you from creating your own libraries. 
# 
# On the contrary!  If you create any MPI add-ons or libraries, we'd be
# disappointed if you didn't tell us about it.
# 




Q: I'm having trouble linking my C program on the T3E. It uses Cray FFT
   libraries which, in turn, use BLACS grid routines.  I've tried
   linking with libsci, but it still can't find BLACS_GRIDINIT.  The
   Fortran compiling system seems to take care of this by magic.  How
   can I get my C program to work?

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions:
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.