ARSC HPC Users' Newsletter 265, March 14, 2003



Major Systemwide Upgrades at ARSC

[ This is an ARSC press release, from Mar 11: ]

Supercomputing Center Celebrates 10th Anniversary With Significant Upgrades

Fairbanks, Alaska - The Arctic Region Supercomputing Center (ARSC) is gearing up for its 10th anniversary with significant upgrades to all of its systems. Using these new visualization, storage and computational resources, scientists will be able to move through a tsunami and experience its run-up, quickly store and retrieve massive amounts of data and run computations faster than ever before possible.

Visualization Upgrades In spring of 2003, ARSC staff will unveil the center's first fully immersive virtual environment, a Mechdyne MD Flex* system. The system's three wall displays and single floor display form a three-dimensional environment that allows scientists to explore data virtually, without the constraints of traditional two-dimensional displays. This ARSC Discovery Lab is currently being configured in the University of Alaska Fairbanks (UAF) Rasmuson Library, located on the main UAF campus. It will be available to researchers, scientists and students at the university to explore virtual environments and conduct research in areas like computer-user interfaces, tsunami inundation and the aurora borealis, as well as to work in other forms of creative expression such as three-dimensional animation. The Discovery Lab will supplement existing visualization systems at ARSC, including the virtual reality ImmersaDesk. UAF students will be able to explore the Discovery Lab through classes in virtual reality programming, which will be taught during upcoming semesters by ARSC/Computer Science faculty.

Storage Upgrades The center recently installed two Sun Fire* 6800 systems, each with eight 900 MHz UltraSparc 3* processors and 10.5 terabytes (TB) of raw disk. The Sun systems will provide ready access to data on ARSC supercomputers, visualization resources and workstations, and will ease the burden of working with the large data volumes that many computations produce.

These systems will be connected to the center's existing StorageTek* data silos, which have been upgraded to include 6 STK 9840B (20 gigabyte (GB) capacity, 19 megabyte (MB) per second transfer rate) and 4 STK 9940B (200 GB capacity, 30 MB per second transfer rate) tape drives for each Sun Fire*. The robotic tape silos store the massive volume of data generated by ARSC researchers. This more general solution will provide better access and retrieval and can be expanded to accommodate many different computational platforms and data sources.

"These drives will provide a significant boost in total transfer rate to and from the silos as well as a huge increase in capacity," said ARSC storage specialist Gene McGill.
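Taking the per-drive figures quoted above at face value, the aggregate peak transfer rate per Sun Fire can be sketched with a quick shell calculation (a back-of-the-envelope estimate only; sustained throughput will depend on compression, tape mounts and contention):

```shell
# Peak aggregate tape bandwidth per Sun Fire, from the quoted specs:
#   6 x STK 9840B at 19 MB/s  +  4 x STK 9940B at 30 MB/s
echo "$(( 6 * 19 + 4 * 30 )) MB/s peak"   # prints: 234 MB/s peak
```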

Computational Upgrades In May, the center will install the first phase of a 128-processor high-efficiency Cray X1(TM) parallel vector system, which will be available to users later in the year. The system will be delivered with 512 GB of memory with an option to upgrade when higher density memory becomes available. The X1 will boast a peak performance of 1.6 trillion calculations per second (teraflops) and will eventually replace the center's current Cray T3E* and Cray SV1ex* systems.

The center will also be adding an integrated architecture IBM supercomputer system, composed of two IBM eServer p690 systems, each with 32 processors and 256 GB of memory and IBM eServer p655 systems with two GB of memory per processor. This system will be expanded with additional next generation IBM eServer p655 systems and integrated with IBM's next generation clustering technology to bring the overall peak theoretical performance of the supercomputer to five-teraflops. The last phase of the installation will be in the fall of 2003.

The Cray and the IBM systems will each provide unique hardware and software capabilities that will allow ARSC users to tackle complex problems in a variety of fields. Currently, researchers use ARSC resources to solve problems in bioinformatics, global climate change, space physics, ocean circulation, galactic formation, computational fluid dynamics and arctic engineering.

"ARSC continues to move forward in providing its users with the best possible resources with which to explore data and create new understanding," said ARSC director Frank Williams. "These tools will allow our researchers to continue finding solutions to the important questions of today and the future. We are anxious to launch into the next 10 years of computational science."

About ARSC The Arctic Region Supercomputing Center, located on the campus of the University of Alaska Fairbanks, supports computational research in science and engineering with emphasis on high latitudes and the Arctic. The center provides high performance computational, visualization, networking and data storage resources for researchers within the University of Alaska, other academic institutions, the Department of Defense and other government agencies. ARSC is a DoD Supercomputing Resource Center in the Department of Defense's High Performance Computing Modernization Program. For additional information about ARSC, go to:


ARSC Viz Training Next Week

  Title:        Introduction to Scientific Visualization
  Date:         Weds., March 19th, 2pm
  Location:     Butrovich 109/007
  Instructors:  Roger Edberg, Sergei Maurits

This class will take students through a series of hands-on exercises designed to illustrate essential concepts and tasks in data preparation, visualization basics, and use of several key scientific visualization software packages on ARSC SGI systems. Students will be encouraged to continue working with ARSC staff on visualization projects of their own interest after completion of the class.

For more info, and to register:


Next ARSC Technology Watch

  Topic:       Q+A on storage issues 
  Date:        March 26th, 12 noon 
  Location:    Globe Room, UAF Geophysical Institute

Discussion on any issue is invited, but we'll start with a focus on storage: how to share data between systems, how to visualize data sets, how to keep track of all your results, etc.

For more info:



The Preliminary Program for CUG 2003 - "Flight to Insight" - is available from the Cray User Group web site. It can be viewed by selecting the upcoming conference link and then clicking on "Preliminary Program".

The Program Committee, with the assistance of Cray Inc., has assembled a diverse array of detail-packed presentations for the Columbus CUG. A few items included in this exciting program are:

  • a full day of tutorials,
  • exciting keynote speakers,
  • the latest information on Cray's new X1, and,
  • most importantly, opportunities to consult with your colleagues and to engage the insights and expertise of representatives from Cray Inc.

While you are at the conference web site, you may go ahead and register for the conference... remember, the deadline for the early bird rate is April 1st.



        WOMPAT 2003: Workshop on OpenMP Applications and Tools
        June 26 - 27, 2003
        Toronto, Ontario Canada

WOMPAT 2003 is the latest in a series of OpenMP-related workshops, which have included the annual offerings of WOMPAT, EWOMP and WOMPEI. Additional information about these previous workshops can be found through the cOMPunity web site.

The Workshop on OpenMP Applications and Tools (WOMPAT 2003) will serve as a forum for users and developers of OpenMP to meet, share ideas and experiences, and to discuss the latest developments in OpenMP and its applications. WOMPAT 2003 is co-sponsored by the OpenMP Architecture Review Board (ARB) and cOMPunity, a community of researchers and developers in academia and industry.

Updated info. can be found at:


Invitation to ARSC's 4th "Faculty Camp"

The Arctic Region Supercomputing Center (ARSC) is pleased to invite interested members of the UA community and their associates, as well as users from other research institutions, to attend the 2003 ARSC Faculty Camp. Building on the success of past Faculty Camps, this year's event aims to bring together a diverse group of researchers to learn about high performance computing and share research experiences. Past Faculty Camps have introduced many researchers to ARSC resources, and built strong links between attendees and ARSC staff. Dates for this year's camp are August 4th-22nd.

Faculty Camp will combine a series of seminars presented by ARSC staff, UAF/ARSC Joint Faculty, and current users with independent/self-guided study and access to ARSC specialists. Seminar topics will depend on the needs of the selected attendees but will include the basics of programming high performance computers, visualization software and skills, and collaborative environments.

Individuals or groups wishing to attend are invited to register their interest by April 1st and, by May 1st, to submit a short (~250 word) description of the skills they would like to develop or of the project they intend to undertake. Please submit text in ASCII or pdf format. This description is important, as it is the basis on which ARSC will organize events and speakers to match the attendees' needs. Successful applicants will be notified by May 9th. Those accepted for Faculty Camp are expected to participate full-time. UA researchers will be compensated at regular salary.

More details about ARSC Faculty Camp can be found at:

Applications and questions regarding Faculty Camp should be sent to Guy Robinson. Please feel free to circulate this announcement.


Quick-Tip Q & A

A:[[ Here's part of my PATH on ARSC's SX-6 frontend host, "rimegate": 
  [[ rimegate$ echo $PATH
  [[ /SX/opt/crosskit/inst/bin:/SX/opt/sxcc/inst/bin:/SX/opt/sxc++/inst/b
  [[ in:/SX/opt/sxf90/inst/bin:/usr/psuite:/SX/opt/mpi2sx/inst:/SX/opt/mp 
  [[ isx/inst:/usr/psuite:/SX/opt/vampirsx/inst/bin:/usr/local/krb5/bin:/ 
  [[ usr/sbin:etc...
  [[ Can I grep through the separate paths?  For instance, can I tell grep
  [[ to use colons rather than newlines for the delimiter?  I want to see
  [[ all the paths containing "SX", like this:
  [[   echo $PATH | grep SX

  #   Thanks to Rich Griswold:

  You can use perl to turn the colons into newlines before feeding them
  to grep.  The 'g' at the end of the regex tells perl to match as many
  times as possible.

    echo $PATH | perl -pe 's/:/\n/g' | grep SX

  #  Thought we'd give it a try... and sure enough:
  #  rimegate$ echo $PATH | perl -pe 's/:/\n/g' | grep SX
  #  /SX/opt/crosskit/inst/bin
  #  /SX/opt/sxcc/inst/bin
  #  /SX/opt/sxc++/inst/bin
  #  /SX/opt/sxf90/inst/bin
  #   etc...
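  For readers without perl handy, the same splitting can be done with
  standard Unix tools alone; tr is a common alternative (a sketch using
  a made-up PATH value for illustration, not rimegate's actual PATH):

```shell
# tr translates every ':' in the input into a newline, so grep can
# then filter the PATH entries one per line.
SAMPLE_PATH="/SX/opt/sxcc/inst/bin:/usr/psuite:/SX/opt/sxf90/inst/bin:/usr/sbin"
echo "$SAMPLE_PATH" | tr ':' '\n' | grep SX
# prints the two /SX entries, one per line
```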

Q: In vi, and perl for that matter, it's a nuisance to search and
   replace on strings which contain forward slashes. Is there an easier
   way to do what I want here?  Some Unix trick, maybe?


[[ Answers, Questions, and Tips Graciously Accepted ]]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions:
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.