ARSC HPC Users' Newsletter 218, April 20, 2001



SV1e Upgrade

On April 11, ARSC upgraded chilkoot with faster processors. Faster memory will be added in June, completing chilkoot's upgrade process.

The new processors have improved caching, and a 500MHz clock, 66% faster than the 300MHz clock in the SV1. We will present data on user codes later (and ask users to share their performance numbers and other experiences with us). It's also interesting, of course, to discuss peak performance.

As in the SV1, the SV1e processor contains two vector functional units, each capable of producing two results per clock cycle: one floating-point add and one multiply. Thus, theoretical peak is 4 times the clock rate, or 2 GFLOPS.
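For readers who like to check the arithmetic, here is a quick sketch (in Python, purely illustrative, not part of any ARSC code) of where the 2 GFLOPS figure comes from:

```python
# Peak-rate arithmetic for the SV1e, as described above:
# 2 vector functional units, each producing 2 results
# (1 floating-point add + 1 multiply) per clock cycle.
CLOCK_MHZ = 500          # SV1e clock rate
PIPES = 2                # vector functional units per processor
RESULTS_PER_PIPE = 2     # one add and one multiply per cycle

peak_mflops = CLOCK_MHZ * PIPES * RESULTS_PER_PIPE
print(peak_mflops)       # 2000 MFLOPS, i.e., 2 GFLOPS
```

The same arithmetic with the SV1's 300MHz clock gives its peak of 1200 MFLOPS.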

Guy's code, which won him a beer in our "GFLOPS Contest" (see issue 213, /arsc/support/news/hpcnews/hpcnews213/index.xml ), shows the processor speedup nicely.

The following table gives MFLOPS achieved by the "gflops" code, as measured on chilkoot, on a dedicated system, by hpm. Results from executables compiled under Programming Environment 3.4 are in the columns labeled "PE3.4". Results from PE3.5 are labeled accordingly.

The rows show the number of CPUs on which the executable was run. An MSP (multi-streaming processor) uses 4 CPUs. Note that the MFLOPS values reported by hpm are per processor. Thus, the 1-MSP and 4-CPU values should be multiplied by 4 to get the actual MFLOPS achieved, so, for instance, the 1-MSP run, under PE3.5, on the SV1e, actually ran at 6484 MFLOPS.
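As an illustration (a Python sketch, not part of the "gflops" code itself), the conversion from hpm's per-processor figure to the aggregate rate is simply:

```python
# hpm reports MFLOPS per processor; an MSP uses 4 CPUs,
# so the aggregate rate is 4x the reported per-processor figure.
per_cpu_mflops = 1621            # 1-MSP, PE3.5, SV1e value from the table
cpus_per_msp = 4

aggregate_mflops = per_cpu_mflops * cpus_per_msp
print(aggregate_mflops)          # 6484 MFLOPS
```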

The code was compiled with the default level of optimization (-O2) for the 1-CPU and 4-CPU runs and with "-Ostream3" for the 1-MSP runs.

                 MFLOPS Achieved, per processor, by 
                    "gflops" Code on Chilkoot

             SV1 Processors             SV1e Processors
             --------------             ---------------
             PE3.4      PE3.5           PE3.4     PE3.5
             =====      =====           =====     =====
  1-CPU      1043       1066            1750       1785            
  1-MSP      856        1011            1344       1621            
  4-CPU      934        950             1449       1434            

Two observations:

First, the 1-CPU, PE3.5 values are almost 90% of peak, on either processor. It's nice to have proof that processors can indeed run almost as fast as claimed (although real user codes are generally in the 15-40% range). Second, comparing the 1-MSP values against the 4-CPU values shows that under PE3.5, multi-streaming performance (for this code) has surpassed multi-tasking performance.
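The "almost 90%" figure follows directly from the table and the peak rates; here is a quick illustrative check in Python (peak values computed as 4 times the clock rate, per the discussion above):

```python
# Fraction of theoretical peak reached by the 1-CPU, PE3.5 runs.
# Peak = 2 pipes x 2 results/cycle x clock rate (MHz).
peak_mflops = {"SV1": 4 * 300, "SV1e": 4 * 500}    # 1200 and 2000
achieved_mflops = {"SV1": 1066, "SV1e": 1785}      # 1-CPU, PE3.5 table values

frac_of_peak = {p: achieved_mflops[p] / peak_mflops[p] for p in peak_mflops}
for proc, frac in frac_of_peak.items():
    print(f"{proc}: {frac:.0%} of peak")           # 89% for both processors
```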


You may have seen this elsewhere, but here's the joint Cray/ARSC press release on the SV1ex:

SEATTLE--April 17, 2001-- Global supercomputer leader Cray Inc. (Nasdaq: CRAY) today announced the installation of the first Cray SV1ex[tm] supercomputer at the Arctic Region Supercomputing Center (ARSC), University of Alaska Fairbanks.

The system, part of a $3 million contract announced last September, replaces ARSC's Cray SV1[tm] supercomputer. Additional enhancements to the system memory are scheduled for later this quarter.

"The Cray SV1ex is part of ARSC's ongoing effort to provide top high-performance computing resources to its users," said ARSC Director Dr. Frank Williams. "We're proud to have this opportunity to share Cray's excitement in this cutting edge technology and are thrilled to be the first to offer this improvement to our scientists and researchers."

ARSC's government and academic researchers will use the Cray SV1ex supercomputer to study atmospheric, environmental and geophysical problems unique to the Arctic, polar regions and higher latitudes. The system's high-end performance, improved clock speed and extremely high-speed cache memory will be especially beneficial to ARSC users running applications in ocean modeling, climatology and space physics.

A research team led by Uma Bhatt, of the International Arctic Research Center (IARC) Frontier Research System for Global Change, will use ARSC's Cray SV1ex supercomputer to apply a global climate model that simulates the atmospheric response to changes in sea ice over the Arctic Ocean. These simulations will help the team to understand how shrinking sea ice affects the atmosphere. While researchers traditionally study the tropics for clues to the origin of global weather patterns, this project is one of the first to investigate how changes in the Arctic may influence world climate.

"It's a pleasure to ship our first SV1ex to ARSC with significantly more power and better price/performance than previously indicated," said Cray Inc. Chairman and CEO Jim Rottsolk. "And it's a privilege to work alongside the user base of this long-time customer, providing the computing platforms for their breakthrough science."

"We're excited about the SV1ex's increased performance and appreciate Cray's efforts to create what we are confident will be an excellent product for our users," said Virginia Bedford, director of technical services at ARSC.

The Cray SV1ex enhanced product line is the technological forerunner to the Cray SV2[tm] supercomputer due out in the second half of 2002. It is a binary-compatible upgrade path for Cray SV1 and Cray J90[tm] customers. Each Cray SV1ex processor has a peak performance of two billion calculations per second (gigaflops).

About ARSC

ARSC supports computational research in science and engineering with an emphasis on high latitudes and the Arctic. The center provides high performance computational, visualization, networking and data storage resources for researchers within the Department of Defense, the University of Alaska Fairbanks, other academic institutions and government agencies.

About Cray Inc.

Cray Inc. designs, builds and sells high-performance MPP, vector processor and general-purpose parallel computer systems. The company has leading edge technology, multiple product platforms, nearly 900 employees, a worldwide installed base of supercomputer systems, major manufacturing and service capabilities and extensive global customer relationships. Cray believes its Multithreaded Architecture and Cray T3E[tm], Cray SuperCluster and Cray SV2 systems together represent the future of supercomputing. Visit the Cray website for more information on the company.

Safe Harbor Statement

This press release contains forward-looking statements. There are certain factors that could cause Cray's execution plans to differ materially from those anticipated by the statements above. Among such risk factors are expected delivery and acceptance times, and timely availability of commercially acceptable components from third party suppliers. For a discussion of such risks, and other risks that could affect Cray's future performance, please see "Factors That Could Affect Future Results" in Cray Inc.'s annual report on Form 10-K.

Cray and SuperCluster are registered trademarks, and Cray SV1ex, Cray SV1, Cray SV2, Cray T3E and Cray J90 are trademarks, of Cray Inc. All other trademarks are the property of their respective owners.


Talk on Computational Biology

Monday, April 23, 3 p.m., Room 109, Butrovich.

Systems Biology: An Overview with Opportunities for Collaboration

George Lake, Jimmy Eng, and Andrew Markiel, Institute of Systems Biology

We'll present an overview of Systems Biology and the Institute of Systems Biology, touching briefly on some of the many opportunities for collaboration (Lake). We will then give a more detailed discussion of some technical and computational aspects of the new field of Proteomics (Eng). Finally, we'll discuss some visualization work, showing some astrophysical results from ARSC and other supercomputers together with prospects/needs for systems biology visualization (Markiel).

George Lake is a recent retread to Systems Biology from Astrophysics. He is Project Scientist for NASA's HPCC project in Earth and Space Science. He was the first power user at ARSC, and his group has been one of the largest users of ARSC resources over the years. Jimmy Eng has been a leader in analysis software for proteomics, having written SEQUEST and the next-generation package COMET. Andrew Markiel recently moved from the UW Astronomy department to ISB; he has been the lead visualization developer at both places.


New PEMCS Paper: OpenMP and MPI on O2000

A new paper has just been published under the PEMCS banner:

"Scalability and Performance of OpenMP and MPI on a 128 Processor SGI Origin 2000", Glenn R. Luecke and Wei-Hua Lin, Iowa State University, Ames, Iowa, USA

It is available from PEMCS in PDF or HTML.

PEMCS welcomes submissions in all areas related to performance modeling and/or the evaluation of computer systems.


Quick-Tip Q & A

A:[[ Help!  I've got myself stuck on 4 (FOUR!) list servers, and can't
  [[ unsubscribe! It was 3, but last month I joined a support group for
  [[ people addicted to listservers!  Ahhhh!  That made 4.
  [[ Problem seems to be I'm subscribed as (making something up here),
  [[   ""
  [[ but they've retired "my_old_host" and moved me to a new workstation
  [[ and now, when I send an "unsubscribe" message, I get a reply that,
  [[   ""    is not subscribed  
  [[ How can I get my name off these lists?

  First, ask the list manager to unsubscribe you. For the future, you
  might speak with your own sysadmin:  sendmail can be configured such
  that, regardless of which workstation you're on when you send mail, it
  appears to come from a consistent server address.

Q: My Fortran code reads from an unformatted input file (originally
   written on a Cray T3E). Now I'm attempting to run it on a Cray SV1
   and it stops with,

    "A READ operation tried to read past the end-of-file"

   What should I do next?  What's wrong?

[[ Answers, Questions, and Tips Graciously Accepted ]]

Current Editors:
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions: Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.