ARSC HPC Users' Newsletter 296, July 30, 2004
- Tips for Better I/O Performance on the Cray X1: Part I
- Short FAQ on "find", "ls", and /tmp purging at ARSC
- SC04 Posters Submission Deadline Extended to August 2nd
- Dealing with X1 PE5.2 compiler error
- Faculty Camp Schedule -- Users Invited to Drop In
- Quick Tip
Tips for Better I/O Performance on the Cray X1: Part I
[ Thanks to U.S. Naval Academy Midshipman Nathan Brasher for this report on his work at ARSC this summer. ]
This article is the first in a two-part series resulting from a three-week internship at ARSC, during which I studied parameters that affect input and output performance on Klondike, the center's Cray X1 supercomputer.
The Cray X1 at ARSC is a scalable vector system composed of 128 multistreaming processors, giving it a peak performance of roughly 1.6 Teraflops, or 1.6 trillion floating point operations per second. Supporting the Cray is 512 GB of RAM and 21 TB of disk storage space. It is the interface between the Cray and the disk that is the primary focus of this article.
It has long been known that the connection between the processors and the I/O devices is a bottleneck and a major obstacle to good computing performance. Disk drives are mechanical in nature: they require time to seek, time to change tracks, and time to rotate the platter to the proper position to write files. The processors and memory, by contrast, are solid-state devices with no moving parts, making them far faster. Because I/O to the disk drives is so slow, speeding up this part of a program can yield great benefits in run time and performance. Thus, the aim of this study was to determine under which conditions I/O performance is optimal, in order to aid Klondike users in their programming.
The test was conducted as follows: a 1,000,000-byte array containing 250,000 4-byte elements was written to a file under various conditions. The time taken to write the array was measured using the MPI timing routine MPI_Wtime. All programming for this test was done in Fortran 90 (a sample of the code will appear in Part II of this series).
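To make the methodology concrete, here is a minimal sketch of such a timing harness (this is not the actual test code; the file name and program structure are illustrative). It fills the array, times a single unformatted write with MPI_Wtime, and reports the elapsed seconds:

```fortran
program time_write
   implicit none
   include 'mpif.h'
   integer, parameter :: n = 250000
   integer :: a(n), i, ierr
   double precision :: t0, t1

   a = (/ (i, i = 1, n) /)          ! fill the 1,000,000-byte array
   call MPI_Init(ierr)

   open(unit=10, file='test.dat', form='unformatted', status='replace')
   t0 = MPI_Wtime()
   write(10) a                      ! one write of the entire array
   t1 = MPI_Wtime()
   close(10)

   print *, 'write time (seconds):', t1 - t0
   call MPI_Finalize(ierr)
end program time_write
```

MPI_Wtime is convenient here because it returns wall-clock time at high resolution, which is what matters for I/O measurements.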
The factors I evaluated were:
- file access method,
- file format,
- buffer size,
- chunk size, and
- blocking scheme.
What follows is a brief introduction to these five factors:
The Fortran programming language provides two types of file access: direct and sequential. With sequential files, you access records beginning at the start of the file, and must access them in order. Record size is allowed to vary. With direct access files, record size is fixed when the file is opened, but individual records can be accessed in any order. Some types of media are inherently sequential (tape drives for instance) and cannot support direct access files.
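The two access methods can be illustrated with a small sketch (file names are illustrative; note that the units of RECL are compiler-dependent, bytes on some systems and words on others):

```fortran
program access_demo
   implicit none
   integer :: a(1000), i
   a = (/ (i, i = 1, 1000) /)

   ! Sequential: records are written and read in order, starting at
   ! the beginning of the file; record sizes may vary between writes.
   open(unit=10, file='seq.dat', access='sequential', &
        form='unformatted', status='replace')
   write(10) a
   close(10)

   ! Direct: the record length is fixed when the file is opened,
   ! but records may be written or read in any order.
   open(unit=11, file='dir.dat', access='direct', recl=4000, &
        form='unformatted', status='replace')
   write(11, rec=3) a     ! record 3 can be written before records 1 and 2
   write(11, rec=1) a
   close(11)
end program access_demo
```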
File format refers to the content of the output file: either binary data (unformatted) or text (formatted). A formatted file is readable in any text editor (e.g., NEdit), whereas an unformatted file is an untranslated data dump using the machine's internal numerical representation.
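The distinction looks like this in Fortran (a small illustrative sketch; file names are arbitrary):

```fortran
program format_demo
   implicit none
   integer :: a(5)
   a = (/ 10, 20, 30, 40, 50 /)

   ! Formatted: plain text, one value per line here,
   ! readable in any text editor.
   open(unit=12, file='text.dat', form='formatted', status='replace')
   write(12, '(i10)') a
   close(12)

   ! Unformatted: a raw dump of the machine's internal binary
   ! representation -- compact and fast, but not human-readable.
   open(unit=13, file='binary.dat', form='unformatted', status='replace')
   write(13) a
   close(13)
end program format_demo
```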
File buffering is provided by UNICOS/mp and was another important characteristic that was studied. One strategy to increase I/O speed is to store up many smaller writes into a section of memory set aside as a buffer. Once the buffer is full, it is written to disk. This scheme reduces disk access (and the resulting access times) and speeds up I/O.
Chunk size is a term we use to describe the number of elements per write. The 1,000,000-byte array was written in chunks of various sizes. A chunk size of 250,000 4-byte elements would write the entire array at once, whereas a chunk size of 1,000 would require 250 separate write statements to output the entire array. In the latter case, the record size would be 4,000 bytes (1,000 elements times 4 bytes per element).
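A chunked write with the 1,000-element chunk size described above might be sketched as follows (illustrative code, not the actual test program):

```fortran
program chunked_write
   implicit none
   integer, parameter :: n = 250000, chunk = 1000
   integer :: a(n), i

   a = (/ (i, i = 1, n) /)
   open(unit=10, file='chunks.dat', form='unformatted', status='replace')
   do i = 1, n, chunk
      write(10) a(i:i+chunk-1)   ! 250 writes, 4,000 bytes of data each
   end do
   close(10)
end program chunked_write
```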
The fifth and final parameter of my study was the blocking scheme. File blocking is a facility provided by the operating system to structure files on disk. The four blocking schemes tested in this study are Fortran 77, Fortran 90, COS (Cray Operating System), and unblocked.
In part II of this series, I will present test results, a sample of the code used, and conclusions.
Short FAQ on "find", "ls", and /tmp purging at ARSC
Last Wednesday, ARSC restarted purging of files on $WRKDIR file systems on almost all ARSC hosts. Files not accessed in 10 days or more are subject to purging. Here are some questions you might have:
Why didn't these files in my /tmp directory get purged? They're over 10 days old:
  % ls -l
  -rw-------   1 baring   staff   9632 Mar 10 11:54 makefile1
  -rw-------   1 baring   staff   6685 Sep 18  2003 tree
The "ls -l" command shows time of last modification, not the time of last access. You must have accessed the files in the previous 10 days. The "ls -l -u" command shows access time:
  % ls -l -u
  -rw-------   1 baring   staff   9632 Jul 28 16:27 makefile1
  -rw-------   1 baring   staff   6685 Jul 28 16:27 tree
How can I tell, in advance, the files which will be whacked by the purger on a given night?
This "find" commands will do the trick. (You can replace "$WRKDIR" in these commands to any root directory from which you want to start the search):
% find $WRKDIR -atime +10 -print
I save tar files in $ARCHIVE and "un-tar" them in $WRKDIR. The dates on the untarred files are way older than 10 days, because I created the tar files a long time ago. Won't the purger just delete all these files?
No. See answer 1. The act of untarring counts as a file access.
Question 4: [ DANGER! ]
I copy everything to $WRKDIR from $ARCHIVE with "cp -p" because it's important to me to know how old the files are. ("cp -p" preserves both the modification time AND the access time.) Won't the purger just delete all these files?
Yes! Unless you "access" them somehow before it runs, the purger WILL DELETE these files. E.g.:
  % cp -p /imports/archive/u1/uaf/baring/klondike/pat ./pat
  % ls -l -u pat
  -rw-------   1 baring   staff   1270 Feb 26 11:08 pat
  % ls -l pat
  -rw-------   1 baring   staff   1270 Feb 26 11:08 pat
SC04 Posters Submission Deadline Extended to August 2nd
The deadline for SC04 poster submissions has been extended to August 2. The conference is seeking poster presentations displaying cutting-edge research in high performance computing, networking and storage. Posters at this year's conference will occupy a prominent location at the convention center. Abstracts for accepted posters will be included on the conference proceedings CD-ROM, and a prize will be awarded for the "Best Poster" at the conference Awards session. For instructions and details on submitting a poster, go to:
Dealing with X1 PE5.2 compiler error
The X1 PE5.2 ftn compiler (the current default version) has an internal bug which halts compilation on some source files. We've only seen this error brought out by a couple of files. It should be fixed in the next PE release.
If you see this error on some file, the solution we're recommending is to compile everything else with PE5.2, then recompile the necessary file using the previous programming environment, PE5.1, and then switch back to PE5.2 to link.
Switching between programming environments is easy on Crays. The following module command switches you from the current to the previous default (at this time at ARSC, from PE5.2 to PE5.1):
klondike% module switch PrgEnv PrgEnv.old
And this switches you back to the current default:
klondike% module switch PrgEnv.old PrgEnv
If you have to deal with this, the more difficult challenge will probably be modifying your build process. Let us know if you need assistance.
If by some slim chance you get it, the error message looks something like this:
  ===============================
  ftn-7991 ftn: INTERNAL FORCNL, File = mysource.F, Line = 1060
    INTERNAL COMPILER ERROR:  "Unexpected opcode class" (c_obj.c 52.29, line 1330)
And here's the "explanation" from the "explain" command:
  klondike% explain ftn-7991

  INTERNAL:  INTERNAL COMPILER ERROR:  "item" (item, item)

  Notify your system support staff with this error message number and
  any supporting information.  This message does not indicate a problem
  with your code, although you may be able to change your code so that
  this error is not encountered.
Faculty Camp Schedule -- Users Invited to Drop In
ARSC's annual faculty camp takes place over the next three weeks.
Registered "campers" will attend all sessions, but general ARSC users and prospective users are invited and encouraged to attend any lectures.
The schedule, below, could change. For this reason, and to aid our planning, we request you send us an email (to: "email@example.com") before you show up.
The majority of the talks are scheduled to take place at ARSC's new location in the West Ridge Research Building (WRRB) on the UAF campus. (For the location of the WRRB, see: http://arsc.edu/news/wrrb.html .)
ARSC Faculty Camp 2004 Schedule:
================================
TIME       TITLE/PRESENTER(S)                                ROOM (*)
---------------------------------------------------------------
Monday, 8/2
10:00 AM   Welcome to Faculty Camp                           009
10:30 AM   Introduction to ARSC                              009
11:30 AM   Account Access                                    009
12         -- Break
 1:00 PM   ARSC Virtual Tour                                 RAS
 2:00 PM   Attendee Presentations                            009

Tuesday, 8/3
10:00 AM   Introduction to Unix                              009
11:00 AM   Storage Systems at ARSC                           009
12         -- Break
 2:00 PM   Batch Queueing Systems & Hands On                 009

Wednesday, 8/4
10:00 AM   Validation and Verification                       009
11:00 AM   Debugging and Profiling                           009
12         -- Break
 2:00 PM   IBM AIX 5.2 Overview                              009

Thursday, 8/5
10:00 AM   Performance Programming                           009
11:00 AM   Vector Programming                                009
12         -- Break
 2:00 PM   Parallel Programming with OpenMP                  009

Friday, 8/6
10:00 AM   HPC in Chemistry and Life Sciences                010
12         -- Break
 2:00 PM   Hands On Session                                  009

Monday, 8/9
10:00 AM   Advanced Unix Scripting                           009
NOON       Brown Bag Session:                                010
             Supercomputing, building on past experience to
             realize the potential of current and future
             systems.  Guy Robinson.
 2:00 PM   Cray X1 Overview                                  009

Tuesday, 8/10
10:00 AM   Parallel Programming with MPI                     009
12         -- Break
 2:00 PM   Hands On Session                                  009

Wednesday, 8/11
10:00 AM   Open Session
12         -- Break
 2:00 PM   Grid Computing                                    010

Thursday, 8/12
10:00 AM   Visualization Overview                            009
12         -- Break
 2:00 PM   Visualization Overview                            009

Friday, 8/13
10:00 AM   Visualization Applications                        009
12         -- Break
 2:00 PM   Visualization Applications                        009

Monday, 8/16
10:00 AM   IBM Parallel Performance Bottlenecks              009
NOON       Brown Bag Session:                                010
             A Consistent Computational Environment: How
             Parallel Programming Tools Can Help You.
             Dr. David Cronk, Innovative Computing Lab,
             University of Tennessee, Knoxville
 2:00 PM   Hands On Session                                  009

Tuesday, 8/17
10:00 AM   IBM Maximizing MPI Potential                      009
12         -- Break
 2:00 PM   IBM Put/Get Communications

Wednesday, 8/18
10:00 AM   Visualization Workshop                            009
12         -- Break
 2:00 PM   Overview of Visualization Packages                010
 3:00 PM   Building Blocks for Application Development

Thursday, 8/19
           OPEN DAY

Friday, 8/20
10:00 AM   Final Presentations from ARSC                     010
12         -- Break
 2:00 PM   Final Presentations from Attendees                009
(*) All numbers refer to rooms in the WRRB.
Quick-Tip Q & A
A:[[ I've noticed that I can sometimes get colors to work with emacs, but
  [[ other times I cannot.  Colors are great for syntax and variable
  [[ highlighting.
  [[
  [[ A remote linux system + a Mac terminal window ($TERM=linux) does great
  [[ with colors, but many other combinations do not (including the emacs
  [[ that ships with the Macs).  Does anyone know good ways to find out
  [[ whether color is available in a particular emacs installation, and if
  [[ so how to get colors to display?

  #
  # Nobody had an answer to this one?  Happens sometimes...
  #

Q: C and Fortran compilers let me define pre-processor macros on the
   command line, like this, for instance:

     cc -D VERBOSE -c mysource.c

   But I use makefiles, and would prefer this:

     make -D VERBOSE myapp

   Is there a way to pass macro settings "through" a make command to be
   used as compiler options?
[[ Answers, Questions, and Tips Graciously Accepted ]]
Ed Kornkven                   ARSC HPC Specialist
                              ph: 907-450-8669

Kate Hedstrom                 ARSC Oceanographic Specialist
                              ph: 907-450-8678

Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Subscribe to (or unsubscribe from) the e-mail edition of the
ARSC HPC Users' Newsletter.
Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.