ARSC T3D Users' Newsletter 21, February 3, 1995
ARSC T3D Upgrades
The next month or two will be a busy time for the ARSC T3D. We will be upgrading the following:
- 2MW to 8MW per PE, tentatively set for February 7th and 8th
- The T3D Programming Environment (libraries, tools and compilers) P.E. 1.1 to P.E. 1.2 sometime in the next two months.
Upgrade to MAX 1.2
On January 31st ARSC upgraded to version 1.2 of MAX, the T3D operating system. If any users notice differences in their codes running on the T3D, they should notify Mike Ess.
Next T3D Class at ARSC
Introduction to Programming the CRAY T3D
Dates: February 8 - 10, 1995
Time: 9:00 AM - noon, 1:00 - 5:00 PM
Location: University of Alaska Fairbanks main campus, room TBA
Instructor: Mike Ess, Parallel Applications Consultant

Course Description: To satisfy increasing computational demands, computers of the future must have multiple processors executing the same program. The Cray T3D is a step in this direction. The Cray T3D, an MPP or Massively Parallel Processor, consists of 128 processors attached to the Cray Y-MP.
This class will cover the characterization and history of MPPs. With this background, students will see how the T3D approaches the problem of executing a program in parallel. The class will cover the three programming paradigms for extracting parallelism, along with performance and debugging topics:
- Data-sharing, as with Fortran 90
- Work-sharing, as with Craft Fortran and
- Message-passing, as implemented with PVM or shmem
- Performance measurement and tools
- Debugging techniques and tools
Application Procedure
There is no charge for attendance, but enrollment will be limited to 15. In the event of greater demand, applicants will be selected by ARSC staff based on qualifications, need, and order and completeness of application. The class may be cancelled if there are fewer than 5 applicants.
Send e-mail to email@example.com with the following information:
- course name
- your name
- UA status (e.g., undergrad, grad, Asst. Prof.)
- advisor (if you are a student)
- denali userid
- preferred e-mail address
- describe programming experience
- describe need for this class
I/O on the T3D and Y-MP
In the last newsletter I mislabeled a table showing the speeds of unformatted reads and writes on the T3D and Y-MP for the following construct:
      parameter( ia1size = 1024 )
      real a1( ia1size )
      . . .
      call asnunit( iun, '-a /tmp/ess/fort.12', ier )
      open( iun, form = 'unformatted' )
      t1 = rtc()
      write( iun ) ( a1( i ), i = 1, ia1size )
      t1 = rtc() - t1
      time = t1 * 6.666e-9
      speed = ia1size / time

The table should look like:
Table 3
Y-MP and T3D speeds (MW/sec) for unformatted I/O with
2nd read/write construct on /tmp file system

  array size        T3D              Y-MP
  (in words)   reads  writes    reads   writes
  ----------   -----  ------   ------  ------
        1024   0.066   0.067   13.361  13.480
        2048   0.067   0.067   22.439  22.442
        4096   0.067   0.067   33.953  34.211
        8192   0.067   0.067   46.392  46.397
       16384   0.067   0.067   55.360  55.217
       32768   0.066   0.066   25.136  25.456
       65536   0.066   0.066   24.863  25.043
      131072   0.066   0.066   21.975  21.816
      262144   0.066   0.066   21.987  22.012

The point of that section was to warn users of the construct:
      write( iun ) ( a1( i ), i = 1, ia1size )

versus

      write( iun ) a1

on the T3D. I'm sorry for any confusion this has caused.
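The reason for the gap is that the implied DO loop makes the I/O library move the array one word at a time, while the whole-array write can move it in a single block transfer. The sketch below illustrates the same effect in Python (illustrative only, not Cray Fortran): many one-element writes versus one block write of identical bytes.

```python
# Illustration of per-element vs. block transfer (Python, not the
# T3D I/O library itself).
import array, io, time

N = 262144
a = array.array('d', range(N))       # 64-bit words, like a REAL array

# Analogue of: write( iun ) ( a1( i ), i = 1, ia1size )
buf1 = io.BytesIO()
t0 = time.perf_counter()
for i in range(N):
    buf1.write(a[i:i+1].tobytes())   # one small transfer per word
t_loop = time.perf_counter() - t0

# Analogue of: write( iun ) a1
buf2 = io.BytesIO()
t0 = time.perf_counter()
buf2.write(a.tobytes())              # one transfer for the whole block
t_block = time.perf_counter() - t0

assert buf1.getvalue() == buf2.getvalue()   # same data either way
```

The two constructs produce identical records; only the number of transfers differs, and on the T3D that difference dominates the timings above.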
The flexibility of the implied do loop on the write can be recovered with a little effort. Consider speeds for the three methods below with the following declarations:
      real a( 262144 )
      real a1(1024),a2(2048),a3(4096),a4(8192),a5(16384),
     +     a6(32768),a7(65536),a8(131072),a9(262144)
      equivalence( a1, a9 )
      equivalence( a2, a9 )
      equivalence( a3, a9 )
      equivalence( a4, a9 )
      equivalence( a5, a9 )
      equivalence( a6, a9 )
      equivalence( a7, a9 )
      equivalence( a8, a9 )

Method #1 (size of write comes from array declarations)
      if( 1024 .eq. isize( i ) )   write( iun ) a1
      if( 2048 .eq. isize( i ) )   write( iun ) a2
      . . .
      if( 262144 .eq. isize( i ) ) write( iun ) a9

Method #2 (do loop to move data, then write of known-size array)
      if( 1024 .eq. isize( i ) ) then
         do 11 j = 1, isize( i )
            a1( j ) = a( j )
 11      continue
         write( iun ) a1
      endif
      if( 2048 .eq. isize( i ) ) then
         do 12 j = 1, isize( i )
            a2( j ) = a( j )
 12      continue
         write( iun ) a2
      endif
      . . .
      if( 262144 .eq. isize( i ) ) then
         do 19 j = 1, isize( i )
            a9( j ) = a( j )
 19      continue
         write( iun ) a9
      endif

Method #3 (do loop on the write statement)
      write( iun ) ( a( j ), j = 1, isize( i ) )

Now the speeds look like:
Table 4
T3D speeds (MW/s) for unformatted I/O with three methods for I/O

           Method #1        Method #2            Method #3
  array    (known array     (do loop to known    (do loop on
  size     sizes)           array sizes)         write stmt.)
  (words)  writes  reads    writes  reads        writes  reads
  -------  ------  -----    ------  -----        ------  -----
     1024   4.326  4.100     2.883  2.853         0.041  0.035
     2048   5.518  5.380     3.467  3.425         0.041  0.035
     4096   6.493  6.260     3.866  3.792         0.041  0.035
     8192   7.088  6.838     4.097  3.834         0.041  0.035
    16384   7.105  7.136     4.226  4.133         0.041  0.035
    32768   3.011  0.083     2.307  0.095         0.041  0.025
    65536   3.109  0.135     2.393  0.613         0.041  0.033
   131072   2.637  0.188     2.097  0.303         0.041  0.033
   262144   2.665  2.374     2.137  1.972         0.041  0.035

So with a little extra code, most of the speed of the fastest unformatted I/O can be recovered. This also points out some of the speed difference between CPU code (the copy to an array of known size) and I/O code (the unformatted write).
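The idea behind Method #2 carries over to any language: copy the variable-length data into a buffer of known size, then write that buffer in one block transfer. A small Python sketch of the same pattern (names and sizes here are illustrative, not taken from the Fortran above):

```python
# Sketch of Method #2's idea: copy into a known-size record, then
# do one block write per record (Python, illustrative only).
import array, io

a = array.array('d', range(262144))    # source data, 64-bit words
sizes = [1024, 2048, 4096]             # record sizes to be written

out = io.BytesIO()
for n in sizes:
    record = array.array('d', a[:n])   # copy into a known-size array,
                                       # like the do-loop into a1..a9
    out.write(record.tobytes())        # one block transfer, like
                                       # "write( iun ) a1"

assert out.tell() == sum(sizes) * 8    # 8 bytes per word
```

The copy costs CPU time, but as Table 4 shows, a memory copy plus a block write is far cheaper than letting the I/O library transfer the data word by word.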
List of Differences Between T3D and Y-MP
The current list of differences between the T3D and the Y-MP is:
- Data type sizes are not the same (Newsletter #5)
- Uninitialized variables are different (Newsletter #6)
- The effect of the -a static compiler switch (Newsletter #7)
- There is no GETENV on the T3D (Newsletter #8)
- Missing routine SMACH on T3D (Newsletter #9)
- Different Arithmetics (Newsletter #9)
- Different clock granularities for gettimeofday (Newsletter #11)
- Restrictions on record length for direct I/O files (Newsletter #19)
- Implied DO loop is not "vectorized" on the T3D (Newsletter #20)
In newsletter #18 there is a list of CRI T3D optimization articles available from ARSC.
In Newsletter #19 there is a list of CUG articles on the T3D available from ARSC.
Ed Kornkven, ARSC HPC Specialist, ph: 907-450-8669
Kate Hedstrom, ARSC Oceanographic Specialist, ph: 907-450-8678

Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.