ARSC HPC Users' Newsletter 414, August 18, 2010

ARSC Interns Nationally Recognized

Fairbanks, Alaska - ARSC summer interns Jose Antonio Figueroa and Vahid Ajimine are two of only four students selected from a national field of HPC scholars to be recognized by the DoD's High Performance Computing Modernization Program (HPCMP) at its 2010 Users Group Conference (UGC) in Schaumburg, Ill.

For more details, please see the following press release: http://www.arsc.edu/arsc/news/20100615jeom

Bandwidth Comparison of ARSC Machines

[ By Tom Logan ]

With ARSC's newest supercomputer, Pacman, entering production, it was an appropriate time to run bandwidth tests to determine the speed of its interconnect network. For comparison, the same tests were run on ARSC's other two big irons, Pingo and Midnight.

The bandwidth testing program used for these comparisons was an ARSC in-house, MPI-based program called DCPROG, short for Distributed Congestion Program. DCPROG tests bandwidths by sending messages of varying sizes (from 1K to 4M 4-byte data items) a number of times (50 in this case) and calculating the average bandwidth at each message size. For further consistency, DCPROG was run 6-8 times for each configuration and those results were also averaged.

The tests were performed in two different configurations - one to test only off-node communication (interconnect bandwidth) and one to test only on-node communication (memory bandwidth and the MPI implementation).
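
The two configurations amount to placing either one MPI task per node or all of the tasks on a single node. As a rough illustration only (the launcher and its options differ among the three machines and MPI stacks, and DCPROG's actual command line is assumed here), an 8-node off-node run and a single-node on-node run might be launched like this:

    # Off-node test: one MPI task per node, so every message crosses the interconnect
    mpirun -np 8 --npernode 1 ./dcprog > off_node.out

    # On-node test: all MPI tasks on one node, so messages stay within the node
    mpirun -np 8 --npernode 8 ./dcprog > on_node.out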

Off-Node Communications

Off-node bandwidths were tested using one processor per node communicating with single processors on other nodes. For these tests, only the message size that achieved the maximum bandwidth was graphed. For off-node communications this was almost invariably the largest message size on Pacman and Midnight, while on Pingo the best size varied unpredictably between 10,000 and 100,000 data items.

The Off-Node Aggregate Bandwidth graph shows the maximum average global bandwidth achieved during the experiments as a function of the number of nodes used. Of the three systems tested, the interconnect network on Pacman gave the best bandwidths for experiments with 4 or more nodes. Although some of the consistency seen in the Pacman data can be attributed to a light load on that system during these experiments, the results are certainly still encouraging.

The Off-Node Average Bandwidth graph shows the maximum average bandwidth per processor involved in the experiment. It plainly shows that, compared with the other systems, Pacman gave the best results when using 4 or more nodes. It also shows that the Pacman numbers are consistent regardless of the number of nodes used, while each of the other systems' performance degrades as the number of nodes increases.

On-Node Communications

On-node bandwidths were tested by sending messages between PEs within a single node of a system. Again, only the message size that achieved the maximum bandwidth was graphed. For on-node communications, Pingo favored messages of 20,000 data items, Pacman varied from 5,000 to 100,000 data items, and Midnight preferred messages from 1,000 to 20,000 data items for the best bandwidths.

The On-Node Total Bandwidth data shows the total bandwidth achieved when passing messages entirely within a node. Tests were run using from 2 cores up to the number of cores available on the node being tested, including runs on both the 4-way and 16-way nodes of Midnight. As before, Pacman outperformed Midnight in nearly all cases. Unlike the off-node communications, however, there is a very large disparity: the Pingo numbers are at least 3.5 times higher than those of the other systems. One can conclude that the bandwidth of the Cray architecture and/or its MPI implementation is far superior to that of the other ARSC platforms.

Finally, the On-Node Average Bandwidth data shows the maximum average on-node bandwidth achieved during these experiments. Again, Pacman consistently meets or exceeds twice the bandwidth of the Midnight 16-way nodes, while Pingo delivers roughly 3.5 times the bandwidth of Pacman.

In conclusion, this testing has shown that Pacman's interconnect performance exceeds that of Midnight and Pingo in most cases, although a light system load may play some part in this finding. Also, on-node communications are better on Pacman than Midnight, but far better on Pingo than on either of the other systems. It should be noted that these numbers represent a snapshot of these three systems and their default software stacks and settings.

Still More About Git

[ By Kate Hedstrom ]

I've written about git here twice before, where I described needing to deal with legacy svn repositories. These articles can be seen here:

http://www.arsc.edu/arsc/support/news/hpcnews/hpcnews404/index.xml#article1
http://www.arsc.edu/arsc/support/news/hpcnews/hpcnews410/index.xml#article2

Perhaps it's time to get a little more specific about how I'm using it so you can all be suitably horrified.

First of all, I started playing with git before I found a system with a working git-svn command. I created a repository with a main trunk which mimics the svn code, plus a branch with exploratory code we're not ready to share via svn just yet. I have these codes in a bare "origin" repository in my home directory on my desktop system. A bare repository has the database, but no working files - it is unsafe to push to a non-bare repository.
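
Setting up such a bare "origin" looks roughly like the following sketch; the paths and branch names here are made up for illustration.

    # Create an empty bare repository to act as the "origin"
    git init --bare ~/repos/code.git

    # From an existing working repository, register the "origin" and publish branches
    cd ~/work/code
    git remote add origin ~/repos/code.git
    git push origin master          # the trunk-like branch
    git push origin exploratory     # the branch not yet shared via svn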

I can do a "git clone" of the "origin" out to any of the supercomputers here at ARSC. The "origin" repository doesn't know about any of the clones, but each clone can pull updates from the "origin" and push changes back into it. Good practice means making sure the "origin" is always up to date with any changes made anywhere.

It turns out the system with the working git-svn is a laptop, though security prevents the laptop from hosting the "origin". The laptop also has a clone of "origin" in addition to the repository generated by "git svn clone" and a third repository from "git svn clone" on a colleague's trunk version. These three repositories know something about each other. How does that work? Say we create them with:


    git clone <desktop:dir> git_code
    git svn clone <my_branch_url> svn_code
    git svn clone <main_trunk_url> trunk_code

We'll assume that everything is on the master branch below, but it could be on some other active branch.

Remotes

I can go into git_code and type:


    git remote add svn_code ../svn_code
    git remote add trunk_code ../trunk_code

Likewise, I go into svn_code and type:


    git remote add git_code ../git_code
    git remote add trunk_code ../trunk_code

Because my view of trunk_code is read-only, I have no need for it to know about the others.

Note that the "git clone" operation automatically generates a remote called "origin".

Remote Tracking Branches

I'm not quite set up yet to use the remotes. They need to be turned into remote tracking branches, branches in the local repository which point to the remotes. To do this, go into each directory with remotes and type:


    git remote update

This will create a new tracking branch for each of the remote sites. These branches show up with "git branch -r" or "git branch -a" but not with simply "git branch". They look something like:


    git branch -a
    * master
      remotes/git-svn
      remotes/git_code/master
      remotes/trunk_code/master

I can check out one of the remotes:


    git checkout remotes/trunk_code/master

This gives a strange message about being in a "detached HEAD" state with some explanation of your options at this point. If you want to turn it into a permanent branch here, type:


    git checkout -b new_branch_name

Incorporating Svn Updates

Suppose that, after a period of weeks (or years), you decide to check for updates from the main trunk and bring those changes into your code. Using svn, an "svn update" would bring a copy of that trunk directory up to date all in one fell swoop. Ditto for a merge from the old trunk to the current one, even if some dozens of updates have been checked in since.
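
For comparison, the svn side of that workflow is a single command run in the appropriate working copy (svn 1.6 or later is assumed for the ^/trunk shorthand and merge tracking):

    # Bring a checked-out copy of the trunk up to date in one step
    svn update

    # Or fold all trunk changes since the last merge into a branch working copy
    svn merge ^/trunk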

Using git, you invoke the following from the trunk_code directory:


    git svn rebase

This will bring over the changes one at a time, with the svn check-in message attached to each change.

Next, go to the svn_code directory and type:


    git remote update

Now you can apply the changes via "git cherry-pick", one change at a time. Conflicts get resolved as they are encountered, commit by commit. This could get tedious if you let it go too long....
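
A minimal sketch of that step, using the remote names from the setup above and the same <sha1> placeholder used later in this article:

    # List the trunk commits that are not yet on the local master branch
    git log --oneline master..trunk_code/master

    # Apply them to master one at a time, oldest first
    git cherry-pick <sha1>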

Once that's done, it's time to commit these changes to your svn repository:


    git svn dcommit

In the git_code directory, repeat the "git remote update" and the "git cherry-pick" commands, then:


    git commit
    git push origin master

Now your workstation's "origin" is up to date as well.

Incorporating Git Updates

Sometimes the changes come from you, over on one of the supercomputers. On that supercomputer, type:


    git commit -a
    git push origin master

Now the changes are on the workstation, but not the laptop. On the laptop in the git_code directory, type:


    git pull origin master

Now go to the svn_code directory and:


    git remote update
    git cherry-pick <sha1>
    git svn dcommit

This brings your changes over to your svn branch.

Last Thoughts

If I were to start fresh, I would start with the "git svn clone" operation and create a bare clone of that on the workstation. I'm still enjoying using git, though it makes me feel like an old dog faced with many new tricks!
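
That fresh start might look something like this sketch; the URL, paths, and hostname are all hypothetical.

    # On the laptop: one repository that tracks the svn trunk
    git svn clone <main_trunk_url> code

    # Make a bare copy, host it on the workstation, and call it "origin"
    git clone --bare code code.git
    scp -r code.git desktop.example.edu:repos/
    cd code
    git remote add origin desktop.example.edu:repos/code.git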

Also, I talked about git at a ROMS meeting and that talk is available here:

http://www.myroms.org/Workshops/ROMS2010/presentations/Thursday/Hedstrom_git.pdf

Finally, I heard an episode of FLOSS ( http://twit.tv/FLOSS ) about Mercurial. Mercurial and git were begun on the same day to solve the same problems. Mercurial prides itself on having a better user interface, so if you like some of what git has to offer, but don't like the interface, give Mercurial a look (http://mercurial.selenic.com/). It's written in Python and is used heavily by the Python community.

Quick-Tip Q & A


A:[[ I'm trying to create a tar file of a directory on my system.  
 [[ Unfortunately the disk is so full that I can't actually create the tar 
 [[ file without filling this disk.
 [[ 
 [[ Ultimately I would like to get a gzipped tar file on my other system. I 
 [[ would rather not have to copy over the directory tree in order to get 
 [[ enough space.
 [[ 
 [[ Is there a way I can create the tar file on the remote system without 
 [[ copying the directory tree to the remote system first?

#
# Daniel Kidger, Izaak Beekman, Rahul Nabar, Jed Brown, Don Bahls, and 
# Greg Newby all submitted similar solutions.  In Greg's words:
#

Assuming you are using ssh or a similar method that can pipe data from
one system to another, you can run tar on one system and send its output
directly to another system, without ever storing the tar file on the
first system.

For example, suppose your usual method is to make "test.tar" from a
subdirectory "test" and then send it to a system called "b".  The old
method would be:

 tar cf test.tar test
 scp test.tar b:

Instead, you could pipe to ssh, and use "cat" to create the output
file on the remote system.  As usual, there are a few ways to do this.
Use "-" as the output file argument to tar, to specify stdout:

 tar cf - test | ssh b "cat > test.tar"

Here is a variation, where you want to add some other shell commands
on the remote system.  Simply separate multiple commands with a
semi-colon:

 tar cf - test | ssh b "echo started at `date` ; cat > test.tar ; echo ended at `date`"

GNU tar (which is what you will be using on the ARSC Linux systems)
can create a compressed file from the command line.  Just add "z" for
gzip, or "j" for bzip2:

 tar czf - test | ssh b "cat > test.tar.gz"

#
# But if you do not have GNU tar installed, you can always use Dale 
# Clark's approach:
#

tar cf - sourcedir | ssh targetsystem "gzip - > /tmp/rt.tgz"

#
# Dan Stahlke emphasizes efficiency:
#

You can pipe the output of tar through ssh:

ssh remote_system "cd /path/to/your/files && tar cvfz - ." > archive.tar.gz

Or, if your version of tar doesn't have the 'z' option,

ssh remote_system "cd /path/to/your/files && tar cvf - . | gzip" > archive.tar.gz

This technique also provides an efficient way to copy files if you run it 
back through tar on your side (faster than scp from what I've heard, 
although I haven't tried):

ssh -c arcfour remote_system "cd /path/to/your/files && tar cf - ." | tar xvf -

The "-c arcfour" option tells ssh to use a faster, less secure (although 
probably still uncrackable) cipher.

#
# Scott Kajihara's solution uses the same SSH technique as the other 
# responses, but going the other direction:
#

All right, I am amazed that this worked for binary as well as ASCII data. 
Perhaps tar(1) produces a suitably generic data stream. The slick part is 
that this is all done in I/O buffers, so no intermediate results exist on 
either system's disk storage (at least _I_ think that it's slick).

 ssh host.domain 'cd directory ; tar cf - .' | gzip > foo.tgz

which tar's the contents of _directory_ at _host_._domain_ and gzip's the 
incoming tar file data on the local machine. Adjust as necessary for your 
circumstances.


Q: The colored text in my terminal window can be pretty annoying.  It's 
useful sometimes, but difficult to see other times.  What are my options?  
Any of the following would be useful:

  1) Turn off colored text altogether
  2) Remove the color, but allow styled text (e.g., bold, underline) to 
     point out particular things like executable files or directories
  3) Toggle text coloring on and off with the press of a button or a short 
     command
 

[[ Answers, Questions, and Tips Graciously Accepted ]]


Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.