ARSC HPC Users' Newsletter 330, December 02, 2005

Dynamic Linking - Part I

[ Jesse Niles, User Services Consultant, ARSC ]

One of the most effective ways to increase the extensibility and maintainability of your code is to utilize dynamic linking. Most systems support it, and those that do often use dynamically linked standard libraries, so chances are that you are already using dynamic linking. There is, of course, a tradeoff in using dynamic linking as opposed to static linking. Static linking is done by linking compiled object files into a standalone executable, so the executable can become quite large. Any updates to the system libraries or dependencies will not affect the executable, but because there is no runtime lookup, it will be slightly faster. It is also easier to build a statically linked executable, as some operating systems and linkers have dozens of options and configurations that require quite some time to learn. The power of dynamic linking lies in its robustness. If your application needs to be able to adapt to newer environments or if you want it to have a large amount of flexibility, then dynamic linking can be very handy.

In this article, all code examples are in C++, the shell used is bash, and the compilation and linking examples use g++.

One of the most useful commands for determining what an executable is dynamically linked to is the 'ldd' command. It is used by simply giving it an executable or .so (shared object) file:

snuggles % ldd /bin/vi
        libncurses.so.5 => /lib/libncurses.so.5 (0x40020000)
        libdl.so.2 => /lib/libdl.so.2 (0x40064000)
        libc.so.6 => /lib/libc.so.6 (0x40067000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

snuggles % ldd /lib/libncurses.so.5
        libc.so.6 => /lib/libc.so.6 (0x4004f000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x80000000)

Here, vi depends on the ncurses, dl, c, and ld-linux libraries. The ncurses library depends on the c and ld-linux libraries. These are located in system library directories, so the loader has no trouble finding them. If they were not, the loader would be unable to link to them, and the execution would fail with an error similar to the following:

snuggles % ./sos
./sos: error while loading shared libraries: libshareme.so: cannot open
shared object file: No such file or directory

snuggles % ldd sos
        libshareme.so => not found
        libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0x40021000)
        libm.so.6 => /lib/libm.so.6 (0x400d5000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x400f8000)
        libc.so.6 => /lib/libc.so.6 (0x40100000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

If you have used Linux or UNIX for even a small amount of time, you have probably encountered this error at some point. If you don't have permission to add to the system library directories and you instead have a local, personal copy of the needed shared object, you can give the loader a colon-delimited list of directories to search. This is done by setting the LD_LIBRARY_PATH environment variable:

snuggles % export LD_LIBRARY_PATH=/home/niles/lib
snuggles % ldd sos
        libshareme.so => /home/niles/lib/libshareme.so (0x40015000)
        libstdc++.so.5 => /usr/lib/libstdc++.so.5 (0x40023000)
        libm.so.6 => /lib/libm.so.6 (0x400d7000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x400fa000)
        libc.so.6 => /lib/libc.so.6 (0x40102000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

Here, the sos executable has a reference to libshareme.so. The loader can now find it, so the full path replaces the "not found" message in the ldd output.

To actually make an .so file, you must compile the source and build the shared object from the resulting object files. The following example consists of two source files: shared.cpp, which contains a simple function, and sos.cpp, which contains the main() function required as the entry point to the application:

1 int someOperation(int a, int b)
2 {
3    return a+b;
4 }

1 #include<iostream>
2 int someOperation(int, int);
4 int main()
5 {
6     std::cout << someOperation(1, 5) << std::endl;
7     return 0;
8 }

1 default : all
2 all : sos
3 libshareme.so : shared.cpp
4         g++ shared.cpp -shared -fPIC -o libshareme.so
6 sos : sos.cpp libshareme.so
7         g++ sos.cpp -L. -lshareme -o sos

There is a function prototype in sos.cpp on line 2 so the compiler can do its work; it leaves the definition of the function up to the linker. If the function were not defined anywhere, you would get a linker error at the end of your build. Lines 3 and 4 of the Makefile compile the source file and build a shared object out of the function. Lines 6 and 7 compile the executable and link it against the shared object. The resulting files are libshareme.so and sos. If you try running the executable without setting or updating the LD_LIBRARY_PATH variable, you'll receive the same "No such file or directory" loader error shown above. Just append the working directory to the library path, and the program will run as expected:

snuggles % ./sos
./sos: error while loading shared libraries: libshareme.so: cannot open
shared object file: No such file or directory
snuggles % export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`
snuggles % ./sos
6

As you have probably already noticed, so far this isn't any more useful than statically linking the executable. To make things more interesting, let's assume that the functionality of someOperation changes. By renaming the old library, we can make sure it doesn't get overwritten by the new one, so the two can be compared. The function is changed to:

1 int someOperation(int a, int b)
2 {
3    return a+b*b;     
4 }

If it is rerun with the new and old libraries, the difference is apparent:

snuggles % mv libshareme.so libshareme.so.old
snuggles % make
g++ shared.cpp -shared -fPIC -o libshareme.so
g++ sos.cpp -L. -lshareme -o sos
snuggles % ./sos
26
snuggles % mv libshareme.so.old libshareme.so
snuggles % ./sos
6

Additionally, if libshareme.so were removed from line 6 in the Makefile, the program would still run the same way without recompilation. This is really one of the most powerful benefits of using dynamic linking.

Keep in mind that more than one function can be included in a single object file and more than one object file can be included into the .so file.

An even more powerful, but more advanced, use of .so files is to load them explicitly from within the application at runtime. You can continue to add new functionality to your application long after the product has been released, without having to recompile the executable. The source is more complicated and is as follows:

1 extern "C" int operation1(int a, int b)
2 {
3     return a+b;      
4 }
6 extern "C" int operation2(int a, int b)
7 {
8     return a-b;
9 }

1 default : all
2 all : sos libshareme.so
3 libshareme.so : shared.cpp
4         g++ shared.cpp -shared -fPIC -o libshareme.so
6 sos : sos.cpp
7         g++ sos.cpp -ldl -o sos

Two different functions are declared in the shared.cpp file, and both have extern "C" in front of them so the symbols don't get mangled by the C++ compiler. If that were omitted, the calling program would have to figure out what the mangled name would be, which isn't a trivial task.

The Makefile has been altered so that the executable no longer depends on the shared object. Also, note that the library defining dlopen, dlclose, and the other dl functions has been added to the link line (-ldl).

The method shown in the example program is the most common way to open a function from a file and call it. Because it is quite a bit longer than before, the comments are located in the source. The LD_LIBRARY_PATH variable also affects the dlopen() call, so either prefix the file name with './' or add the current directory to your LD_LIBRARY_PATH.

1  #include<iostream>
3  //Include file for dl functions
4  #include<dlfcn.h>
6  //Pointer to a function that takes two ints and returns an int
7  typedef int (*FuncType)(int, int);
9  int main(int argc, char **argv)
10 {
11     //Handle that the dl function use
12     void *handle;
14     //Function pointer to the loaded function
15     FuncType loadedFunction;
17     //Error message returned by dlerror()
18     char *error;
20     if (argc != 3)
21     {
22         std::cerr << "Usage: sos [file] [symbol]" << std::endl;
23         return -1;
24     }
26     //Load in symbols from filename located in argv[1]
27     //RTLD_NOW means all symbols are resolved immediately at load time
28     handle = dlopen(argv[1], RTLD_NOW);
30     //If handle is null, exit with error message
31     if (!handle)
32     {
33         std::cerr << dlerror() << std::endl;
34         return -1;
35     }
37     dlerror(); //clear error messages, if any
40     //dlsym returns a function pointer to the symbol matching string argv[2]
41     loadedFunction = (FuncType)(dlsym(handle, argv[2]));
43     //dlsym didn't work, exit with error
44     error = dlerror();
45     if (error)
46     {
47         std::cerr << error << std::endl;
48         return -1;
49     }
51     //Call function and output return value
52     std::cout << (*loadedFunction)(1, 5) << std::endl;
54     //Unload library
55     dlclose(handle);
56     return 0;
57 }

The basic process is:

  1. Open the shared object.
  2. Find the symbol and return the address to it so it can be called.
  3. Call the function.
  4. Close the shared object.

The output of a few runs of the program is provided:

snuggles % ./sos ./libshareme.so operation1
6
snuggles % ./sos ./libshareme.so operation2
-4
snuggles % ./sos ./libshareme.so operation3
./sos: undefined symbol: operation3

In the next part of this series we will show how dynamic libraries can be used to add plug-in functionality.

IBM: Multiple Program Multiple Data - Part I

IBM's parallel operating environment, or poe, has built-in functionality which allows one to run two separate programs which communicate via MPI. This paradigm, called multiple program multiple data (mpmd), allows you to couple otherwise independent programs.

Here's a simple example. The first program (prog1.c) acts as the master, sending messages to all other tasks. The second program (prog2.c) listens for a message from task 0, prints out the message, then exits.

iceberg2 1% more prog1.c 
#include <mpi.h>
#include <stdio.h>

#define BUFSIZE 1024

main (int argc, char ** argv)
{
    int mype, totpes, ierr;
    int ii;
    char message[BUFSIZE];

    ierr = MPI_Init(&argc, &argv);
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &mype);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &totpes);

    printf("Hello from %d\n", mype);

    if ( mype == 0 )
    {
        for (ii = 1; ii < totpes; ii++)
        {
            sprintf(message, "Hello Processor %d from Processor %d\n", 
                ii, mype);
            MPI_Send(message, BUFSIZE, MPI_CHAR, ii, 0, MPI_COMM_WORLD);
        }
    }
    else
    {
        fprintf(stderr, "%s must be task 0! Exiting\n", argv[0]);
    }

    MPI_Finalize();
}

iceberg2 2% more prog2.c 
#include <mpi.h>
#include <stdio.h>

#define BUFSIZE 1024

main (int argc, char ** argv)
{
    int mype, totpes, ierr;
    int ii;
    char message[BUFSIZE];
    MPI_Status stat;

    ierr = MPI_Init(&argc, &argv);
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &mype);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &totpes);

    printf("Hello from %d\n", mype);

    if ( mype != 0 )
    {
        MPI_Recv(message, BUFSIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
        printf("<%d> %d: %s", mype, totpes, message);
    }
    else
    {
        fprintf(stderr, "%s cannot be task 0! Exiting\n", argv[0]);
    }

    MPI_Finalize();
}

Compile the executables as you normally would.

iceberg2 3% mpcc_r -q64 prog1.c -o prog1
iceberg2 4% mpcc_r -q64 prog2.c -o prog2

The mpmd model requires a command file to describe which tasks will run on each node. The example below uses prog1 for task 0 and prog2 for the other 7 tasks.

iceberg2 5% more cmdfile 
prog1
prog2
prog2
prog2
prog2
prog2
prog2
prog2

We use the following flags to run the mpmd applications:

  1. -cmdfile specifies the list of executables to run.
  2. -pgmmodel specifies the program model to use. The default is spmd (single program multiple data); in this case we will use mpmd.

The complete poe statement becomes:

poe -pgmmodel mpmd -cmdfile cmdfile

When the sample code is run, the following output is generated.

iceberg2 6% more Output
Hello from 0
Hello from 1
<1> 8: Hello Processor 1 from Processor 0
Hello from 2
<2> 8: Hello Processor 2 from Processor 0
Hello from 3
<3> 8: Hello Processor 3 from Processor 0
Hello from 4
<4> 8: Hello Processor 4 from Processor 0
Hello from 5
<5> 8: Hello Processor 5 from Processor 0
Hello from 6
<6> 8: Hello Processor 6 from Processor 0
Hello from 7
<7> 8: Hello Processor 7 from Processor 0

Coupling two existing MPI codes will take a bit more work. In the next part of this series we will consider that task.

Job Notification with PBS and Loadleveler

Both Loadleveler and PBS provide options which will send email messages when a job starts and/or ends. These messages contain information which does not appear in the stderr and stdout by default, such as wall clock run time and exit status.

Below are PBS and Loadleveler examples:

PBS version:

klondike 1% cat mailme.pbs
#PBS -q default
#PBS -l mppe=4
#PBS -m abe
# -m sets mail options
#   a  mail is sent when the job is terminated by the batch system
#   b  mail is sent when the job begins execution
#   e  mail is sent when the job terminates
# one or more of the options above can be used.
# -M specifies a list of users that will receive the notification 
#    message 
#   if you use your ARSC username, be sure to include the domain 
#   (i.e., your full e-mail address) to ensure proper delivery of the message.

Loadleveler version

iceberg 1% cat mailme.cmd
# @ error          = error
# @ output         = output
# @ notification   = always
#   always    mail is sent when the job starts and when it ends 
#             regardless of how it ends
#   error     mail is sent when the job exits in error only
#   start     mail is sent when the job starts only
#   never     mail is never sent
#   complete  mail is sent when the job exits without error. 
# @ notify_user    =
#   if you use your ARSC username, be sure to include the domain 
#   (i.e., your full e-mail address) to ensure proper delivery of the message. 
# @ job_type       = parallel
# @ node           = 1   
# @ tasks_per_node = 8 
# @ network.MPI    = sn_all,shared,us
# @ node_usage     = not_shared
# @ class          = standard 
# @ queue


File Access Times: mv versus cp

If you are accustomed to using mv instead of cp to move data from long-term storage (e.g., $ARCHIVE) to temporary storage (e.g., $WRKDIR or $SCRATCH), be aware that mv will not update the access time for files. Depending on when the files were last accessed, this could mean that the files you just moved are eligible for purging immediately after being moved. You can avoid this problem by using cp.


cp -r $ARCHIVE/mydirectory $WRKDIR 

Quick-Tip Q & A

A: [[ Here's a challenge for vi and vim experts: vi/vim have yank 
   [[ and put. I want a new operation, "replace."

Here's an okay solution, for VIM only. Unlike vi, vim lets you use named
buffers in maps. It also lets you define multi-character map names
(to use one, you must type the name fast enough, but not too fast).

To try this, add the following to your .vimrc file... (A double-quote 
in column 1 starts a .vimrc comment.)

" VIM maps to implement a "replace" operation using the named 
" buffer "y.
" This redefines the built-in vi/vim command Y, which normally yanks one
" line.   If you've already mapped the normally unused q, you should
" probably delete that map.  To use, yank the replacement text into the
" buffer "y (using the Y map, below), and then use one of the other maps
" to copy it.
" --
" Type Y followed by a cursor movement command to yank text into 
"   buffer "y.  E.g., Y$ yanks to the end of the line.
map Y "yy
" qp and qP put the contents of "y, after or before the current cursor
"   position, respectively.
map qp "yp
map qP "yP
" qw deletes 1 word and puts the contents of "y (effectively replacing it)
map qw dw"yP
" q2w, q3w, and ql replace 2 words, 3 words, and the entire 
"   line, respectively.
map q2w d2w"yP
map q3w d3w"yP
map ql dd"yP
" qm replaces all text up to the mark, m. To use, first move cursor to end
"   of section to replace and hit mm.  This sets the mark, m.  Move cursor
"   to beginning of section to replace and type qm.
map qm d`m"yP

Q:  I would like to build a tar file of all of the files in a directory
    and subdirectories, except for the *.o and *.nc files.  Is there a 
    way to selectively add the files I want to a tar file?

[[ Answers, Questions, and Tips Graciously Accepted ]]

Current Editors:
Ed Kornkven ARSC HPC Specialist ph: 907-450-8669
Kate Hedstrom ARSC Oceanographic Specialist ph: 907-450-8678
Arctic Region Supercomputing Center
University of Alaska Fairbanks
PO Box 756020
Fairbanks AK 99775-6020
E-mail Subscriptions:
Archives:
    Back issues of the ASCII e-mail edition of the ARSC T3D/T3E/HPC Users' Newsletter are available by request. Please contact the editors.