Open MPI
Open MPI directs UNIX standard output and error from remote nodes to the node that invoked mpirun and prints it on the standard output/error of mpirun. Local processes inherit the standard output/error of mpirun and transfer to it directly.
It is possible to redirect standard I/O for Open MPI applications by using the typical shell redirection procedure on mpirun:
$ mpirun -np 2 my_app < my_input > my_output
Note that in this example only the MPI_COMM_WORLD rank 0 process will receive the stream from my_input on stdin. The stdin on all the other nodes will be tied to /dev/null. However, the stdout from all nodes will be collected into the my_output file.
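The following sketch illustrates this behavior using cat as a stand-in application (the host names and file names are hypothetical):
$ echo "hello" > my_input
$ mpirun -np 4 -H node1,node2 cat < my_input > my_output
$ cat my_output
hello
Here my_output contains a single hello line: only the rank 0 cat receives the input stream, while the other three read end-of-file from /dev/null and produce no output.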
Environment for Node Programs
The following information can be found in the Open MPI man page and is repeated here for ease of use.
Remote Execution
Open MPI requires that the PATH environment variable be set to find executables on remote nodes (this is typically only necessary in rsh- or ssh-based environments; batch/scheduled environments typically copy the current environment to the execution of remote jobs, so if the current environment has PATH and/or LD_LIBRARY_PATH set properly, the remote nodes will also have them set properly). If Open MPI was compiled with shared library support, it may also be necessary to have the LD_LIBRARY_PATH environment variable set on remote nodes as well (especially to find the shared libraries required to run user MPI applications).
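For example, in an ssh-based environment the variables can be exported before launch; the install path below is hypothetical, and the -x option (which forwards the named environment variable to the remote nodes, as described in the Open MPI mpirun man page) is one way to propagate LD_LIBRARY_PATH:
$ export PATH=/usr/mpi/gcc/openmpi-1.4.3/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/mpi/gcc/openmpi-1.4.3/lib64:$LD_LIBRARY_PATH
$ mpirun -np 2 -H node1,node2 -x LD_LIBRARY_PATH my_app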
It is not always desirable or possible to edit shell startup files to set PATH and/or LD_LIBRARY_PATH. The --prefix option is provided for some simple configurations where this is not possible.
The --prefix option takes a single argument: the base directory on the remote node where Open MPI is installed. Open MPI will use this directory to set the remote PATH and LD_LIBRARY_PATH before executing any Open MPI or user applications. This allows running Open MPI jobs without having pre-configured the PATH and LD_LIBRARY_PATH on the remote nodes.
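For example, if Open MPI were installed under /usr/mpi/gcc/openmpi-1.4.3 on the remote nodes (a hypothetical path), the job could be launched as:
$ mpirun --prefix /usr/mpi/gcc/openmpi-1.4.3 -np 2 -H node1,node2 my_app
Open MPI derives the remote PATH and LD_LIBRARY_PATH from the bin and lib subdirectories of the given base directory.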
NOTE
The node that invoked mpirun need not be the same as the node where the MPI_COMM_WORLD rank 0 process resides. Open MPI handles the redirection of mpirun's standard input to the rank 0 process.