- OpenMPI is 'the new kid on the block' and may become the de facto implementation of MPI (?)
- OpenMPI merges (is the successor of) LAM-MPI, PACX-MPI (MPI for grids) and FT-MPI
- "OpenMPI - recommended for macs, compiles smoothly but takes quite a while to compile. Further information is available at http://en.wikipedia.org/wiki/Open_MPI
"
- "MPICH - a portable, open source MPICH. Can be emerged on Gentoo (ssh) where it will install in /usr. Further information on MPICH is available at: http://www-unix.mcs.anl.gov/mpi/mpich/
, including documentation and manual pages."
- "All implementations conform to a set of MPI standards. Thus, they sometimes differ slightly in implementation, but normally a code will run on all of them (though you will need to compile it using the correct libraries)."
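As a rough illustration of that source compatibility, the minimal MPI program below should build unchanged against MPICH, LAM/MPI or Open MPI; only the wrapper compiler and launcher commands belong to the particular implementation. The file name hello.c and the process count are just illustrative.

/* hello.c - minimal sketch of a portable MPI program.
 * Build and run (command names are the common wrapper conventions):
 *   mpicc hello.c -o hello
 *   mpirun -np 4 ./hello
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &len);     /* host the process runs on */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}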
- At Dartmouth, they state:
- Examples of Different Implementations
- MPICH - developed by Argonne National Labs (freeware)
- LAM/MPI - developed by Indiana, OSC, Notre Dame (freeware)
- MPI/Pro - commercial product
- Apple's Xgrid
- OpenMPI - recent project, MPI-2 compliant, thread safe
- Similarities in Various Implementations
- source code compatibility (except parallel I/O)
- programs should compile and run as is
- support for heterogeneous parallel architectures
- clusters, groups of workstations, SMP computers, grids
- Differences in Various Implementations
- commands for compiling and linking
- how to launch an MPI program
- parallel I/O (from MPI-2)
- debugging
- Programming Approaches
- SPMD - Single Program Multiple Data (same program on all processors)
- MPMD - Multiple Program Multiple Data (different programs on different processors)
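To make the SPMD/MPMD distinction concrete, here is a hedged sketch of the SPMD style: every process runs the same executable and its rank decides its role at run time. The master/worker split and the rank*rank payload are purely illustrative; an MPMD job would instead combine separate executables on one launcher command line (e.g. mpirun -np 1 master : -np 3 worker).

/* spmd.c - sketch of the SPMD approach: all processes run this same
 * program, and the rank chooses the behaviour. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* "master" branch: collect one number from every other rank */
        int i, value;
        for (i = 1; i < size; i++) {
            MPI_Recv(&value, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("master received %d from rank %d\n", value, i);
        }
    } else {
        /* "worker" branch: send an illustrative value to the master */
        int value = rank * rank;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}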
MPICH on LCG howto
- "Unlike the LCG middleware, gLite WMS is able to support both configurations (shared and not shared) automatically for both LSF and Torque. With gLite-1.4 job wrapper will take care to mirror the working directory in all nodes dedicated to the mpi job if the home are not shared."
JMS-MPI Interoperability
How do I run with the SLURM and PBS/Torque launchers?
If support for these systems is included in your Open MPI installation (which you can check with the ompi_info command -- look for components named "slurm" and/or "tm"), Open MPI will automatically detect when it is running inside such jobs and will just "do the Right Thing."
Specifically, if you execute an mpirun command in a SLURM or a PBS/Torque job, it will automatically use the SLURM- or PBS/Torque-native mechanisms to launch and kill processes. There is no need to specify what nodes to run on -- Open MPI will obtain this information directly from SLURM or PBS/Torque. For example:
# Allocate a SLURM job with 4 nodes
shell$ srun -N 4 -A
# Now run a 4-process Open MPI job
shell$ mpirun -np 4 a.out
This will run the 4 MPI processes on the nodes that were allocated by SLURM. Similar results occur with PBS/Torque:
# Allocate a PBS job with 4 nodes
shell$ qsub -I -lnodes=4
# Now run a 4-process Open MPI job
shell$ mpirun -np 4 a.out
Other stuff
--
RichardDeJong - 14 Jun 2006