UNICORE MPI Examples

This Wiki page gives some examples of how UNICORE's execution environments can be used to submit parallel jobs easily. The client used is the UNICORE commandline client UCC.

For general background on how UNICORE supports parallel applications (also with the graphical client), see the attached presentation.

This is part of the EMI MPI work; for details, see the EMI-MPI page.

Site prerequisite: OpenMPI execution environment

It is assumed that the site administrator has installed OpenMPI and configured an execution environment called "OpenMPI" in the UNICORE server. The backend configuration file (the "IDB" in UNICORE speak) needs to contain an entry such as the following:

<jsdl-u:ExecutionEnvironment xmlns:jsdl-u="http://www.unicore.eu/unicore/jsdl-extensions">
  <jsdl-u:Name>OpenMPI</jsdl-u:Name>
  <jsdl-u:Description>Run an OpenMPI application</jsdl-u:Description>
  <jsdl-u:ExecutableName>/vsgc/software/openmpi/bin/mpiexec</jsdl-u:ExecutableName>
  <jsdl-u:Argument>
    <jsdl-u:Name>Processes</jsdl-u:Name>
    <jsdl-u:IncarnatedValue>-np </jsdl-u:IncarnatedValue>
    <jsdl-u:ArgumentMetadata>
      <jsdl-u:Description>The number of processes</jsdl-u:Description>
      <jsdl-u:Type>int</jsdl-u:Type>
    </jsdl-u:ArgumentMetadata>
  </jsdl-u:Argument>
  <jsdl-u:Argument>
    <jsdl-u:Name>Export Environment Variable</jsdl-u:Name>
    <jsdl-u:IncarnatedValue>-x </jsdl-u:IncarnatedValue>
    <jsdl-u:ArgumentMetadata>
      <jsdl-u:Description>Export an environment variable (e.g., "foo=bar" exports the environment variable named "foo" and sets its value to "bar" in the started processes)</jsdl-u:Description>
      <jsdl-u:Type>string</jsdl-u:Type>
    </jsdl-u:ArgumentMetadata>
  </jsdl-u:Argument>
  <jsdl-u:Option>
    <jsdl-u:Name>Verbose</jsdl-u:Name>
    <jsdl-u:IncarnatedValue>-v</jsdl-u:IncarnatedValue>
    <jsdl-u:OptionMetadata>
      <jsdl-u:Description>Be verbose</jsdl-u:Description>
    </jsdl-u:OptionMetadata>
  </jsdl-u:Option>
</jsdl-u:ExecutionEnvironment>
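
With such an entry in place, the server incarnates the user's job into an mpiexec command line: the executable name is prepended, and each argument or option the user selects is expanded to its incarnated value. For the first example below (four processes, no extra options), the generated command would look roughly like

  /vsgc/software/openmpi/bin/mpiexec -np 4 ./hello.mpi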

'Hello world' with precompiled binary

The user's job file (for the commandline client) looks like this:

 {
  Executable: "./hello.mpi",
  Imports: [
    {From: "/myfiles/hello.mpi", To: "hello.mpi" }, 
  ],
  Resources:{ CPUsPerNode: 2, Nodes: 2, },
  Execution environment: {
    Name: OpenMPI,
    Arguments: { Processes: 4,  },
  },
}

In this case the binary is available on the user's local workstation (in /myfiles/hello.mpi) and needs to be staged in to the job's working directory.

The job will run on two nodes with two CPUs allocated per node, and mpiexec will start four processes in total (with these settings, two on each node).
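
The other entries defined in the IDB above can be used in the same way. As a sketch (assuming the UCC syntax for execution environment options; details may differ slightly between versions), a job that additionally exports an environment variable to the MPI processes and enables verbose output could look like this:

 {
  Executable: "./hello.mpi",
  Imports: [
    {From: "/myfiles/hello.mpi", To: "hello.mpi" },
  ],
  Resources:{ CPUsPerNode: 2, Nodes: 2, },
  Execution environment: {
    Name: OpenMPI,
    Arguments: { Processes: 4, "Export Environment Variable": "foo=bar", },
    Options: [ Verbose, ],
  },
}

This should incarnate to something like "mpiexec -np 4 -x foo=bar -v ./hello.mpi".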

The job can be executed simply by

  ucc run myjob.u
and the client will automatically send the job to a site that offers the OpenMPI execution environment and provides the required number of nodes/CPUs.
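
If you want to choose the target site yourself instead of letting the client broker the job, UCC accepts an explicit site name (MYSITE is a placeholder here):

  ucc run -s MYSITE myjob.u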

Of course there are many variations of this example, but in principle this is how it works.

'Hello world' without precompiled binary

In contrast to the previous case, no precompiled binary is available, so the source file must be compiled first. The user can use a precommand to compile the source file on the execution site.

 {
  Executable: "./hello.mpi",
  Imports: [
    {From: "/myfiles/hello.c", To: "hello.ci" }, 
  ],
  Resources:{ CPUsPerNode: 2, Nodes: 2, },
  Execution environment: {
    Name: OpenMPI,
    Arguments: { Processes: 4,  },
  
    # this compiles the source file
    User precommand: "mpicc hello.c -o hello.mpi",
  },
}

Here we assume that the MPI executables are on the user's PATH; otherwise, the user would need to give the full installation path of mpicc.
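
For reference, a minimal hello.c that could be compiled this way is a standard MPI "hello world" (not part of the original page, just a plain MPI C example):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    /* initialize the MPI runtime */
    MPI_Init(&argc, &argv);
    /* rank of this process and total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}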

Complex case: application like Gromacs

Strictly speaking, execution environments are not needed here: for applications, UNICORE has its own abstraction mechanism, so the user need not know any invocation details at all. If the application can leverage multiple nodes/CPUs, this is already defined in the backend config (IDB). All the user has to do is select the correct application and choose the number of CPUs they want to use.

A sample job could look like this:

 {
  ApplicationName: "Gromacs",
  Arguments: [...],
  Environment: [...],
  Imports: [
    ...
  ],
  Resources:{ CPUsPerNode: 2, Nodes: 2, }, 
}
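
For illustration only, the site-side definition could be an IDB application entry along the following lines. This is a minimal sketch assuming the UNICORE 6 XNJS IDB format; the installation path and version are hypothetical, and a real entry would point to an MPI-enabled binary or a wrapper that performs the mpiexec invocation:

<idb:IDBApplication xmlns:idb="http://www.fz-juelich.de/unicore/xnjs/idb">
  <idb:ApplicationName>Gromacs</idb:ApplicationName>
  <idb:ApplicationVersion>4.5</idb:ApplicationVersion>
  <jsdl:POSIXApplication xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
    <!-- hypothetical path; in practice this would be an MPI-enabled
         Gromacs binary or a wrapper script that calls mpiexec -->
    <jsdl:Executable>/usr/local/gromacs/bin/mdrun</jsdl:Executable>
  </jsdl:POSIXApplication>
</idb:IDBApplication>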

-- BerndSchuller - 12-Nov-2010

Topic attachments
MPI_in_UNICORE.pdf (566.4 K, 2010-11-12, BerndThomasSchullerExCern): Introduction to parallel job support in UNICORE