User Level Scheduling in the Grid: an outline of the technology potential and research directions

User Level Scheduling (ULS) techniques have been applied very successfully
in a number of application areas such as bio-informatics, image processing,
telecommunications and physics simulation. ULS has helped to improve the
Quality of Service (QoS) in the Grid, which has been experienced as reduced
job turnaround time, more efficient usage of resources, and more
predictable, reliable and stable application execution. To be universally
adopted, however, the ULS techniques must be shown to be compatible with
the fundamental assumptions of the Grid computing model, such as respect
for resource usage policies, fair-share towards other users, traceability
of users' activities and so on. In this talk we will outline the main
benefits and possible pitfalls of the ULS techniques. We will also
introduce initial research ideas for modeling and measuring the Quality of
Service on the Grid and for analyzing the impact of ULS on other users
(fair-share). Finally we will present ideas for enhanced support for
certain applications such as iterative algorithms or parameter sweeps.

This presentation builds on "The Quality of Service on the Grid with
user-level scheduling", presented at UvA on September 1st, 2006.

Recall of the current activities

Placeholders and late binding

  • the technology is also called: placeholder, late binding, pilot jobs
  • you do not send a specific job to the resource: you acquire the resource, assign the job at runtime, and free the resource when you are done (see the sketch after this list)
  • some examples:
    • HEP production systems (centralized task queue, the server acts on behalf of the user): AliEn (ALICE), DIRAC (LHCb), PanDA (ATLAS)
    • Condor glide-ins (build a virtual Condor pool from Globus resources)
    • BOINC (CPU cycle scavenging)
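
A minimal sketch of the late-binding idea (purely illustrative names, not the actual AliEn/DIRAC/PanDA or DIANE code): a pilot job starts on the acquired resource and keeps pulling work from a task queue until the work runs out or the slot expires.

  import time
  import queue

  def run_pilot(task_queue, slot_end_time, execute):
      """Pilot loop: the resource is acquired first; concrete tasks are
      bound to it only at runtime (late binding)."""
      while time.time() < slot_end_time:
          try:
              task = task_queue.get_nowait()   # bind a task to this slot now
          except queue.Empty:
              break                            # no more work: release the slot
          execute(task)                        # run the application payload

In a real system the task queue is a central service contacted over the network (the centralized task queue mentioned above), not an in-process queue.Queue.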

User Level Scheduling

  • it is the late-binding technology described above
  • the scheduler has the application knowledge (it may make better error recovery or load balancing decisions; see the sketch after this list)
  • it runs in the user space (resource usage remains accountable and traceability is not compromised)
  • it is capable of creating transient/volatile overlays on top of the regular infrastructure ("virtual clusters")
  • DIANE implementation:
    • not specific to any particular technology or infrastructure (Grid, LSF/PBS, explicit IP host list + mixing of the resources)
    • portable (python and CORBA)
    • self-contained and small distribution with fully automatic installation
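
As a complement to the pilot loop above, a hypothetical sketch of the scheduler side: because it runs in user space and knows the application, it can re-queue failed tasks onto other workers without resubmitting anything to the Grid (the workers pool interface and its acquire/release/run methods are invented for illustration).

  def schedule(tasks, workers, max_retries=3):
      """Very simplified master loop: hand tasks to free workers in the
      overlay and re-queue failures (application-aware error recovery)."""
      pending = list(tasks)
      retries = {}
      results = []
      while pending:
          task = pending.pop(0)
          worker = workers.acquire()      # a free pilot from the virtual cluster
          ok, output = worker.run(task)
          if ok:
              results.append(output)
          elif retries.get(task, 0) < max_retries:
              retries[task] = retries.get(task, 0) + 1
              pending.append(task)        # retry elsewhere, e.g. a misconfigured site
          workers.release(worker)
      return results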

Outstanding issues of User Level Scheduling

  • Improvement of QoS characteristics
    • extra reliability (fail-safety and application-specific fine tuning)
    • reduction of turnaround time
    • stabilization of the output inter-arrival rate (which is also more predictable)
  • Potential flaws
    • effect on fair-share: would other users be penalized by ULS jobs?
    • potential harmfulness of redundant batch requests

Area of applicability

Research Directions

Grid slot model

  • slot i is defined by: tq(i) = queuing time, ta(i) = available computing time (wallclock), p(i) = weight (power) of the resource
  • W is the total workload of the job
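
To make the symbols concrete, one natural reading of the model (an assumption of this write-up, not part of the slot definition itself) is that a slot contributes at most ta(i)*p(i) units of work and that its output arrives between tq(i) and tq(i)+ta(i) after submission:

  \[ c_i = ta_i \cdot p_i, \qquad \text{output of slot } i \text{ arrives within } [\, tq_i,\; tq_i + ta_i \,] \]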

Estimation of the number of needed slots
  • We can derive N (the minimal number of slots needed) from this equation:
    \[ W + \mathrm{overhead} = \sum_{i=1}^{N} ta_i \cdot p_i \]
  • overhead represents the ULS overhead (networking, scheduling) and the adjustment for the unit of execution
  • additional complication: p(i) may change with time (time-sharing on the worker node with other jobs)
  • either acquire slots with large redundancy up front (the current approach) or add a capability to acquire more resources on demand while the jobs are running (a future option)
  • currently we make a rough estimate of the total CPU demand and then request roughly twice that many slots, assuming some (largely fictitious) average processor power; a sketch of this estimation follows the list
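
A minimal sketch (hypothetical helper, assuming the slots are offered in some fixed order with known ta and p) of deriving N by accumulating slot capacities until the workload plus the overhead is covered:

  def minimal_slots(slots, W, overhead):
      """slots: iterable of (ta, p) pairs; returns the minimal N such that
      the accumulated capacity sum(ta*p) covers W + overhead."""
      needed = W + overhead
      total = 0.0
      for n, (ta, p) in enumerate(slots, start=1):
          total += ta * p
          if total >= needed:
              return n
      raise ValueError("not enough slots offered to cover the workload")

  # example: 1000 units of work, 50 units of ULS overhead,
  # ten slots of 240 wallclock units each at relative power 1.0 -> N = 5
  print(minimal_slots([(240.0, 1.0)] * 10, W=1000.0, overhead=50.0))

If p(i) drifts over time (time-sharing on the worker node), the same accumulation can only be done a posteriori, which is one argument for acquiring resources on demand rather than all up front.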

Estimation of the turnaround time
  • Currently we do not predict tq, the queuing time; however:
    • promising techniques exist (e.g. BMBP, the Binomial Method Batch Predictor), relying on long traces from batch-queue logs and parallel workload archives
    • we have a wealth of monitoring data (Dashboard)
    • we try to capture the 'background' by sending very short jobs a few times daily to monitor the responsiveness of the system (in 3 different VOs)
  • Provided that we can get reliable estimates of tq on the Grid (which has not been tried yet, as far as I know), the turnaround time could be estimated as well; a crude quantile-based reading of the probe data is sketched after this list
  • In real applications the user may also be interested in partial output (e.g. histograms)
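
In the spirit of quantile-based predictors such as BMBP (the real method is statistically more careful), the probe jobs already give a crude, purely empirical bound on tq; the helper below is a sketch with invented names:

  def empirical_tq_bound(probe_waits, quantile=0.95):
      """probe_waits: observed queuing times (seconds) of the short probe jobs.
      Returns an empirical quantile as a rough upper bound on tq."""
      waits = sorted(probe_waits)
      index = min(int(quantile * len(waits)), len(waits) - 1)
      return waits[index]

  # example: queuing times collected by the daily probe jobs of one VO
  print(empirical_tq_bound([30, 45, 60, 90, 120, 300, 600, 1800], quantile=0.9))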

Fault-tolerance and measure of the reliability
  • Reliability should be measured against the "application's reliability background": minimize the infrastructure faults which have no relation to application problems
    • If application has an intrinsic problem then there is not so much we can do
    • If there is a configuration problem on the sites, then we can enhance reliability of the system as observed by the user by providing fault-tolerance
    • Additionally, we can customize the fault-tolerance behaviour
  • How to measure it?
    • Classify the faults and disregard the intrinsic application faults; the failure ratio of the remaining jobs is the reliability measure (a sketch follows this list)
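
One way to turn the classification above into a number (a sketch; the hard part is the fault classification itself, and the outcome labels below are invented): disregard the intrinsic application faults and take the success fraction of the remaining jobs.

  def reliability(job_outcomes):
      """job_outcomes: e.g. 'ok', 'app_fault', 'infra_fault' per job.
      Application-intrinsic faults are disregarded; reliability is the
      fraction of the remaining jobs that did not fail."""
      considered = [o for o in job_outcomes if o != 'app_fault']
      if not considered:
          return 1.0
      failures = sum(1 for o in considered if o != 'ok')
      return 1.0 - failures / len(considered)

  # 90 successes, 5 application faults, 5 infrastructure faults -> about 0.947
  print(reliability(['ok'] * 90 + ['app_fault'] * 5 + ['infra_fault'] * 5))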

Fair share

  • would other users be penalized by ULS jobs?
  • would fair share policies defined in the CE be compromised?
  • effect on fair-share:
    • fair-share can be measured (find the paper)
    • can be modeled and simulated
  • potential harmfulness of the redundant batch requests
    • pure redundant requests (submit n, execute 1, cancel n-1) have been studied (->):
      • jobs which do not use redundant requests are penalized (their stretch increases linearly with the number of jobs using redundant requests)
      • load on middleware may be a problem
    • ULS has a certain degree of redundancy (submit n, execute k, cancel n-k)
      • measure the harmfulness in this case
      • how to cope with it: a meta-submitter would steadily increase the number of submissions according to need
      • this is clearly in conflict with minimizing the global turnaround time (QoS); what should the balance be?
    • estimate the level of redundancy (see the sketch after this list)
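
A hypothetical policy sketch for both points: quantify the redundancy of a run as the ratio of submitted placeholders to those that actually executed work, and let the meta-submitter top up submissions gradually instead of using a fixed large factor (all names and the margin parameter are illustrative):

  def redundancy_level(submitted, executed):
      """n submitted placeholders per k that actually executed work."""
      return submitted / max(executed, 1)

  def extra_submissions(tasks_waiting, slots_queued, slots_running, margin=1.2):
      """Submit only enough extra placeholders to cover the waiting tasks,
      with a small safety margin, rather than a fixed redundancy factor."""
      covered = slots_queued + slots_running
      wanted = int(tasks_waiting * margin)
      return max(0, wanted - covered)

  print(redundancy_level(submitted=120, executed=90))                            # ~1.33
  print(extra_submissions(tasks_waiting=100, slots_queued=40, slots_running=50)) # 30

The margin encodes the QoS/fair-share trade-off mentioned above: a larger margin lowers turnaround time but increases the redundant load on the batch systems.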

Engineering Directions

Use-Cases

-- JakubMoscicki - 06 Dec 2006

