Abstract (as announced in UvA)

Friday, 01 September 2006, at 10.00, Room .............

               Jakub Moscicki, CERN - IT

The Quality of Service on the Grid with user-level scheduling.

Currently the largest Grids lack an appropriate level of Quality of
Service (QoS) in two ways: the infrastructure and middleware are not
reliable enough, and a simple, batch-oriented processing model is
suboptimal for a number of applications. User-level scheduling is a
lightweight software technique that enables new capabilities to be
added, and QoS characteristics and reliability to be improved, on top
of the existing Grid middleware and infrastructure.

User-level scheduling techniques may be used to reduce the job
turnaround time and to provide a more stable and predictable job
output rate. Splitting the processing into many fine-grained tasks
improves the load balancing and ensures that the workers are used
efficiently. As a result, the computing resources may be returned to
the Grid faster. We discuss the implications of this technique for the
users, the application developers and the resource providers.
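The load-balancing effect of fine-grained splitting can be illustrated with a simple pull model: workers of different speeds take the next task as soon as they are free, so faster workers naturally absorb more of the work and no worker is left holding a large, unsplittable chunk. This is only a toy simulation; the function and its names are illustrative, not part of DIANE's actual interface.

```python
import heapq

def pull_schedule(task_costs, worker_speeds):
    """Simulate workers pulling tasks from a shared queue.

    Each worker takes the next task as soon as it is free;
    returns the makespan (time when the last task finishes).
    """
    # priority queue of (time_when_free, worker_speed)
    workers = [(0.0, s) for s in worker_speeds]
    heapq.heapify(workers)
    for cost in task_costs:
        free_at, speed = heapq.heappop(workers)
        heapq.heappush(workers, (free_at + cost / speed, speed))
    return max(t for t, _ in workers)

# 100 units of work on two workers with speeds 1 and 3:
coarse = pull_schedule([50.0, 50.0], [1.0, 3.0])   # two big chunks
fine = pull_schedule([1.0] * 100, [1.0, 3.0])      # 100 small tasks
```

With two coarse chunks the slow worker dominates the makespan (50 time units); with 100 fine-grained tasks the fast worker absorbs roughly three quarters of them and the makespan drops to about 25.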

Applications which have been interfaced with the user-level scheduler
include High Energy Physics data analysis, Monte Carlo simulation,
biomedical applications and others. Distributed frequency analysis and
AutoDock-based drug discovery are the recent large-scale activities
and are summarized below.

In May and June, CERN successfully supported a series of large-scale
data-processing activities carried out by the International
Telecommunication Union (ITU) as part of the ITU's Regional
Radiocommunication Conference. Several sites of the EGEE
infrastructure provided a computing Grid of more than 400 PCs to work
on each analysis in parallel, and the processing was conducted using
the user-level scheduling layer. The system completed more than
200 000 very short frequency-analysis jobs (clustered in around
40 000 processing tasks) in around one hour, proving that on-demand
computing with a short deadline is possible on the Grid.

Earlier this year the same technique was used to perform a sizeable
fraction of an in silico drug discovery application using the EGEE and
other Grid infrastructures. The challenge was to analyze possible drug
components against the avian flu virus H5N1. This activity showed
that a user-level scheduler can improve the distribution efficiency on
the Grid from below 40% to above 80% by optimizing the allocation of
the fine-grained computing tasks. The automatic error-recovery
mechanisms proved efficient over extended periods of continuous
operation (30 days).
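At its core, the automatic error recovery mentioned above amounts to resubmitting failed tasks until they succeed or a retry limit is reached. A minimal sketch; the function signature and the `max_retries` parameter are assumptions for illustration, not DIANE's actual interface.

```python
def run_with_recovery(tasks, execute, max_retries=3):
    """Run tasks, resubmitting failures up to max_retries times each.

    `execute(task)` returns a result or raises on failure.
    Returns (results, permanently_failed).
    """
    results, failed = {}, []
    pending = [(t, 0) for t in tasks]
    while pending:
        task, attempts = pending.pop(0)
        try:
            results[task] = execute(task)
        except Exception:
            if attempts + 1 < max_retries:
                pending.append((task, attempts + 1))  # resubmit at queue tail
            else:
                failed.append(task)                    # give up on this task
    return results, failed
```

Resubmitting at the tail of the queue lets healthy tasks keep flowing while a transiently failing task waits for another attempt, which is what makes month-long unattended runs feasible.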


[1] C. Germain-Renaud, C. Loomis, J.T. Moscicki, R. Texier:
    "Scheduling for Responsive Grids", to appear in: Journal of Grid
    Computing (special issue on EGEE User Forum), 2006

[2] J.T. Moscicki: "Distributed Geant4 Simulation in Medical and Space
    Science Applications using DIANE framework and the Grid", Nuclear
    Physics B, 125 (2003) 327-331

[3] Web links:
     - http://cern.ch/diane
     - http://cern.ch/twiki//bin/view/ArdaGrid/WebHome

Answers to questions at the seminar

  • Are there queues at the worker agents? No.
  • Jobs are unrelated on the Grid; is that an application assumption? Yes, jobs are unrelated on the Grid: the RB does not handle DAGs, and gLite has bulk submission but no further relations between the jobs.

Seminar contents (draft)


explain ARDA: between middleware and users

  • particular environment: EGEE and LCG (largest grids)

questions to the public:

  • who of you has had contact with Grids, as:
    • resource provider (administrator, deployment of middleware, operation of the site)
    • application developer (VO operations, application porting)
    • user?

why using grids:

  • quantitative: provide more resources than available locally
  • qualitative: provide more special resources than available locally
    • example: need to analyse data which is not available locally
    • example2: need to use a device (e.g. a scientific instrument) which is not available locally
    • example3: use a special version of software, or an architecture, not available locally

raison d'etre of Grid computing

  • a user is able to run a job faster than locally
  • a user is able to run a bigger job in the same time
  • a user is not able to run a job locally

scope of considered applications:

  • compute-intensive, loosely to moderately coupled: no or few synchronization points
  • data movements not considered: assumed that data is available locally
    • either by policy (eg. LHCb computing model for analysis assumes all data replicated in Tier1 sites)
    • or on demand, before the job starts
  • low communication-to-computation ratio: more computing than communication

application cases for QoS:

  • data analysis -> extraction of parameters from data (quasi-interactive)
    • you may want to see the evolution of the parameter (e.g. histogram) and take a decision to change input cuts,...
  • simulation -> obtaining parameters, building images, etc. for example in radiotherapy
    • you may want to see the energy deposits in the tissues given geometry approximation, radioactive dose etc.
  • testing activities: geant 4 regression test (parameter sweep)
    • you need to run a number of simulations in various configurations
  • avian flu : check a number of drug candidates docking proteins...

QoS concepts:

  • QoS historically refers to networking: latency, throughput, jitter, packet loss
  • QoS efforts in parallel/distributed computing are focused on the network issues (communication) as well
    • example: MPICH-GQ, GARA
  • here we do not consider network to be a bottleneck:
    • the applications are NOT network intensive
    • we can use differentiated network services (SA2 SLAs if needed)

We look at the QoS in the following way:

  • the computing system provides an appropriate QoS if it responds in an acceptable way to the user and is automatically capable of maintaining the processing goals defined by the user
  • response is (depending on the nature of the application: interactive, batch, mixed)
    • obtaining final and partial results of the processing (for example for the on-the-fly analysis)
    • estimation/prediction of how the processing will evolve
    • automatic and efficient coping with failures
  • goals may be specified as:
    • the turnaround time (typically minimize the total execution time of the job)
    • the response time (time for the first N partial results to arrive)
      • response curve (e.g. filling histograms with events -> significance of individual partial results decreases with time)
    • the order of arrival of the partial results (application specific)
      • this may not be guaranteed on the Grid (nor efficiently by Condor DAG either)...
    • the rate of arrival of the partial results (jitter)

  • in the Grid (computing system) the basic interaction of a user is sending jobs (either directly or indirectly via a portal)
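The goals above can be read off directly from the arrival times of partial results. The sketch below computes toy versions of these metrics; the function name and the choice to measure jitter as the standard deviation of inter-arrival gaps are my own, purely illustrative.

```python
def qos_metrics(arrival_times, n_first=10):
    """Compute simple QoS indicators from partial-result arrival times."""
    times = sorted(arrival_times)
    turnaround = times[-1]                           # last result in = job turnaround
    response = times[min(n_first, len(times)) - 1]   # time for the first N results
    gaps = [b - a for a, b in zip(times, times[1:])] # inter-arrival gaps
    mean = sum(gaps) / len(gaps) if gaps else 0.0
    # jitter: spread of the gaps around their mean (std deviation)
    jitter = (sum((g - mean) ** 2 for g in gaps) / len(gaps)) ** 0.5 if gaps else 0.0
    return {"turnaround": turnaround, "response": response, "jitter": jitter}
```

For a perfectly regular stream (one result per time unit) the jitter is zero; bursty Grid output shows up immediately as a large gap spread even when the turnaround time is unchanged.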

IDEA: when the worker number goes low, the master may notify a user or submit more workers, or draw some from a floating pool
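The idea above is a simple feedback loop: the master compares the live worker count against a target and reacts by submitting more agents or alerting the user. A sketch with invented names (`submit`, `notify`, the `margin` threshold); none of this is an existing master API.

```python
def maintain_pool(alive, target, submit, notify, margin=0.5):
    """React when the worker pool shrinks below the target size.

    `alive`: current number of live worker agents.
    `submit(n)`: submit n new worker agents to the Grid.
    `notify(msg)`: alert the user when the deficit is severe.
    Returns the number of workers submitted.
    """
    deficit = target - alive
    if deficit <= 0:
        return 0                      # pool is healthy, nothing to do
    if alive < target * margin:
        notify(f"worker pool degraded: {alive}/{target} alive")
    submit(deficit)                   # top the pool back up
    return deficit
```

Calling this periodically from the master's heartbeat loop keeps the pool near its target size without any middleware support.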

  • QoS is inherently bound to the application: i.e. the meaning of the quality of service is defined in the application (use-case/scenario) context

CPU reservations: Dynamic Soft Real-Time CPU Scheduler [23]. DSRT works by overriding the Unix scheduler and performing soft real-time scheduling of selected processes.

QoS parameters in practice (examples and data):

  • stability of the output (G4 example)
  • reduced turnaround time (ITU)
  • improved reliability (Avian Flu)

grid map from LCG monalisa

Related work

Google MapReduce -> backup tasks
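MapReduce's "backup tasks" are speculative execution: near the end of a phase, still-running stragglers are duplicated and whichever copy finishes first wins. The same trick applies to fine-grained Grid tasks. A toy decision rule follows; this is not Google's code, and the 90% trigger threshold is an assumption.

```python
def pick_backup_tasks(progress, frac_done_trigger=0.9):
    """Near the end of a phase, pick straggler tasks to duplicate.

    `progress`: dict mapping task id -> fraction complete (0..1).
    Returns task ids to launch backup copies for, once at least
    `frac_done_trigger` of all tasks have finished.
    """
    done = sum(1 for p in progress.values() if p >= 1.0)
    if done < frac_done_trigger * len(progress):
        return []                      # too early to speculate
    # duplicate every still-running task; the first copy to finish wins
    return [t for t, p in progress.items() if p < 1.0]
```

Duplicating only at the tail of a phase keeps the wasted work small while shielding the turnaround time from a single slow or stuck worker node.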

In the HEP world

DIRAC (LHCb), ALIEN (Alice):

  • agent-based systems for data production in HEP
  • goals:
    • policy enforcement: ability to control VO activity via central service (task queue)
    • improvement of reliability
    • possibility to bridge Grid and VO-dedicated resources
  • drawbacks:
    • all users run as single 'production user' -> accounting/security problems
    • specialized for HEP needs (deployment of file catalogs, services "in parallel" to the grid etc)
    • require active VO effort in maintenance of the deployed services
  • results:
    • running large data productions (controlled by limited number of power-users)
    • optimized for high throughput

Outside of HEP

AppLeS - APST: parameter sweep applications

  • agent-based system
  • allows executing commands at shell level - described in XML files
  • LSF/PBS authentication: ssh keys
  • interfaced to multiple resource backends, including Globus

Condor M/W

GARA (General-purpose Architecture for Reservation and Allocation)


gtpm3d : differentiated services, queue virtualization over MAUI scheduler -> also embedded application-level scheduler

Techniques for achieving QoS:

  • dedication of resources (wasteful)
  • advance reservations (difficult for some users; interactive work cannot be planned ahead)
  • preemption

The implementation of QoS:

  • modification of site services (faster queues, better scheduler (MAUI))
  • modification of the middleware (checkpointing/migration features, uncontrollable)
  • overlay schedulers (pilot jobs, agents,...)
  • modifications at the system level (unix kernel modules,...)
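An overlay scheduler hides Grid submission latency behind pilot jobs: a generic agent lands on a worker node through the normal batch system, then repeatedly pulls real tasks from a VO task queue until the queue drains or the batch slot expires. A minimal sketch of the agent-side loop; the names and the slot-deadline handling are assumptions, not a specific system's API.

```python
import queue
import time

def pilot_agent(task_queue, execute, slot_ends_at, now=time.monotonic):
    """Pilot-job loop: pull tasks until the queue drains or the slot expires."""
    done = []
    while now() < slot_ends_at:         # respect the batch slot's wall-time limit
        try:
            task = task_queue.get_nowait()  # pull work from the VO task queue
        except queue.Empty:
            break                       # no work left: release the slot early
        done.append(execute(task))
    return done
```

Because each batch job is a generic agent, one successful submission can serve many tasks, and returning the slot as soon as the queue is empty is what makes the overlay friendlier to other Grid users than holding resources idle.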

Single-user mode overlay schedulers

  • obvious improvement from the single-user perspective
  • from the perspective of other users:
    • in a congested system they will probably collide (not fully compatible) -> depends on the queue
    • in cases where the majority of the computing is production and a minority is (chaotic, i.e. not all at the same time) analysis, it will work OK
  • from the perspective of the efficiency of using resources:
    • more efficient: resources are returned to the Grid faster (on average)


problems with distributed environment

  • examples from wikipage



several DVD disks with installation packages: >230 packages, >90 levels of dependency; however, there is a high-level Python interface for steering the main event loop


  • we have a practical system, tested and applied in a number of important activities
  • we would like to quantify the QoS parameters and check them against the underlying Grid computing model based on time-slots
  • verify the impact in the multi-user case
  • maybe develop a multi-user sharing service


L. A. Tuura: "Ignominy: tool for analysing software dependencies and for reducing complexity in large software systems", Nuclear Instruments and Methods in Physics Research Section A, Volume 502, Issues 2-3, 21 April 2003, pp. 684-686

G. Hoo, W. Johnston, I. Foster, A. Roy: "QoS as Middleware: Bandwidth Reservation System", Proceedings of the 8th IEEE Symposium on High Performance Distributed Computing, pg. 345-345, 1999

V. Sander, I. Foster, A. Roy, L. Winkler: "A Differentiated Services Implementation for High-Performance TCP Flows" (accepted to The TERENA Networking Conference 2000)

A. Roy, I. Foster, W. Gropp, N. Karonis, V. Sander, B. Toonen: "MPICH-GQ: Quality of Service for Message Passing Programs" (accepted to Supercomputing 2000)

-- JakubMoscicki - 29 Aug 2006
