ALICE demos

Title

General demo title: Distributed computing system of ALICE

  1. ALICE solar system
  2. ALICE Event displays (videos)
  3. Native monitoring system (MonALISA)
  4. Site Status Board for ALICE WLCG Services

ALICE VIDEOS and EVENTS

ALICE video1

ALICE video2

ALICE video3

ALICE event1

ALICE event2

ALICE event3

ALICE event4

Abstract

Demo 1: Distribution of ALICE sites and running jobs in a solar model

Demo 2: Videos of the PIT of ALICE and evolution of event displays

Demo 3: Visualization of the production/analysis jobs in real time

Demo 4: Display of the status of the ALICE sites according to the WLCG services

Authors

Latchezar Betev, Federico Carminati, Costin Grigoras, Alina Grigoras, Matevz Tadel, Daniel Kleine-Albers, Fabrizio Furano, Pablo Saiz (presenter), Patricia Mendez (presenter)

Plans

Video available on YouTube: http://www.youtube.com/watch?v=1BYmnHfKUko&feature=PlayList&p=A4C864DB217CC74E&playnext=1&playnext_from=PL&index=5

ATLAS demos

Title

The ATLAS Distributed Computing

Abstract

The ATLAS detector will produce more than 3 PB of data every year. This enormous amount of information must be recorded on tape, processed and distributed around the world so that the physicists of the 37 countries of the collaboration can analyze it. ATLAS has set up a multi-tiered, world-wide computing infrastructure. The CERN computing centre hosts the Tier-0 and the cluster used for immediate data processing and detector data quality monitoring; the Tier-0 also stores on tape all data recorded by the ATLAS detector. Raw and processed data are distributed from CERN to the Tier-1 and Tier-2 centres. Because of the large amount of data to be processed, ATLAS has developed new technologies to automate the operation of this distributed computing system and to reach a high level of efficiency and reliability; the final aim is that everyone will be able to use the ATLAS Grid infrastructure as if it were just an extension of a local computing cluster.
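Below is a purely illustrative toy sketch in Python of the tiered fan-out described above; the site names and the simple recursive model are invented for this example and are not real ATLAS topology or configuration.

# Toy model of the multi-tiered data flow: a dataset recorded at the
# Tier-0 is fanned out to Tier-1s and from there to Tier-2s.
# Site names are illustrative placeholders, not real ATLAS sites.
TIERS = {
    "Tier-0 (CERN)": ["Tier-1 A", "Tier-1 B"],
    "Tier-1 A": ["Tier-2 A1", "Tier-2 A2"],
    "Tier-1 B": ["Tier-2 B1"],
}

def distribute(dataset, site="Tier-0 (CERN)", depth=0):
    """Recursively replicate a dataset from a site to the tiers below it."""
    print("  " * depth + site + " <- " + dataset)
    for child in TIERS.get(site, []):
        distribute(dataset, child, depth + 1)

distribute("example.RAW")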

Authors

Difficult to say right now: I think the presenters will be Xavi, myself and Ricardo, but maybe there is someone else I'm missing. Maybe we should simply say "on behalf of the ATLAS Distributed Computing group".

Plans

Since there will be 3 screens, the idea is that on 2 screens we'll have a 'film' looping continuously, while on the third one (the middle one) we'll have the ATLAS DDM dashboard, which is the perfect starting point for the different ADC activities.
  • screen 1: ATLAS view from Google Earth
  • screen 2: more technical info on ATLAS DDM and the production and analysis systems, using the dashboard
  • screen 3: HammerCloud

CMS demos

Title

The Distributed Computing of the CMS Experiment at a glance

Abstract

The CMS Experiment at the Large Hadron Collider (LHC) at CERN/Geneva is scheduled to start taking data during the fall/winter of 2009. The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. The CMS Computing, Software and Analysis projects are developed to meet the expected performance in terms of data archiving, calibration and reconstruction at the host laboratory, and in terms of data transfer to many computing centres located around the world, where further archiving and re-processing will take place. Hundreds of physicists will then expect to find the necessary infrastructure to easily access and start analysing the long-awaited LHC data. This demo will give an overview of the distributed system and the typical CMS computing workflows: the tools developed to transfer data, the data bookkeeping system, the submission of jobs to sites, all the relevant monitoring tools, and the CMS efforts to improve the overall reliability of the Grid from the point of view of the CMS computing system.

Authors

Julia Andreeva, Josep Flix, Nicolò Magini, Pablo Saiz, Andrea Sciabà [all presenters], on behalf of the CMS collaboration

Plans

Demo 1: CMS distributed computing system and production/analysis workflows overview

  • Detector photographs + some event displays (cosmics + MC)
  • Include a description of the distributed system
  • CMS workflow map to demonstrate the offline distributed system
  • Google Earth for showing transfers and job processing

Demo 2: CMS data transfer monitoring

  • Overview of the PhEDEx data management system (see the query sketch after this list)
  • Overview of the Debugging Data Transfer system: testing and debugging data transfer links
  • Watching CMS data transfers on Google Earth
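To give an idea of how such transfer information can be inspected programmatically, here is a minimal Python sketch against the PhEDEx data service JSON interface; the blockreplicas call and the response fields are quoted from memory of that API and may differ in detail, and the dataset name is a placeholder.

import json
import urllib.request

# Ask the PhEDEx data service which sites hold replicas of a dataset's
# blocks. The dataset path below is a placeholder, not a real sample.
URL = ("https://cmsweb.cern.ch/phedex/datasvc/json/prod/blockreplicas"
       "?dataset=/SomePrimaryDataset/SomeEra-SomeProcessing/RAW")

with urllib.request.urlopen(URL) as response:
    data = json.load(response)

for block in data["phedex"]["block"]:
    sites = [replica["node"] for replica in block["replica"]]
    print(block["name"], "->", ", ".join(sites))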

Demo 3: production/analysis task monitoring via Dashboard

  • Introduction with a couple of slides on job processing (scope in terms of jobs, sites, users, etc., and complexity). Main focus on analysis, since it is currently one of the main challenges of the WLCG computing
  • Submission of a CRAB analysis task (optional; see the sketch after this list)
  • Show job processing on the Google Earth display
  • Starting from the T2 analysis display of the workflow map, navigate through user analysis jobs
  • Demonstrate Task Monitoring. If we can submit a CRAB task, that would be perfect, since we would be able to show its processing in real time with Task Monitoring
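For the optional CRAB submission step, here is a rough sketch of a CRAB2-style configuration; the dataset path, parameter-set file and splitting numbers are placeholders, not the real demo task.

[CRAB]
jobtype = cmssw
scheduler = glite

[CMSSW]
# Placeholder dataset and CMSSW parameter set
datasetpath = /SomePrimaryDataset/SomeEra-SomeProcessing/RECO
pset = analysis_cfg.py
total_number_of_events = 1000
events_per_job = 100

[USER]
return_data = 1

The task would then be created, submitted and followed with crab -create, crab -submit and crab -status, and its progress is what the Task Monitoring application would pick up.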

Demo 4: The Readiness of CMS Computing Centres in the WLCG Grid

  • Description of the tool and its components
  • Site Status Board for site readiness
  • Site readiness quality plots and summary tables

Some demos will use the available displays to describe the items interactively, and some will be complemented with posters and other material.

CMS posters

  • PhEDEx data transfer system
  • Debugging Data Transfers
  • User analysis
  • Site Readiness

Useful info and material

Documentation for the Google Earth installation can be found here.

A movie which provides a nice introduction to the LHC can be found at http://cdsweb.cern.ch/record/1125472

LHCb demos

Title

A challenge for the LHCb computing: how to produce, manage and analyse PetaBytes of data over the Grid

Abstract

LHCb is one of the four experiments that will be operating at the Large Hadron Collider (LHC) at CERN starting from next year. LHCb will produce a huge amount of data, on the order of 1.5 PetaBytes of RAW data per year, in addition to the already ongoing Monte Carlo simulation.

The DIRAC project is the gateway to the Grid for the LHCb experiment, used to carry out massive Monte Carlo simulation, data processing on various distributed computing resources, and data management. It also allows monitoring of all computing activities of the experiment thanks to its accounting system.

In this demo a concrete example of how to submit a Monte Carlo production will be shown. The jobs will then be followed through the DIRAC monitoring system, from their submission at CERN to the worker nodes at the sites where they are distributed and processed.
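As an indication of what such a submission looks like in practice, here is a minimal sketch using the DIRAC Python API; the echo payload is a stand-in for the real Monte Carlo application, and the method names follow the current DIRAC client API, which may differ in detail from the 2009 version.

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

# Build a trivial job; a real production would attach the simulation
# application and its input/output data here.
job = Job()
job.setName("demo_mc_production")
job.setExecutable("/bin/echo", arguments="simulating events")

dirac = Dirac()
result = dirac.submitJob(job)  # returns an S_OK/S_ERROR-style dict
if result["OK"]:
    job_id = result["Value"]
    print("Submitted job", job_id)
    # The same job can then be followed in the DIRAC monitoring system:
    print(dirac.getJobStatus(job_id))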

The solution provided by the Ganga interface and the DIRAC backend has been designed to support the distributed analysis activity of a community of about 500 scientists in Europe, North America and South America. The parallel demo of Ganga will show how physicists can access Grid resources to carry out their particular analysis on data stored at sites around the world.
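A correspondingly minimal Ganga sketch, typed at the Ganga prompt where Job, Executable and Dirac are predefined; the echo executable is a placeholder for a real analysis application.

# Inside a Ganga session: define a job, point it at the DIRAC backend,
# and submit it to the Grid.
j = Job()
j.name = "demo_analysis"
j.application = Executable(exe="/bin/echo", args=["running analysis"])
j.backend = Dirac()  # route the job through DIRAC
j.submit()
print(j.status)  # 'submitted', later 'running' and 'completed'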

The demo will be complemented by an event display of a particle collision inside the detector, in order to show where all these data come from.

Authors

R.Santinelli, E.Lanciotti, R.Graciani, A.Casajus

Plans

Notes on the preparation of the LHCb demo are reported here

Dashboard demos

Title

Monitoring of the LHC Distributed Computing using Experiment Dashboard

Abstract

The Large Hadron Collider (LHC) is preparing for data taking at the end of 2009. The Worldwide LHC Computing Grid (WLCG) provides data storage and computational resources for the high-energy physics community. Operating the heterogeneous WLCG infrastructure, which integrates 140 computing centres in 33 countries all over the world, is a complicated task. Reliable monitoring is one of the crucial components of the WLCG for providing the functionality and performance that is required by the LHC experiments. The Experiment Dashboard system provides monitoring of the WLCG infrastructure from the perspective of the LHC experiments and covers the complete range of their computing activities, namely data distribution, job processing and site commissioning. The demo will give an overview of the Dashboard monitoring applications widely used by the LHC experiments.
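As a loose illustration of the kind of programmatic access such monitoring can offer, here is a hypothetical Python sketch fetching a per-site job summary over HTTP; the URL and the JSON field names are invented placeholders, not the real Dashboard interface.

import json
import urllib.request

# Placeholder endpoint and schema, for illustration only.
URL = "https://dashboard.example.cern.ch/jobsummary?vo=atlas&range=24h"

with urllib.request.urlopen(URL) as response:
    summary = json.load(response)

for site in summary["sites"]:
    print(site["name"], "done:", site["done"], "failed:", site["failed"])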

Authors

J.Andreeva, E.Lanciotti, G.Maier, R.Rocha, P.Saiz

Plans

Ganga/Diane demos

Title

Ganga/DIANE Dashboard

Abstract

Authors

Lukasz Kokoszkiewicz, Maciej Wos, Jakub Moscicki, Massimo Lamanna (presenter), Patricia Mendez (co-presenter)

Plans

SLS demos

Title

Abstract

Authors

Plans

Time Schedule

Proposal for the time schedule to appear in the Indico agenda of the conference:

  • Monday:
    • 10:30-11:00 Distributed computing system of ALICE
    • 12:30-14:00 Distributed computing system of ALICE
    • 16:30-17:00 Ganga/DIANE Dashboard
    • 19:00-20:00 Ganga/DIANE Dashboard
  • Tuesday:
    • 10:30-11:00 The ATLAS Distributed Computing
    • 12:30-14:00 The ATLAS Distributed Computing
    • 16:30-17:00 The ATLAS Distributed Computing
  • Wednesday:
    • 10:30-11:00 The Distributed Computing of the CMS Experiment at a glance
    • 12:30-14:00 The Distributed Computing of the CMS Experiment at a glance
    • 16:30-17:00 The Distributed Computing of the CMS Experiment at a glance
  • Thursday:
    • 10:30-11:00 A challenge for the LHCb computing: how to produce, manage and analyse PetaBytes of data over the Grid
    • 12:30-14:00 A challenge for the LHCb computing: how to produce, manage and analyse PetaBytes of data over the Grid
    • 16:30-17:00
      • Demo 1: Monitoring of the LHC Distributed Computing using Experiment Dashboard
      • Demo 2: VO Specific Service Monitor

Interesting links:

CERN general video

Internal timetable:

|  | Monday | | Tuesday | | Wednesday | | Thursday | |
|  | Side A | Side B | Side A | Side B | Side A | Side B | Side A | Side B |
| 10.30-11.00 | ALICE | Dashboard for ALICE | ATLAS | Dashboard for ATLAS | CMS | Dashboard for CMS | LHCb | Dashboard for LHCb |
| 12.30-14.00 | ALICE | | ATLAS | Ganga for ATLAS | CMS | | LHCb | Ganga for LHCb |
| 16.30-17.00 | Ganga and DIANE | | ATLAS | SLS for ATLAS | CMS | SLS for CMS | SLS | Dashboard |
| 19.00-20.00 | Ganga and DIANE | | | | | | | |
