3.3 From Collision to Primary Datasets


Goals of this page:

This page briefly explains how the event data are selected and divided into primary datasets.


Introduction

The CMS Trigger and Data Acquisition (DAQ) System is designed to inspect the detector information at the full beam crossing frequency and to select events at a maximum rate of O(10^2) Hz for archiving and later offline analysis. The full online selection is split into two steps. The first step (Level-1 Trigger) is designed to reduce the rate of events accepted for further processing to less than 100 kHz. The second step (High-Level Trigger or "HLT") is designed to reduce this maximum Level-1 accept rate to a final output rate of 100 Hz. The HLT farm - a computing cluster consisting of a large number of processors - writes the events into primary datasets depending on their trigger history.
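A back-of-the-envelope illustration of the rejection factors implied by these figures is sketched below in Python. It assumes the nominal 40 MHz LHC beam crossing frequency (not quoted above); the Level-1 and HLT rates are the design values given in the text:

    # Approximate rejection factors of the two-step online selection.
    # The 40 MHz beam crossing frequency is the nominal LHC value (an assumption
    # here); the Level-1 and HLT output rates are the design figures quoted above.
    beam_crossing_rate = 40e6   # Hz
    level1_accept_rate = 100e3  # Hz, maximum Level-1 output
    hlt_output_rate    = 100.0  # Hz, written to mass storage

    level1_rejection = beam_crossing_rate / level1_accept_rate  # ~400
    hlt_rejection    = level1_accept_rate / hlt_output_rate     # ~1000
    total_rejection  = beam_crossing_rate / hlt_output_rate     # ~400000

    print("Level-1 rejection: %.0f" % level1_rejection)
    print("HLT rejection:     %.0f" % hlt_rejection)
    print("Overall rejection: %.0f" % total_rejection)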


Online Selection

Level-1 Trigger

Custom hardware processors form the Level-1 decision. The Level-1 triggers involve the calorimetry and muon systems, as well as some correlation between these systems. The Level-1 decision is based on the presence of "trigger primitive" objects such as photons, electrons, muons, and jets above a set of ET or pT thresholds. It also employs global sums of ET and missing ET.
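The sketch below is a purely illustrative model of such a threshold-based decision; it is not the Level-1 firmware logic, and the object types and threshold values are hypothetical:

    # Illustrative model of a threshold-based Level-1 decision (hypothetical thresholds).
    # Trigger primitives are given as (object type, ET or pT in GeV) pairs.
    L1_THRESHOLDS = {"muon": 7.0, "electron": 12.0, "photon": 15.0, "jet": 50.0}

    def level1_accept(trigger_primitives, missing_et, missing_et_threshold=30.0):
        """Accept the event if any primitive exceeds its threshold or missing ET is large."""
        for obj_type, et in trigger_primitives:
            if et > L1_THRESHOLDS.get(obj_type, float("inf")):
                return True
        return missing_et > missing_et_threshold

    # A 9 GeV muon passes the (hypothetical) 7 GeV muon threshold, so the event is accepted.
    print(level1_accept([("muon", 9.0), ("jet", 35.0)], missing_et=20.0))  # True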

During the Level-1 decision-making period, all the high-resolution data is held in pipelined memories.

High-Level Trigger

Upon receipt of a Level-1 trigger, the data from the pipelines are transferred to front-end readout buffers. The data for each event are spread across several hundred front-end readout buffers. Through the event-building "switch", the data from a given event are transferred to a processor in the HLT filter farm. Each processor runs the same HLT software code to reduce the Level-1 output rate of 100 kHz to 100 Hz for mass storage. More details on the HLT software and its use with the simulated data are given in section WorkBookHLTTutorial. At this point, the data are in RAW format, which is the input to the reconstruction at Tier-0.
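The following sketch illustrates the event-building idea only (it is not CMS DAQ code): fragments from the front-end readout buffers are assembled per event, and each complete event is handed to a single filter-farm processor. The buffer count and node names are arbitrary placeholders:

    # Toy event builder: collect the fragments of each event and assign every
    # fully built event to one HLT filter-farm node (placeholder names and counts).
    from collections import defaultdict
    from itertools import cycle

    N_BUFFERS = 8                                      # stand-in for several hundred buffers
    HLT_NODES = ["hlt-node-%d" % i for i in range(4)]  # hypothetical farm nodes

    def build_and_dispatch(fragments):
        """fragments: iterable of (event_id, buffer_id, payload) tuples."""
        events = defaultdict(dict)
        for event_id, buffer_id, payload in fragments:
            events[event_id][buffer_id] = payload
        node_of = {}
        pick_node = cycle(HLT_NODES)
        for event_id, parts in events.items():
            if len(parts) == N_BUFFERS:                # dispatch only complete events
                node_of[event_id] = next(pick_node)
        return node_of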

Primary Datasets

Tier-0 repacks the RAW data received from the DAQ into primary datasets (PD) based on physics attributes (e.g., their trigger path). These datasets are currently being defined and the physics requirements considered in the choice of the datasets are the following:
  • it is preferred to run analysis jobs or further selection jobs (skims) on only one or two PDs, to reduce the bookkeeping problem of duplicate events
  • PDs should be as small as practical, to avoid running analysis jobs on very large samples
  • prescaled triggers (triggers selecting only a fraction of accepted events) should go to separate PDs.
The current (September 2007) proposal is to have 6 PDs based on the HLT decision:
  • Tau, Muon, Electron, Photon, b-jet, JetMET
This splitting is intended for CSA07, and it is subject to further change.
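As a rough illustration of how the HLT decision drives this splitting, the sketch below routes an event to primary datasets according to which HLT paths it fired; the path-name prefixes are hypothetical and the mapping is not the actual Tier-0 configuration. It also shows why the first requirement above matters: an event firing triggers of different types is written to more than one PD.

    # Hypothetical routing of events into primary datasets by HLT path name.
    PD_ROUTING = {
        "HLT_Tau": "Tau", "HLT_Mu": "Muon", "HLT_Ele": "Electron",
        "HLT_Photon": "Photon", "HLT_BJet": "b-jet",
        "HLT_Jet": "JetMET", "HLT_MET": "JetMET",
    }

    def primary_datasets(fired_hlt_paths):
        """Return the set of primary datasets an event is written to."""
        pds = set()
        for path in fired_hlt_paths:
            for prefix, pd in PD_ROUTING.items():
                if path.startswith(prefix):
                    pds.add(pd)
        return pds

    # An event firing both a muon and a jet path is written to two PDs.
    print(primary_datasets(["HLT_Mu11", "HLT_Jet80"]))  # {'Muon', 'JetMET'}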

The data are calibrated and reconstructed at Tier-0. There will be two copies of the RAW data, one at CERN and another at a Tier-1 centre. The primary datasets in RECO format are then distributed from the Tier-0 at CERN among the different Tier-1 centres, while the AOD format is distributed to all Tier-1 centres. Details of the dataflow between the Tier-0, Tier-1 and Tier-2 centres are given in section WorkBookComputingModel.

Skimming

Skimming jobs are CMSSW jobs with filter processes which select a subset of a primary dataset for a given physics channel. These jobs will be run at the Tier-1 computing centres (on request / by default?).
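A minimal configuration sketch of such a skim is given below. The filter module "ZMuMuSkimFilter", its parameter, the input file and the output file name are hypothetical placeholders; only the events accepted by the skim path are written out:

    # Sketch of a skim job: a filter process selects the events of interest and
    # only accepted events are written to the skim output.
    import FWCore.ParameterSet.Config as cms

    process = cms.Process("SKIM")
    process.source = cms.Source("PoolSource",
        fileNames = cms.untracked.vstring("file:MuonPD.root")  # placeholder input
    )

    # Hypothetical filter selecting the physics channel of interest.
    process.channelFilter = cms.EDFilter("ZMuMuSkimFilter",
        muonPtCut = cms.double(20.0)                           # hypothetical parameter
    )
    process.skimPath = cms.Path(process.channelFilter)

    # Write only the events accepted by the skim path.
    process.skimOutput = cms.OutputModule("PoolOutputModule",
        fileName = cms.untracked.string("muonChannelSkim.root"),
        SelectEvents = cms.untracked.PSet(SelectEvents = cms.vstring("skimPath"))
    )
    process.endpath = cms.EndPath(process.skimOutput)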

Information Sources

Review status

Reviewer/Editor and Date: CMSUserSupport - 11 Sep 2007
Comments: created page

Responsible: ResponsibleIndividual
Last reviewed by: YourName - date
