Machine properties and Status

How will we know how the machine is running and under what conditions?

Real-time LHC information is published on the LHC real time monitoring pages. Daily and weekly information about LHC status and performance is obtained from the LHC Programme Coordination web pages, in particular the latest daily news report (ppt), the daily operations report and the weekly plan.

The other essential page is the LHC machine operation page, where all the LHC online info pages are collected. In particular, the status of the cryogenics system can also be seen visually in the hardware commissioning visual page.

The LHC Page 1 is also reported on the ATLAS Operation web page, together with its detailed meaning.

Info about the meaning of the parameters can be obtained from the LHC Beam parameters page and the full LHC design report. In addition, the jargon for the different machine states (flat top, squeeze, ...) is defined here.

The updated status of the machine is reported in the LHC report presentation. The "LHC commissioning with beam" page provides the evolution of plans.

ATLAS properties and Status

The updated ATLAS status can be found at the ATLAS data summary pages. The general experiments' status is also available.

The Handshake

Machine and experiments exchange data to ascertain the possibility of collisions by determining their present state: the data exchange is defined. One can even subscribe to the mailing list. The definition of the machine-experiment handshake is here.

Groups whose activities are important

Data Retrieval and Format

What data sets do we use?

We start from

  • skimmed topdAODs (when available)
  • or physics containers (when available)
  • or single run AODs

We make D3PDs (ntuples) or get them through the general production system.

We store the D3PDs on xenia in

The Analysis Model for the first Year foresees a limited number of data formats:

  • centrally produced (at Tier0, then reprocessed and stored at Tier1s/Tier2s): ESDs, dESDs, AODs
  • group produced (at Tier1s/Tier2s, then stored in group areas at Tier2s): dAODs

The user is expected to run on AODs or dAODs and produce D3PDs. The plan is to use TopD3PDMaker. The Top dAODs are documented in TopD2PD2010.

The data sets are

Retrieving data, for the user, means producing a good run list. Operationally: Good Run List + reprocessed data (so as to get updated calibrations) + consistent release and bug fixes. Remember to check the output stream naming convention; this is important for DPD making.
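As an illustration of how a good run list gets applied, the sketch below selects (run, lumi block) pairs against a GRL XML. The XML layout shown (a LumiBlockCollection with Run and LBRange entries) is a simplified assumption modelled on the usual GoodRunsLists format; check it against an actual GRL file before relying on it.

```python
# Hedged sketch: filtering events with a Good Run List (GRL).
# The XML schema below is an assumed, simplified version of the
# GoodRunsLists layout; run/LB numbers are illustrative only.
import xml.etree.ElementTree as ET

GRL_XML = """
<LumiRangeCollection>
  <NamedLumiRange>
    <LumiBlockCollection>
      <Run>152166</Run>
      <LBRange Start="206" End="210"/>
      <LBRange Start="301" End="350"/>
    </LumiBlockCollection>
  </NamedLumiRange>
</LumiRangeCollection>
"""

def load_grl(xml_text):
    """Return {run_number: [(lb_start, lb_end), ...]}."""
    good = {}
    root = ET.fromstring(xml_text)
    for coll in root.iter("LumiBlockCollection"):
        run = int(coll.find("Run").text)
        ranges = [(int(r.get("Start")), int(r.get("End")))
                  for r in coll.findall("LBRange")]
        good.setdefault(run, []).extend(ranges)
    return good

def passes_grl(grl, run, lb):
    """True if (run, lumi block) falls inside the good-run list."""
    return any(lo <= lb <= hi for lo, hi in grl.get(run, []))

grl = load_grl(GRL_XML)
print(passes_grl(grl, 152166, 208))  # inside the first LB range -> True
print(passes_grl(grl, 152166, 250))  # between ranges -> False
```

The same check is normally done for you by the official GoodRunsLists tools; the point here is only the run/LB-range logic.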

At Nevis a Tier3 is being set up with these instructions, which also require the Tier3 users' guide.

Initial data sets

Check the list of interesting runs InterestingRuns2010

Data Set Content and browsing

The format provided for Top is the D2PD. We need to start from those; they are described in the D2PD event filters.

The location of reprocessed D2PDs is given in the TopD2PD 2010 twiki. The proper way to run on a large set of runs is to use the Physics Containers.

These are the datasets that are produced for every run (by David Cote)

HIST: always produced with f235, and never with f236.

SD: always produced.

AOD: always produced.

TAG_COMM: always produced.

NTUP_MUONCALIB: always produced.

NTUP_TRIG: always produced.

NTUP_TRKVALID: produced for express, IDCosmic, RNDM, MinBias and debug streams.

DESD_SGLEL: produced for collision runs, for debug, MinBias and L1CaloEM streams.

DESD_PHOJET: produced for collision runs, for debug, MinBias and L1CaloEM streams.

DESD_SGLMU: produced for collision runs, for debug and MuonswBeam streams.

DESD_MBIAS: produced for collision runs, for debug and MinBias streams.

DESDM_TRACK: produced for collision runs, for debug and L1Calo streams.

DESDM_MUON: produced for collision runs, for debug and MuonswBeam streams.

DESDM_CALJET: produced for collision runs, for debug and L1Calo streams.

DESD_MET: produced for collision runs, for debug and L1Calo streams.

DESDM_EGAMMA: produced for collision runs, for debug, L1CaloEM and MinBias streams.

DESD_TILECOMM: produced for cosmics runs, for debug and CosmicMuons streams.

DESD_IDCOMM: produced for cosmics runs, for debug, IDCosmic and CosmicMuons streams.

DESD_CALOCOMM: produced for cosmics runs, for debug, L1Calo and L1CaloEM streams.

DESD_PIXELCOMM: produced for cosmics runs, for debug and IDCosmic streams.

DESD_MUONCOMM: produced for cosmics runs, for debug, CosmicMuons and MuonswBeam streams.
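The production matrix above (which output format is made for which run type and streams) is easy to encode so that scripts can query it. The sketch below transcribes a few of the entries listed in this section; the dictionary shape itself is just a convenience, not an official format.

```python
# Hedged sketch: the Tier-0 output matrix from the list above, encoded
# as {format: (run_type, streams)}. Only a few entries are transcribed;
# the rest follow the same pattern.
OUTPUTS = {
    "DESD_SGLEL":  ("collisions", {"debug", "MinBias", "L1CaloEM"}),
    "DESD_SGLMU":  ("collisions", {"debug", "MuonswBeam"}),
    "DESD_MBIAS":  ("collisions", {"debug", "MinBias"}),
    "DESDM_TRACK": ("collisions", {"debug", "L1Calo"}),
    "DESD_IDCOMM": ("cosmics",    {"debug", "IDCosmic", "CosmicMuons"}),
    # ... remaining formats follow the same pattern
}

def formats_for(run_type, stream):
    """Formats produced for a given run type and stream."""
    return sorted(f for f, (rt, streams) in OUTPUTS.items()
                  if rt == run_type and stream in streams)

print(formats_for("collisions", "MinBias"))
# ['DESD_MBIAS', 'DESD_SGLEL']
```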

The tag used for the reconstruction of a given dataset is available from Tier0; see How do I find the release used at Tier0.

The AMI description (click “browse f tags”) provides the info about the datasets corresponding to a given reconstruction tag.

All of these datasets' properties are defined by the PrimaryDPDMaker and specifically in the Performance DPD twiki. For jets, notice that the DESDM has calo cells and basic track info.

Some datasets are also found on /castor/

Monitoring Data replication

The links below will allow you to monitor the dataset replication across T1's and T2's (by Alexei Klimentov and the DDM team, through G. Brooijmans).

The status can be monitored using links below

replication between Tier-1s - Pandareplication

replication between Tier-1s - dcops

replication within clouds (to T2s)- dcops

replication within clouds (to T2s) - panda

Note that replication to Tier-2s is started only when the parent Tier-1 has a complete replica.

replicas distribution between tiers

Information about data reprocessing is on the Data Preparation Reprocessing page.

Time scales for data availability

Have a look at the general Data/MC for Analysis page, also reachable from the Data Preparation Reprocessing page.

Data Quality And DB releases

We need to have
  • Time range and machine conditions
  • DQ
  • Trigger config

What data format are we going to use to extract info at different steps ?

What formats are produced at Tier0 and how do we know them for a given run?

Raw data are located here. For Top, a central production of D2PDs is planned. Data formats already available from production are defined here.

For performing the physics analysis we need a good-run/LB list and data availability on the grid.

The grid space/production manager for the Top group is Marcello Barisonzi. The data quality strategy is summarized in the DQ paper, and the most up-to-date meaning of the DQ flags is kept on the DQ Flag interpretation page. The definition of virtual flags is stored on the DQ Flag proposal page. All info is found on the DQ page.

How are we going to access database info?

Appropriate database information for any GRID submission and any release is usually taken care of in the main JO. The updated status of the database is found in AtlasDBRelease. In particular, the connection between a given release and the database is found in SW release vs DB Release. The inclusion of the database information in non-standard job options (not using the standard RecExCommon, for instance) is done as described in Conditions database in standalone job options. The instructions to set up a special database should also be kept in mind; they are especially useful for specifying the correct database release when running on the grid. An essential troubleshooting page is the CoolTroubles twiki.

Example errors caused by the wrong DB release are:

IOVDbSvc ERROR Tag OFLCOND-DR-BS7T-ANom-04 cannot be resolved for folder /LAR/Align

How do we know what geometry is being used?

Have a look at the Geometry DB tags.

Useful references


GoodRunList page


Performance DPDS (in particular Definition of old DESDCOLLCAND)


Eric's presentation for the software tutorial

MaxBaak's tutorial

Data Selection,Trigger and Luminosity

How does your analysis treat luminosity? If your favorite trigger is prescaled because of rate, what is your backup plan?

Luminosity can be calculated only after final GRL and trigger info is specified.

The efficiencies entering the cross-section calculation are as follows:

  • unprescaled trigger efficiency (prescales are included in the lumi calculation)
  • skimming efficiency in dAOD based analysis
  • TAG selection efficiency (in TAG based analysis)
  • event selection cut efficiencies
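Putting the list above together, the cross section is the selected yield divided by the product of these efficiencies and the integrated luminosity. The numbers in the sketch below are illustrative placeholders, not measured values.

```python
# Worked example of the cross-section formula implied by the list above:
#   sigma = N_selected / (eps_trig * eps_skim * eps_sel * L_int)
# All inputs here are illustrative placeholders.
def cross_section(n_sel, eps_trig, eps_skim, eps_sel, lumi_pb):
    eff = eps_trig * eps_skim * eps_sel   # total selection efficiency
    return n_sel / (eff * lumi_pb)        # result in pb

sigma = cross_section(n_sel=330, eps_trig=0.95, eps_skim=0.90,
                      eps_sel=0.40, lumi_pb=10.0)
print(round(sigma, 1))  # 96.5 (pb)
```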


Consider the initial express stream being used at the HLT. Preferred triggers for muons and electrons

A useful paper on trigger combination.


We need to understand trigger prescales and L1 trigger deadtime (provided by the Lumi group in its calculation). The value of the luminosity is determined by more than one detector. The absolute measurement from the machine comes from the van der Meer scan, with a summary of the procedure at LHC here.
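The effect of prescales on the luminosity a trigger effectively samples can be sketched per lumi block: each block contributes its luminosity divided by the prescale active in that block. The numbers below are illustrative only.

```python
# Hedged sketch: effective luminosity seen by a prescaled trigger,
#   L_eff = sum over lumi blocks of L_block / prescale_block.
# Block luminosities and prescales are illustrative placeholders.
def effective_lumi(blocks):
    """blocks: iterable of (lumi_in_block_pb, prescale)."""
    return sum(lumi / prescale for lumi, prescale in blocks)

blocks = [(0.5, 1), (0.5, 10), (1.0, 1)]   # (pb^-1, prescale)
print(effective_lumi(blocks))  # 0.5 + 0.05 + 1.0 = 1.55
```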

Useful references

Coll Lumi Calc page

Lumi Calculation tutorial

Data Quality

What input to the data quality group have you provided? How do you get the good run/lumi block list?

Good run lists are produced by the DQ group. For Jet/ETmiss, the info about GRLs is on the main page under recommendations for jet/etmiss data analysis. For Top, the info about GRLs is in TopGRLs. The standard info is in StandardTopGRLs. The overall 2010 Top GRL is in 2010TopGrl.

The general GRL calculator is here


Are there duplicate events in your ntuple? Are there missing runs? What are the values for selection? How do we handle overlap removal?
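The duplicate-event and missing-run questions above lend themselves to a quick sanity check on the ntuple's (run, event) pairs. In the sketch below the hard-coded lists stand in for branches read from your D3PD and for the run list expected from the GRL; all numbers are illustrative.

```python
# Hedged sketch: duplicate-event and missing-run check on (run, event)
# pairs. The pairs and expected runs are illustrative placeholders for
# values read from the ntuple and the GRL.
from collections import Counter

pairs = [(152166, 101), (152166, 102), (152166, 101), (152214, 7)]

dupes = [p for p, n in Counter(pairs).items() if n > 1]
runs_seen = {run for run, _ in pairs}
expected_runs = {152166, 152214, 152221}   # e.g. taken from the GRL
missing = expected_runs - runs_seen

print(dupes)    # [(152166, 101)]
print(missing)  # {152221}
```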

Event reconstruction And Visualization

What level of calibration and alignment do you need? How do we handle overlap removal? How do we recalculate missing ET? How do we recalibrate jets?

The object definition for top analyses is provided by the Top Reconstruction group on the Top Common Objects page.

Cut flows that are preliminary to top are important. One needs to validate the code first using the Top MC/Data challenge for the semileptonic channel and the di-lepton channel.

The recommendations for making event displays are described in the twiki on Official Event Display Guidelines. To get hold of single events in an ESD for event display, one can use the SimpleGetEventScript.


  • Electrons are reconstructed according to here.
  • A reminder on bitmask usage is given here, and the bitwise operators are described here.
    • In the expression const unsigned int HADLEAKETA_ELECTRON = 0x1 << ClusterEtaRange_Electron | 0x1 << ClusterHadronicLeakage_Electron; the << is the left-shift operator and | is the bitwise OR. This line is taken from the electronID definition.
    • For instance, the electron cut definition shown in ElectronCutIDTool at lines 345 and 349 calls the calo-based and track-based selections respectively; these (see line 459) simply set certain bits according to the electronID definition (for instance flag |= ( 0x1 << egammaPID::ClusterHadronicLeakage_Electron)). At the end, m_iflag is the variable that is checked against the masks of the electronID definition. The isem method is defined at line 580 of the appropriate egamma event code. It uses the isEM function from the egammaPID code (at line 125): it compares the given mask with the isEM flag. Notice that if a cut is NOT passed, a bit is set in the isEM flag. This is why the isem method has to return zero when compared with the mask that identifies a given type of electron: if none of the bits related to that definition are set, the electron passes those cuts.
  • Notice that the isEM E/p bit is set according to lines 905 to 910 of egammaElectronCutIDTool. The values change in bins of eta, following the recipe defined at line 852 of ElectronCutIDTool. The variables (m_CutminEp_electrons) and (m_CutmaxEp_electrons) that represent the cuts are defined in the ElectronCutIDTool, and the values for each eta bin are set in the corresponding python JO.
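The failed-cut bit logic described above can be modelled in a few lines: each failed cut sets one bit in the flag word, and an electron passes a selection when none of the bits in that selection's mask are set. The bit positions below are illustrative, not the real egammaPID values.

```python
# Minimal model of the isEM bit logic described above. Bit positions
# are illustrative stand-ins for the real egammaPID enumerators.
ClusterEtaRange_Electron = 0
ClusterHadronicLeakage_Electron = 1

# Mask built exactly like the quoted C++ expression: shift then OR.
HADLEAKETA_ELECTRON = (0x1 << ClusterEtaRange_Electron
                       | 0x1 << ClusterHadronicLeakage_Electron)

def run_cuts(passes_eta, passes_hadleak):
    """Return iflag with a bit set for every FAILED cut."""
    iflag = 0
    if not passes_eta:
        iflag |= 0x1 << ClusterEtaRange_Electron
    if not passes_hadleak:
        iflag |= 0x1 << ClusterHadronicLeakage_Electron
    return iflag

def is_em(iflag, mask):
    # Zero means all the cuts selected by the mask were passed.
    return (iflag & mask) == 0

print(is_em(run_cuts(True, True),  HADLEAKETA_ELECTRON))  # True
print(is_em(run_cuts(True, False), HADLEAKETA_ELECTRON))  # False
```

This mirrors why isEM must compare to zero against the mask: a set bit records a failure, not a pass.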

Missing Energy


Various jet calibrations can be obtained. The configuration of a calibration scheme goes through four steps:

  • configure individual calib tools. Setting up an individual jet calibrator uses a series of helper functions in SetupJetCalibrators.p . These are the elements that go into the configuration of the JetAlgorithms
  • configure sequences of calib tools. The list and dictionary of sequences is in
  • configure  JetAlgTools with the sequences (using  getJetCalibrationTool)
  • add these JetAlgTools to the global list of JetAlgTools inside a JetAlgorithm
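The four steps above can be sketched schematically. All class and helper names in this sketch are hypothetical stand-ins, not the real JetRec helpers; the point is only the tool -> sequence -> alg-tool -> algorithm layering.

```python
# Hedged, schematic sketch of the four configuration steps above.
# Every name here is hypothetical; real tools rescale jet energies.
class CalibTool:
    def __init__(self, name):
        self.name = name
    def apply(self, energy):
        return energy  # placeholder: a real tool would recalibrate

# 1. configure individual calibration tools
em_scale   = CalibTool("EMScale")
had_weight = CalibTool("HadronicWeighting")

# 2. configure named sequences of calib tools
SEQUENCES = {"EM+H1": [em_scale, had_weight]}

# 3. wrap a sequence into a calibration alg-tool
class JetCalibrationTool:
    def __init__(self, sequence):
        self.sequence = sequence
    def calibrate(self, energy):
        for tool in self.sequence:
            energy = tool.apply(energy)
        return energy

# 4. add the alg-tool to the global tool list of a jet algorithm
jet_alg_tools = [JetCalibrationTool(SEQUENCES["EM+H1"])]
print(len(jet_alg_tools))  # 1
```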

The most up-to-date recipes are obtained from the Jet/Etmiss page on re-running jets. The implementation for Top Note II and the latest TopPhys dAOD JO are the most useful places to look for implementation details. The TopPhysStup studies provide the development info for multiple jet containers and multiple top input objects to be built. The implementation is by now absorbed in Top.

Pile-up issues are crucial for the 2010-2011 dataset, so for jets one has JetQualityAndSelectionForPileup and the older, but still interesting, JetsWithPileUp. Notice that as of June 2010 the cell-related quality variables (n90 and jet quality) are not filled when re-running the container on the fly from AODs. A solution to this is under study.

Available Jets in AOD

The available collections are documented in the AOD Class summary. For release 15 it is here, and the software status evolution is documented in the Jet Software Status twiki.

Consistent Jet and missing Et implementation for top


What are the first important plots to see at 1 pb^-1? 10 pb^-1? 20 pb^-1? 50 pb^-1? 100 pb^-1? How quickly are we supposed to reach these values of lumi? What are the control samples?
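For planning which plots matter at a given integrated luminosity, the rough expected-yield arithmetic N = sigma * L * eff is enough. The cross section and efficiency below are illustrative placeholders, not official values.

```python
# Hedged sketch: expected signal yields versus integrated luminosity,
#   N = sigma * L * eff.  Inputs are illustrative placeholders.
def expected_events(sigma_pb, lumi_pb, eff):
    return sigma_pb * lumi_pb * eff

for lumi in (1, 10, 20, 50, 100):          # pb^-1
    n = expected_events(sigma_pb=160.0,    # assumed ttbar-scale value
                        lumi_pb=lumi, eff=0.05)
    print(lumi, round(n, 1))
```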

The control samples

The pre-analysis validation plots

Criteria: they need to be important plots for the objects we use in the final state, plots that tell us about every single cut we make, and plots where there are still data-Monte Carlo discrepancies.

Measurement of jet multiplicity ( top, Z+jets, W+jets, ratios)



What is the strategy for data-driven background determination? How do you deal with fakes?

Data Samples

What will 900 GeV data tell you about your analysis?

Monte Carlo Cross sections and Samples

Is the simulation adequate for your analysis? How will you handle the inevitable mismatch between our present simulation and real data?

Use the information present in the Standard Model Cross Section Task Force. The production of Monte Carlo samples can be checked at the ATLAS Production Team pages.

min bias @ 900 GeV

min bias @ 7 TeV

7 TeV Samples


Minimum bias samples (check line 24)

Jet /ETmiss MC samples

Top Analysis

We need to check updates in TopMC09

Most important datasets:


  • mc09_7TeV.105200.T1_McAtNlo_Jimmy.merge.AOD.e510_s624_s633_r1064_r1051/
  • mc09_7TeV.105861.TTbar_PowHeg_Pythia.merge.AOD.e505_s624_s633_r1064_r1051/

Single Top:

  • mc09_7TeV.108340.st_tchan_enu_McAtNlo_Jimmy.merge.AOD.e508_s625_s633_r907_r879/ (check for tag r1064_r1051)
  • mc09_7TeV.108346.st_Wt_McAtNlo_Jimmy.merge.AOD.e508_s624_s633_r1064_r1051





Single Top:

  • mc09_7TeV.108341.st_tchan_munu_McAtNlo_Jimmy.merge.AOD.e508_s624_s633_r1064_r1051
  • mc09_7TeV.108342.st_tchan_taunu_McAtNlo_Jimmy.merge.AOD.e508_s625_s633_r907_r879 (check for r1064_r1051)
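The dataset names above follow the dotted ATLAS naming convention (project.datasetID.physicsShort.prodStep.dataType.tags), so a small parser makes it easy to group samples by generator or reconstruction tag. The field names chosen below are this sketch's own labels, not official terminology.

```python
# Hedged sketch: split an mc09-style dataset name into its dotted
# fields. Field labels are this example's own convention.
def parse_dataset(name):
    parts = name.rstrip("/").split(".")
    keys = ("project", "dsid", "physics", "prodstep", "datatype", "tags")
    return dict(zip(keys, parts))

d = parse_dataset(
    "mc09_7TeV.105200.T1_McAtNlo_Jimmy.merge.AOD."
    "e510_s624_s633_r1064_r1051/")
print(d["dsid"], d["datatype"], d["tags"])
# 105200 AOD e510_s624_s633_r1064_r1051
```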





Tools And Computing

Is enough effort going into validation? How will you handle time-varying conditions? Does your analysis even run with Monte Carlo truth dropped? Does your code run on cosmic-ray data?

Keep note of this useful jet validation facility.

Simulation truth info

The findings of the MC Truth task force are important.

The software to use

We need to be able to

  • run on the grid
  • read D2PDs
  • manipulate object info: cuts/overlap/recalibration (re-do TopInputs)
  • dump info into a D3PD

Working on the grid

It is important to know what sites to submit to, in case it is needed. We submit jobs mainly using pathena.

Full Analysis

The code we use is based on the Top Reconstruction code. It is useful to keep an eye on the WZBenchmark analysis.

Ntuple Dumpers

The Top ntuples we use are based on the Top PhysD3PD maker and the Jet D3PD maker.


Computers to be used:

Reference Papers at Tevatron (CDF and D0)

Top discovery

very short summary

Observation of Top Quark Production in pp̄ Collisions by F. Abe et al. (CDF Collaboration), Phys. Rev. Lett. 74, 2626-2631 (1995), also available as hep-ex/9503002v2

Observation of the Top Quark by S. Abachi, et al. (DØ collaboration), Phys. Rev. Lett. 74, 2632 (1995), also available as hep-ex/9503003

Performance info

Hadronic Status



Determination of the Jet Energy Scale at the Collider Detector at Fermilab

D0 jet energy scale at the JES group at D0

Studying the Underlying Event in Drell-Yan and High Transverse Momentum Jet Production at the Tevatron the CDF collaboration

Jimmy MPI generator

Integral cross sections

Differential cross sections

First Measurement of the tt Differential Cross Section dσ/dMtt in pp Collisions at sqrt(s)= 1.96 TeV by T. Aaltonen et al., the CDF Collaboration, PRL 102 222003

Mtt resonance search in all jets (2.8 fb-1)

Reference Papers in ATLAS

Performance papers

Reconstruction for Top

Jets and Missing Energy

Single Lepton Channel

Study on reconstructed object definition and selection for top physics by Abbott, B; Allwood-Spiers, S; Astalos, R; de Bell, M; Benekos, N; BoisVert, V; Bordoni, S; Brooijmans, G; De Bruyn, K; Cerrito, L et al. - ATL-COM-PHYS-2009-633.- Geneva : CERN, 2009 - 146 p.

Prospects for the Top Pair Production Cross-section at s = 10 TeV in the Single Lepton Channel in ATLAS, The ATLAS Collaboration, ATL-PHYS-PUB-2009-087 ; ATL-COM-PHYS-2009-404, Geneva : CERN, 2009 - 19 p.

Prospects for measuring the Top Quark Pair Production Cross-section in the Single Lepton Channel at ATLAS in 10 TeV p-p Collisions by Acharya, B; Bartsch, D; Besana, I; Bentvelsen, S; Bosman, M; Brock, I C; Cobal, M; Cristinziani, M; De Sanctis, U; Doxiadis, A et al. ATL-COM-PHYS-2009-306.- Geneva : CERN, 2009 - 60 p.

Di-lepton Channel

Prospects for measuring top pair production in the dilepton channel with early ATLAS data at s = 10 TeV, the ATLAS Collaboration, ATL-PHYS-PUB-2009-086 ; ATL-COM-PHYS-2009-402- Geneva : CERN, 2009 - 20 p.

Sensitivity of the top dilepton cross-section measurement at sqrt{s} = 10 TeV by Cristinziani, M; Loginov, A; Adelman, J; Allwood-Spiers, S; Auerbach, B; Cranmer, K; Gellerstedt, K; Guo, B; Kaplan, B; Lockwitz, S et al. ATL-COM-PHYS-2009-307.- Geneva : CERN, 2009 - 54 p.


Minimum bias paper and twiki

Study of fully hadronic ttbar decays and their separation from QCD multijet background events in the first year of the ATLAS experiment by Marion Lambacher - Ph.D. Thesis - LMU - Munich

Detector basic performance

Response and Shower Topology of 2 to 180 GeV Pions Measured with the ATLAS Barrel Calorimeter at the CERN Test–beam and Comparison to Monte Carlo Simulations by T. Carli et al, ATL-COM-CAL-2009-004.

Calibration of ATLAS Tile Calorimeter at Electromagnetic Scale, K. Anderson et al, ATL-TILECAL-PUB-2009-001

2004 ATLAS Combined Testbeam: Computation and Validation of the Electronic Calibration Constants for the Electromagnetic Calorimeter, ATL-LARG-PUB-2006-003

Study of Energy Reconstruction Algorithms for Pions from 2 to 180 GeV Measured with the ATLAS Barrel Calorimeter at the CERN SPS Test-beam

Energy Linearity and Resolution of the ATLAS Electromagnetic Barrel Calorimeter in an Electron Test-Beam

Jets and Constituents

Calorimeter Clustering Algorithms: Description and Performance by W. Lampl et al., ATL-LARG-PUB-2008-002

Atlas Analysis Model

The AMFY document

Data Challenges in ATLAS computing

Short ATLAS Computing model summary

Computing TDR

Atlas Simulation

The ATLAS Simulation project, ATLAS-SOFT-INT-2010-002. The simulation of the ATLAS Liquid Argon Calorimetry.

Monte Carlo Truth info

Monte Carlo Truth Task force

Basics of Setup

ATLAS software is organised in separate projects that can have their own development schedules and independent releases. This is properly described in Working with Project Builds. The general projects that incorporate all the others are AtlasProduction and AtlasOffline. Each project has its own release cycle according to the numbering


Info on the setup at Nevis is here.

-- FrancescoSpano - 30-Nov-2009 -- FrancescoSpano - 11-Feb-2010

Topic revision: r89 - 2011-01-12 - FrancescoSpano