TopPhoton2012

Photon recommendations for 2012 analyses.

New recommendations for photon identification and calibration have been released by the E/gamma combined performance group. For all the details please refer to the message sent by the E/gamma conveners (here). The main points to consider for 2012 analyses are the following:

  • Photon Identification: new Fudge Factors and systematic uncertainties for the cut-based selection have been made available: see here.
  • Photon Calibration: new preliminary scale factors and systematic uncertainties have been derived, to be used on the reprocessed 2012 data. All details can be found here. Systematic uncertainties have been slightly inflated to cover the observed difference between the pre-HCP and post-HCP datasets. The constant-term smearing and the error on the constant term have been increased in some eta regions to cover the well-known Geant4 mismodelling of the calorimeter energy response in MC12.

Prior to any photon selection, apply all object-level corrections described below in more detail:

  1. use EnergyRescalerUpgrade to correct the energy scale (in data) or smear the energy resolution (in MC)
  2. use ConvertedPhotonScaleTool for an additional energy scale correction for converted photon candidates (both data and MC)
  3. after applying these two calibrations, the corrected cluster energy is the calibrated photon energy E, and E/cosh(ph_etas2) is the calibrated photon pT. Use the final calibration to rescale the photon shower shapes that are not ratios of ECAL energies.
  4. on MC, use FudgeMCTool to apply fudge factors to the calorimeter shower shape variables
  5. on data and MC, use PhotonIDTool to compute the photon identification variables
  6. on data and MC, use CaloIsoCorrection to compute the corrected topological calorimeter isolation of the photon in a cone of radius 0.4

All the tools are contained in the egammaAnalysisUtils package; use version 00-03-50 or higher.

Photon calibration (EgammaCalibration)

The Egamma calibration procedure consists of:
  • applying corrections to the photon and electron energy response in data, and
  • smearing their response in simulation to mimic the effect observed in the real detector.

For the moment we use the calibration recommendations for 2012 analyses with GEO20 MC samples and the Calibration-Hits-based calibration (EGammaCalibrationGEO20):

  • that is the geometry tag (ATLAS-GEO-20-00-01_VALIDATION) returned by the AMI tag interpreter on mc12_8TeV/mc12_8TeV.117050.PowhegPythia_P2011C_ttbar.merge.NTUP_TOP.e1728_s1581_s1586_r3658_r3549_p1400

We define an energy scale correction a that restores agreement between the energy scales in data and MC, such that Edata = (1 + a) Emc. The correction has a set of systematic uncertainties da. Two equivalent ways to propagate the energy scale correction and its uncertainty to a measurement can be used:

  1. nominal corrections on data, systematic variations on MC (most common approach)
  2. nominal corrections and systematic variations applied on MC

Both the energy scale corrections and the resolution smearing are to be implemented in PhotonRescalingProc.

Energy scale

Data

Nominal scale correction for electrons (and similarly for photons):

double ecorr = ers.applyEnergyCorrection(eta, energy, egRescaler::EnergyRescalerUpgrade::Electron, egRescaler::EnergyRescalerUpgrade::Nominal);
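
As an illustration, the minimal sketch below (not part of the official recipe) applies the nominal correction to a single photon candidate in data and derives the calibrated Et; the D3PD-style names ph_cl_eta, ph_cl_E (in MeV), ph_etas2 and ph_isConv, as well as the already-initialised ers instance, are assumptions:

// illustrative sketch only: nominal 2012 scale correction for one photon candidate in data
// (needs <cmath> for cosh; "ers" is an initialised egRescaler::EnergyRescalerUpgrade)
double ecorr = ers.applyEnergyCorrection(ph_cl_eta, ph_cl_E,
                 ph_isConv ? egRescaler::EnergyRescalerUpgrade::Converted
                           : egRescaler::EnergyRescalerUpgrade::Unconverted,
                 egRescaler::EnergyRescalerUpgrade::Nominal);
// calibrated transverse energy, as in the recipe above (E / cosh(etas2))
double etcorr = ecorr / cosh(ph_etas2);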

MC

Systematic variations include:

  1. Z scale uncertainties
    1. statistics (egRescaler::EnergyRescalerUpgrade::ZeeStatUp)
    2. method (egRescaler::EnergyRescalerUpgrade::ZeeMethodUp)
    3. choice of generator (egRescaler::EnergyRescalerUpgrade::ZeeGenUp)
  2. Presampler scale uncertainty (egRescaler::EnergyRescalerUpgrade::PSStatUp)
  3. Material uncertainty (egRescaler::EnergyRescalerUpgrade::R12StatUp)
  4. low-pt uncertainty (active only below pt=20 GeV) (egRescaler::EnergyRescalerUpgrade::LowPtUp)

Note: The total Zee uncertainty variation can also be obtained directly (egRescaler::EnergyRescalerUpgrade::ZeeAllUp).

The uncertainties are symmetric; they need to be varied independently, and the resulting variations should be summed in quadrature (a sketch of the combination is given after the code below):

double eNominal = ers.applyEnergyCorrection(eta, e, ptype, egRescaler::EnergyRescalerUpgrade::Nominal);

double eZeeStatUp = ers.applyEnergyCorrection(eta, e, ptype, egRescaler::EnergyRescalerUpgrade::ZeeStatUp);
double eZeeMethUp = ers.applyEnergyCorrection(eta, e, ptype, egRescaler::EnergyRescalerUpgrade::ZeeMethodUp);
double eZeeGenUp = ers.applyEnergyCorrection(eta, e, ptype, egRescaler::EnergyRescalerUpgrade::ZeeGenUp);
double eZeeAllUp = ers.applyEnergyCorrection(eta, e, ptype, egRescaler::EnergyRescalerUpgrade::ZeeAllUp); // shorthand; quad. sum of the above

double ePSUp = ers.applyEnergyCorrection(eta, e, ptype, egRescaler::EnergyRescalerUpgrade::PSStatUp);
double eMATUp = ers.applyEnergyCorrection(eta, e, ptype, egRescaler::EnergyRescalerUpgrade::R12StatUp);
double eLowUp = ers.applyEnergyCorrection(eta, e, ptype, egRescaler::EnergyRescalerUpgrade::LowPtUp);
where ptype can be:
egRescaler::EnergyRescalerUpgrade::Electron (electrons),
egRescaler::EnergyRescalerUpgrade::Unconverted (unconverted photons),
egRescaler::EnergyRescalerUpgrade::Converted (converted photons).

Note: "Down" variations are defined too, for completeness.
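
Continuing the snippet above, a minimal sketch (illustrative, not an official prescription) of the quadratic combination of the independent scale variations into a single relative uncertainty:

// relative sizes of the independent "up" variations with respect to the nominal energy
// (needs <cmath> for sqrt)
double dZee = (eZeeAllUp - eNominal) / eNominal; // or combine ZeeStat/ZeeMethod/ZeeGen separately
double dPS  = (ePSUp  - eNominal) / eNominal;
double dMat = (eMATUp - eNominal) / eNominal;
double dLow = (eLowUp - eNominal) / eNominal;    // non-zero only below pt = 20 GeV

// total relative scale uncertainty: quadratic sum of the independent components
double dTot = sqrt(dZee*dZee + dPS*dPS + dMat*dMat + dLow*dLow);

// symmetric up/down varied energies
double eScaleUp   = eNominal * (1.0 + dTot);
double eScaleDown = eNominal * (1.0 - dTot);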

Resolution smearing (MC)

Smearing corrections

Smeared energies with nominal, up and down variations of the resolution correction are obtained as follows:
double esmNom = e * ers.getSmearingCorrection(eta, e, egRescaler::EnergyRescalerUpgrade::NOMINAL);
double esmUp = e * ers.getSmearingCorrection(eta, e, egRescaler::EnergyRescalerUpgrade::ERR_UP);
double esmDown = e * ers.getSmearingCorrection(eta, e, egRescaler::EnergyRescalerUpgrade::ERR_DOWN);
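
Putting the scale correction and the smearing together, a minimal sketch of the per-photon energy treatment under "approach 1" above (nominal corrections on data, smearing on MC); the helper name and the isData/isConv flags are assumptions:

// illustrative helper, not part of the package: calibrated photon energy under approach 1
double calibratedEnergy(egRescaler::EnergyRescalerUpgrade& ers,
                        bool isData, bool isConv, double eta, double e)
{
  if (isData) {
    // data: apply the nominal energy scale correction
    return ers.applyEnergyCorrection(eta, e,
             isConv ? egRescaler::EnergyRescalerUpgrade::Converted
                    : egRescaler::EnergyRescalerUpgrade::Unconverted,
             egRescaler::EnergyRescalerUpgrade::Nominal);
  }
  // MC: smear the energy to match the resolution observed in data
  return e * ers.getSmearingCorrection(eta, e, egRescaler::EnergyRescalerUpgrade::NOMINAL);
}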

Converted photons scale tool (ConvertedPhotonCalibrationTool)

  • The present MC electromagnetic calibration does not directly take into account the conversion radius of converted photons. This information is useful because it is correlated with the energy lost in front of the calorimeter and outside the reconstructed cluster.
  • This tool provides a factor to correct the calibrated energy, using the conversion radius as additional information. The main use of this tool is in ntuple-based analyses.
  • The tool takes as input the eta of the cluster, the calibrated energy (in MeV) and the radius of conversion (in mm). It returns a multiplicative factor close to 1. The tool works only for converted photons, but it does not check this: it is your responsibility to apply the correction factor only to converted photons.
  • The tool can be applied on MC and on data in the same way.
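
A minimal usage sketch follows; since this page does not quote the class interface, the constructor and the method name Scale(eta, E, Rconv) are assumptions and should be checked against the ConvertedPhotonScaleTool header in egammaAnalysisUtils. Here ecorr is the calibrated energy from the earlier sketch:

// hypothetical interface (check the real header): the tool returns a multiplicative
// factor close to 1, to be applied only to converted photons (it does not check this itself)
ConvertedPhotonScaleTool convTool;            // construction details are an assumption
if (ph_isConv) {
  double factor = convTool.Scale(ph_cl_eta,   // eta of the cluster
                                 ecorr,       // calibrated energy, in MeV
                                 ph_Rconv);   // conversion radius, in mm (D3PD name assumed)
  ecorr *= factor;
}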

Atlfast shower specific calibration corrections:

To bring Atlfast into agreement with G4 (affects only central electrons and photons):
double ETcorrected = EToriginal * ers.applyAFtoG4(cl_eta); 

Photon fudge factors (PhotonFudgeFactors)

  • The differences observed between data and MC in the isEM discriminating variables are measured by comparing the shower shape distributions, and are parametrized as simple shifts. These shifts (a.k.a. fudge factors) are computed as the difference between the means of a given variable in data and in MC. The fudge factors are then applied to the discriminating variables of the signal MC to obtain the corrected efficiency.
  • Since September 2012 the FudgeMCTool lives in the egammaAnalysisUtils package (the standalone package is no longer supported).
  • The FudgeMCTool class provides a collection of data/MC correction factors (a.k.a. "fudge factors") for several pre-selections of showers in data and MC. For a given collection, a new set of corrected shower shapes can be retrieved in order to estimate the photon identification efficiencies from MC simulated samples. The code relies on the PtEtaCollection class, and works both as a ROOT CINT macro and as compiled code.
  • Instances of the FudgeMCTool class are initialized with the calorimetric discriminating variable values, as well as with the photon candidate eta in the calorimeter second layer, eta2, its cluster pt, and the candidate conversion flag.
  • A collection of fudge factor sets is available, corresponding to the different preselections that were applied to both data and MC to extract them. They can be chosen by calling FudgeMCTool::SetPreselection(int ps), where
    • ps = 14: "2012" tight selection + isolation, improved at low ET. To be used for 2012 analyses. This pre-selection includes fudge factors down to 15 GeV.
  • Please note that the use of an isolation selection when computing a fudge factor set is only meant to improve the photon purity in the data sample. The corresponding fudge factors are to be considered universal with respect to any isolation prescription you might use in your analysis. Any residual dependence of the photon identification efficiency on the specific isolation prescription must be accounted for in the systematic uncertainties, as described here for 2011 analyses and here for 2012 analyses.
  • The most important method for all use cases is probably:

void FudgeMCTool::FudgeShowers( double  pt     ,
                           double  eta2   ,
                           double& rhad1  ,
                           double& rhad   ,
                           double& e277   ,
                           double& reta   ,
                           double& rphi   ,
                           double& weta2  ,
                           double& f1     ,
                           double& fside  ,
                           double& wtot   ,
                           double& w1     ,
                           double& deltae ,
                           double& eratio ,
                           int     isConv ,
                           int     preselection = -999);

which returns the corrected shower shapes after applying the fudge factors (which depend on pt, eta2, isConv and the preselection, 0 by default).
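
For example, on MC one would correct the shower shapes of each photon candidate before passing them to the PhotonIDTool. A minimal sketch, assuming FudgeMCTool can be default-constructed and that the shower-shape variables have already been read from the D3PD into local variables with the names below:

// MC only: shift the calorimetric discriminating variables by the data/MC fudge factors;
// the variables are passed by reference and modified in place
FudgeMCTool fudge;                      // default construction assumed
fudge.FudgeShowers(pt, eta2,
                   rhad1, rhad, e277, reta, rphi, weta2,
                   f1, fside, wtot, w1, deltae, eratio,
                   isConv,
                   14);                 // preselection 14: "2012" tight + isolation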

Photon identification

The efficiency of the photon identification criteria can be computed from Monte Carlo (MC) simulated samples, by correcting the shower shape distributions to account for the differences observed between candidates in data and MC. The photon identification efficiency can also be measured from data using different approaches, described for instance in ATLAS-CONF-2012-123.

Given the different levels of maturity of the data/MC corrections and of the data-driven measurements with 2011 and 2012 data, different recommendations are proposed to compute the photon identification efficiencies and their associated systematic uncertainties, depending on which data sample is under study. The pages linked below provide instructions on the tools to be used to apply the identification criteria, to correct the MC samples, and to compute the photon identification efficiency values and uncertainties for the 2012 dataset.

Two tools are available for photon identification

Cut-based identification (isEM identification)

  • twiki: IsEMIdentification
  • For 2012 data, the photon PID menu has been re-optimised to mitigate the efficiency degradation at high pile-up and to cope with the higher trigger rate induced by the higher instantaneous luminosity and by the increased centre-of-mass energy of 8 TeV. A medium operating point, used at trigger level, has been introduced.
  • Compared to the first optimisation, the first 2012 data showed that a further tuning was necessary (mainly due to unexpected changes in some shower shape variables at high eta, induced by the use of new LAr optimal filtering coefficients) to preserve a high identification efficiency.
  • As a consequence, until an AODFix accounting for these changes is deployed, the offline PID for photons in both AODs and D3PDs is not up-to-date and therefore should NOT be used. Instead, the correct photon identification MUST be retrieved with the PhotonIDTool (loose tune 4, medium tune 1 and tight tune 2012) applied at analysis level. This applies to both data and MC.
  • Note that the photon cluster-level calibration constants applied using the EnergyRescalerTool are not tabulated for layer-dependent energy quantities such as shower shapes. For this reason, the recommended recipe is to not rescale the shower shape values that are the input to the PhotonIDTool (i.e. DeltaE). The cluster Et, however, should be rescaled using the EnergyRescalerTool, as it is a cluster-level quantity. This affects the variable Rhad1 (Rhad), which is calculated as the ratio of the Et in the first layer (all layers) of the hadronic calorimeter to the photon cluster Et.

Photon ID Tool (PhotonIDTool)

  • For photon PID, the two working points, loose and tight, are defined differently in 2012 with respect to the corresponding criteria used in 2011. In addition, a third working point, called medium, is introduced in 2012 (it was not defined in 2011), with an efficiency/rejection in between the loose and tight working points (not considered here).
  • Instances of the PhotonIDTool class are initialized with the calorimetric discriminating variable values, as well as with the photon candidate eta in the calorimeter second layer ("eta2"), its cluster pT, and the conversion flag. We make use of the alternative constructor, initialized with the derived EGamma variables that are used in the selection:
PhotonIDTool( double pt     ,
      double eta2   ,
      double rhad1  ,
      double rhad   ,
      double e277   ,
      double reta   ,
      double rphi   ,
      double weta2  ,
      double f1     ,
      double fside  ,
      double wtot   ,
      double w1     ,
      double deltae ,
      double eratio ,
      int conv      );
The above constructor is probably the one best suited for applying fudge factors to the derived discriminating variables.
  • PhotonIDTool implements several isEM menus (a usage sketch is given after this list):
    • TIGHT cuts: bool PhotonIDTool::PhotonCutsTight(2012);, where tune=2012 is a pileup-optimized tight menu, to be used for 2012 data analyses.
    • LOOSE cuts: bool PhotonIDTool::PhotonCutsLoose(4);, where tune=4 is a loose menu optimized for 2012 data analyses.
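
A minimal sketch of applying these menus at analysis level, on data or on MC after fudging; the constructor arguments follow the signature quoted above, with pt the calibrated cluster pT and the shower shapes the (fudged, on MC) discriminating variables from the FudgeMCTool example:

// build the tool from the derived discriminating variables of one photon candidate
PhotonIDTool pid(pt, eta2,
                 rhad1, rhad, e277, reta, rphi, weta2,
                 f1, fside, wtot, w1, deltae, eratio,
                 isConv);

bool passTight2012 = pid.PhotonCutsTight(2012);  // tight, pileup-optimized 2012 tune
bool passLoose4    = pid.PhotonCutsLoose(4);     // loose, 2012 tune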

Ambiguity resolution

  • If working on analyses where electrons faking photons are an important background, you can consider using the "Ambiguity Resolved" versions of cuts.
  • The AR isEM bit is in position 23, so the AR can be checked in association with the tight cuts by masking the isEM word:
       bool isPhotonAR = true;
       if ((ph_isEM & 0x800000) != 0) isPhotonAR = false;
       

Systematic uncertainties (PhotonID2012)

  • The preliminary systematic uncertainty on the efficiency of the cut-based tight selection (tune 2012) for photons with ET > 15 GeV is as follows (a small helper encoding these numbers is sketched after this list):
    • Unconverted photons:
      • 2.5% below 40 GeV
      • 1.5% above 40 GeV for |eta|<1.81, 2.5% above 40 GeV for |eta|>1.81
    • Converted photons:
      • 2.5% below 40 GeV
      • 1.5% above 40 GeV
  • These values are based on the combination of the preliminary data-driven measurements using electron extrapolation with Z to ee Tag&Probe and Z radiative decays. More data-driven measurements with 2012 data are underway to better constrain these systematics and compute scale factors.
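
For convenience, the numbers above can be encoded in a small helper; the sketch below is a direct transcription of the quoted values and is not an official tool:

// relative systematic uncertainty on the tight (tune 2012) photon ID efficiency,
// encoding the preliminary numbers quoted above (valid for ET > 15 GeV)
double photonTightIDSyst2012(double et_GeV, double absEta, bool isConverted)
{
  if (et_GeV < 40.0) return 0.025;                  // 2.5% below 40 GeV (conv. and unconv.)
  if (!isConverted && absEta > 1.81) return 0.025;  // unconverted, |eta| > 1.81, above 40 GeV
  return 0.015;                                     // 1.5% otherwise above 40 GeV
}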

See a summary in this presentation (egamma meeting, January 23 2012).

Photon selection

Object and Event selection

  • In electron and photon analyses a selection has to be applied to reject bad-quality clusters or fake clusters originating from calorimeter problems, e.g.
    • LAr noise bursts and data-integrity errors, flagged by an event-by-event cut.
    • Rejection of bad-quality clusters through the Object Quality flag.
  • Twikis:

larError event flag

  • variable name in D3PD: larError
    • 0: ok
    • 1: noise burst
    • 2: data integrity error (+ time veto around identified noise bursts since rel. 17)
  • For release 17 reconstruction it is recommended to remove events with larError > 1 (i.e. larError == 2)

Object quality (OTX object quality)

  • For both data and Monte Carlo, the quality of the egamma object has to be checked using the Object Quality flag.
    • Warning: these were the recommendations for 2011 analyses, but they do not appear to have changed for 2012 analyses (at least no indication of a change was found). The Photon cross-section analysis 2012 twiki (PhotonCrossSection2012) points to LArCleaningAndObjectQuality...
  • at D3PD level, the variable of interest is ph_OQ:
       if ((ph_OQ & 34214) == 0) {
         cout << "this is a good photon" << endl;
       }
       
  • Note for electrons, one should check:
       if ((el_OQ & 1446) == 0) {
         cout << "this is a good electron" << endl;
       }
       

photon cleaning

Given that the photon selection used to extract the scale factors (PhotonEfficiencyCorrection) uses photon cleaning, we assume the 2011 prescriptions (LArCleaningAndObjectQuality#Photon_Cleaning) are also valid for 2012:

In both data and MC one must apply the following:

if ( !( (ph_OQ & 134217728) != 0 && (ph_reta > 0.98 || ph_rphi > 1.0 || (ph_OQ & 67108864) != 0) ) ) {
   cout << "this is a good photon wrt the new photon cleaning" << endl;
}
  • Note that the first requirement on ph_OQ is for the LArCleaning bit (bit 27, 134217728 = 0x8000000) and the second one is for the timing bit (bit 26, 67108864 = 0x4000000).
  • Always use the original (uncorrected) reta and rphi values.

Rejection of incomplete events

In 2012 data-taking the TTC restart procedure was developed to recover from certain detector busy conditions without a run restart. In the luminosity block following a TTC restart there can be incomplete events (where some detector information is missing from the event). Such events should be checked for and removed from the analysis using:

    if ((coreFlags & 0x40000) != 0) {
      // this is an incomplete event: remove it from the analysis
    }

kinematics

Et

Since the photon fudge factors (pre-selection 14) are valid down to 15 GeV, the minimum photon Et cut is set to 15 GeV.

eta

  • The region 1.37 < |eta| < 1.52 is commonly excluded by analyses relying on precise calorimetry. In 2012, the HtoGammaGamma analysis decided to increase the size of the excluded region to 1.56...
  • According to CrackDefinition, it is recommended to use etas2 for all eta-related fiducial and identification cuts, and to extend the crack definition to 1.56 to avoid degraded calibration performance.

Summary of recommendations for photon selection

  • Event selection
    LAr veto (data only): remove all events that have the LAr error flag set: if (larError==2) { /* reject event */ }
    bad/corrupted event rejection (data only): if ((coreFlags&0x40000)!=0) { /* incomplete event, remove from analysis */ }

  • photon selection
    object quality: require (ph_OQ & 34214) == 0
    photon cleaning: reject if ((ph_OQ & 134217728) != 0) AND ((ph_reta > 0.98) OR (ph_rphi > 1.0) OR ((ph_OQ & 67108864) != 0))
    AR: require (ph_isEM & 0x800000) == 0
    photon Et: > 15 GeV
    photon eta: remove the crack region, i.e. require abs(etas2) < 1.37 or 1.56 < abs(etas2) < 2.37
    photon identification (data and MC): tight: tune=2012; loose: tune=4
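
Putting the pieces together, a consolidated selection sketch following the summary above. The branch names (larError, coreFlags, ph_OQ, ph_isEM, ph_reta, ph_rphi, ph_etas2) follow the usual D3PD conventions but are assumptions here, as are the isData flag, the calibrated etcorr (in MeV) and the passTight2012 decision obtained from the PhotonIDTool as sketched earlier:

// ---- event-level cuts (data only) ----
bool passLArVeto   = (larError != 2);               // reject LAr noise-burst periods
bool passCoreFlags = ((coreFlags & 0x40000) == 0);  // reject incomplete (TTC-restart) events
bool goodEvent     = !isData || (passLArVeto && passCoreFlags);

// ---- photon-level cuts (data and MC) ----
bool passOQ       = ((ph_OQ & 34214) == 0);             // object quality
bool passCleaning = !( (ph_OQ & 134217728) != 0 &&      // LArCleaning bit (27) set, and
                       (ph_reta > 0.98 || ph_rphi > 1.0 ||
                        (ph_OQ & 67108864) != 0) );     // large reta/rphi or timing bit (26)
bool passAR       = ((ph_isEM & 0x800000) == 0);        // ambiguity resolved
bool passEt       = (etcorr > 15000.);                  // calibrated Et > 15 GeV (MeV assumed)
double aEta       = fabs(ph_etas2);
bool passEta      = (aEta < 1.37) || (aEta > 1.56 && aEta < 2.37);
bool passID       = passTight2012;                      // PhotonIDTool, tight tune 2012

bool goodPhoton   = passOQ && passCleaning && passAR && passEt && passEta && passID;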

Open questions

  • which geometry are the top samples produced with?

Electron identification efficiency

  • according to EfficiencyMeasurements2012#Bug_fixed_Recommendations_for_20, the tag to be used for analyses using the full 2012 dataset and the MC12a or MC12b Monte Carlo samples (ATLAS-GEO-20-00-01, simulation tags s1468 - s1669) should be ElectronEfficiencyCorrection-00-00-46
  • The new input files for trigger, reconstruction and identification for the new SF tool are tagged in version ElectronEfficiencyCorrection-00-00-46, with names ending in v07 (v08 for reconstruction)

Electron and photon OQ

General useful twikis

-- SergioGonzalez - 13 Mar 2014
