Data sample

  • Good Run List (GRL): TopGRLs
  • LArError
    • The GRL removes bad luminosity blocks, but isolated events in a good luminosity block may still suffer from serious problems in one or more detectors. Photon analyses are affected by events with noise bursts and data-integrity errors in the LAr calorimeters.
    • LArEventVetoRel17.


  • Acceptance
    • require photon candidates with |eta| < 2.37 with the crack region excluded: 1.37 < |eta| < 1.52
    • use the eta computed in the 2nd sampling of the LAr calorimeter, etaS2
  • LAr Error
    • For release 17, it is recommended to remove events with larError>1, or equivalently larError==2 (instead of larError!=0 for release 16)
    • The cut can be applied anywhere in the cut flow, but it is strongly recommended to apply it after the full cut flow (even after jet/missing ET cleaning). This makes it possible to check how many of these problematic events would be selected in the final data sample; that information should be passed to egamma for studying the impact of this cut.
  • Object quality
          if ((ph_OQ & 34214) == 0) {
            cout << "this is a good photon" << endl;
          }
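The acceptance, LAr-error and object-quality requirements above can be sketched as plain selection functions. The cut values are taken from this page; the function and variable names are illustrative only.

```cpp
#include <cassert>
#include <cmath>

// Acceptance: |etaS2| < 2.37, excluding the crack region 1.37 < |etaS2| < 1.52
bool passAcceptance(double etaS2) {
    double aeta = std::fabs(etaS2);
    if (aeta >= 2.37) return false;
    if (aeta > 1.37 && aeta < 1.52) return false; // crack region
    return true;
}

// Release-17 LAr error veto: reject events with larError > 1 (i.e. == 2)
bool passLArError(int larError) {
    return larError <= 1;
}

// Object quality: negative logic, no bit of the bad-quality mask may be set
bool passObjectQuality(unsigned int ph_OQ) {
    return (ph_OQ & 34214) == 0;
}
```

A photon candidate is kept only if all three functions return true.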

Electrons and photons

Energy scale and resolution corrections


  • Energy scale corrections (data)
    • Residual energy-scale corrections have been estimated using the full 2011 data set and are found to agree with the EPS recommendations within the quoted systematic uncertainties
    • The correction factors were determined in 26 eta bins for central electrons (|eta|<2.47) and 6 bins for forward electrons (|eta|>2.5), selecting Zee events in 2011 data.
  • Energy resolution corrections (MC)
    • Since the MC does not perfectly reproduce the energy resolution observed in data, a smearing procedure should be applied to the MC.

The tool to use in both cases is the EnergyRescaler within the egammaAnalysisUtils package.

  • Technical details and implementation are found on the EnergyRescaler page.
  • The tool also handles the treatment of associated systematic uncertainties, provided separately for electrons and photons. The ET validity range of these systematics is [7 GeV, 1 TeV]. The uncertainties are provided in different eta bins and should be treated as correlated across the eta bins and along pt. The systematics can be propagated either to Monte Carlo or to data (only one option should be chosen).
  • An example of how to use this tool can be found at EnergyRescalerTool example

From the Recommendations for SM Direct Photon analyses with 2011 data: electrons and photons with ET > 10 GeV should be corrected with the above scale factors.

  • These corrections to the photon energy scale should be applied before using PhotonIDTool. The non-relative isEM variables should be scaled as well: this will introduce some (tiny) discrepancies between the AOD/D3PD isEM values and those computed by PhotonIDTool.
  • Please note that you should not apply the EnergyRescaler scale factor to the hadronic energies Ethad1 and Ethad
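Since the hadronic energies Ethad1/Ethad are left untouched while the cluster Et is rescaled, any energy-scale correction propagates to the relative variables Rhad1/Rhad through the denominator only. A minimal sketch (function and variable names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Rhad(1) = Ethad(1) / Et(cluster). Only the cluster Et is rescaled;
// the hadronic energy itself is NOT multiplied by the scale factor.
double rhadAfterRescale(double ethad, double rawEt, double scale) {
    return ethad / (rawEt * scale);
}
```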


Photon identification


  • Scale factors: ratio of data/MC identification efficiencies for electrons (and photons), obtained from W/Z tag-and-probe as a function of eta and ET.
  • Code in svn (Reconstruction/egamma/egammaAnalysis/egammaAnalysisUtils) as egammaSFclass.h/C to retrieve the scale factors and uncertainties (values hard-coded).
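Identification scale factors are typically applied as per-object event weights, with the uncertainties of independent objects combined in quadrature. A generic sketch of that combination; the struct, function names and numbers below are illustrative, not the hard-coded egammaSFclass values:

```cpp
#include <cassert>
#include <cmath>

// A scale factor and its absolute uncertainty.
struct SF { double value; double uncert; };

// Combine per-object scale factors into one event weight.
// For independent objects the relative uncertainties add in quadrature.
SF combineScaleFactors(const SF* sfs, int n) {
    double w = 1.0, rel2 = 0.0;
    for (int i = 0; i < n; ++i) {
        w *= sfs[i].value;
        double rel = sfs[i].uncert / sfs[i].value;
        rel2 += rel * rel;
    }
    return SF{w, w * std::sqrt(rel2)};
}
```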

Object definitions, quality and selection, corrections


  • IsEMIdentification
    • Electron identification for 2011 data analyses with release 17
    • Photon identification for 2011 data analyses with release 17 (and 16)
    • isEM flag: this is the default method used for photon identification.
      • For all photon candidates the candidate has to pass a series of cuts based on the shower shape properties in different compartments of the calorimeter. If a cut is not passed, then a bit is set in the isEM flag (note the negative logic).
      • There are two main qualities: Loose and Tight, in various variants.
      • There is often some ambiguity as to whether an egamma object is an electron or a photon. From release 16, each quality has an "AR" variant, which stands for "ambiguity resolved".
        • egammaParameters::ambiguityResult stores the return value of this heuristic: UNDEFINED = -999, ELECTRON = 0, LOOSE = 1, or something else (> 1). As far as the user is concerned, the "something else" values are all to be considered "tight".
        • The "AR" variant requires the ambiguityResult parameter to be greater than LOOSE, i.e. a stricter electron/photon ambiguity-resolution requirement, which decreases the number of electrons faking photons at the cost of a very small decrease in the converted-photon efficiency.
        • By default, only objects that the heuristic determines to be photons (LOOSE or greater) are put in the photon container. For analyses where electrons faking photons are an important background, use the AR variants.
      • There is also a variant of the tight qualities with isolation.
    • ElectronIdentification
    • PhotonIdentification
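The negative logic of the isEM flag means a candidate passes a given quality if none of that menu's mask bits are set. A minimal sketch; the mask value in the test is illustrative, not an official egamma mask:

```cpp
#include <cassert>

// isEM uses negative logic: a failed cut SETS a bit, so a candidate
// passes a quality menu when no bit of that menu's mask is set.
bool passIsEM(unsigned int isEM, unsigned int qualityMask) {
    return (isEM & qualityMask) == 0;
}
```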



Photon identification

Procedure for photon identification (data / MC) for 2012 analyses

1.- Apply energy correction (data) / smearing (MC)

  • Determine if photon is converted / unconverted:
       egRescaler::EnergyRescalerUpgrade::ParticleType phtype = ph->isConv() ? 
          egRescaler::EnergyRescalerUpgrade::Converted : egRescaler::EnergyRescalerUpgrade::Unconverted;
  • Data: get corrected energy (MeV), using EnergyRescalerUpgrade::applyEnergyCorrection:
       newE = m_em_rescaler.applyEnergyCorrection(ph->cl_eta(), ph->cl_E(), phtype, egRescaler::EnergyRescalerUpgrade::Nominal);
  • MC: obtain the smearing scale factor, which multiplied by the original energy gives the new corrected energy:
       float scale = m_em_rescaler.getSmearingCorrection(ph->cl_eta(), ph->cl_E(), egRescaler::EnergyRescalerUpgrade::NOMINAL);
       newE = scale*ph->cl_E();

2.- For MC only, apply Fudge-Factors to shower-shape variables

  • Question: do I need to apply the smearing factor to the electron Et (thus propagating to pt, rhad1 and rhad), and to ph_Emax2 and ph_Emins1 (propagating to deltae) before fudging? In other words, was the uncorrected energy used to fill the fudging method in the FudgeMCTool, or not?
  • Question: if the answer to the above is no, I guess (Smearing+FF) is equivalent to (FF+Smearing)?
The most important use-case is:
void FudgeMCTool::FudgeShowers(double pt,
                               double eta2,
                               double& rhad1,
                               double& rhad,
                               double& e277,
                               double& reta,
                               double& rphi,
                               double& weta2,
                               double& f1,
                               double& fside,
                               double& wtot,
                               double& w1,
                               double& deltae,
                               double& eratio,
                               int  isConv,
                               int preselection = -999);
double pt = E/cosh(ph_etas2); // is E the corrected or raw cluster energy ?
double rhad1 = ph_Ethad1 / pt; // note that any correction to E propagates to rhad1 (through pt)
double rhad = ph_Ethad / pt; // note that any correction to E propagates to rhad (through pt)
double deltae = ph_Emax2 - ph_Emins1; // do I need to correct ph_Emax2 and ph_Emins1 ?
double eratio = (ph_emaxs1 - ph_Emax2) / (ph_emaxs1 + ph_Emax2); // correcting all energies does not change the ratio, so no need here...

The above function returns the new (fudged) shower variables (depending on pt, eta2, isConv and the preselection).
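The derived inputs above can be collected into a small helper. The formulas follow the code fragment on this page; the struct and function names are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Derived shower-shape inputs for the fudging step.
struct FudgeInputs { double pt, rhad1, rhad, deltae, eratio; };

// Build the derived quantities from raw D3PD variables.
FudgeInputs makeFudgeInputs(double E, double etas2,
                            double Ethad1, double Ethad,
                            double Emax2, double Emins1, double emaxs1) {
    FudgeInputs in;
    in.pt     = E / std::cosh(etas2);                // transverse energy
    in.rhad1  = Ethad1 / in.pt;                      // corrections to E propagate via pt
    in.rhad   = Ethad  / in.pt;
    in.deltae = Emax2 - Emins1;
    in.eratio = (emaxs1 - Emax2) / (emaxs1 + Emax2); // ratio is scale-invariant
    return in;
}
```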

3.- Apply PhotonIDTool to identify photon

  • two possible constructors:

Default constructor using base egamma variables:

PhotonIDTool(double pt,
             double eta2,
             double ethad1,
             double ethad,
             double e277,
             double e237,
             double e233,
             double weta2,
             double f1,
             double emax,
             double emax2,
             double emin2,
             double fracm,
             double wtot,
             double w1,
             int conv);
Alternative constructor using derived egamma variables:
PhotonIDTool(double pt,
             double eta2,
             double rhad1,
             double rhad,
             double e277,
             double reta,
             double rphi,
             double weta2,
             double f1,
             double fside,
             double wtot,
             double w1,
             double deltae,
             double eratio,
             int conv);

Note that the photon cluster-level calibration constants applied using the EnergyRescalerUpgrade tool are not tabulated for layer-dependent energy quantities such as shower shapes. For this reason, the recommended recipe is not to rescale the shower-shape values that are input to the PhotonIDTool (e.g. DeltaE). The cluster Et, however, should be rescaled using the EnergyRescalerUpgrade tool, as it is a cluster-level quantity. This affects the variables Rhad1 (Rhad), which are calculated as the ratio of the Et in the first layer (all layers) of the hadronic calorimeter to the photon cluster Et.






  • HforTool: a tool to remove overlap of heavy flavor component between light jet inclusive and heavy flavor jet samples generated by Alpgen.
    • HforTool
    • The recommended scheme is the angular based method with a cone of 0.4 (default value).


  • Systematic uncertainties, shown as changes in combined efficiency and acceptance on the ttbar+gamma signal sample:

Systematic                         Electron channel (%)   Muon channel (%)
Lepton trigger efficiency          0.56                   1.29
Lepton reconstruction efficiency   0.88                   0.32
Lepton identification efficiency   2.20                   0.74
Lepton scale                       0.49                   0.24
Lepton resolution                  0.18                   0.03
b-tagging efficiency               4.29                   4.78
Mistag rate                        0.29                   0.32
Jet reconstruction efficiency      0.07                   0.12
Jet energy scale (JES)             8.41                   8.36
Jet energy resolution              0.71                   0.04
JVF                                1.37                   1.41
Photon energy scale                0.07                   0
Photon energy resolution           0.16                   0.01

Luminosity (2011 data-sample)

  • period B-K (beta*=1.5m) = 3.7 %
  • period L-M (beta*=1.0m) = 4.1 %

As these uncertainties are totally correlated, and approximately half of the total luminosity was taken before and half after the beta* change, our best recommendation for the entire 2011 sample is 3.9%. Analyses using only data up through period K are free to use 3.7%. We do not believe it is necessary to split the data into two epochs and apply separate luminosity uncertainties, although analyses are free to do so if they wish; in that case the two uncertainties should be treated as completely correlated.
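With the two periods fully correlated and each carrying roughly half of the integrated luminosity, the combined uncertainty is the luminosity-weighted linear sum, which reproduces the quoted 3.9%. A sketch (the 50/50 split is the approximation stated above; the function name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Fully correlated uncertainties combine linearly,
// weighted by each period's fraction of the total luminosity.
double combinedLumiUncert(double frac1, double sigma1,
                          double frac2, double sigma2) {
    return frac1 * sigma1 + frac2 * sigma2;
}
```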

Useful links

-- SergioGonzalez - 18 Mar 2014

Topic revision: r2 - 2016-08-29 - SergioGonzalez