-- PabloMatorrasCuevas - 2023-05-18

MUO

Datasets:

2016 nanoAOD, (HIPM_)UL2016_MiniAODv2_NanoAODv9

2017 nanoAOD, UL2017_MiniAODv2_NanoAODv9

2018 nanoAOD, UL2018_MiniAODv2_NanoAODv9_GT36

Please summarize your basic event selection, especially muon pT thresholds. Describe also the types of muon topologies relevant in your analysis.

We select events with two oppositely charged isolated leptons and large missing transverse momentum. The lepton pT thresholds are 25 GeV for the leading lepton and 20 GeV for the trailing one. We veto events with a third lepton with pT >= 10 GeV.
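
For illustration, a minimal sketch of this event selection on NanoAOD-style awkward arrays (the lepton collection, the exact missing-pT cut and the field names are assumptions, not the analysis code):

    import awkward as ak

    def select_dilepton_events(leptons, met_pt):
        # "leptons" is assumed to be a per-event awkward array of the analysis
        # leptons (pT > 10 GeV, passing ID+Iso), sorted by pT, with fields
        # pt and charge; "met_pt" is the per-event missing pT.
        padded = ak.pad_none(leptons, 2)
        lead, trail = padded[:, 0], padded[:, 1]
        mask = (
            (ak.num(leptons) == 2)              # exactly two leptons: implements the
                                                # third-lepton veto at pT >= 10 GeV
            & (lead.pt > 25.0)                  # leading-lepton threshold
            & (trail.pt > 20.0)                 # trailing-lepton threshold
            & (lead.charge * trail.charge < 0)  # opposite sign
            & (met_pt > 100.0)                  # large missing-pT requirement (illustrative value)
        )
        return ak.fill_none(mask, False)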

What is the (dominant) pT range of the muons in your analysis? >=20 GeV

Do you use displaced muons? No

Are your analysis results dominated by statistical or systematic uncertainties, such as muon uncertainties? -> Both are of comparable size

Please point us to figures in your AN that show basic kinematic distributions for your muons.

Plots of the pT and pseudorapidity of the leading and trailing muons are available at /eos/home-i04/p/pmatorra/www/Run2SUSY2LOS/MUOReview. Events come from a sideband region with 100 < MET < 160 GeV, as our search regions are still blinded. Two subregions with (TAG) and without (VETO) b-tagged jets are defined, for both same-flavor (sf) and different-flavor (em) events.

What muon ID do you use? Select all that apply -> medium ID

Other ID: Describe any other ID you are using, if any. How was it derived, where is it documented, was it presented to the POG?

We select leptons passing the medium ID and tight PF Iso requirements. On top of that, we require the following cuts on the impact parameters (IPs): |IP_xy| < 0.05 cm, |IP_z| < 0.10 cm and Significance(IP) < 4. These correspond to the tight IP requirements developed to clean tails in Run 2 SUSY analyses (see the SUSLeptonSF TWiki).
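
For illustration, this muon selection maps onto NanoAODv9 branches roughly as follows (a sketch; the 0.15 cut is assumed to correspond to the tight PF relative-isolation working point):

    def good_muons(muons):
        # Medium ID + tight PF isolation + the tight IP cuts described above,
        # written against NanoAODv9 muon branches.
        return muons[
            muons.mediumId
            & (muons.pfRelIso04_all < 0.15)   # tight PF isolation (assumed WP value)
            & (abs(muons.dxy) < 0.05)         # |IP_xy| < 0.05 cm
            & (abs(muons.dz) < 0.10)          # |IP_z|  < 0.10 cm
            & (muons.sip3d < 4.0)             # IP significance < 4
        ]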

What muon Isolation do you use? Select all that apply -> PF Isolation

PF Isolation -> Which PF Isolation working point do you use? Select all that apply -> Tight

Do you use a trigger containing muons?

-> IsoMu24/27

-> Others: MuonEG: Mu(8)12_*_Ele23_*, Mu23_*_Ele12_* ; DoubleMu: Mu17_*_Mu8_*

Please check all efficiency SFs and corrections you apply.

https://docs.google.com/forms/d/e/1FAIpQLSfKuOdy_EyTusEpoyHwCtL-S8DHmlUDKFQrJY-_MuqSpBW2eQ/formResponse

Did you compute any SFs for your analysis yourself?

We compute SFs for the extra cuts on the impact parameters. They are computed on top of the ID+Iso requirements using a Tag&Probe approach in DY events, as described in Appendix D of AN-19-256. In brief, we use DY events collected by single-muon triggers (IsoMu24/27). “Tag” muons are required to match a firing trigger primitive and to pass the analysis ID+Iso requirements. SFs are then computed for the impact-parameter cuts on “probe” muons fulfilling the analysis ID+Iso requirements; since the purity of the selected samples is very high, a counting technique is used. We also compute trigger efficiencies for events passing our offline dilepton selection using cross triggers from the MET primary dataset, as described in Appendix B.2 of AN-19-256.
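
Schematically, the counting-based extraction per (pT, η) bin and per data-taking period reduces to the following (a sketch with hypothetical yields):

    import numpy as np

    def counting_efficiency(n_pass, n_total):
        # Efficiency and binomial uncertainty for a counting Tag&Probe measurement.
        eff = n_pass / n_total
        err = np.sqrt(eff * (1.0 - eff) / n_total)
        return eff, err

    def scale_factor(eff_data, err_data, eff_mc, err_mc):
        # Data/MC scale factor with uncorrelated error propagation.
        sf = eff_data / eff_mc
        err = sf * np.sqrt((err_data / eff_data) ** 2 + (err_mc / eff_mc) ** 2)
        return sf, err

    # Example: probes passing the IP cuts on top of ID+Iso, in one (pT, eta) bin
    eff_d, e_d = counting_efficiency(n_pass=9805, n_total=10000)   # hypothetical yields
    eff_m, e_m = counting_efficiency(n_pass=9900, n_total=10000)
    sf, sf_err = scale_factor(eff_d, e_d, eff_m, e_m)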

Were the SFs you computed yourself presented to the POG and blessed by the experts?

No, they were derived using common techniques (as described in the previous question and in Appendices B.2 and D of AN-19-256), but we are available to discuss them if needed.



Please briefly describe how uncertainties on efficiencies and scale factors are treated in your statistical analysis.

Signal extraction is based on a binned maximum likelihood (ML) fit to the observed mT2 distributions. We derive shape variations for the efficiencies and scale factors by varying them according to their uncertainties and propagating the effect to the mT2 distributions. These shape uncertainties are included in the ML fit as Gaussian prior distributions (through the Higgs combine tool).
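
For illustration, the shape inputs for one such nuisance are just the nominal mT2 template plus the templates re-filled with the up/down SF variations; a minimal sketch with ROOT histograms (process and nuisance names, and the event container, are illustrative):

    import array
    import ROOT

    def fill_mt2_templates(events, bin_edges, proc="WW", nuis="CMS_eff_m"):
        # Nominal and +/-1 sigma mT2 templates for one process and one SF nuisance.
        edges = array.array("d", bin_edges)
        hists = {}
        for tag in ("", "_" + nuis + "Up", "_" + nuis + "Down"):
            hists[tag] = ROOT.TH1D(proc + tag, proc + tag, len(edges) - 1, edges)
        for ev in events:  # events: iterable with mt2, weight, sf, sf_up, sf_down
            hists[""].Fill(ev.mt2, ev.weight * ev.sf)
            hists["_" + nuis + "Up"].Fill(ev.mt2, ev.weight * ev.sf_up)
            hists["_" + nuis + "Down"].Fill(ev.mt2, ev.weight * ev.sf_down)
        return hists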

Do you apply any corrections to muon momentum?

Yes

If you answered yes to the previous question, please explain what corrections are applied and how you handle the associated uncertainties

We apply Rochester corrections. We neglect the effect of the associated uncertainty on mT2, as it is found to be much smaller than the effect of the uncertainties on the missing pT (from JES, JER and unclustered energy).

EGM

Which dataset is used for the analysis? -> Run2

What is the format of the dataset? -> NanoAOD

List all the collection of the EGM objects you use from CMSSW -> Electrons collection in NanoAOD

Do you use BParking dataset? -> No

Final state in the analysis contains -> Only electrons

Do you use a dedicated collection of low pT electrons in the analysis? -> No

What is the pT range of the electrons/photon objects used in the analysis? -> >=20 GeV

Do you use high pT photons? (pT > 200 GeV) -> No

From which regions are the electron/photon objects selected from? We select electrons in events with missing pT>100 GeV.

Which E/Gamma HLT paths do you use? MuonEG: Mu(8)12_*_Ele23_*, Mu23_*_Ele12_* ; SingleElectron: Ele27/35/32_WPTight, DoubleEG: Ele23_Ele12

Do you use different paths depending on the data-taking period? Why? No

Do you match the RECO objects with the trigger objects? -> No

Which method do you use to derive the trigger SFs? -> Orthogonal dataset

Which samples are used for the SF derivation? Specify both data and MC -> We use a set of unprescaled triggers from the MET primary dataset (as described in Appendix B.2 of AN-19-256). We compute and apply trigger efficiencies rather than scale factors, since the FastSim samples used for the signal do not have HLT information.
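
Schematically, the efficiency measurement on the orthogonal MET dataset is a ratio of counts, with the 2% systematic added on top (a sketch; the boolean inputs stand for illustrative per-event trigger and selection flags):

    import numpy as np

    def trigger_efficiency(passes_ref_trigger, passes_analysis_trigger, passes_offline_sel):
        # Boolean numpy arrays per event. The denominator is the orthogonal
        # (MET-dataset) reference selection, the numerator additionally requires
        # the analysis dilepton triggers.
        denom = passes_ref_trigger & passes_offline_sel
        num = denom & passes_analysis_trigger
        eff = num.sum() / denom.sum()
        stat = np.sqrt(eff * (1.0 - eff) / denom.sum())
        syst = 0.02 * eff          # 2% systematic for possible trigger correlations
        return eff, stat, syst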

How are the systematic uncertainties derived for the SFs? -> A 2% systematic uncertainty is taken to cover possible correlations between the cross triggers and the analysis triggers.

If the analysis uses nanoAOD, are the variables in the ntuples sufficient for the trigger studies? Yes

Do you use the identification criteria officially recommended by the E/Gamma POG? Yes

If you use official IDs, please list them below. (i.e. cutBasedElectronID-Fall17-94X-V2-loose) Else answer "N/A". -> cutBasedElectronID-Fall17-94X-V2-medium

Do you use any custom electron or photon IDs in your analysis? -> No

If you use a custom ID, please justify the reason -> We use the POG cut-based medium ID for electrons, but we add cuts on the impact parameters (|d_xy| < 0.05 cm, |d_z| < 0.10 cm and S(d) < 4) and require no lost hits. These cuts correspond to requirements developed to clean tails in Run 2 SUSY analyses (see the SUSLeptonSF TWiki).

If you use a custom ID, do you validate all the variables using an appropriate control region? We measure efficiencies and scale factors for the extra cuts applied on top of the POG cut-based medium ID in DY events.

Are you using official SFs (for official IDs)? Yes

If you are calculating your own SFs, are you using the official TnP tool provided by E/Gamma POG to derive the scale factors? No.

If you are calculating your own SFs, which sample do you use to derive the SFs? We use DY events collected by single electron triggers (Ele27/35/32_WPTight). “Tag” electrons are required to match a firing trigger primitive and to pass the cut-based medium ID. SFs are computed for the impact parameters + lost_hit=0 cuts on “probe” electrons fulfilling the cut-based medium ID requirements, so the purity of the selected samples is very high and a counting technique is used.

If you are calculating your own SFs, how do you derive the systematics? We vary the event selection (missing transverse momentum and jet multiplicity requirement) and the ID used for the “tag” electron (more details in Table 26 from Appendix D of AN-19-256). Uncertainties are dominated by statistics.

Do SFs cover the entire pT range of the electron/photon object(s) in your analysis? Yes

JME

Data analyzed and version

2016 nanoAOD, (HIPM_)UL2016_MiniAODv2_NanoAODv9

2017 nanoAOD, UL2017_MiniAODv2_NanoAODv9

2018 nanoAOD, UL2018_MiniAODv2_NanoAODv9_GT36

How do you deal with the L1 prefire issue in 2016 and 2017 data? We apply the centrally provided prefiring weights.

How do you deal with the HEM issue in 2018? We veto events with any electron with pT > 30 GeV, −3.0 < η < −1.4, and −1.57 < φ < −0.87 or any jet with pT > 30 GeV, −3.2 < η < −1.2, and −1.77 < φ < −0.67. This veto is applied to data taken starting from physics run 319077. In simulations, events failing the veto are weighted by the fraction of integrated luminosity of data taken before physics run 319077.

Does your analysis use jets? -> Yes

What jet sizes and algorithms do you use? -> anti-kT R=0.4 CHS

Do you apply veto maps? If yes, which version? No

Which version of jet energy corrections have you used for data and MC?* Which of these corrections are you using? (More information: https://twiki.cern.ch/twiki/bin/view/CMS/JECDataMC)

Versions: 2016 -> Summer19UL16(APV)_V7_MC(DATA), 2017 -> Summer19UL17_V5_MC(DATA), 2018 -> Summer19UL18_V5_MC(DATA)

How do you take into account jet energy scale uncertainties?* -> As a single variation implemented as one nuisance parameter
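
For illustration, the single total JES variation can be evaluated with correctionlib along the following lines (a sketch, not the analysis code; the JSON path, the correction key and the input order are assumptions based on the usual jsonpog-integration layout and should be checked per campaign):

    import awkward as ak
    import correctionlib

    # File path and correction key are assumptions following the usual
    # jsonpog-integration naming for the 2018 UL campaign.
    cset = correctionlib.CorrectionSet.from_file(
        "jsonpog-integration/POG/JME/2018_UL/jet_jerc.json.gz")
    jes_total = cset["Summer19UL18_V5_MC_Total_AK4PFchs"]

    def jes_varied_pt(jets):
        # Evaluate the total JES uncertainty per jet (inputs assumed to be eta, pt)
        # and build the single up/down pT variations used as one nuisance parameter.
        flat_unc = jes_total.evaluate(ak.to_numpy(ak.flatten(jets.eta)),
                                      ak.to_numpy(ak.flatten(jets.pt)))
        unc = ak.unflatten(flat_unc, ak.num(jets.pt))
        return jets.pt * (1.0 + unc), jets.pt * (1.0 - unc)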

How are you correlating 2016, 2017, 2018 JECs?* More information: https://twiki.cern.ch/twiki/bin/view/CMS/JECUncertaintySources

We keep them uncorrelated (just a single variation is used).

Does your analysis constrain jet energy scale or resolution nuisance parameters to less than 70% of the initial uncertainty in a fit? (e.g. when computing CLS limits) -> No (although for 2018 the constraint is reduced to about 60-65%).

Are you applying jet energy resolution scale factors? Which version?* (More information: https://twiki.cern.ch/twiki/bin/viewauth/CMS/JetResolution)

Yes

Versions: 2016 -> Summer20UL16(APV)_JRV3_MC, 2017 -> Summer19UL17_JRV2_MC, 2018 -> Summer19UL18_JRV2_MC

What procedure do you use for jet energy resolution smearing?* (More information: https://twiki.cern.ch/twiki/bin/viewauth/CMS/JetResolution#Smearing_procedures) -> Hybrid
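
For reference, the hybrid procedure combines scaling for gen-matched jets with stochastic Gaussian smearing otherwise; a per-jet sketch following the JME TWiki prescription:

    import math
    import random

    def hybrid_jer_smear(pt_reco, pt_gen, sf, sigma_jer):
        # pt_gen is None when no matched gen jet is found;
        # sigma_jer is the relative pT resolution for this jet.
        if pt_gen is not None:
            # scaling method: shift by the gen-reco difference scaled by (SF - 1)
            factor = 1.0 + (sf - 1.0) * (pt_reco - pt_gen) / pt_reco
        else:
            # stochastic method: Gaussian smearing of width sigma*sqrt(max(SF^2 - 1, 0))
            width = sigma_jer * math.sqrt(max(sf * sf - 1.0, 0.0))
            factor = 1.0 + random.gauss(0.0, width)
        return max(factor, 0.0) * pt_reco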

How are you correlating 2016, 2017, 2018 JERs?*

We keep them uncorrelated across the Run2 data-taking years.

Do you apply noise jet ID? Which working point?* (More information here: https://twiki.cern.ch/twiki/bin/viewauth/CMS/JetID)

We apply the tight jet ID (as defined in NanoAODv9) with jet lepton cleaning.
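
The jet-lepton cleaning is a plain ΔR overlap removal; a sketch on awkward arrays (the ΔR < 0.4 cone is an assumption matching the jet radius):

    import awkward as ak
    import numpy as np

    def clean_jets(jets, leptons, dr_min=0.4):
        # Drop jets within dr_min of any selected lepton.
        pairs = ak.cartesian({"j": jets, "l": leptons}, nested=True)
        deta = pairs.j.eta - pairs.l.eta
        dphi = (pairs.j.phi - pairs.l.phi + np.pi) % (2 * np.pi) - np.pi
        dr = np.sqrt(deta ** 2 + dphi ** 2)
        overlaps = ak.any(dr < dr_min, axis=-1)   # any lepton close to this jet
        return jets[~overlaps]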

Does your analysis use a pileup jet ID to reject PU jets? Which method/version?* (More information here: https://twiki.cern.ch/twiki/bin/view/CMS/PileupJetID)

We apply the loose PU jet ID as defined in NanoAODv9.

Does your analysis differentiate between quark-jets and gluon-jets? Which method/version?*

More information here: https://twiki.cern.ch/twiki/bin/view/CMS/QuarkGluonLikelihood

No

Do you use W/Z/H/top tagging?* (More information here: https://twiki.cern.ch/twiki/bin/view/CMS/JetWtagging and https://twiki.cern.ch/twiki/bin/view/CMS/JetTopTagging)

No

Does your analysis use MET?* -> Yes

What kind of MET do you use?* -> PF CHS

How do you treat EE noise in 2017 data in MET?* -> We use the UL reconstruction, which already contains a mitigation for the EE noise in 2017 data. Moreover, we studied the effectiveness of the mitigation and the residual data/MC disagreement, as detailed in Section 3 of AN-19-256 (line 235 in version 9): based on this, for 2017 data and simulated samples we veto events in which the scalar pT sum of all jets with pT < 50 GeV in the noise region (2.650 < |η| < 3.139) exceeds 60 GeV.
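
Expressed on NanoAOD jets, the veto reads roughly as follows (a sketch on awkward arrays, using the cuts quoted above; not the actual analysis implementation):

    import awkward as ak

    def passes_ee_noise_veto(jets, threshold=60.0):
        # Sum the pT of low-pT jets in the 2017 EE noise region and require
        # the sum to stay below the 60 GeV threshold.
        noise_jets = jets[(abs(jets.eta) > 2.650)
                          & (abs(jets.eta) < 3.139)
                          & (jets.pt < 50.0)]
        return ak.sum(noise_jets.pt, axis=1) <= threshold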

What corrections do you apply to MET? And what versions?* -> Type-1, JER

Which MET uncertainties are used in your analysis?* -> JES, JER, Unclustered energy

Do you use the JEC and JER based MET uncertainties in a correlated manner with the JEC and JER uncertainties on the jets? -> yes
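
Keeping the MET variations correlated with the jet variations just means recomputing the MET from the same shifted jets used in the event; a simplified single-event sketch that ignores the unclustered component and type-1 subtleties:

    import numpy as np

    def propagate_jets_to_met(met_px, met_py, jet_pt_nom, jet_pt_var, jet_phi):
        # Shift MET by the (nominal - varied) jet momenta so that the same JES/JER
        # variation is used consistently for jets and for the missing pT.
        dpx = np.sum((jet_pt_nom - jet_pt_var) * np.cos(jet_phi))
        dpy = np.sum((jet_pt_nom - jet_pt_var) * np.sin(jet_phi))
        px, py = met_px + dpx, met_py + dpy
        return np.hypot(px, py), np.arctan2(py, px)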

Does your analysis constrain unclustered energy nuisance parameters to less than 70% of the initial uncertainty in a fit? (e.g. when computing CLS limits) -> No

How are you correlating 2016, 2017, 2018 unclustered energy uncertainties?* We keep them uncorrelated across the Run2 data-taking years.

Do you apply MET Filters? Provide a list of filters used.*

More information here: https://twiki.cern.ch/twiki/bin/view/CMS/MissingETOptionalFiltersRun2

Flag_goodVertices, Flag_globalSuperTightHalo2016Filter, Flag_HBHENoiseFilter, Flag_HBHENoiseIsoFilter, Flag_EcalDeadCellTriggerPrimitiveFilter, Flag_BadPFMuonFilter, Flag_BadPFMuonDzFilter, Flag_eeBadScFilter, and Flag_ecalBadCalibFilter (for 2017 and 2018).
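
On NanoAOD these filters are the logical AND of the corresponding Flag_* branches; a sketch assuming coffea NanoEvents-style access:

    def passes_met_filters(events, year):
        # Logical AND of the Flag_* branches listed above;
        # ecalBadCalibFilter only for 2017 and 2018.
        flags = (events.Flag.goodVertices
                 & events.Flag.globalSuperTightHalo2016Filter
                 & events.Flag.HBHENoiseFilter
                 & events.Flag.HBHENoiseIsoFilter
                 & events.Flag.EcalDeadCellTriggerPrimitiveFilter
                 & events.Flag.BadPFMuonFilter
                 & events.Flag.BadPFMuonDzFilter
                 & events.Flag.eeBadScFilter)
        if year in ("2017", "2018"):
            flags = flags & events.Flag.ecalBadCalibFilter
        return flags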

Do you assign an uncertainty for the pileup reweighting (it has a non-negligible effect on fake MET)? -> yes

Please give a link to your impacts/pulls, and list in the comments the relevant JEC/JER/MET nuisance names.

/eos/home-i04/p/pmatorra/www/Run2SUSY2LOS/JMEReview

JEC -> CMS_jesTotal_2016HIPM(2016noHIPM/2017/2018)

JER -> CMS_jer_2016HIPM(2016noHIPM/2017/2018)

MET -> CMS_unclustEn_2016HIPM(2016noHIPM/2017/2018)


