2018 CMS Muon HATS at the LPC

Introduction

This Hands-on Advanced Tutorial will start with an overview of basic muon properties and how muons interact with matter. We'll follow this with a discussion of the muon detector technology employed by CMS and how we use this information to reconstruct muon candidates. The discussion will finish by briefly looking at where muons come from and why they are interesting for physics analysis. The remainder of the time will be spent developing familiarity with accessing muon information in miniAOD files. We'll use this to look in detail at identification and isolation criteria for muons and conclude with brief discussions of other relevant topics including momentum scale corrections and determining trigger and selection efficiency.

Introductory slides.

The color scheme of the Tutorial is as follows:

  • Commands will be embedded in a grey box:
    cmsRun testRun_cfg.py
  • Output and screen printouts will be embedded in a green box:
    Begin processing the 3506th record. Run 1, Event 1270974, LumiSection 6358 at 16-Dec-2015 21:45:08.969 CST 
  • C++ Code will be embedded in a pink box:
    if ( pt < 25 ) continue; 
  • Python configuration code will be embedded in a light blue box:
    process.source = cms.Source("PoolSource", fileNames = cms.untracked.vstring("file:/scratch0/inputfile.root") )
  • Questions, quizzes, assignments, etc. will be embedded in a yellow box:
    Run over the pat::Muon collection and plot the generated and reconstructed pT
    * Question 1: explain why the two distributions look different

  • Typically, everyone has a HOME directory in /uscms/home/. To avoid running out of disk space in your HOME directory, set up your work area in /uscms/home/USERNAME/nobackup.
  • All data files and code used for this exercise are stored in EOS at the LPC.
  • If some files are not accessible, please check for a typo of this sort:
    TFile file("/eos/..."); 
    should be changed to
    TFile* file = TFile::Open("root://cmsxrootd.fnal.gov///store/user/…"); 

Technical Information

Connecting to the cmslpc cluster

Create and set up a CMSSW working area. For (t)csh:

kinit  USERNAME@FNAL.GOV
ssh -Y USERNAME@cmslpc-sl6.fnal.gov
source /cvmfs/cms.cern.ch/cmsset_default.csh 
setenv SCRAM_ARCH slc6_amd64_gcc630

or for bash:

kinit USERNAME@FNAL.GOV
ssh -Y USERNAME@cmslpc-sl6.fnal.gov
source /cvmfs/cms.cern.ch/cmsset_default.sh 
export SCRAM_ARCH=slc6_amd64_gcc630

On cmslpc, everyone has a HOME directory in /uscms/home/$USER/. If you do not already have one, create a HATS directory in your ~/nobackup directory (which has a more generous quota than your home directory):

mkdir ~/nobackup/HATS
cd ~/nobackup/HATS
cmsrel CMSSW_10_1_5
cd CMSSW_10_1_5/src
cmsenv

To access input files with LFN, use the following syntax in your configuration files:

process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring("root://cmseos.fnal.gov//store/...")
)

To access a file interactively in ROOT, use the following commands:

[sh]> root -l 
[root]> TFile* f = TFile::Open("root://cmseos.fnal.gov//store/user/...") 

Member functions and other information available for the different muon objects can be found

  1. on the Doxygen documentation ( pat::Muon, reco::GenParticle)
  2. directly in the CMSSW code on GitHub ( pat::Muon, reco::GenParticle)
  3. on the Muon POG TWiki pages (e.g., ID and isolation)

Exercise 1: Introduction, muon object, main variables

In this exercise we will get familiar with the muon objects in a miniAOD file. We will mostly use the following miniAOD file, containing simulated Drell–Yan dimuon events generated at NLO with the program MadGraph:
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm.root

Step 1: Getting familiar with the muon object and miniAOD

To start, let's take a look at the content of the file:

edmDumpEventContent root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm.root

You should see the following output:

Type                                  Module                      Label             Process    
-----------------------------------------------------------------------------------------------
GenEventInfoProduct                   "generator"                 ""                "SIM"      
LHEEventProduct                       "externalLHEProducer"       ""                "SIM"      
edm::TriggerResults                   "TriggerResults"            ""                "SIM"      
edm::TriggerResults                   "TriggerResults"            ""                "HLT"      
BXVector<GlobalAlgBlk>                "gtStage2Digis"             ""                "RECO"     
BXVector<GlobalExtBlk>                "gtStage2Digis"             ""                "RECO"     
BXVector<l1t::EGamma>                 "caloStage2Digis"           "EGamma"          "RECO"     
BXVector<l1t::EtSum>                  "caloStage2Digis"           "EtSum"           "RECO"     
BXVector<l1t::Jet>                    "caloStage2Digis"           "Jet"             "RECO"     
BXVector<l1t::Muon>                   "gmtStage2Digis"            "Muon"            "RECO"     
BXVector<l1t::Tau>                    "caloStage2Digis"           "Tau"             "RECO"     
HcalNoiseSummary                      "hcalnoise"                 ""                "RECO"     
L1GlobalTriggerReadoutRecord          "gtDigis"                   ""                "RECO"     
double                                "fixedGridRhoAll"           ""                "RECO"     
double                                "fixedGridRhoFastjetAll"    ""                "RECO"     
double                                "fixedGridRhoFastjetAllCalo"   ""                "RECO"     
double                                "fixedGridRhoFastjetCentral"   ""                "RECO"     
double                                "fixedGridRhoFastjetCentralCalo"   ""                "RECO"     
double                                "fixedGridRhoFastjetCentralChargedPileUp"   ""                "RECO"     
double                                "fixedGridRhoFastjetCentralNeutral"   ""                "RECO"     
edm::TriggerResults                   "TriggerResults"            ""                "RECO"     
reco::BeamHaloSummary                 "BeamHaloSummary"           ""                "RECO"     
reco::BeamSpot                        "offlineBeamSpot"           ""                "RECO"     
reco::CSCHaloData                     "CSCHaloData"               ""                "RECO"     
vector<LumiScalers>                   "scalersRawToDigi"          ""                "RECO"     
edm::SortedCollection<EcalRecHit,edm::StrictWeakOrdering<EcalRecHit> >    "reducedEgamma"             "reducedEBRecHits"   "PAT"      
edm::SortedCollection<EcalRecHit,edm::StrictWeakOrdering<EcalRecHit> >    "reducedEgamma"             "reducedEERecHits"   "PAT"      
edm::SortedCollection<EcalRecHit,edm::StrictWeakOrdering<EcalRecHit> >    "reducedEgamma"             "reducedESRecHits"   "PAT"      
edm::TriggerResults                   "TriggerResults"            ""                "PAT"      
edm::ValueMap<float>                  "offlineSlimmedPrimaryVertices"   ""                "PAT"      
pat::PackedTriggerPrescales           "patTrigger"                ""                "PAT"      
pat::PackedTriggerPrescales           "patTrigger"                "l1max"           "PAT"      
pat::PackedTriggerPrescales           "patTrigger"                "l1min"           "PAT"      
vector<PileupSummaryInfo>             "slimmedAddPileupInfo"      ""                "PAT"      
vector<pat::Electron>                 "slimmedElectrons"          ""                "PAT"      
vector<pat::Jet>                      "slimmedJets"               ""                "PAT"      
vector<pat::Jet>                      "slimmedJetsAK8"            ""                "PAT"      
vector<pat::Jet>                      "slimmedJetsPuppi"          ""                "PAT"      
vector<pat::Jet>                      "slimmedJetsAK8PFCHSSoftDropPacked"   "SubJets"         "PAT"      
vector<pat::Jet>                      "slimmedJetsAK8PFPuppiSoftDropPacked"   "SubJets"         "PAT"      
vector<pat::MET>                      "slimmedMETs"               ""                "PAT"      
vector<pat::MET>                      "slimmedMETsNoHF"           ""                "PAT"      
vector<pat::MET>                      "slimmedMETsPuppi"          ""                "PAT"      
vector<pat::Muon>                     "slimmedMuons"              ""                "PAT"      
vector<pat::PackedCandidate>          "lostTracks"                ""                "PAT"      
vector<pat::PackedCandidate>          "packedPFCandidates"        ""                "PAT"      
vector<pat::PackedGenParticle>        "packedGenParticles"        ""                "PAT"      
vector<pat::Photon>                   "slimmedPhotons"            ""                "PAT"      
vector<pat::Tau>                      "slimmedTaus"               ""                "PAT"      
vector<pat::Tau>                      "slimmedTausBoosted"        ""                "PAT"      
vector<pat::TriggerObjectStandAlone>  "selectedPatTrigger"        ""                "PAT"      
vector<reco::CaloCluster>             "reducedEgamma"             "reducedEBEEClusters"   "PAT"      
vector<reco::CaloCluster>             "reducedEgamma"             "reducedESClusters"   "PAT"      
vector<reco::Conversion>              "reducedEgamma"             "reducedConversions"   "PAT"      
vector<reco::Conversion>              "reducedEgamma"             "reducedSingleLegConversions"   "PAT"      
vector<reco::GenJet>                  "slimmedGenJets"            ""                "PAT"      
vector<reco::GenJet>                  "slimmedGenJetsAK8"         ""                "PAT"      
vector<reco::GenParticle>             "prunedGenParticles"        ""                "PAT"      
vector<reco::GsfElectronCore>         "reducedEgamma"             "reducedGedGsfElectronCores"   "PAT"      
vector<reco::PhotonCore>              "reducedEgamma"             "reducedGedPhotonCores"   "PAT"      
vector<reco::SuperCluster>            "reducedEgamma"             "reducedSuperClusters"   "PAT"      
vector<reco::Vertex>                  "offlineSlimmedPrimaryVertices"   ""                "PAT"      
vector<reco::VertexCompositePtrCandidate>   "slimmedSecondaryVertices"   ""                "PAT"      
unsigned int                          "bunchSpacingProducer"      ""                "PAT"      
edm::TriggerResults                   "TriggerResults"            ""                "DIMUON"   

Question 1: Which collections contain muons or muon candidates? (At any level: generated, reconstructed, trigger, etc.)

Solution:

vector<pat::PackedGenParticle>        "packedGenParticles"        ""                "PAT"  
vector<pat::Muon>                     "slimmedMuons"              ""                "PAT"    
BXVector<l1t::Muon>                   "gmtStage2Digis"            "Muon"            "RECO"  
edm::TriggerResults                   "TriggerResults"            ""                "HLT"
vector<pat::TriggerObjectStandAlone>  "selectedPatTrigger"        ""                "PAT" 

Step 2: Plot some muon variables

Now let's inspect the different muon objects and plot some muon variables (e.g. pT, η, φ, charge). The C++ methods to access these variables in the different muon formats can be found in the links reported above.

We can inspect the file interactively in ROOT, loading the FWLite packages (while in CMSSW environment, i.e. after running cmsenv):

[sh]> cd ~/nobackup/HATS/CMSSW_10_1_5/src
[sh]> cmsenv
[sh]> root -l root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm.root
[root]> gSystem->Load("libFWCoreFWLite.so");
[root]> FWLiteEnabler::enable();
[root]> gSystem->Load("libDataFormatsFWLite.so");
[root]> gSystem->Load("libDataFormatsPatCandidates.so");
[root]> gROOT->SetStyle("Plain");
[root]> gStyle->SetOptStat(111111);
[root]> TBrowser b;

Hints:

Events -> patMuons_slimmedMuons__PAT -> patMuons_slimmedMuons__PAT.obj -> patMuons_slimmedMuons__PAT.obj.m_state.p4Polar_.fCoordinates.fEta
Events -> patMuons_slimmedMuons__PAT -> patMuons_slimmedMuons__PAT.obj -> patMuons_slimmedMuons__PAT.obj.m_state.p4Polar_.fCoordinates.fPt
Events -> patMuons_slimmedMuons__PAT -> patMuons_slimmedMuons__PAT.obj -> patMuons_slimmedMuons__PAT.obj.m_state.pdgId_
etc.

You can print out multiple variables at the same time, using the method TTree::Scan. E.g., use the following syntax:

Events->Scan("recoGenParticles_prunedGenParticles__PAT.obj.pt():recoGenParticles_prunedGenParticles__PAT.obj.eta()", "recoGenParticles_prunedGenParticles__PAT.obj.isPromptFinalState() && abs(recoGenParticles_prunedGenParticles__PAT.obj.pdgId())==13")

The first argument is the object collection and the variables you want to print, separated by colons (:). The second, optional argument can be used to select the objects to print by imposing logical conditions.

Question 2: Using the syntax above, print the main kinematic observables for the following objects and selections (only the first few events):
  1. all pat::Muon objects
  2. the pat::Muon objects with pT > 20 GeV and |η| < 2.4
  3. all final-state muons in the reco::GenParticle collection
  4. all final-state muons in the reco::GenParticle collection with pT > 20 GeV and |η| < 2.4
  5. all final-state muons in the reco::GenParticle collection with pT > 20 GeV and |η| < 2.4, and not coming from a hadron or τ decay
Hint. Final-state particles: status()==1. Final-state particles not from hadron or τ decays: isPromptFinalState().
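The selections in the items above are plain kinematic cuts, and it can help to see their logic written out explicitly. The sketch below is a standalone C++ illustration, not CMSSW code: the Particle struct and all values in it are hypothetical stand-ins for the quantities you query via pt(), eta(), pdgId() and status().

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal stand-in for the per-particle information used above (hypothetical).
struct Particle {
  double pt, eta;
  int pdgId, status;
};

// Muon within the acceptance cuts of items 2 and 4: pT > 20 GeV, |eta| < 2.4.
inline bool passesKinematics(const Particle& p) {
  return std::abs(p.pdgId) == 13 && p.pt > 20.0 && std::abs(p.eta) < 2.4;
}

// Final-state particles have status()==1.
inline bool isFinalState(const Particle& p) { return p.status == 1; }

// Count the selected final-state muons in a collection.
inline int countSelected(const std::vector<Particle>& coll) {
  int n = 0;
  for (const auto& p : coll)
    if (isFinalState(p) && passesKinematics(p)) ++n;
  return n;
}
```

The same predicates, expressed as cut strings, are what you pass as the second argument of TTree::Scan.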

Using the method TTree::Draw you can also plot these variables in TH1 (or TH2 or TH3) histograms. E.g.:

Events->Draw("recoGenParticles_prunedGenParticles__PAT.obj.pt()", "recoGenParticles_prunedGenParticles__PAT.obj.isPromptFinalState() && abs(recoGenParticles_prunedGenParticles__PAT.obj.pdgId())==13", "lep")

The first two arguments have similar meanings as in TTree::Scan. If you enter two or three colon-separated variables in the first argument, they will be plotted as 2D or 3D scatter plots. The third, optional argument is for drawing options.

Question 3: Draw muon pT distributions for the same categories as in the previous assignment, but only up to pT < 500 GeV. Overlay distributions 1 and 2, and distributions 3, 4, and 5 in the same frame.

Step 3: Write and run an EDAnalyzer

Next create an EDAnalyzer in CMSSW. The analyzer should iterate over the generator-level and the reconstructed muons stored in the miniAOD and fill histograms of their main kinematic variables.

Start with creating an empty analyzer:

cd $CMSSW_BASE/src/
mkdir HATSExercises
cd HATSExercises
mkedanlzr MuonExercise1
cd MuonExercise1

Then copy the template code MuonExercise1.cc, the BuildFile and the configuration file MuonExercise1.py:

xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise1/plugins/MuonExercise1.cc plugins/MuonExercise1.cc
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise1/MuonExercise1.py MuonExercise1.py
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise1/plugins/BuildFile.xml plugins/BuildFile.xml

Now start to edit the analyzer code plugins/MuonExercise1.cc.

First, fill a few histograms with the main kinematic variables of generated and reconstructed muons (e.g. number of muons, muon pT, η, φ). In the declaration and constructor of your analyzer you will find the code to retrieve the necessary collections and book two histograms. Use the TFileService object to store the histograms.

Declaration part:

edm::EDGetTokenT<std::vector<reco::GenParticle> > genCollToken;
edm::EDGetTokenT<std::vector<pat::Muon> > muonCollToken;
edm::Service<TFileService> fs;
TH1D *h_genpt;
TH1D *h_recpt;

Constructor:

edm::InputTag genPartTag("prunedGenParticles");
edm::InputTag muonTag("slimmedMuons");
genCollToken = consumes<std::vector<reco::GenParticle>>(genPartTag);
muonCollToken = consumes<std::vector<pat::Muon>>(muonTag);
h_genpt = fs->make<TH1D>("genpt", "GEN pt", 100, 0.0, 200.0);
h_recpt = fs->make<TH1D>("recpt", "PAT pt", 100, 0.0, 200.0);

In the analyze function, which is run at every event, implement the code to fill the histograms, following the example below. To access the generated muon information, you need to loop over all the generated final-state particles (status()==1) and select the muons by their PDG ID (± 13).

// Retrieve the GenParticle collection and loop over it
edm::Handle<std::vector<reco::GenParticle>> genColl;
iEvent.getByToken(genCollToken, genColl);
for (auto&& genPart : *(genColl.product())) {
   if (genPart.status()==1 && std::abs(genPart.pdgId()) == 13) { ... }
}

// Retrieve the pat::Muon collection and loop over it
edm::Handle<std::vector<pat::Muon>> muonColl;
iEvent.getByToken(muonCollToken, muonColl);
for (auto&& muon : *(muonColl.product())) { ... }

Note that this compact syntax:

for (auto&& genPart : *(genColl.product())) { ... }

is equivalent to the more explicit form:

for (auto it = genColl->cbegin(); it != genColl->cend(); ++it) {
   const reco::GenParticle& genPart = (*it);
   ...
}

What about the following compact syntax? How would you expand it? How does it differ from the previous version?

for (auto genPart : *(genColl.product())) { ... }

// run over pat muons
for (auto it = muonColl->cbegin(); it != muonColl->cend(); ++it) {
    h_recpt->Fill((*it).pt());
}

// run over all gen particles and keep the ones that are muons
for (auto it = genColl->cbegin(); it != genColl->cend(); ++it) {
    const reco::GenParticle& mcParticle = (*it);
    if ( abs(mcParticle.pdgId()) != 13 ) continue; // skip this particle if it isn't a muon
    h_genpt->Fill(mcParticle.pt());
}
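As background, what TH1D::Fill does with each value is just fixed-width binning over the histogram range. The following is a minimal standalone sketch of that bookkeeping, not the actual ROOT implementation (which also tracks underflow/overflow and bin errors):

```cpp
#include <cassert>
#include <vector>

// Fixed-width 1D histogram over [xmin, xmax); entries outside are ignored here.
struct SimpleHist {
  double xmin, xmax;
  std::vector<long> bins;
  SimpleHist(int nbins, double lo, double hi)
      : xmin(lo), xmax(hi), bins(nbins, 0) {}
  void fill(double x) {
    if (x < xmin || x >= xmax) return;  // ROOT would count these as under/overflow
    int i = static_cast<int>((x - xmin) / (xmax - xmin) * bins.size());
    ++bins[i];
  }
};
```

With the booking used above (100 bins from 0 to 200 GeV), each bin is 2 GeV wide, so a muon with pT = 3.5 GeV lands in the second bin.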

Once you are done implementing the analyzer, check the "build file" BuildFile.xml in your local plugins folder. It should contain the following:

<library file="MuonExercise1.cc" name="MuonExercise1">
  <flags EDM_PLUGIN="1"/>
</library>

Now you can compile:

cd $CMSSW_BASE/src/
scram b

Finally, check the python configuration file MuonExercise1.py. Its most important parameters are as follows:

  • Initialization of the process:
process = cms.Process("MuonExercise1")

  • The max number of events to run on (-1 means no limit):
process.maxEvents = cms.untracked.PSet( input = cms.untracked.int32(50000) )

  • Where to find the input file(s):
process.source = cms.Source("PoolSource", fileNames = cms.untracked.vstring( 'root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm.root' ) )

  • The name of the EDAnalyzer:
process.demo = cms.EDAnalyzer('MuonExercise1')

  • The name of the output file:
process.TFileService = cms.Service("TFileService", fileName = cms.string('histos1.root') )

Now you can run your analyzer and produce the plots:

cd $CMSSW_BASE/src/HATSExercises/MuonExercise1
cmsRun MuonExercise1.py

Hints:

  • If you want to keep your terminal from filling up, you can redirect the output to a text file, which you can then read after the job has completed. The > sign redirects the standard output to the file output.log; to capture the standard error stream as well, append 2>&1.
cmsRun MuonExercise1.py > output.log

  • You can also run the process in background (adding a & sign at the end of the command line) and read the output from output.log on the fly, while it gets written:
cmsRun MuonExercise1.py > output.log &
tail -f output.log

The program will produce a file (e.g. histos1.root), which you can open with ROOT to visualize the histograms.

root -l histos1.root

Now try to apply various quality criteria to the muons using the muon ID information, filling histograms of pT, η, and φ for each selection. For example:

h_pt_looseMuons = fs->make<TH1D>("pt_looseMuons", "Loose muon pT", 100, 0.0, 200.0);
if ( fabs((*it).eta()) < 2.4 && (*it).pt() > 8. && (*it).isLooseMuon() ) { h_pt_looseMuons->Fill((*it).pt()); }

Note: Unlike isLooseMuon and isMediumMuon, using isTightMuon (and isSoftMuon and isHighPtMuon) requires information about the primary vertex (PV), which we haven't retrieved yet. We will do this in Exercise 3.

Now modify the analyzer and re-run it, this time applying various quality criteria to the muons as done before.

Question 4: Compare the numbers of PAT and GEN muons and their pT spectra at different selection stages, by overlaying the distributions in the same frame.
  1. Since you are running on a Drell–Yan sample, you would expect two muons per event. Explain why there are sometimes more than two muons in an event.
  2. Look at the full distributions, without η and pT cuts. Can you explain the differences?

  • To show two histograms on top of each other: double click the genpt histogram, then type "same" into the Draw Option box at the top, then double click the pt histogram. You can then choose "View->Editor" from the top menu and select one of the curves to change its color.

Step 4: Gen matching

When working with simulated data, it is sometimes useful to pair each reconstructed muon with its corresponding generated particle, e.g. for efficiency or resolution studies. In this part of the exercise we will learn possible ways to do this.

To start, we can use the function deltaR as defined in DataFormats/Math/interface/deltaR.h (ΔR = √(Δη² + Δφ²)) to perform a geometrical matching between generated and reconstructed muons. Each event has a collection of reconstructed muons and a collection of generated particles. For each reconstructed muon:

  • loop over all of the generated particles,
  • check whether the generated particle is a generated muon, and
  • find which generated muon is closest in ΔR to the reconstructed muon.
We will say that a reconstructed muon matches a generated muon if they are the closest in ΔR to each other, and generally inside a cone of ΔR < 0.3. See the following snippet as an example:

for (auto&& muon : *(muonColl.product())) {
   int idx(-1), bestidx(-1);
   float bestdr(9999.);
   for (auto&& genPart : *(genColl.product())) {
      idx++;
      // Only final-state muons
      if (std::abs(genPart.pdgId()) != 13 || genPart.status() != 1) continue;
      float dr = deltaR(muon, genPart);
      if (dr < 0.3 && dr < bestdr) {
         bestidx = idx;
         bestdr = dr;
      }      
   }
}
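The ΔR computation hidden inside the deltaR helper is easy to reproduce outside CMSSW; the only subtlety is wrapping Δφ into [−π, π]. A minimal standalone sketch (plain C++, not the CMSSW implementation):

```cpp
#include <cassert>
#include <cmath>

// Wrap an angle difference into [-pi, pi].
inline double deltaPhi(double phi1, double phi2) {
  double dphi = phi1 - phi2;
  while (dphi > M_PI) dphi -= 2.0 * M_PI;
  while (dphi < -M_PI) dphi += 2.0 * M_PI;
  return dphi;
}

// deltaR = sqrt(deltaEta^2 + deltaPhi^2)
inline double deltaR(double eta1, double phi1, double eta2, double phi2) {
  double deta = eta1 - eta2;
  double dphi = deltaPhi(phi1, phi2);
  return std::sqrt(deta * deta + dphi * dphi);
}
```

Without the wrapping, two muons at φ = +3.0 and φ = −3.0 (nearly collinear in the transverse plane) would wrongly appear 6 radians apart instead of 2π − 6 ≈ 0.28.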

Question 5: Are there reconstructed muons that do not match any generated muons? Why?

Note that for pat::Muon candidates this matching is, in general, already done for you: all PAT candidates come with an embedded link to the associated reco::GenParticle. You can retrieve it with the function pat::Muon::genParticle() (see link). Using this function in your code, check how often the generated particle matched by ΔR coincides with the embedded generated particle.

Question 6: Are there cases in which the ΔR matching fails to find the correct generated particle? Why?

Step 5: Trigger matching

Now let's take a look at the Level-1 (L1) trigger and High-Level Trigger (HLT) muon candidates. Follow the recipe from above, loop over all the muon trigger candidates and fill histograms with their transverse momenta.

You have to add the following to the class declaration of your analyzer (under "member data"):

edm::EDGetTokenT<edm::TriggerResults> trigResultsToken;
edm::EDGetTokenT<std::vector<pat::TriggerObjectStandAlone>> trigObjCollToken;

You have to add the following to the constructor of your analyzer:

edm::InputTag triggerTag("TriggerResults", "", "HLT");
edm::InputTag trigObjTag("selectedPatTrigger");
trigResultsToken = consumes<edm::TriggerResults>(triggerTag);
trigObjCollToken = consumes<std::vector<pat::TriggerObjectStandAlone>>(trigObjTag);

In the method MuonExercise1::analyze, add:

edm::Handle<edm::TriggerResults> triggerBits;
iEvent.getByToken(trigResultsToken, triggerBits);
const edm::TriggerNames& names = iEvent.triggerNames(*triggerBits);

edm::Handle<std::vector<pat::TriggerObjectStandAlone>> triggerObjects;
iEvent.getByToken(trigObjCollToken, triggerObjects);

for (pat::TriggerObjectStandAlone obj : *triggerObjects) {
   obj.unpackPathNames(names);
   if ( obj.hasTriggerObjectType(trigger::TriggerL1Mu) ) { ... } // it's an L1 muon candidate
   if ( obj.hasTriggerObjectType(trigger::TriggerMuon) ) { ... } // it's an HLT muon candidate
}

Now you can try looping over the trigger candidates and matching them to our reconstructed muons by finding the one that is closest in ΔR, in much the same way you did for GEN matching.

// Iterate over the collection of trigger candidates:
for (pat::TriggerObjectStandAlone obj : *triggerObjects) {
    obj.unpackPathNames(names);
    // This line makes sure that our trigger object is a HLT or L1 muon candidate
    if ( !(obj.hasTriggerObjectType(trigger::TriggerMuon) || obj.hasTriggerObjectType(trigger::TriggerL1Mu) )) continue;
    cout << "This trigger candidate could be a muon!" << endl;
}

Solutions

Answers to the questions raised during the exercise can be found in these slides.
You can also find the complete analyzer for the exercise here:
/uscms_data/d1/hats/Muon2018/Exercises/MuonExercise1/plugins/MuonExercise1_solutions.cc

Exercise 2: Muon momentum scale and resolution corrections

The measurement of the muon transverse momentum is sensitive to several detector conditions:

  • the alignment of the tracker and of the muon chambers
  • the composition and distribution of the material inside the tracking volume
  • the knowledge of the magnetic field inside and outside the solenoid volume.

All these conditions affect the momentum measurement in different ways and can produce biases. In particular, detector misalignment produces a relative bias that generally increases linearly with the momentum. For this reason it is extremely important to have an accurate knowledge of the tracker and muon spectrometer alignment, and a detailed mapping of the detector material and of the magnetic field. Residual biases can be corrected a posteriori, using calibration techniques that generally exploit data from very well-known processes, such as J/ψ→μμ or Z→μμ decays.

In this exercise we will examine the effects produced by these biases in the momentum measurement and we will use correction factors to mitigate them. We will mainly use the following MINIAODSIM sample of simulated Drell–Yan events:
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm.root

If you need more statistics, you can supplement the sample with the following file:
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm_1.root

Step 1: Generated and reconstructed dimuon Z mass

To start, we will plot the generated and reconstructed invariant mass of muon pairs from Z→μμ decays. In Exercise 1 you already worked with reconstructed and generated muons. First create an EDAnalyzer within your CMSSW work area just as in Exercise 1:

cd $CMSSW_BASE/src/HATSExercises
mkedanlzr MuonExercise2
cd MuonExercise2

Then copy the template code MuonExercise2.cc, the BuildFile and configuration file MuonExercise2.py:

xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise2/plugins/MuonExercise2.cc plugins/MuonExercise2.cc
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise2/MuonExercise2.py MuonExercise2.py
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise2/plugins/BuildFile.xml plugins/BuildFile.xml

Now edit the analyzer plugins/MuonExercise2.cc. In your class header you will find the following data members for the two invariant mass histograms:

TH1F* h_RecDiMuonM;
TH1F* h_GenDiMuonM;

In the constructor of your analyzer you will find:

h_RecDiMuonM = fs->make<TH1F>("h_RecDiMuonM",";m_{#mu^{+}#mu^{-}};",80,70,110);
h_GenDiMuonM = fs->make<TH1F>("h_GenDiMuonM",";m_{#mu^{+}#mu^{-}};",80,70,110);

We will loop over the positive muons, make sure they are "good" muons, match them with a "good" negative muon, and construct the invariant mass of the two. Then, we'll check each muon for a gen match. Select events with two well-identified and isolated muons. For instance, you can require that both muons pass:

  • mu->genParticle()!=0 (each muon must have an associated generated particle)
  • mu->isGlobalMuon()
  • mu->pt() > 20.
  • std::abs(mu->eta()) < 2.4
  • mu->chargedHadronIso() < 0.15

In addition, the two selected muons must have opposite electric charge and an invariant mass close to the nominal Z boson mass, e.g. in the 70-110 GeV window. For such events, compute the dimuon invariant mass from both the generated and the reconstructed muons, then fill the two plots and save them to an output file, using the usual TFileService.

// find mu+
for (auto mup = muons->cbegin(); mup != muons->cend(); ++mup) {
    if ( not (mup->charge() > 0 ) ) continue;
    if ( not (mup->isGlobalMuon()) ) continue;
    if ( not (mup->pt() > 20.0 ) ) continue;
    if ( fabs(mup->eta()) > 2.4 ) continue;
    if ( not (mup->chargedHadronIso() < 0.15 ) ) continue;

    // find mu-
    for (auto mum = muons->cbegin(); mum != muons->cend(); ++mum) {
        if ( not (mum->charge() < 0 ) ) continue;
        if ( not (mum->isGlobalMuon()) ) continue;
        if ( not (mum->pt() > 20.0 ) ) continue;
        if ( fabs(mum->eta()) > 2.4 ) continue;
        if ( not (mum->chargedHadronIso() < 0.15 ) ) continue;

        double diMuonRecMass = (mup->p4() + mum->p4()).M();
        if ( diMuonRecMass < 70 || diMuonRecMass > 110) continue; // only look around the Z peak
        h_RecDiMuonM->Fill(diMuonRecMass);

        int idxmup_Gen = -1;
        int idxmum_Gen = -1;

       // Gen matching
        for (auto genParticle = genColl->cbegin(); genParticle != genColl->cend(); ++genParticle) {
            const pat::PackedGenParticle& mcMuon = (*genParticle);
            if ( not (abs(mcMuon.pdgId()) == 13 ) ) continue; // make sure it is a muon
            if ( fabs(mcMuon.eta()) > 2.4 ) continue;
            if ( not (mcMuon.pt() > 1.5 ) ) continue;
            if ( deltaR(mcMuon, *(mup->innerTrack())) < 0.1 && mcMuon.charge() > 0 ) idxmup_Gen = std::distance(genColl->cbegin(), genParticle);
            if ( deltaR(mcMuon, *(mum->innerTrack())) < 0.1 && mcMuon.charge() < 0 ) idxmum_Gen = std::distance(genColl->cbegin(), genParticle);
        }
        if ( idxmup_Gen > -1 && idxmum_Gen > -1) {
            double diMuonRecMassGen = (genColl->at(idxmup_Gen).p4() + genColl->at(idxmum_Gen).p4()).M();
            h_GenDiMuonM->Fill(diMuonRecMassGen);
        }
    }
}

(Note that this method has the possibility of using the same muon in more than one pair. Normally we would code around this...)
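The invariant-mass arithmetic carried out by (mup->p4() + mum->p4()).M() can be checked standalone. The sketch below builds four-vectors from (pT, η, φ, m) and applies M² = (E₁+E₂)² − |p₁+p₂|²; the muon mass value is the PDG one, and the helper names are our own:

```cpp
#include <cassert>
#include <cmath>

// Plain four-vector in Cartesian components (hypothetical stand-in for ROOT's LorentzVector).
struct P4 { double px, py, pz, E; };

// Build a massive four-vector from collider kinematics (pT, eta, phi, mass).
inline P4 fromPtEtaPhiM(double pt, double eta, double phi, double m) {
  P4 v;
  v.px = pt * std::cos(phi);
  v.py = pt * std::sin(phi);
  v.pz = pt * std::sinh(eta);
  double p2 = v.px * v.px + v.py * v.py + v.pz * v.pz;
  v.E = std::sqrt(p2 + m * m);
  return v;
}

// Invariant mass of a pair: M^2 = (E1+E2)^2 - |p1+p2|^2
inline double invariantMass(const P4& a, const P4& b) {
  double E = a.E + b.E;
  double px = a.px + b.px, py = a.py + b.py, pz = a.pz + b.pz;
  return std::sqrt(E * E - px * px - py * py - pz * pz);
}
```

As a sanity check, two back-to-back muons of equal pT at η = 0 give M = E₁ + E₂, and the muon mass itself is negligible at these momenta.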

When you are done compile your analyzer and run the program:

cd $CMSSW_BASE/src/HATSExercises/MuonExercise2
scram b
cmsRun MuonExercise2.py

The program will produce a file histos2.root. Compare the mean and the sigma of the two distributions.

Open the output file in ROOT and fit the two mass distributions with Gaussian functions, using the FitPanel window:

  • right-click on the histogram to fit and select "FitPanel" from the drop-down menu
  • in the panel that pops up, choose a fit function (gaus)
  • restrict the fit range by pulling the borders of the "x" bar at the bottom of the panel
  • check "SAME" among the Drawing options
  • finally, push "Fit"
The best-fit function will be drawn on top of the histogram, while the fit parameters will be printed out on the screen. If you check the "Fit Parameters" option in the "Options" menu of the TCanvas, a box containing the fitted parameters will also appear on the histogram frame.
Note: keep in mind that the Z line shape is not Gaussian, so the Gaussian fit might not be perfect. Restrict the fit range to a narrow window around 91 GeV, so as to fit only the peak region. If you have time, you could also try to fit the line shapes with more suitable functions for each case, e.g. a Breit-Wigner function for the generated mass and a Voigt function (i.e. a convolution of a Breit-Wigner with a Gaussian) for the reconstructed mass (see the TMath documentation). You can easily do this from the ROOT command line, e.g.:

TF1 *f1 = new TF1("f1", "[0]*TMath::BreitWigner(x, [1], [2])", 86., 96.);
f1->SetParameter(1, 91.1876);
f1->SetParameter(2, 2.4952);
h_GenDiMuonM->Fit("f1", "", "", 86., 96.);

and likewise with the Voigt function for the reconstructed distribution.
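For orientation, the non-relativistic Breit-Wigner used in the fit above is just a Cauchy shape. A minimal standalone sketch, normalized to unit area (written out here so you can see where the peak height and half-width come from; this is our own illustration, not the TMath source):

```cpp
#include <cassert>
#include <cmath>

// Non-relativistic Breit-Wigner (Cauchy) line shape, unit-normalized:
// BW(x; M, Gamma) = (Gamma / 2pi) / ((x - M)^2 + Gamma^2 / 4)
inline double breitWigner(double x, double mass, double gamma) {
  double halfGamma = 0.5 * gamma;
  return (halfGamma / M_PI) / ((x - mass) * (x - mass) + halfGamma * halfGamma);
}
```

Two useful properties follow directly: the peak value is 2/(πΓ), and the shape falls to half its maximum at M ± Γ/2, which is why Γ is called the full width at half maximum.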

Questions 1:
  • Compare the mean values obtained from the fits to the generated and reconstructed distributions. Are they compatible or do they differ significantly? How do they compare to the nominal Z mass value, mZ = 91.1876 GeV? Which distribution has the larger shift with respect to mZ?
  • Compare the widths of the two distributions (GEN and RECO). Which one is larger? Can you explain why?
  • Considering that the natural width of the Z boson is ΓZ = 2.4952 GeV, can you roughly estimate the typical dimuon mass resolution of Z→μμ events measured with the CMS detector?
    (Note: with a Voigtian fit, you would get an estimate of the mass resolution directly from the fit)
  • Why do you need different functions to better fit the generated and reconstructed mass distributions? What differs between the two? If you had to fit the peak of a reconstructed J/ψ instead, what function would you use, and why?

Step 2: Investigate the shape of the dimuon mass spectrum

Next, we will create TProfile histograms of the reconstructed dimuon invariant mass vs the azimuthal coordinate (φ) of the muon direction, separately for positive and negative muons, as well as of the dimuon mass vs the pseudorapidity (η) of the muon.

The histograms are already initialized for you:

In the class declaration:

// ProfileHistograms declaration for scale estimation
TProfile* prof_MuPlusPhivsDiMuonM;
TProfile* prof_MuMinusPhivsDiMuonM;
TProfile* prof_MuEtavsDiMuonM;

In the constructor:

prof_MuPlusPhivsDiMuonM = fs->make<TProfile>("prof_MuPlusPhivsDiMuonM","#mu^{+} #phi vs m_{#mu^{+}#mu^{-}};Reco muon(+) #phi[rad]; Z peak position [GeV/c^2]",16,-3.14,3.14,88,93);
prof_MuMinusPhivsDiMuonM = fs->make<TProfile>("prof_MuMinusPhivsDiMuonM","#mu^{-} #phi vs m_{#mu^{+}#mu^{-}};Reco muon(-) #phi[rad];Z peak position [GeV/c^2]",16,-3.14,3.14,88,93);
prof_MuEtavsDiMuonM = fs->make<TProfile>("prof_MuEtavsDiMuonM","Muon #eta vs m_{#mu^{+}#mu^{-}};Reco Muon #eta; Z peak position [GeV/c^2]",50,-2.4,2.4,88,93);

In the analyze function, add the code to fill the profiles. For each muon pair, the invariant mass enters both prof_MuPlusPhivsDiMuonM and prof_MuMinusPhivsDiMuonM, one as a function of φ(μ+) and the other as a function of φ(μ−), while prof_MuEtavsDiMuonM is filled twice, once per muon. After running the code again, look at the profile plots.

prof_MuPlusPhivsDiMuonM->Fill(mup->phi(),diMuonRecMass,1); // etc...

Questions 2:
  • What do they look like? Is the reconstructed mass flat vs φ and η or are there deviations?
  • Are the deviations compatible with statistical fluctuations or do you notice a trend?
  • What could explain this behavior?

Step 3: Investigate the muon transverse momentum resolution

Now we will study the muon transverse momentum resolution. We will consider distributions of the transverse momentum residuals, defined as R(1/pT) = (1/pTREC – 1/pTGEN)/(1/pTGEN). In the ideal case, the distribution of the residuals is expected to be Gaussian and its standard deviation is the pT resolution. Likewise, we can define the dimuon invariant mass residuals as R(M) = (MREC – MGEN)/MGEN, and the standard deviation of the R(M) distribution defines the mass resolution.

We will create the following histograms:

  • 1D histogram of the muon transverse momentum residuals R(1/pT): A histogram (h_MupTRes) of the muon transverse momentum residual using the PAT muons and GEN muons. Fit this distribution to a Gaussian function using the FitPanel. The fit parameter sigma gives the muon transverse momentum resolution in the chosen pT range.
  • 1D histogram of the dimuon invariant mass residuals R(M): A histogram (h_MassRes) for the invariant mass resolution residual, which is defined as (Mrec - Mgen )/Mgen. Fit this distribution to a Gaussian.
  • Profile histograms (prof_MupTvspTRes and prof_MuEtavspTRes) of muon transverse momentum residuals as a function of muon pT and η.
Note that, by default, a TProfile displays in each bin the mean value ± the error on the mean (i.e. RMS/√N). If the TProfile is instead initialized with option "s", it displays the mean value ± RMS. The latter is more useful for us, since the RMS estimates the standard deviation, i.e. the resolution, in each bin.
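To make the distinction concrete, here is what a TProfile computes per bin, sketched in plain C++ (illustrative only): the default error bar is RMS/√N, while option "s" shows the RMS itself.

```cpp
#include <cmath>
#include <vector>

struct BinStats { double mean, rms, errOnMean; };

// Per-bin statistics as displayed by a TProfile:
// default error bar = rms / sqrt(N); option "s" error bar = rms.
BinStats binStats(const std::vector<double>& entries) {
  double sum = 0.0, sum2 = 0.0;
  for (double v : entries) { sum += v; sum2 += v * v; }
  const double n = static_cast<double>(entries.size());
  const double mean = sum / n;
  const double rms = std::sqrt(sum2 / n - mean * mean);  // population RMS
  return {mean, rms, rms / std::sqrt(n)};
}
```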

The histograms are already initialized for you:

In the class declaration:

TH1F* h_MassRes;
TH1F* h_MupTRes;

// ProfileHistograms declaration for resolution study
TProfile* prof_MuEtavspTRes;
TProfile* prof_MupTvspTRes;

In the constructor:

h_MassRes = fs->make<TH1F>("h_MassRes","Mass Resolution Residual",80,-0.15,0.15);
h_MupTRes = fs->make<TH1F>("h_MupTRes","Muon p_{T} resolution;#Delta p_{T}/p_{T};",80,-0.2,0.2);

prof_MuEtavspTRes = fs->make<TProfile>("prof_MuEtavspTRes",";Gen Muon #eta;#Delta p_{T}/p_{T}",50,-2.4,2.4,0,1);
prof_MupTvspTRes = fs->make<TProfile>("prof_MupTvspTRes",";Gen Muon p_{T} [GeV];#Delta p_{T}/p_{T}",25,20,100,0,1);

Now you need to add code to the analyze function which fills the histograms.

  • Fill the histograms in the same loop as when you find the dimuon mass - while idxmup_Gen and idxmun_Gen are still defined.

prof_MuEtavspTRes->Fill(genColl->at(idxmup_Gen).eta(),(1/mup->pt()-1/genColl->at(idxmup_Gen).pt())/(1/genColl->at(idxmup_Gen).pt())); // etc...

Re-run your analyzer and produce the new plots. Then fit the R(1/pT) distribution and the R(M) distribution to a Gaussian function, using the FitPanel. The standard deviation σ of the fit gives you the muon transverse momentum resolution and the dimuon mass resolution in the chosen pT range.

Now open the two TProfile histograms. If you used the option "s" in the initialization, the error bars represent the RMS of the distribution in each bin. This can be taken as an approximate estimate of the resolution in that pT or η bin. It is advisable to restrict the y-range of the TProfile (i.e. the range over which the RMS is computed), to avoid outliers blowing up the RMS.

Questions 3:
  • What is the typical 1/pT resolution of a 50 GeV muon measured in the CMS detector?
  • How does the 1/pT resolution scale with transverse momentum? And with the pseudorapidity?
  • Why is the mass residual not a perfect Gaussian distribution?
  • How is the 1/pT resolution related to the invariant mass resolution?

Step 4: Momentum scale corrections

As mentioned in the introduction, small biases in the muon momentum measurement can be recovered by applying specific corrections. In CMS we have several algorithms to compute such corrections to the momentum scale and/or resolution. They are documented in this MUO POG TWiki page. In the following, we will demonstrate the use of one of these correction strategies, called the Rochester algorithm. The details of how the algorithm computes the corrections are beyond the goals of this exercise. Suffice it to say that it extracts correction factors by "forcing" muons from (mostly) Z→μμ decays to coincide with reference distributions obtained from Monte Carlo generated muons. For more details, check out Eur. Phys. J. C 72 (2012) 2194 (arXiv:1208.3710) or the link above.

Copy the Rochester class and correction files to the plugins folder of your local working area:

xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise2/plugins/RoccoR.h plugins/RoccoR.h
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise2/plugins/RoccoR.cc plugins/RoccoR.cc

Edit your analyzer MuonExercise2.cc and add the following lines:

// Among the include files
#include "TRandom3.h"
#include "RoccoR.cc"

// In the class declaration
RoccoR rc{"root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise2/plugin/rcdata.2016.v3"}; // in-class member initializers require brace (or =) initialization
TRandom3 rnd(1234);

// In the analyze function, after the selection of both leptons
/// Compute correction factor for mu+ (mup)
double mupSF = rc.kScaleFromGenMC(mup->charge(),
				  mup->pt(),
				  mup->eta(),
				  mup->phi(),
				  mup->innerTrack()->hitPattern().trackerLayersWithMeasurement(),
				  mup->genParticle()->pt(),
				  rnd.Rndm()); 
/// Compute correction factor for mu- (mun)
double munSF = rc.kScaleFromGenMC(mun->charge(),
				  mun->pt(),
				  mun->eta(),
				  mun->phi(),
				  mun->innerTrack()->hitPattern().trackerLayersWithMeasurement(),
				  mun->genParticle()->pt(),
				  rnd.Rndm()); 

/// Correct the muon four-momenta 
auto mupp4 = mup->p4() * mupSF; 
auto munp4 = mun->p4() * munSF;

The function above assumes that a matching reco::GenParticle is available for every pat::Muon. This should be the case in our exercise. If the gen-particle is not available instead, you can use the following version of the correction function:

double mupSF = rc.kScaleAndSmearMC(mup->charge(),
				  mup->pt(),
				  mup->eta(),
				  mup->phi(),
				  mup->innerTrack()->hitPattern().trackerLayersWithMeasurement(),
				  rnd.Rndm(),
				  rnd.Rndm()); 
where the reco::GenParticle pT is replaced by a random number, uniformly distributed between 0 and 1.
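A per-muon scale factor propagates directly to the dimuon mass: in the massless-muon approximation, M² = 2 pT1 pT2 (cosh Δη − cos Δφ), so scaling the two transverse momenta by s1 and s2 scales M by √(s1 s2). A standalone check of this behavior (plain C++, illustrative; not the CMSSW code above):

```cpp
#include <cmath>

// Dimuon invariant mass in the massless-muon approximation:
//   M^2 = 2 * pT1 * pT2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))
double dimuonMass(double pt1, double eta1, double phi1,
                  double pt2, double eta2, double phi2) {
  return std::sqrt(2.0 * pt1 * pt2 *
                   (std::cosh(eta1 - eta2) - std::cos(phi1 - phi2)));
}
```

In particular, a common 1% scale shift on both legs moves the Z peak by about 1%, which is exactly what the profile plots of Step 2 are sensitive to.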

Now that the correction is in place, re-run the analyzer, making sure to change the name of the output file so as not to accidentally overwrite the old, uncorrected results. Compare the distributions and profiles of reconstructed muon variables before and after the application of the scale and resolution corrections. Fit the mass spectra and the residual distributions with Gaussian functions, just as you did for the uncorrected results.

Questions 4:
  • Describe the main differences that you observe between the uncorrected and corrected distributions.
  • Does the corrected mass spectrum get closer to the generated mass spectrum, and the peak closer to the PDG value?
  • Do the corrections help restore a flat distribution of the dimuon mass vs φ and η?
  • How did the resolution change after the corrections? How would you explain it?

Solutions

Answers to the questions raised during the exercise can be found in these slides.
If you are running out of time, you may copy a solution from below, look at the implementation, run it, and look at the histograms. You can also find the complete analyzer for the exercise here:
/uscms_data/d1/hats/Muon2018/Exercises/MuonExercise2/plugins/MuonExercise2_solutions.cc

Exercise 3: Muon identification and isolation

In the previous exercises we applied cuts on various quantities embedded in the PAT muon object without understanding what they were or why we were imposing the suggested requirements. In this exercise, we will build on the discussion in the introduction and the practice with the miniAOD in the previous exercise to analyze the main properties and quality variables of muon tracks and how they can be used to identify muons from different sources. We will use the following MINIAODSIM files, containing simulated Drell–Yan, top-pair, and QCD multijet events:
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/TTJets_DiLept_MadGraph_LO.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/QCD_Pt-20toInf_MuEnrichedPt15_Pythia.root

For more statistics, you can also integrate with the following files:
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm_1.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/TTJets_DiLept_MadGraph_LO_2.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/TTJets_DiLept_MadGraph_LO_3.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/TTJets_DiLept_MadGraph_LO_4.root

Step 1: Write and run an EDAnalyzer

We will study the muon identification variables using a new EDAnalyzer in CMSSW.

Start with creating an empty analyzer:

cd $CMSSW_BASE/src/HATSExercises
mkedanlzr MuonExercise3
cd MuonExercise3

Then copy the template code MuonExercise3.cc, the BuildFile, the configuration file MuonExercise3.py and some prepared tools:

xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise3/plugins/MuonExercise3.cc plugins/MuonExercise3.cc
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise3/MuonExercise3.py MuonExercise3.py
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise3/plugins/BuildFile.xml plugins/BuildFile.xml
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise3/test/overlayHistos.cc test/overlayHistos.cc

We will consider muons satisfying some very basic criteria:

  • pT > 20 GeV
  • |η| < 2.4, i.e. within the tracker acceptance
  • the muon must have an inner-tracker track
We will also retrieve and select the primary proton-proton vertices with some very basic requirements:
if (!vertex->isFake() && vertex->ndof() > 4 && vertex->position().Rho() < 2 && fabs(vertex->position().Z()) < 24) { ... }
In order to proceed, at least one vertex must pass this selection. If more than one vertex is selected, we will choose the first one as ordered in the original collection, i.e. the vertex having the highest sum of squared pT of the associated tracks.
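The vertex requirements can be condensed into a small standalone predicate (plain C++ sketch, assuming the standard good-vertex cuts: not fake, ndof > 4, transverse position ρ < 2 cm, |z| < 24 cm; the arguments mirror the reco::Vertex accessors):

```cpp
#include <cmath>

// A primary vertex passes if it is not fake, has ndof > 4, lies within
// 2 cm of the beam line transversely (rho) and 24 cm longitudinally (|z|).
bool isGoodVertex(bool isFake, int ndof, double rho, double z) {
  return !isFake && ndof > 4 && rho < 2.0 && std::fabs(z) < 24.0;
}
```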

To better understand the selection criteria, it is useful to classify each muon according to how it was produced:

  • prompt, i.e. from the decay of a W or Z boson or from a τ lepton produced in the hard proton-proton interaction
  • heavy flavor decay, i.e. from the decay of a b-quark or c-quark hadron
  • light flavor decay, i.e. from the decay of a light-quark hadron, such as a pion or a kaon
  • fake muons, such as punch-through or matching of random tracks and muon segments.
In order to make this classification, a rudimentary function is provided in the analyzer template. It accesses the reco::GenParticle objects embedded in the pat::Muon and uses some gen-status flags and the PDG ID of the muon's ancestors.

These are some possible variables that we will investigate:

  1. whether the muon is a global muon, i.e. it was reconstructed with a combined fit of tracker and muon chambers measurements
    • muon->isGlobalMuon()
  2. whether the muon is a tracker muon, i.e. it was identified by geometrically matching an inner track with segments in the muon chambers
    • muon->isTrackerMuon()
  3. Normalized χ2 of the global track fit
    • if(muon->isGlobalMuon()) muon->globalTrack()->normalizedChi2()
  4. Number of muon chamber hits included in the global-muon track fit
    • if(muon->isGlobalMuon()) muon->globalTrack()->hitPattern().numberOfValidMuonHits()
  5. Number of muon stations with matched segments
    • muon->numberOfMatchedStations()
  6. Number of hits in the pixel detector
    • muon->innerTrack()->hitPattern().numberOfValidPixelHits()
  7. Number of hits in the tracker layers
    • muon->innerTrack()->hitPattern().trackerLayersWithMeasurement()
  8. Transverse impact parameter of the track with respect to the vertex from which the muon originated
    • muon->muonBestTrack()->dxy(firstGoodVertex->position())
  9. Isolation based on the sum of pT of charged-hadron PFCandidates from the leading primary vertex in the event, in a cone of ΔR < 0.4 around the muon
    • muon->pfIsolationR04().sumChargedHadronPt
  10. Isolation calculated with neutral-hadron PFCandidates in a cone of ΔR < 0.4 around the muon
    • muon->pfIsolationR04().sumNeutralHadronEt
  11. Isolation calculated with photon PFCandidates in a cone of ΔR < 0.4 around the muon
    • muon->pfIsolationR04().sumPhotonEt
  12. Isolation calculated with all charged particles in a cone of ΔR < 0.4 around the muon, but not from the leading primary vertex (i.e. pileup contribution to the isolation sum)
    • muon->pfIsolationR04().sumPUPt
  13. PF-based combined relative isolation, Δβ-corrected for pileup
    • (muon->pfIsolationR04().sumChargedHadronPt + max(0., muon->pfIsolationR04().sumNeutralHadronEt + muon->pfIsolationR04().sumPhotonEt - 0.5*muon->pfIsolationR04().sumPUPt)) / muon->pt()
  14. Tracker-based relative isolation
    • muon->isolationR03().sumPt / muon->pt()
More information about these functions can be found at the links listed above.

The methods used to access this information are shown in the list above and on the Muon ID page (see the table of Tight ID requirements). For an even more in-depth look, see the documentation of pat::Muon. More information about the types of isolation can be found here.

Now we need to plot all these variables in histograms. Given the large number of variables and muon classes, a convenient way to go is to use the TFileService utility — we already tried it before — and create an array of histograms, one for each category: prompt muons, heavy flavor decays, light flavor decays, and fake muons. E.g., see the example below for the case of muon pT.
In the class declaration:

enum MuonParentage {PROMPT, HF, LF, OTHER, N_MUPAR_TYPES};
const char* enumNames[MuonParentage::N_MUPAR_TYPES] = {"prompt", "hf", "lf", "other"};
TH1F* h_pt[MuonParentage::N_MUPAR_TYPES];
In the constructor:
for (size_t idx=0; idx < MuonParentage::N_MUPAR_TYPES; idx++)
    h_pt[idx] = fs->make<TH1F>(Form("pt_%s", enumNames[idx]), Form("pt_%s", enumNames[idx]), 20,  0., 100.);
In the analyze function:
// After determining the parentage of the muon
h_pt[parentage]->Fill(muon->pt());

Complete the analyzer by adding histograms for all the identification variables, then run it separately on the three samples. Merge the three output files using the following command:

mergeTFileServiceHistograms -i histos1.root -i histos2.root -i histos3.root -o merged_histos.root

Finally, for each variable, overlay the normalized distributions from each source onto the same plot, for comparison. You can use the provided tool overlayHistos.cc, as follows:

root -l -b -q 'overlayHistos.cc("merged_histos.root")'

Questions 1:
  • What types of muons would you expect from each sample?
  • For each variable, overlay the histograms from different sources. Can you explain the differences you observe?
  • E.g. explain the differences in the number of pixel hits, normalized χ2, number of matched muon stations, and isolation variables.
  • By comparing the various distributions, could you come up with a possible set of cuts to isolate each source of muons?

Step 2: Standard muon definitions in CMS

The CMS Muon Physics Object Group (MUO POG) takes care of everything that concerns muon reconstruction, identification, high-level triggering, performance evaluation and monitoring, corrections, use in physics analysis, etc. Among other tasks, it develops and maintains a number of standard identification and isolation criteria, which are broadly used in analyses across CMS. The full list of official definitions can be found on this TWiki page. Here you can find a summary of the most common criteria. These definitions are conveniently implemented as selectors for reco::Muon objects in the MuonSelectors class, as well as member functions of the pat::Muon class.
  • Loose Muons
bool pat::Muon::isLooseMuon() 
bool muon::isLooseMuon(const reco::Muon& muon) {
    return muon.isPFMuon() && (muon.isGlobalMuon() || muon.isTrackerMuon());
}

  • Medium Muons
bool pat::Muon::isMediumMuon() 
bool muon::isMediumMuon(const reco::Muon& muon) {
    if( !( isLooseMuon(muon) && muon.innerTrack()->validFraction()>0.8 )) return false; 

    bool goodGlb = muon.isGlobalMuon() && 
        muon.globalTrack()->normalizedChi2()<3. && 
        muon.combinedQuality().chi2LocalPosition<12. && 
        muon.combinedQuality().trkKink<20.; 

    return (segmentCompatibility(muon) > (goodGlb ? 0.303 : 0.451)); 
}

  • Tight Muons
bool pat::Muon::isTightMuon(const reco::Vertex& vtx) 
bool muon::isTightMuon(const reco::Muon& muon, const reco::Vertex& vtx) {
    if(!muon.isPFMuon() || !muon.isGlobalMuon()) return false;
    bool muID = muon.isGlobalMuon() && muon.globalTrack()->normalizedChi2()<10. && muon.globalTrack()->hitPattern().numberOfValidMuonHits()>0 && muon.numberOfMatchedStations()>1;  
    bool hits = muon.innerTrack()->hitPattern().trackerLayersWithMeasurement()>5 && muon.innerTrack()->hitPattern().numberOfValidPixelHits()>0; 
    bool ip = fabs(muon.muonBestTrack()->dxy(vtx.position()))<0.2 && fabs(muon.muonBestTrack()->dz(vtx.position()))<0.5;
  
    return muID && hits && ip;
} 

  • Soft Muons
bool pat::Muon::isSoftMuon(const reco::Vertex& vtx) 
bool muon::isSoftMuon(const reco::Muon& muon, const reco::Vertex& vtx) {
    bool muID = muon::isGoodMuon(muon, TMOneStationTight);
    if(!muID) return false;
  
    bool layers = muon.innerTrack()->hitPattern().trackerLayersWithMeasurement()>5 &&
        muon.innerTrack()->hitPattern().pixelLayersWithMeasurement()>0;
    bool ishighq = muon.innerTrack()->quality(reco::Track::highPurity); 
    bool ip = fabs(muon.innerTrack()->dxy(vtx.position()))<0.3 && fabs(muon.innerTrack()->dz(vtx.position()))<20.;
  
    return layers && ip && ishighq;
}

  • HighPt Muons
bool pat::Muon::isHighPtMuon(const reco::Vertex& vtx) 
bool muon::isHighPtMuon(const reco::Muon& muon, const reco::Vertex& vtx){
    bool muID = muon.isGlobalMuon() && muon.globalTrack()->hitPattern().numberOfValidMuonHits()>0 && (muon.numberOfMatchedStations()>1);
    if(!muID) return false;

    bool hits = muon.innerTrack()->hitPattern().trackerLayersWithMeasurement()>5 &&
        muon.innerTrack()->hitPattern().numberOfValidPixelHits()>0; 
    bool momQuality = muon.tunePMuonBestTrack()->ptError()/muon.tunePMuonBestTrack()->pt()<0.3;
    bool ip = fabs(muon.innerTrack()->dxy(vtx.position()))<0.2 && fabs(muon.innerTrack()->dz(vtx.position()))<0.5;
  
  return muID && hits && momQuality && ip;
}

Questions 2:
  • Given the implementation of the functions above, can you guess which types of muons each selection is meant to address, and why?
  • The HighPt muon definition is very similar to the Tight muon one, but it adds a requirement on the transverse momentum resolution, σ(pT)/pT < 0.3. Can you find an explanation?
  • Verify your guesses by adding these selections in your analyzer. Produce distributions for some variables (e.g. pT) for each muon category, after applying each of the selections above, one at a time.

Step 3: Muon isolation

Let's now take a detailed look at the isolation variables, already introduced in Part 1. The most common muon isolation algorithm in CMS makes use of the PF candidates found in a region of ΔR < 0.4 around the muon track:
mu->pfIsolationR04().sumChargedHadronPt   // pT sum of charged hadrons from the main primary vertex of the event
mu->pfIsolationR04().sumNeutralHadronEt   // ET sum of neutral hadrons
mu->pfIsolationR04().sumPhotonEt          // ET sum of photons
In order to exploit the full-detector information, these variables can be combined in a single isolation variable:
const reco::MuonPFIsolation &pfR04 = mu->pfIsolationR04();
double combRelIso = (pfR04.sumChargedHadronPt + pfR04.sumNeutralHadronEt + pfR04.sumPhotonEt) / mu->pt();   // combined relative isolation
The combined isolation turns out to perform better than the individual components separately in terms of efficiency vs background rejection.
Note that for neutral particles (photons and neutral hadrons) it is impossible to determine the vertex they originated from, since they don't have a track. Therefore neutral particles from pileup vertices contribute to the pT sum, and the performance of the combined isolation turns out to be strongly dependent on the pileup level. Corrections are available to mitigate this effect. The most common in CMS is called the "Δβ correction": it estimates the ΣpT of neutral particles coming from pileup vertices using
  1. the ΣpT of charged particles from pileup vertices (mu->pfIsolationR04().sumPUPt), and
  2. the expected ratio of neutral to charged particle production in LHC proton-proton collisions, which simulation studies show to be about 0.5.
We can now define a Δβ-corrected combined relative isolation, less sensitive to the number of pileup vertices:
const reco::MuonPFIsolation &pfR04 = mu->pfIsolationR04();
double corrCombRelIso = (pfR04.sumChargedHadronPt + std::max(0.0, pfR04.sumNeutralHadronEt + pfR04.sumPhotonEt - 0.5*pfR04.sumPUPt)) / mu->pt();

Questions 3:
  • In your analyzer, add distributions of the number of reconstructed primary vertices at different stages of the selection:
    1. for all muons,
    2. for muons passing the Tight ID,
    3. for muons passing the Tight ID plus a maximum cut on combRelIso, e.g. combRelIso < 0.15
    4. for muons passing the Tight ID plus the same cut on corrCombRelIso, e.g. corrCombRelIso < 0.15
  • Dividing the distributions above, compute the efficiency of the combRelIso and corrCombRelIso selections vs number of reconstructed primary vertices with respect to Tight muons.
  • Overlay the two efficiencies. What do you notice?
  • Aside from these pileup corrections, can you think of alternative isolation requirements that are not pileup-dependent? Try and implement one, then measure its efficiency vs number of reconstructed primary vertices.

Solutions

Answers to the questions raised during the exercise can be found in these slides.
If you are running out of time, you may copy a solution from below, look at the implementation, run it, and look at the histograms. You can also find the complete analyzer for the exercise here:
/uscms_data/d1/hats/Muon2018/Exercises/MuonExercise3/plugins/MuonExercise3_solutions.cc

Exercise 4: Muon Efficiencies

Tag-and-Probe is a data-driven technique used to calculate lepton reconstruction, identification, and trigger efficiencies. This technique uses narrow dilepton resonances, such as Z (for muons with relatively high pT) or J/ψ (for muons with lower pT). Almost-unbiased estimates of the efficiencies can be obtained at the different stages of muon trigger and offline reconstruction. Events are selected with strict requirements on one muon (the tag), and with a more relaxed selection on the other muon (the probe), such that the probe muon can be used to measure the efficiency in question without large biases. The fraction of probe muons that pass the selection under study gives an estimate of its efficiency. The invariant mass of the tag-probe pair is used to select Z→μμ or J/ψ→μμ events.

The Tag-and-Probe technique is generally used to measure and compare efficiencies in data and in MC simulation, and thus to compute a correction scale factor that can be applied to MC events to match the efficiency observed in data. These scale factors are typically determined as functions of pT and η. If necessary, their dependence on other kinematic variables can be investigated too — e.g. vs the number of vertices, in case of strong pileup dependence. In some cases, customized scale factors are necessary for some analyses, depending on their specific trigger and offline thresholds.

Despite the tight selection on the tag muon and the invariant mass constraints, the selected Z→μμ or J/ψ→μμ sample generally contains background events, which appear as a nonresonant continuum underneath the resonance peak. Therefore the background must be subtracted, to ensure that the efficiency is measured with signal muons only. This can be achieved by fitting the invariant mass spectrum to signal + background shapes (e.g. analytical functions or MC templates). Finding proper functions or templates for signal and background is often the most challenging part of the process.

The total lepton efficiency is generally factorized in multiple steps as follows:
total lepton efficiency = (tracking) × (reconstruction/tracking) × (ID/reconstruction) × (isolation/ID) × (trigger/isolation)
In each efficiency step, the denominator determines the selection of the probe. In the last steps (in particular isolation and trigger), the probe selection is tighter; therefore the background level is quite low and the background subtraction is generally easier, or in some cases not even needed. In this exercise, you will measure efficiencies using simulated Z→μμ events:
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/dymm_1.root
Since you are using a pure Z sample, you won't need background subtraction or fitting. The efficiency will simply be computed by counting probes before and after the selection under study.
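The simple counting efficiency and its uncertainty can be sketched as follows (plain C++, illustrative; the symmetric binomial error shown here is a reasonable approximation away from efficiencies near 0 or 1, where asymmetric Bayesian or Clopper-Pearson intervals are preferred):

```cpp
#include <cmath>

struct Efficiency { double value, error; };

// Counting efficiency eff = nPass / nTotal with the symmetric binomial
// uncertainty sqrt(eff * (1 - eff) / nTotal).
Efficiency countingEff(double nPass, double nTotal) {
  const double eff = nPass / nTotal;
  return {eff, std::sqrt(eff * (1.0 - eff) / nTotal)};
}
```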

You will start by computing "true" efficiencies using the generator-level information. These will be your reference efficiencies. Next, you will implement a simple tag-and-probe algorithm to measure the efficiencies with a data-driven approach, and you will compare your results to the "true" efficiencies. Finally, you will try to use the same algorithm on a real single-muon data sample, taken from the 2016G CMS data set, SingleMuon stream:
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/SingleMuon_Run2016G_18Apr2017_3.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/SingleMuon_Run2016G_18Apr2017_4.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/SingleMuon_Run2016G_18Apr2017_9.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/SingleMuon_Run2016G_18Apr2017_10.root

Step 1: Create an EDAnalyzer for "MC-true" efficiency

Create a new analyzer package:

cd $CMSSW_BASE/src/HATSExercises
mkedanlzr MuonExercise4
cd MuonExercise4

Then copy the template code MuonExercise4.cc, the BuildFile and the configuration file MuonExercise4.py:

xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise4/plugins/MuonExercise4McTruth.cc plugins/MuonExercise4McTruth.cc
xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise4/MuonExercise4.py MuonExercise4.py
xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise4/plugins/BuildFile.xml plugins/BuildFile.xml
xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise4/test/muonExercise4mcTruth_cfg.py test/muonExercise4mcTruth_cfg.py
xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise4/test/muonExercise4TagProbe_cfg.py test/muonExercise4TagProbe_cfg.py
xrdcp -f  root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise4/test/muonExercise4TagProbe_data_cfg.py test/muonExercise4TagProbe_data_cfg.py

Edit the analyzer MuonExercise4McTruth.cc.

First, loop over the reco::GenParticle collection, select the prompt muons with pT > 20 GeV and |η| < 2.4, and fill histograms with the main kinematic variables. Include a distribution of the number of reconstructed vertices, to check the pileup dependence of the efficiencies. E.g.

for(auto&& genPart : *(genColl.product())) {
  // Check if it's a muon from Drell-Yan process
  if(genPart.isPromptFinalState() && std::abs(genPart.pdgId()) == 13) {
    // Only muons within acceptance and pt>20
    if(genPart.pt() > 20. && std::abs(genPart.eta())<2.4) {
      h_pt->Fill(genPart.pt());
      h_nvtx->Fill(nGoodVtx);
    }
  }
}
Next, loop over the pat::Muon collection and check that the muons are matched to one of the reco::GenParticle muons used in the previous plots (see example in the template). Then you can start filling histograms (using the generator-level information from the associated reco::GenParticle) at different selection stages:
  • all reconstructed muons with a valid inner track (i.e. skip muons that are reconstructed exclusively as standalone muons)
  • reconstructed muons passing some standard ID: e.g. loose, medium, tight
  • tight muons that pass a requirement on the Δβ-corrected combined relative isolation (e.g. < 0.15)
  • isolated muons that pass a single-muon trigger or a combination of single-muon triggers, e.g. HLT_IsoMu24 || HLT_IsoTkMu24. This step requires matching the muon candidate to a trigger object (see example in the template file).
Finally, in the endJob function or in the destructor, create efficiency graphs by computing the ratio of the histograms above. You can use the TGraphAsymmErrors class, which gives you different options to compute the errors on the efficiency. You can follow this example:
// In the class definition
TGraphAsymmErrors *gae_rec_pt;

// In the constructor
gae_rec_pt = fs->make<TGraphAsymmErrors>();

// In the endJob or destructor
gae_rec_pt->Divide(h_rec_pt, h_pt, "cl=0.683 b(1,1) mode"); // 1-sigma errors computed with a Bayesian approach
If there are empty bins in the denominator histogram, an exception might be thrown by CMSSW. To keep the program from exiting with an error message, you can use try{...} catch(...) {}:
try{gae_rec_pt->Divide(h_rec_pt, h_pt, "cl=0.683 b(1,1) mode");} catch(cms::Exception& ex) {}  
Compute the following factorized efficiencies:
  • reconstructed muons / generated muons
  • muon ID / reconstructed muons
  • isolation / muon ID
  • single-muon trigger / isolation
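Because each selection stage is a subset of the previous one, the step efficiencies multiply to give the total efficiency. A minimal sketch with made-up counts (plain Python, not part of the exercise code):

```python
def factorized_efficiencies(counts):
    """counts = [N_gen, N_rec, N_id, N_iso, N_trig], each entry a subset
    of the previous one. Returns the per-step efficiencies and the total
    efficiency; the product of the steps telescopes to the total."""
    step_effs = [num / den for den, num in zip(counts, counts[1:])]
    total = counts[-1] / counts[0]
    return step_effs, total
```

For example, hypothetical counts [1000, 950, 900, 855, 770] give four step efficiencies whose product is the total efficiency 770/1000 = 0.77.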

Step 2: Create an EDAnalyzer for "Tag-and-Probe" efficiency

Now create a new analyzer following the example file:

cd $CMSSW_BASE/src/CMSDASExercises/MuonExercise4
cmsenv
xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise4/plugins/MuonExercise4TagProbe.cc plugins/MuonExercise4TagProbe.cc

Note that, in this case, you will not be accessing any generator-level information. This analyzer could be used on real data.

You need two nested loops on the pat::Muon collection, one to select the tag and one to select the probe. First, select the tag with the following cuts:

  • Tight muon ID
  • pT > 25 GeV and |η| < 2.4
  • Combined relative PF isolation < 0.15 (in a cone ΔR < 0.4)
  • Match a trigger object passing a single-muon trigger (e.g. HLT_IsoMu24 || HLT_IsoTkMu24)

Once you have found a tag, you can search for a probe. To be consistent with the previous part of the exercise, the probe should be any muon having pT > 20 GeV and |η| < 2.4 with a valid inner track. Additionally, to ensure that the tag-probe pair is compatible with a Z→μμ decay, require that tag and probe have opposite electric charges, and that their invariant mass is within a narrow window around the nominal Z mass. Since you are not performing any background subtraction here, you should choose a very narrow window, to reduce the level of background (in real data). E.g. you can choose (91 ± 5) GeV.
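The pair requirements above can be sketched as follows (illustrative plain Python; the function and argument names are hypothetical, not the actual analyzer code):

```python
Z_MASS = 91.0  # GeV, centre of the mass window used in this exercise

def is_good_pair(tag_charge, probe_charge, pair_mass, half_window=5.0):
    """Accept opposite-charge tag-probe pairs with invariant mass
    within (91 +/- 5) GeV, compatible with a Z -> mumu decay."""
    opposite_charge = tag_charge * probe_charge < 0
    in_mass_window = abs(pair_mass - Z_MASS) < half_window
    return opposite_charge and in_mass_window
```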

Once you have found both a tag and a probe, you can proceed to fill the plots using the probe information, at different stages of probe selection. Make the same plots as in Part 1, starting from the reconstructed muons with a valid track — obviously you don't have a collection of "all the generated muons" in this case! In the endJob or in the destructor, create the efficiency TGraphAsymmErrors as in Part 1, starting from the ID efficiencies.

Finally, run the code and compare the tag-and-probe efficiencies with the MC-truth efficiencies.

Questions 1:
  • In the tag selection, you requested that the tag muon be matched to a trigger object. Why is it important to make sure that the tag has fired a trigger?
  • In the MC-truth approach, you can use both muons in the event to measure the efficiency. In the tag-and-probe, you only use the probe. Does this mean that you can use only one muon per event?
  • When you factorize the efficiency measurement as explained above, the sample you use to measure each efficiency is a subset of the samples used in the previous steps. Therefore the statistical uncertainties on the various efficiencies are correlated among each other. Does this mean the total uncertainty on the full efficiency becomes larger when you factorize? Why?
  • Do you observe significant differences between the "true" efficiencies and the tag-and-probe ones? In what plots and in what regions? What is the maximum size of the differences?
  • What factors might impact or bias the tag-and-probe efficiency estimate?

Step 3: Run the "Tag-and-Probe" analyzer on real data

As already mentioned, the tag-and-probe approach is data-driven. The analyzer does not access any generator-level information, so you can safely run it on real data. The following sample is taken from the 2016G CMS data set, SingleMuon stream, requiring at least one good tag candidate and one good probe candidate per event (with tags firing "HLT_IsoMu24 || HLT_IsoTkMu24"):
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/SingleMuon_Run2016G_18Apr2017_3.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/SingleMuon_Run2016G_18Apr2017_4.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/SingleMuon_Run2016G_18Apr2017_9.root
root://cmseos.fnal.gov///store/user/hats/2018/Muon/Samples/SingleMuon_Run2016G_18Apr2017_10.root
You will need to check out a new CMSSW release, the same used to process the data:

cd $CMSSW_BASE/src/CMSDASExercises/MuonExercise4
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise4/plugins/MuonExercise4McTruth.cc plugins/MuonExercise4McTruth.cc

Remove the following lines from the BuildFile.xml:

<library   file="MuonExercise4McTruth.cc" name="MuonExercise4McTruth">
  <flags   EDM_PLUGIN="1"/>
</library>
Now you can compile. Then, copy the following configuration file to your test folder:

Finally, run your analyzer on the data skim above, check the efficiency plots, and compare them to the efficiencies in the simulated sample. Check also the invariant mass distributions at the different stages of the probe selection.

Questions 2:
  • Are there very significant differences between the data and MC efficiencies? In what plots and regions?
  • Based on the invariant mass distributions for different probe selections, which efficiencies are reliable in your opinion? Which are affected by too much background?
  • What other factors may cause differences between data and MC efficiencies? Is it possible to eliminate these differences?

Exercise 5: Cosmic Muon pT scale bias studies

Instruction slides.

To determine whether the detector has a bias towards either positively or negatively charged muons, we take an MC sample of cosmic muons and inject an artificial bias into it. We then compare each of these biased MC samples with the CRAFT or Interfill data, and from this comparison we generate a chi^2 distribution versus injected bias. The minimum of this distribution indicates the detector bias, and we check whether the result is consistent with zero bias within its uncertainty. We will write a program to do just this.
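The injected bias acts on the signed curvature q/pT of each track. A minimal sketch of the injection (plain Python; the function name and units are illustrative, not the actual exercise code):

```python
def inject_bias(charge, pt, delta_kappa):
    """Shift the signed curvature kappa = q/pT by delta_kappa and return
    the biased (charge, pT). A large enough shift flips the apparent
    charge, because the curvature changes sign."""
    kappa = charge / pt + delta_kappa
    if kappa == 0.0:
        return None  # infinitely stiff track; such cases are dropped
    new_charge = +1 if kappa > 0.0 else -1
    return new_charge, abs(1.0 / kappa)
```

For a 100 GeV positive muon, an injected bias of +0.01 in 1/pT halves the apparent momentum, while a bias of +0.02 on a 100 GeV negative muon flips its apparent charge.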

Step 1: Collect Your Data

Copy the two datasets and the four MC samples, as well as the first script to apply the injected bias, to your working directory using the following commands:

xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise5/CRAFT15.root .
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise5/Startup100.root .
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise5/Startup500.root .
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise5/Interfill.root .
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise5/Asym100.root .
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise5/Asym500.root .
xrdcp -f root://cmseos.fnal.gov///store/user/hats/2018/Muon/Exercises/MuonExercise5/muonExercise5a.cc .

Step 2: Setup Your Analyzer

The file paths are hard coded. To open the proper files, change the path names to the appropriate paths in your working directory. The lines to be changed are:
TFile *f = TFile::Open(TString("$CMSSW_BASE/src/WSUDiLeptons/CosmicEndpoint/test/CRAFT15.root"));
TFile *g = TFile::Open(TString("$CMSSW_BASE/src/WSUDiLeptons/CosmicEndpoint/test/Startup100.root"));
TFile *h = TFile::Open(TString("$CMSSW_BASE/src/WSUDiLeptons/CosmicEndpoint/test/Startup500.root"));

First use CRAFT and the two MC_Startup files.

Step 3: Make Curvature Plots

With the proper files specified, run the code using the following command:

root -x -b -q 'muonExercise5a.cc++()'

This will output a .root file containing all of the histograms that we are interested in.

The name of the file should be: ROOTHistograms.root

Look at the chi^2 plot and see if you notice anything interesting.

Note the value of injected bias at which the minimum of this distribution occurs.

Step 4: Compare Datasets' Curvature Bias

Repeat steps 2 and 3 of this exercise using the Interfill dataset and the Asym MC datasets. Be sure to change the name of the output .root file before running, to avoid overwriting the .root file you created previously; this is important because you will use each file in the following steps. The file name (without the extension) is held in a string variable; look for the line:
std::string const& outFile = "ROOTHistograms";

When you have completed this step, you should have two .root files: one for CRAFT vs Startup MC and one for Interfill vs Asym MC.

Step 5: Get A New Analyzer

We would now like to quantify how well we know the bias. To do this, we write a program that fits the chi^2-versus-injected-bias distribution with various polynomials: quadratic, quartic, and 8th degree. We then estimate the uncertainty as the value of injected bias at which chi^2 = min(chi^2) + 1 on either side of the minimum. In principle the uncertainty is asymmetric, but we take the quadratic-fit uncertainty, since the chi^2 should be a 2nd-order polynomial near its minimum.
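The min+1 prescription can be sketched as a scan over the chi^2 curve (plain Python with hypothetical inputs; the actual exercise uses the fitted polynomial rather than a raw scan):

```python
def min_plus_one_interval(biases, chi2s):
    """Find the minimum of a scanned chi2 curve and the injected-bias
    values where it crosses min(chi2)+1 on either side of the minimum,
    using linear interpolation between scan points."""
    i_min = min(range(len(chi2s)), key=lambda i: chi2s[i])
    target = chi2s[i_min] + 1.0

    def crossing(indices):
        prev = i_min
        for i in indices:
            if chi2s[i] >= target:
                # interpolate between last point below and first above
                f = (target - chi2s[prev]) / (chi2s[i] - chi2s[prev])
                return biases[prev] + f * (biases[i] - biases[prev])
            prev = i
        return None  # curve never reaches min+1 on this side

    lo = crossing(range(i_min - 1, -1, -1))  # walk left of the minimum
    hi = crossing(range(i_min + 1, len(chi2s)))  # walk right
    return biases[i_min], lo, hi
```

For a parabolic chi^2 the two crossings are symmetric about the minimum, which is why the quadratic fit yields a single symmetric uncertainty.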

To start, copy the muonExercise5b.cc file to your working area:

xrdcp -f root://cmseos.fnal.gov///store/user/hats/2017/muon/Exercises/MuonExercise5/muonExercise5b.cc .

Step 6: Setup The Analyzer (Again)

We have two different input files to use, depending on whether we are using CRAFT and Startup MC or Interfill and Asym MC. The paths to the files are hardcoded, so you will need to change the path to match your working directory:
TFile *file = new TFile(TString("$CMSSW_BASE/src/WSUDiLeptons/CosmicEndpoint/test/"+rootFile+".root"), "READ");

Also change the input file name to match the file you would like to analyze:

std::string const& rootFile = "ROOTHistograms";

Start with CRAFT and MC Startup.

Step 7: Find the Uncertainty in the Bias

With the proper files specified, run the code using the following command:

root -x -b -q 'muonExercise5b.cc++()'

This produces an output .root file called fittedHistograms.root. Open this with a TBrowser and look for a TCanvas called "Canvas" and check to see if the bias is consistent with zero bias.

Step 8: Compare DataSets' Final Results

Repeat steps 6 and 7 of this exercise using Interfill vs Asym MC. Make sure to change the code to read the proper input file. Also, be sure to change the name of the output ROOT file before running, to avoid overwriting the ROOT file you created previously.

How do the plots for the two configurations compare? Do they both suggest a similar bias? If not, what could be the cause?

Exercise 6: Muon Tracking Efficiency via the T&P method

The Tag-and-Probe tool is a generic tool developed at CMS to measure any user-defined object efficiency from data by exploiting di-object resonances such as the Z or the J/Psi. The efficiency is measured as follows:

  • Resonances are reconstructed as pairs, with one leg passing a tight selection (the tag) and the other passing a loose selection (the probe).
  • Passing probes are defined according to whichever efficiency is being measured.
  • The (tag + passing probe) and (tag + failing probe) lineshapes are fit separately with a signal + background model.
  • The efficiency is then computed from the ratio of passing probes to all probes.
  • The procedure can be repeated for any probe variable (e.g. pT, η, ...) to compute efficiency histograms as a function of those variables.
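For a background-free illustration of the last two bullets, the efficiency and a simple binomial uncertainty can be computed directly from the passing and failing probe counts (a sketch only; the actual tool extracts the yields from the lineshape fits, which subtract the background):

```python
import math

def tnp_efficiency(n_pass, n_fail):
    """Efficiency from passing/failing probe counts, with a simple
    binomial uncertainty. Ignores background contamination."""
    n_all = n_pass + n_fail
    eff = n_pass / n_all
    err = math.sqrt(eff * (1.0 - eff) / n_all)
    return eff, err
```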

Question 1: How do you think the probe and the passing probe should be defined for measuring the tracking efficiency?

Step 1: Create and set up a CMSSW working area

Set up the release:

cmsrel CMSSW_9_4_0_pre3
cd CMSSW_9_4_0_pre3/src
cmsenv

Get the core packages:

git cms-merge-topic HuguesBrun:updateL3MuonCollectionsToMatch
git clone git@github.com:GLP90/Tnp94.git MuonAnalysis/TagAndProbe -b 94_newSelector
git cms-addpkg PhysicsTools/PatAlgos

Compile the package:

cmsenv
scramv1 b -j 5

To produce the ntuples, set bool simInfoIsAvailalbe = false in PATMuonProducer.cc:

cmsenv
git cms-addpkg PhysicsTools/PatAlgos
vim PhysicsTools/PatAlgos/plugins/PATMuonProducer.cc
(modify the line mentioned above)

To add the standalone muons to the tree, replace the files in your local working area with the following ones:

xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise6/tp_from_aod_Data.py    MuonAnalysis/TagAndProbe/test/zmumu/tp_from_aod_Data.py
xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise6/tp_from_aod_MC.py    MuonAnalysis/TagAndProbe/test/zmumu/tp_from_aod_MC.py

Assignment 1: produce the TP tree for data and MC

cmsRun MuonAnalysis/TagAndProbe/test/zmumu/{tp_from_aod_Data/MC}.py 

Step 2: Skim the T&P trees

With the skimTree program, you can reduce the size of the T&P ntuples by applying cuts on a specified input tree and copying the result to an output ROOT file. You can also remove branches from the tree completely. This is mainly done to reduce the file size, and therefore the memory consumption and processing time, of T&P studies. Note that you can load multiple files at once, from your local storage device or from EOS, and merge them together in the output.

The tool can be found as part of the cms-MuonPOG/TnPUtils package

git clone https://github.com/cms-MuonPOG/TnPUtils.git

Then copy the template file "skimTree" to your local working area:

xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise6/skimTree TnPUtils/skimTree

Both data and MC have been skimmed using

./skimTree root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise6/TnPTree_17Nov2017_SingleMuon_Run2017Bv1_Full_GoldenJSON.root  TnPTree_17Nov2017_SingleMuon_Run2017Bv1_Full_GoldenJSON.root  -r "all" -k "mass pt eta abseta phi staValidStations staQoverPerror tk_deltaR tk_deltaEta tk_deltaR_NoZ tk_deltaEta_NoZ tk0_deltaR tk0_deltaEta tk0_deltaR_NoZ tk0_deltaEta_NoZ tag_nVertices tag_vz tag_bx tag_combRelIsoPF04dBeta run lumi tag_instLumi tag_eta tag_IsoMu24_eta2p1 outerValidHits StaTkSameCharge" -c "(tag_combRelIsoPF04dBeta < 0.15 && tag_combRelIsoPF04dBeta > -0.5 && mass > 69.5 && mass < 130.1 && tag_pt > 23 && (tag_IsoMu22 == 1|| tag_IsoMu24 == 1) && abseta <2.401)"
./skimTree root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise6/DYJetsToLL_M-50_TuneCP5_13TeV-madgraphMLM-pythia8.root  DYJetsToLL_M-50_TuneCP5_13TeV-madgraphMLM-pythia8.root  -r "all" -k "mass pt eta abseta phi staValidStations staQoverPerror tk_deltaR tk_deltaEta tk_deltaR_NoZ tk_deltaEta_NoZ tk0_deltaR tk0_deltaEta tk0_deltaR_NoZ tk0_deltaEta_NoZ tag_nVertices tag_vz tag_bx tag_combRelIsoPF04dBeta run lumi tag_instLumi tag_eta tag_IsoMu24_eta2p1 outerValidHits StaTkSameCharge mcTrue weight" -c "(tag_combRelIsoPF04dBeta < 0.15 && tag_combRelIsoPF04dBeta > -0.5 && mass > 69.5 && mass < 130.1 && tag_pt > 23 && (tag_IsoMu22 == 1|| tag_IsoMu24 == 1) && abseta <2.401)"

Step 3: Running the fits on your data and MC files

The configuration file used to perform the fit is MuonAnalysis/TagAndProbe/test/zmumu/fitTracking.py.

Copy the python file to your local working area:

xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise6/fitTracking.py    MuonAnalysis/TagAndProbe/test/zmumu/fitTracking.py

The module takes as input:

  • A set of skimmed ROOT files produced in Step 2
  • Information about which columns of the dataset to use:
    • the binning variables for the efficiency (e.g. pT, η, φ)
    • the type of files you will run on (data or MC)
  • A lineshape model, parametrized using RooFit classes
The fitting functions used here are:
  • Signal: sum of two Voigtians
  • Background: exponential


PDFs = cms.PSet(
    vpvPlusExpo = cms.vstring(
        "Voigtian::signal1(mass, mean1[90,80,100], width[2.495], sigma1[2,1,3])",
        "Voigtian::signal2(mass, mean2[90,80,100], width, sigma2[4,2,10])",
        "SUM::signal(vFrac[0.8,0,1]*signal1, signal2)",
        "Exponential::backgroundPass(mass, lp[-0.1,-1,0.1])",
        "Exponential::backgroundFail(mass, lf[-0.1,-1,0.1])",
        "efficiency[0.9,0,1]",
        "signalFractionInPassing[0.9]"
    ),
),

Open your python file and modify it so that the files you obtained in Step 2 are used as input files (line 215):

if "TestData_" in scenario:
    process.TnP_Tracking.InputFileNames = ["TnPTree_17Nov2017_SingleMuon_Run2017Bv1_Full_GoldenJSON.root"]

if "TestMC_" in scenario:
    process.TnP_Tracking.InputFileNames = ["DYJetsToLL_M-50_TuneCP5_13TeV-madgraphMLM-pythia8.root"]

To produce the fit files for data using eta binning you can use

cmsRun MuonAnalysis/TagAndProbe/test/zmumu/fitTracking.py TestData_et3

And to produce the fit files for MC using eta binning you can use

cmsRun MuonAnalysis/TagAndProbe/test/zmumu/fitTracking.py TestMC_et3

Assignment 2:

  • produce the fit files for data and MC using phi binning
  • produce the fit files for data and MC using nVertices binning

Step 4: Calculate the muon tracking efficiency

Between Run I and Run II, the CMS tracking procedure was improved in several respects in order to cope with the increasing luminosity.

There are two new iterations dedicated exclusively to muons.

  • An outside-in track reconstruction step seeded in the muon system.
  • An inside-out iteration that re-reconstructs muon-tagged tracks.

Tracks from all iterations are kept in the so-called All-Tracks collection, while tracks that do not come from the two muon-dedicated iterations above are kept in the so-called Tracker-only-seeded collection.

Question 2: What do you think the Tracker-only-seeded track collection is used for?

Copy these two C++ files to the folder of your local working area:

xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise6/plotTracking.cxx    MuonAnalysis/TagAndProbe/test/zmumu/plotTracking.cxx
xrdcp -f root://cmseos.fnal.gov///store/user/cmsdas/2018/short_exercises/Muons/Exercises/MuonExercise6/plotUtil_Tracking.cxx    MuonAnalysis/TagAndProbe/test/zmumu/plotUtil_Tracking.cxx

To produce the tracking efficiency for the All-Tracks collection as a function of pseudorapidity η:

root -b -l -q  TnP_Tracking_dr030e030_TestData_et3.root TnP_Tracking_dr030e030NoZ_TestData_et3.root TnP_Tracking_dr030e030_TestMC_et3.root TnP_Tracking_dr030e030NoZ_TestMC_et3.root   MuonAnalysis/TagAndProbe/test/zmumu/plotTracking.cxx

Then, to produce the tracking efficiency for the Tracker-only-seeded collection as a function of pseudorapidity η:

root -b -l -q  TnP_Tracking_tk0_dr030e030_TestData_et3.root TnP_Tracking_tk0_dr030e030NoZ_TestData_et3.root TnP_Tracking_tk0_dr030e030_TestMC_et3.root TnP_Tracking_tk0_dr030e030NoZ_TestMC_et3.root MuonAnalysis/TagAndProbe/test/zmumu/plotTracking.cxx

Question 3:

  • What do you think the "fake matching rate" means?
  • What is the difference between the two plots "Raw efficiency" and "Efficiency"?

References and further reading

* Info about EDAnalyzer

* Doxygen - CMSSW documentation

* CMS plot style guidelines - python and cpp

* ROOT help forum

* Rochester momentum corrections

* Muon Reconstruction Performance (2010)

* Muon Detector Performance (2011)

* Muon RECO

* Muon ID

* Official muon selection recommendations (twiki)

* Muon POG Twiki

* Connect to the CMSLPC cluster

* LPC computing links

-- Main.NorbertNeumeister - 2018-05-09

Topic revision: r2 - 2018-06-26 - WalaaElmetenawee
 