CMSDAS Pisa 2019 Muon Object (Short Exercise)


Contacts - Facilitators

Abstract

This exercise aims at getting attendees familiar with muon object reconstruction, and with the use of muons in CMS analyses in general:

  • a pedagogical introduction on reconstruction, identification, isolation and muon momentum assignment will be presented at the beginning of the exercise;
  • a hands-on session will follow, covering:
    • basics about how to access muon object information, such as identification and isolation variables;
    • examples of how to assess the performance of the reference muon identification criteria and of the isolation for signal and background muons in Monte Carlo;
    • highlights about how to deal with high level data/MC calibrations (a specific example will be given for the case of muon momentum scale/resolution corrections).

Prerequisites / synergies

The exercise uses miniAOD data, processed with FWLite in Python and ROOT.

The attendees are assumed to have basic knowledge of:

Some familiarity with Git (and GitHub) is helpful, but not strictly necessary.

This tutorial does not require any other short exercise as a prerequisite; however, synergies can be identified with the Tracking and Primary Vertices exercise and, in part, with the Particle Flow exercise.

Miscellaneous notes

The color scheme used for the exercise is the following:

  • Shell commands are embedded in a grey box, e.g.:
    ipython -i muonIdVariables.py
  • Output and screen printouts are embedded in a green box, e.g.:
    "[muonIdVariables.py] processed 10000 entries"
  • General code snippets are embedded in a red box, e.g.:
    print "\tmuon:", iMu, mu.charge(), mu.pt(), mu.phi(), mu.eta()
  • Questions to be answered by attendees, hints and suggestions will be highlighted in GREEN

1. Introduction

Slides

A brief talk outlining the basics of muon object reconstruction, identification, isolation and momentum assignment will be given before the hands-on session.

Setting up the working area

The first task you are requested to perform is to set up a working area and install the software required for the exercise:

ssh -Y  YOUR_USERNAME@gridui1.pi.infn.it

# if you are using BASH :
export SCRAM_ARCH=slc6_amd64_gcc700
export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch/
source $VO_CMS_SW_DIR/cmsset_default.sh 

# if you are using (T)CSH
setenv SCRAM_ARCH slc6_amd64_gcc700
setenv VO_CMS_SW_DIR /cvmfs/cms.cern.ch/
source $VO_CMS_SW_DIR/cmsset_default.sh                                                                                    
source $VO_CMS_SW_DIR/crab3/crab.sh

cmsrel CMSSW_10_2_10
cd CMSSW_10_2_10/src/

git clone git@github.com:battibass/DAS2019Pisa.git

cd DAS2019Pisa/MuonPOG/
cmsenv

cd plugins
make
cd ..

NOTE (code structure): what you have just downloaded is a set of skeleton macros with parts that you will need to fill in during the hands-on session. The programs are designed to be as simple as possible, with technicalities reduced to a minimum, so that you can put most of your effort into learning how to use muon objects.

NOTE (hints): right after the description of each step of the exercise, hints about how to get to the solution are provided. Unless you are already confident about how to proceed, you are warmly encouraged to have a look and get "inspiration" from them.

NOTE (solutions): solutions are available on GitHub to help attendees who get stuck on a specific exercise. However, please check out a solution only when necessary, or if the facilitator suggests doing so because of time constraints.

Accessing muon object information

The first analysis macro you are asked to inspect and run is called muonEventDump.py. It is a very simple piece of code, including a few examples, meant to get you familiar with the basic use of muons.

Please have a look at the macro file to get familiar with it, inspecting it with your favourite editor, e.g.:

vi muonEventDump.py
# or
emacs -nw muonEventDump.py

and then run it using ipython:

ipython -i muonEventDump.py
# (to exit from the ipython shell press CTRL+D)

[...]

***** Event 3
Muon collection size: 2
  Muon #: 0
	===== KINEMATICS:
	  charge: 1
	  pT: 48.9595069885
	  phi: -2.10730743408
	  eta: -2.28633999825
	===== ID VARIABLES:
	  segment compatibility: 0.99749982357
	===== ID SELECTIONS:
	  is TIGHT: True
	===== ISOLATION:
	  TRK based relIso: 0.0
	===== SIM INFO:
	  sim flavour: 13
  Muon #: 1
	===== KINEMATICS:
	  charge: -1
	  pT: 44.0470924377
	  phi: 1.30220592022
	  eta: -1.95945966244
	===== ID VARIABLES:
	  segment compatibility: 0.590126991272
	===== ID SELECTIONS:
	  is TIGHT: True
	===== ISOLATION:
	  TRK based relIso: 0.0166731218044
	===== SIM INFO:
	  sim flavour: 13

[...]

SUGGESTION: please notice how:

  • the loop on muons from a given event is performed (slimmedMuons is the name of the collection of PAT muons in miniAOD);
  • muon kinematical variables, such as transverse momentum, are accessed;
  • muon identification and isolation variables, such as the segment compatibility or the track-based isolation, are accessed;
  • selectors, such as the tight identification selector (reco::Muon::CutBasedIdTight), are accessed.
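
SUGGESTION: for reference, the core of such an FWLite event loop looks roughly like the sketch below. This is a minimal sketch: the input file name is a placeholder and the loop is truncated after a few events; muonEventDump.py contains the complete version used for the printout above.

import ROOT
ROOT.gSystem.Load("libFWCoreFWLite.so")
ROOT.gSystem.Load("libDataFormatsFWLite.so")
ROOT.FWLiteEnabler.enable()
from DataFormats.FWLite import Events, Handle

events = Events("file.root")  # placeholder: use the miniAOD file configured in the macro

muonHandle = Handle("std::vector<pat::Muon>")
muonLabel  = "slimmedMuons"   # collection of PAT muons in miniAOD

for iEv, event in enumerate(events):
    if iEv >= 10:  # stop after a few events for this sketch
        break
    event.getByLabel(muonLabel, muonHandle)
    muons = muonHandle.product()
    print "***** Event", iEv, "- muon collection size:", muons.size()
    for iMu, mu in enumerate(muons):
        # kinematics, an ID variable, a selector and the tracker-based relative isolation
        print "\tmuon:", iMu, mu.charge(), mu.pt(), mu.eta(), mu.phi()
        print "\t  segment compatibility:", mu.segmentCompatibility()
        print "\t  is TIGHT:", mu.passed(mu.CutBasedIdTight)
        print "\t  TRK based relIso:", mu.isolationR03().sumPt / mu.pt()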

SUGGESTION: if you want, you can play with the muonEventDump.py to experiment how to use the properties of the muon object that you will explore in the following exercises.

2. Muon identification and isolation in MC

During the CMS collision runs, muons can be produced by many processes. They can come from prompt decays of W and Z bosons, or they can originate among the decay products of light (u, d, s) or heavy (b, c) quarks, or of taus. In addition, punch-through from hadronic activity reaching the muon chambers can be misidentified as a muon. The muon reconstruction is designed to be highly efficient for muons of all origins and, in analyses, a convenient signal/background efficiency ratio is obtained by applying a set of cuts on:

  1. identification variables related to the "quality" of a reconstructed muon;
  2. the proximity of the muon track to the production vertex (see the sketch after this list);
  3. the muon isolation.
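
As an illustration of point 2 above, the vertex-proximity variables can be read from the muon best track once the leading primary vertex is retrieved from the offlineSlimmedPrimaryVertices collection. The sketch below only shows the access pattern, wrapped in a hypothetical helper function: the reference IDs apply such cuts internally, so the exercise macros rely on the selectors instead.

from DataFormats.FWLite import Handle

vtxHandle = Handle("std::vector<reco::Vertex>")

def vertexProximity(event, mu):
    """Hypothetical helper: return (|dxy|, |dz|) of the muon best track
    with respect to the leading primary vertex of the event."""
    event.getByLabel("offlineSlimmedPrimaryVertices", vtxHandle)
    pv = vtxHandle.product()[0]  # leading primary vertex
    dxy = abs(mu.muonBestTrack().dxy(pv.position()))  # transverse impact parameter [cm]
    dz  = abs(mu.muonBestTrack().dz(pv.position()))   # longitudinal distance [cm]
    return dxy, dz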

The aim of this part of the hands-on session is to get you familiar with accessing muon identification/isolation variables, as well as recommended muon selectors.

After the exercises are completed you should be able to:

  1. implement one of the many combinations of identification/isolation selections supported by the Muon POG in your analysis;
  2. understand the basics behind the tuning of the muon identification selectors and isolation working points;
  3. measure the performance of such selections in simulated samples.

Muon identification variables

The macro you are requested to use for this study is called muonIdAnalysis.py. The program accesses a sample of ttbar events decaying semileptonically and plots basic kinematical quantities (pT, eta, phi) of prompt muons (from W decays), as well as muons from heavy flavours (b, c quarks) and light flavour (u,d,s quarks) decays. This Monte-Carlo sample is produced specifically for muon studies, hence the simulation-level information about the origin of a muon is stored for all muons and accessible in miniAOD. Details about the matching of hits from reconstructed muons with sim-level information are documented here and here. In particular, the analysis macro makes use of the simFlavour() member of the PAT muon documented here.

What you are requested to do is:

  1. run the analysis macro and get familiar with the kinematics of the different "types" of muons;
  2. extend the macro to plot the distribution of an example of muon identification variable, called segment compatibility, for muons of all origins.

The segment compatibility variable checks the proximity of the inner track, extrapolated to the different stations of the muon spectrometer, to the segments reconstructed in the muon chambers. It is defined in the [0,1] range, with "1" representing the maximal degree of compatibility between the inner track and the segments. This is a rather complex variable, and understanding its exact internals goes beyond the scope of this exercise. If you are interested in more details, after the hands-on session you can check its exact definition in this Analysis Note.
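
To give an idea of what the filling logic for step 2 could look like, here is a minimal sketch. The histogram names and the grouping of simFlavour() values other than 13 are assumptions made for illustration only; check muonIdAnalysis.py and the simFlavour() documentation for the actual convention used in the macro.

        # inside the muon loop of muonIdAnalysis.py (histogram names are hypothetical)
        segComp = mu.segmentCompatibility()

        if mu.simFlavour() == 13:            # prompt muons, as in muonEventDump.py
            histos["hSegCompPrompt"].Fill(segComp)
        elif mu.simFlavour() in [4, 5]:      # assumed codes for heavy-flavour (c, b) origin
            histos["hSegCompHeavy"].Fill(segComp)
        elif mu.simFlavour() in [1, 2, 3]:   # assumed codes for light-flavour origin
            histos["hSegCompLight"].Fill(segComp)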

HINT (booking and plotting): the programs used for the exercises from now onwards consist of three parts: (i) a part where all the histograms, graphs and efficiencies used for the study are booked, (ii) a part where the actual logic of the analysis (i.e. the filling of the plots) happens, and (iii) a final part where the plots are printed to screen and saved as .png files. The code related to parts (i) and (iii) is already present in the macro, though it is commented out. To use it in the exercise you can simply uncomment it. This allows you to avoid wasting time on booking and plotting "technicalities" and to focus on implementing the actual program logic for part (ii).

HINT: the segment compatibility variable is printed in muonEventDump.py, simply look there to see how to access it.

        print "\t===== ID VARIABLES:"
        print "\t  segment compatibility:", mu.segmentCompatibility()

HINT: if the analysis takes too long, for testing purposes you can process a reduced number of events by changing the value of the MAX_EVENTS variable.

HINT: notice that, the first time you run the macro, a folder called id_analysis/ containing plots in .png format is created.

QUESTION: did you expect the kinematic distributions to be as they are? What about the segment compatibility variable?

SUGGESTION: if you want you can experiment with the MIN_PT variable (e.g. loosen it down to 10 GeV) to see how things change.

In case you struggle finding a solution to the previous step of the exercise, please inform the facilitator, then run:
git fetch
git checkout ex1_part1_solution

Performance of Medium and Tight muon selectors

Let's now assume that you are setting up an analysis that studies semileptonic ttbar decays and you need to choose the muon identification criteria for your study. You are using Particle Flow (PF) for the rest of the objects in the analysis, hence you want to do that also for muons. Inspecting the Run-2 muon reference selection TWIKI you see that the baseline identification criteria (ID) using PF are the loose, the medium and the tight muon IDs.

You are interested in selecting prompt muons from W decays and, after experimenting a little with the loose ID, you understand that you need better background rejection, especially for muons from light flavour decays. You want therefore to test how the medium and tight IDs perform with respect to the loose ID in your case. Let's also assume you are using muons with a pT of 15 GeV or more.

Using as denominator loose muons (medium and tight muons are a subset of loose ones), you are requested to:

1. extend the muonIdAnalysis.py macro to compute the identification efficiency of the medium and tight IDs as a function of eta and pT, and compare the results.

HINT: the tight ID selector is accessed in the muonEventDump.py code; starting from there, and looking at the muon selectors TWIKI, you should be able to understand how to retrieve information for the loose and medium IDs.

        print "\t===== ID SELECTIONS:"
        print "\t  is TIGHT:",   mu.passed(mu.CutBasedIdTight)

HINT: an easy way to compute efficiencies is using the ROOT TEfficiency class. As booking and plotting were already prepared for you, all you need to know is how to fill a TEfficiency object. The following example shows you how to do that.

        muIsLoose = mu.passed(mu.CutBasedIdLoose)
        muIsTight = mu.passed(mu.CutBasedIdTight)

        if muIsLoose :
            if mu.simFlavour() == 13 :
                effs["ePtPromptTight"].Fill(muIsTight, muPt)

QUESTION: what can you say about the performance of the medium and tight IDs on muons from light quarks and W decays, and what about muons from b quarks?

QUESTION: what happens if you lower the minimum pT cut of your analysis down to 10 GeV?

In case you struggle finding a solution to the previous step of the exercise, please inform the facilitator, then run:
git fetch
git checkout ex1_part2_solution

Muon isolation performance

Another handle to discriminate prompt muons from background ones is the computation of the muon isolation in a cone surrounding the muon track. Different cone radii can be used to compute the isolation (the cone may be fixed or variable in size), and the isolation can be computed as an absolute value or relative to the muon transverse momentum. Detector-based information, such as the vectorial sum of the momenta of the tracks reconstructed by the tracker, or the energy sums from the electromagnetic and hadron calorimeters, can be used for the computation; alternatively, one can use PF quantities such as the energy from charged and neutral hadrons or photons. Finally, to mitigate the effect of pile-up collisions overlapping with the cones where the isolation is computed, different strategies may be adopted (for example the delta beta correction method discussed in the slides presented at the beginning of the session).
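
For reference, the delta beta corrected PF relative isolation in a cone of radius 0.4 is typically computed from the PF isolation sums stored in the PAT muon as in the sketch below; you can use it for the cross-check requested later in this part of the exercise (the variable names are just illustrative).

# minimal sketch, assuming 'mu' is a pat::Muon taken from the event loop
iso = mu.pfIsolationR04()

chargedHad = iso.sumChargedHadronPt   # charged hadrons associated to the primary vertex
neutralHad = iso.sumNeutralHadronEt   # neutral hadrons
photons    = iso.sumPhotonEt          # photons
puCharged  = iso.sumPUPt              # charged particles from pile-up vertices

# delta beta correction: subtract half of the pile-up charged pT from the neutral component
pfRelIso = (chargedHad + max(0., neutralHad + photons - 0.5 * puCharged)) / mu.pt()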

As isolation is a continuous variable, the actual value of the cut used to define a muon as "isolated" is tuned by computing ROC curves. A ROC curve is made of a set of points showing, for multiple cut values of the isolation, its efficiency on signal and on background. One can decide to tune the isolation targeting a specific efficiency on signal, or a specific level of background rejection.

In this part of the exercise you are requested to evaluate the performance of the (delta beta corrected) PF relative isolation (PFiso), recommended by the Muon POG and described in this section of the muon reference selection TWIKI. To do that you will make use of the muonIsoAnalysis.py macro, which compares the performance of the isolation using Monte Carlo samples of Drell-Yan (DY) and QCD events.

As it is, the macro compares the kinematic distributions of muons in the two samples, as well as the distribution of the PFiso. It also computes the distribution of the number of reconstructed primary vertices. This quantity is a measure of the number of pile-up collisions and, as the isolation is in general pile-up dependent, it is an appropriate metric to test the isolation performance.

What you are requested to do is:

  1. cross-check that the implemented version of the PFiso corresponds to the one recommended by the Muon POG;
  2. compute the ROC curve of the PFiso in the samples used for the test;
  3. compute the efficiency of the Muon::PFIsoTight working point on DY and QCD as a function of the muon pT, eta and of the # of reconstructed primary vertices.

HINT: a helper class to compute ROCs, called Roc2DVec, was prepared for you and is already used in the code. You simply need to extend the number of working points used by Roc2DVec with respect to the two ([0.25, 0.40]) already used in the macro.

 rocs["pfRelIso"]  = isoUtils.Roc2DVec([0.25, 0.40],"QCD","DY","PF RelIso")

HINT: the Muon::PFIsoTight working point is defined in the muon selectors TWIKI. To compute efficiencies you simply need to use the isolation working point in the same way you use an ID selector.
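
For example, a sketch of the filling could look like the following; the efficiency names and the nVertices variable are hypothetical, so follow the pattern of the booking already prepared in muonIsoAnalysis.py.

        # inside the muon loop, for the DY (signal) sample; analogous code applies to QCD
        # nVertices: number of reconstructed primary vertices, assumed computed earlier in the macro
        muIsPFIsoTight = mu.passed(mu.PFIsoTight)

        effs["eNVtxIsoTightDY"].Fill(muIsPFIsoTight, nVertices)  # vs # of reconstructed PVs
        effs["ePtIsoTightDY"].Fill(muIsPFIsoTight, mu.pt())      # vs muon pT
        effs["eEtaIsoTightDY"].Fill(muIsPFIsoTight, mu.eta())    # vs muon eta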

QUESTION: what is the efficiency on signal and the level of background rejection you can obtain by using the Muon::PFIsoTight working point? Does this correspond to the value reported in the Muon POG TWIKI?

QUESTION: did you expect the dependence of the isolation efficiency on signal as a function of pT to be as it is? Why?

QUESTION: what can you say about the effect of the delta beta corrections on the signal and background efficiencies as a function of the # of reconstructed primary vertices?

In case you struggle finding a solution to the previous step of the exercise, please inform the facilitator, then run:
git fetch
git checkout ex2_solution

3. Highlights on the muon performance with data and use of high level calibrations

Beyond the studies based on simulated samples, the muon object performance is also assessed using real data from proton-proton collisions.

For example, muon reconstruction, identification and isolation efficiencies are measured using a Tag-and-Probe (TnP) method. The TnP method exploits events with dimuon decays of the J/Psi and Z resonances, collected by triggering on a single muon. The triggering muon is used as the tag, and rather tight ID and isolation criteria are applied to it to improve the purity of the selected sample. Rather mild (or no) cuts are instead applied to the other muon, which is used as the probe to evaluate reconstruction, ID and isolation efficiencies.

Efficiencies are computed with the same method in data and simulation, and a bin-by-bin ratio between the two is used to obtain calibrations, called scale factors, which are applied to the simulated samples to correct for differences with respect to data, hence improving the data/MC agreement. Due to time constraints we won't explore the TnP method in detail, assuming that the description in the introductory talk and in this TWIKI suffices. If interested, you can get more information from the publications and TWIKIs linked in the Appendix section.

The scale and resolution of the muon momentum measurement are estimated in data and simulation as well. The width of the J/Psi and Z resonance peaks, measured using reconstructed muons from data and simulation and compared with generator-level information, provides the input to compute calibration corrections. For example, the finite resolution of the muon momentum measurement has the effect of increasing a resonance peak width, while a shift of the peak can be caused by a scale bias. Parametrizing the muon scale and resolution as a function of pT and eta, and studying the dimuon invariant mass peak in different kinematic regions, one can derive corrections that are accurate up to approximately 200 GeV.

At higher energies, where muon chambers improve the muon momentum assignment, the corrections no longer hold, but the non resonant tail of the DY distribution (or muons from cosmic rays) can be used to perform similar measurements. More details about the methods used to compute muon resolution and scale are available in the documents linked in the Muon POG reference TWIKI for scale and resolution or in the publications linked in the Appendix section.

In the coming exercise you will get familiar with a tool to apply scale and resolution corrections.

A concrete example: using scale and calibration corrections from the Rochester method

The Rochester method is a workflow that exploits the J/Psi and Z resonances to produce:

  1. momentum scale corrections for muons from real data;
  2. momentum scale corrections for muons from simulated samples;
  3. smearing factors to make the resolution of muons from simulation match the one measured in data.

The so-called Rochester corrections are intended to be used in analyses with signatures where the bulk of the muon momentum spectrum lies below or around 200 GeV.

In this exercise you will apply scale corrections to a sample of muons from proton-proton collisions, enriched in dimuon decays from Z bosons. The macro to be used in this case is muonScaleAnalysis.py. The program selects pairs of muons passing the tight ID and PF isolation criteria and produces:

  1. kinematical distributions for the muons used in the study;
  2. the invariant mass distribution of the dimuon pairs;
  3. a profile plot showing the average of the dimuon mass as a function of the muon phi coordinate, for muons falling in a specific eta region of the detector, and separately for positive and negative muons.

The latter type of plot is sensitive to muon scale issues that might arise only in specific parts of the detector and may affect positive and negative muons differently.

This time, your task is to:

  1. get familiar with the plotted quantities;
  2. use the Rochester corrections to re-compute the invariant mass plot from point 2 above and compare the corrected and non-corrected distributions;
  3. use the Rochester corrections to re-compute the profile plots from point 3 above, both for negative and positive muons.

HINT: the Rochester corrections, accessible from the RoccoR module, are already loaded for you in the macro.

 rc = roccor.RoccoR("data/RoccoR2017.txt")

HINT: to use them you should call the kScaleDT(...) function, which takes as input the muon charge, pT, eta and phi and returns a correction factor to apply to the reconstructed muon pT.

                mu1Corr = rc.kScaleDT(mu1.charge(),mu1.pt(),mu1.eta(),mu1.phi())
                mu1CorrectedPt = mu1Corr * mu1.pt()

HINT: to compute the dimuon invariant mass you can use the ROOT TLorentzVector class as in the following example.

                mu1TkCorr = ROOT.TLorentzVector()
                mu1TkCorr.SetPtEtaPhiM(mu1.pt()*mu1Corr,mu1.eta(),mu1.phi(),0.106)

                mu2TkCorr = ROOT.TLorentzVector()
                mu2TkCorr.SetPtEtaPhiM(mu2.pt()*mu2Corr,mu2.eta(),mu2.phi(),0.106)

                massCorr = (mu1TkCorr + mu2TkCorr).M()
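
HINT: once the corrected mass is available, filling the corrected histogram and the charge-separated profiles could look roughly as in the sketch below; hMassCorr is the name used in the plotting snippet further down, while the profile names are hypothetical, so check the booking already present in muonScaleAnalysis.py.

                # corrected dimuon invariant mass (histogram name as used in the plotting part)
                histos["hMassCorr"].Fill(massCorr)

                # profiles of the corrected dimuon mass vs muon phi, separately per charge
                # (profile names are hypothetical; the macro also restricts to a given eta region)
                if mu1.charge() > 0:
                    profiles["pMassVsPhiPlusCorr"].Fill(mu1.phi(), massCorr)
                else:
                    profiles["pMassVsPhiMinusCorr"].Fill(mu1.phi(), massCorr)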

HINT: to plot the corrected and uncorrected invariant mass distributions on the same canvas, you can simply extend the relevant line in the plotting part of the code, as described below.

# from:
histosMass = [histos["hMassPrompt"]]
# to:
histosMass = [histos["hMassPrompt"], histos["hMassCorr"]]

QUESTION: what happens to the invariant mass plot after applying Rochester corrections? Can you already say that this is an improvement?

QUESTION: what happens to the profile distributions after applying Rochester corrections? Does it make you more confident that the scale corrections improve the measurement?

QUESTION: knowing that the corrections are applied to the muon curvature, and that additive and multiplicative corrections exist, can you guess which type of correction has the dominant impact in fixing the modulation of the dimuon mass as a function of muon phi in the plots from point 3?

In case you struggle finding a solution to the previous step of the exercise, please inform the facilitator, then run:
git fetch
git checkout ex3_solution

Appendix

Publications on muon reconstruction with LHC collision data

Additional sources of information

-- CarloBattilana - 2019-01-11
