
Low μ analysis with PPS

Overview

This page contains analysis instructions for the low-μ run data collected in 2017.


Useful Links

  • W/Z in low pileup runs (5 and 13 TeV): link1 link2
  • top in low pileup runs (5 TeV only): link

Data and MC samples

| Sample | Xsec [pb] |
| /WpToMuNu_TuneCP5_13TeV-powheg-pythia8/RunIIFall17MiniAODv2-fixECALGT_LowPU_94X_mc2017_realistic_v10For2017H_v2-v1/MINIAODSIM | 11303.236 |
| /DYJetsToLL_M-50_TuneCP5_13TeV-amcatnloFXFX-pythia8/RunIIFall17MiniAODv2-fixECALGT_LowPU_94X_mc2017_realistic_v10For2017H_v2_ext1-v1/MINIAODSIM | 6075.6952 |
| /TTToHadronic_TuneCP5_13TeV-powheg-pythia8/RunIIFall17MiniAODv2-fixECALGT_LowPU_94X_mc2017_realistic_v10For2017H_v2_ext1-v1/MINIAODSIM | 313.93427 |
| /TTToSemiLeptonic_TuneCP5_13TeV-powheg-pythia8/RunIIFall17MiniAODv2-fixECALGT_LowPU_94X_mc2017_realistic_v10For2017H_v2_ext1-v1/MINIAODSIM | 299.57212 |
| /TTTo2L2Nu_TuneCP5_13TeV-powheg-pythia8/RunIIFall17MiniAODv2-fixECALGT_LowPU_94X_mc2017_realistic_v10For2017H_v2_ext1-v1/MINIAODSIM | 72.094191 |
| /WmToENu_TuneCP5_13TeV-powheg-pythia8/RunIIFall17MiniAODv2-fixECALGT_LowPU_94X_mc2017_realistic_v10For2017H_v2-v1/MINIAODSIM | 8391.4028 |
| /WmToMuNu_TuneCP5_13TeV-powheg-pythia8/RunIIFall17MiniAODv2-fixECALGT_LowPU_94X_mc2017_realistic_v10For2017H_v2-v1/MINIAODSIM | 8391.4028 |
| /WpToENu_TuneCP5_13TeV-powheg-pythia8/RunIIFall17MiniAODv2-fixECALGT_LowPU_94X_mc2017_realistic_v10For2017H_v2-v1/MINIAODSIM | 11303.236 |
Info: The cross section is obtained from the file with Runs->Scan("GenRunInfoProduct_generator__SIM.obj.crossSection()"). Open files using TFile *f = TFile::Open("root://cmsxrootd.fnal.gov///store/foo/bar").
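
The same lookup can be done from Python; a minimal PyROOT sketch (the file path is the placeholder from above, replace it with a real LFN from DAS):

import ROOT
# Open the MINIAODSIM file over xrootd (placeholder path)
f = ROOT.TFile.Open("root://cmsxrootd.fnal.gov///store/foo/bar")
runs = f.Get("Runs")
# Same branch expression as in the ROOT command above
runs.Scan("GenRunInfoProduct_generator__SIM.obj.crossSection()")
f.Close()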

2017E
/FSQJet1/Run2017E-09Aug2019_UL2017-v1/MINIAOD
/FSQJet2/Run2017E-09Aug2019_UL2017-v1/MINIAOD
2017H
/SingleMuon/Run2017H-09Aug2019_UL2017_LowPU-v1/MINIAOD
/HighEGJet/Run2017H-09Aug2019_UL2017_LowPU-v1/MINIAOD
/DoubleMuon/Run2017H-09Aug2019_UL2017_LowPU-v1/MINIAOD
/FSQJet1/Run2017H-09Aug2019_UL2017_LowPU-v1/MINIAOD
/FSQJet2/Run2017H-09Aug2019_UL2017_LowPU-v1/MINIAOD
Total luminosity: 211.85 /pb

Analysis

HLT trigger development

The development is done with the 103X_dataRun2_HLT_v1 global tag and CMSSW_11_3_0.

On lxplus, launch the HLT ConfDB GUI (a Java application):

git clone https://github.com/cms-sw/hlt-confdb.git
./hlt-confdb/start
Select HLTDEVv2 and cms100khz to connect to ConfDB, then follow the instructions in https://indico.cern.ch/event/1037579/

The following procedure can be used to identify the best unprescaled trigger.

Obtaining pileup rate

The most straightforward way to determine the average number of collisions per bunch crossing is to use the instantaneous luminosity (see PileupJSONFileforData for more details).

If a single bunch has an instantaneous luminosity Linst, then the pileup is given by the formula μ = Linst σinel / frev, where σinel=69.2 mb is the total pp inelastic cross section and frev is the LHC orbit frequency of 11246 Hz (necessary to convert from the instantaneous luminosity, which is a per-time quantity, to a per-collision quantity). This quantity can be computed on a per-lumi section basis (where a lumi section is the fundamental unit of CMS luminosity calculation, about 23.3 seconds long).
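
As a quick numerical illustration of this conversion (the per-bunch luminosity below is an assumed example value, not a measurement):

# Worked example of mu = Linst * sigma_inel / frev (assumed Linst value)
sigma_inel = 69200.      # inelastic pp cross section: 69.2 mb expressed in microbarn
f_rev = 11246.           # LHC orbit frequency [Hz]
L_inst = 0.5             # hypothetical per-bunch instantaneous luminosity [Hz/ub]
mu = L_inst * sigma_inel / f_rev
print('mu = %.2f' % mu)  # ~3.1 for this assumed luminosity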

The average instantaneous luminosity for 2017 data is stored in /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/PileUp/pileup_latest.txt. To extract the luminosity values you can use the following Python script:

import json, numpy
# Per-lumi-section luminosity information for the 2017 data
with open('/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/PileUp/pileup_latest.txt') as f:
    data = json.load(f)
run = '306936'
# Index 3 of each lumi-section record holds the luminosity value used for the pileup estimate
lumis = numpy.array([ls[3] for ls in data[run] if float(ls[3]) > 0])
print('maximal pileup = ' + str(lumis.max()*69200.) +
      '\nminimal pileup = ' + str(lumis.min()*69200.) +
      '\naverage pileup = ' + str(lumis.mean()*69200.))

Triggers

How to find an unprescaled trigger:

The GRL is available here: /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt
To get the list of runs, execute: printJSON.py /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt
To get the list of triggers per run, go to https://cmsoms.cern.ch/cms/runs, enter a run number, and click on the HLT key option.
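
Alternatively, the run list can be printed directly from the JSON with a few lines of Python (a sketch equivalent in spirit to printJSON.py):

import json
# Path of the low-PU GRL quoted above
grl_file = '/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt'
with open(grl_file) as f:
    runs = sorted(json.load(f).keys(), key=int)
print('\n'.join(runs))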

To get the actual luminosity per trigger use brilcalc:

export PATH=$HOME/.local/bin:/cvmfs/cms-bril.cern.ch/brilconda/bin:$PATH
brilcalc lumi --normtag /cvmfs/cms-bril.cern.ch/cms-lumi-pog/Normtags/normtag_PHYSICS.json  -c /cvmfs/cms.cern.ch/SITECONF/T0_CH_CERN/JobConfig/site-local-config.xml -i /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt -u /fb --hltpath "HLT_HIMu17_v1"

The total luminosity per era can be obtained with:
brilcalc lumi -b "STABLE BEAMS" -i /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt -c /cvmfs/cms.cern.ch/SITECONF/T0_CH_CERN/JobConfig/site-local-config.xml -u /fb
The ratio between the total luminosity and the luminosity recorded by a given trigger gives its effective prescale.
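
For illustration, the effective prescale is simply the ratio of the two brilcalc numbers (values below are placeholders, not measurements):

# Hypothetical numbers: total era luminosity and luminosity recorded by one trigger
total_lumi   = 0.21185   # /fb (total for 2017H, see above)
trigger_lumi = 0.020     # /fb, hypothetical value returned by brilcalc --hltpath
print('effective prescale ~ %.1f' % (total_lumi / trigger_lumi))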

The unprescaled triggers used in the analysis are: HLT_HIEle15_WPLoose_Gsf_v1 and HLT_HIMu15_v1

Extra information about calculations of collision rates and pileup interactions is available in PileupJSONFileforData.

Good Run Lists:

The luminosity certified as good for physics analysis is contained in the golden JSON files (a minimal usage sketch follows the list):
  • 5 TeV: /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/5TeV/ReReco/Cert_306546-306826_5TeV_EOY2017ReReco_Collisions17_JSON.txt
  • 13 TeV: /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt
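
A minimal sketch of how such a JSON is applied offline, checking whether a given (run, lumi section) pair is certified as good (the run and lumi-section numbers are arbitrary examples):

import json

def is_good(run, ls, grl):
    # The golden JSON maps run numbers to lists of [first, last] good lumi-section ranges
    return any(lo <= ls <= hi for lo, hi in grl.get(str(run), []))

with open('/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt') as f:
    grl = json.load(f)
print(is_good(306936, 10, grl))   # True if this lumi section is in the GRL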

Ntuple production

Production is performed using the TopLJets2015 analysis package, which uses the PPS reconstruction data.

To set up the package, follow the instructions in installation-instructions.

The analysis is performed on miniAODs, while the proton reconstruction is obtained from the RAW data. The association between AODs and miniAODs is done in the getListAOD() function. Before that, it is good to test that the proton reconstruction algorithm is working. To test the algorithm on a RAW data file, execute the following script:

cmsRun 2017H_W_mass_cfg.py

The scripts are attached to this twiki; rename them from XXX.py.txt to XXX.py and change the input filename to one found in DAS (for example, for dataset=/SingleMuon/Run2017H-v1/RAW).
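
To pick an input file, one can query DAS from the command line; a small Python wrapper around dasgoclient (assuming dasgoclient is available in the environment, as in any CMSSW setup):

import subprocess
# List a few RAW files of the dataset quoted above
query = 'file dataset=/SingleMuon/Run2017H-v1/RAW'
files = subprocess.check_output(['dasgoclient', '--query', query]).decode().split()
print('\n'.join(files[:5]))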

Local tests

NanoAOD production

To test the ntuplizer locally, execute:

cmsRun ${CMSSW_BASE}/src/TopLJets2015/TopAnalysis/test/runMiniAnalyzer_cfg.py lumiJson=/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt inputFile=/store/data/Run2017H/SingleMuon/MINIAOD/09Aug2019_UL2017_LowPU-v1/270000/FA076453-6CD9-2B4E-8B1A-6F081C295054.root runOnData=True era=era2017_H runWithAOD=True ListVars=lowmu_data redoProtonRecoFromRAW=True

The option runWithAOD=True will process the data in two tiers (miniAOD together with the corresponding AOD), while redoProtonRecoFromRAW=True will force the code to re-run the proton reconstruction (re-miniAOD setup). Running with AOD gives access to the strip-track information needed to obtain the fraction of events with 0, 1 or ≥2 protons.

The code will load the trigger list from miniAnalyzer_cfi.py and execute MiniAnalyzer.cc (similar to an EventLoop in ATLAS). For each event the MiniAnalyzer::analyze() function is called and the object selection is executed.

The output file will be MiniEvents.root.

To run some local tests on the output file, you can execute the following lines in ROOT (after root -l MiniEvents.root):

  • List relevant triggers:
TH1F *tr = (TH1F*)analysis->Get("triggerList"); // "analysis" is the TDirectory inside MiniEvents.root
for(int i=0; i<tr->GetXaxis()->GetNbins(); i++){
  float n = tr->GetBinContent(i+1);              // keep only triggers that actually fired
  if(n) cout << tr->GetXaxis()->GetBinLabel(i+1) << " " << n << endl; }
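
The same check can be done in PyROOT (same file layout, i.e. the triggerList histogram inside the analysis directory):

import ROOT
f = ROOT.TFile.Open('MiniEvents.root')
tr = f.Get('analysis/triggerList')
# Print only the trigger bins with non-zero counts
for i in range(1, tr.GetNbinsX() + 1):
    n = tr.GetBinContent(i)
    if n:
        print(tr.GetXaxis().GetBinLabel(i), n)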

Ntuple production

The next step is to analyze the MiniEvents.root file; this is done by executing the following command:

python $CMSSW_BASE/src/TopLJets2015/TopAnalysis/scripts/runLocalAnalysis.py -i MiniEvents.root --tag Data13TeV_2017H_SingleMuon_v2 -o ntuple.root --njobs 1 -q local --era era2017 -m RunLowMu2020

The LowMu2020 method is attached (LINK TO THE FILE) and should be copied into the TopLJets2015 package.

Submit to grid

After the local tests are successful, one can process all the data on the grid. First, in submitToGrid.py, replace:

config_file.write('config.JobType.inputFiles = [\'{0}\',\'{1}\',\'muoncorr_db.txt\',\'jecUncSources.txt\',\'qg_db.db\',\'ctpps_db.db\']\n'.format(jecDB,jerDB))
...
githash=commands.getstatusoutput('git log --pretty=format:\'%h\' -n 1')[1]
with:
config_file.write('config.JobType.inputFiles = [\'{0}\',\'{1}\',\'muoncorr_db.txt\',\'jecUncSources.txt\',\'qg_db.db\']\n'.format(jecDB,jerDB))
...
githash=commands.getstatusoutput('cd ${CMSSW_BASE}/src/TopLJets2015 && git log --pretty=format:\'%h\' -n 1')[1]

Now the jobs can be submitted by executing the following lines:

python ${CMSSW_BASE}/src/TopLJets2015/TopAnalysis/scripts/submitToGrid.py -j ${CMSSW_BASE}/src/TopLJets2015/TopAnalysis/data/era2017/samples.json \
--lumi /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt \
-c ${CMSSW_BASE}/src/TopLJets2015/TopAnalysis/test/runMiniAnalyzer_cfg.py --only 2017H --addParents --rawParents --lfn /store/group/cmst3/user/mpitt/LowMu/nanoAOD
source /cvmfs/cms.cern.ch/crab3/crab.sh
export SCRAM_ARCH=slc7_amd64_gcc700
crab submit -c grid/Data13TeV_2017H_SingleMuon_v2_cfg.py
#to track the progress:
crab status -d grid/Data13TeV_2017H_SingleMuon_v2_cfg

Submit to condor

Execute the following code:

python $CMSSW_BASE/src/TopLJets2015/TopAnalysis/scripts/submitLocalNtuplizer.py --dryRun  --addParent --proxy \
--lumiMask /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Final/Cert_306896-307082_13TeV_PromptReco_Collisions17_JSON_LowPU.txt \
--jobTag Data13TeV_2017H_SingleMuon_v2 --dataset /SingleMuon/Run2017H-17Nov2017-v2/MINIAOD --output /store/group/cmst3/user/mpitt/LowMu \
--extraOpts runOnData=True,era=era2017,applyFilt=False,globalTag=106X_dataRun2_v24

The output will appear in $CMSSW_BASE/FarmLocalNtuple/. Now submit the jobs:

condor_submit $CMSSW_BASE/FarmLocalNtuple/condor_Data13TeV_2017H_SingleMuon_v2.sub

Help: If something is not working, consult these slides or LxbatchHTCondor.

Analysis

Diffractive W and Z bosons

The primary goal of the low-μ analysis is to study diffractive production of various SM processes. Since the mass of the EW bosons is below the PPS mass acceptance for events with two tagged protons, we need to study single-tagged events, using the following relation between the central system and the fractional momentum loss of the proton: ξ± = m/√s · e^(±Y), where the minimal rapidity of the central system is bounded by the PPS acceptance: Y_MIN = ln(ξ_MIN·√s/m) (for example, ξ_MIN = 0.02 and m ≈ m_W give Y_MIN ≈ 1.17).

In LowMu2020.cc we define boson_x1,2, which are the values of ξ reconstructed from the central system.
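
A short Python illustration of these relations (the central-system mass and rapidity below are placeholder values):

import math
sqrt_s = 13000.                      # GeV
m, Y = 80.4, 1.5                     # hypothetical central-system mass [GeV] and rapidity
xi_plus  = m / sqrt_s * math.exp(+Y) # proton on the +z side
xi_minus = m / sqrt_s * math.exp(-Y) # proton on the -z side
Y_min = math.log(0.02 * sqrt_s / m)  # minimal |Y| for xi > 0.02 (~1.17 for m ~ m_W)
print(xi_plus, xi_minus, Y_min)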

Plots

execute this and that

-- MichaelPitt - 2019-12-07

Topic attachments
| Attachment | Size | Date | Who | Comment |
| 2017H_W_mass_cfg.py.txt | 0.8 K | 2019-12-09 | MichaelPitt | Test for PPS reconstruction code (from Jan K.) |
| base.py.txt | 2.0 K | 2019-12-09 | MichaelPitt | Test for PPS reconstruction code (from Jan K.) |
| conditions.py.txt | 5.1 K | 2019-12-09 | MichaelPitt | Test for PPS reconstruction code (from Jan K.) |
| protonRecoLowMu_cfg.py.txt | 6.9 K | 2019-12-10 | MichaelPitt | |