Multiplicity dependence of JPsi and Psi2S in pPb 8.16 TeV data

Manpower:

Jinjing Guo, Andre Stahl, Shuai Yang, Wei Li

DOCUMENTATION

  • CADI Entry: [[][HIN-19-XXX]]
  • Analysis Note: AN-19-120
  • Paper Twiki: HIN-19-XXX

LINKS

  • CMSSW analyzer repository: link
  • Analysis repository: link

MEETINGS

Questionnaires

Muon

  1. What dataset(s) is (are) used?
  2. What muon selection is used for each dataset?
  3. How are the efficiencies obtained? (e.g. taken from central numbers or recomputed individually for the analysis)
    • The efficiencies are recomputed for the analysis using prompt and non-prompt JPsi and Psi2S MC samples, and are defined in bins of charmonium pT and rapidity in the laboratory frame.
    • We don't have plots comparing with central results.
  4. How are the efficiency numbers used? Are they applied as scale factors to MC? In bins of eta/pt, ...?
    • The efficiencies are applied in bins of charmonium pT and rapidity. They are used to correct the raw prompt and non-prompt charmonium yields extracted from the fits (see the schematic formula after this list).
  5. How exactly are the uncertainties from muon efficiencies (MC and T&P; statistical and systematic) obtained and propagated in the analysis?
    • The statistical uncertainty of the efficiency due to the limited MC sample size is extracted using the TEfficiency class.
  6. Anything else worth mentioning regarding muon use?
    • No
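Schematically (our notation; the detailed treatment of the prompt and non-prompt components is documented in the AN), the corrected yield in a given $(p_{T}, y, N_{trk})$ bin is $N_{corr} = N_{raw} / \varepsilon(p_{T}, y)$, where $\varepsilon(p_{T}, y)$ is the efficiency obtained from the MC samples described above.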

TODO Analysis

  • Google drive [[][todo list]].

IDEAS for Systematic Variations

Analysis:

Charmonium Binning:

Nominal:

  • 4 rapidity bins (in laboratory frame) : [ -2.4 , -1.4 , 0.0 , 1.4 , 2.4 ]
  • 4 pT bins : [ 0.0 , 3.0 , 6.5 , 9.0 , Infinity ]
  • 7 track multiplicity bins : [ 0 , 35 , 60 , 90 , 120 , 155 , 190 , Infinity ]

Binning used in each dataset:

  • Double Muon dataset (PADoubleMuon):
    • 4 rapidity bins: [ -2.4 , -1.4 , 0.0 , 1.4 , 2.4 ]
    • 3 pT bins: [ 3.0 , 6.5 , 9.0 , Infinity ]
    • 7 track multiplicity bins : [ 0 , 35 , 60 , 90 , 120 , 155 , 190 , Infinity ]

  • High Multiplicity dataset (PAHighMultiplicityX):
    • 2 rapidity bins: [ -2.4 , -1.4 ] and [ 1.4 , 2.4 ]
    • 1 pT bin: [ 0.0 , 3.0 ]
    • 2 track multiplicity bins : [ 155 , 190 , Infinity ]

  • Minimum Bias dataset (PAMinimumBiasX):
    • 2 rapidity bins: [ -2.4 , -1.4 ] and [ 1.4 , 2.4 ]
    • 1 pT bin: [ 0.0 , 3.0 ]
    • 5 track multiplicity bins : [ 0 , 35 , 60 , 90 , 120 , 155 ]

Event Selection:

  • Event Filters:
| Flag Name | Description | Comments |
| hfCoincFilter | Coincidence filter on HF energy | At least one tower with energy > 3 GeV on both HF+ and HF- |
| primaryVertexFilterPA | Requires at least one good primary vertex | !isFake and abs(z) <= 25 and abs(rho) <= 2 and tracksSize >= 2 |
| NoScraping | Filters out beam-scraping events | numtrack = 10 and thresh = 0.25 |
| olvFilter_pPb8TeV_dz1p0 | Overlapping-vertex filter to remove pileup events | dzTolerance = 1 |

Note: The four filters are included in the nominal event selection, which can be applied using: evtSel[0]==true

  • Trigger Selection:
| Dataset | Cut | Description | Comments |
| PADoubleMuon | trigHLT[0]==True | Events firing HLT_PAL1DoubleMuOpen_v | Trigger index is 0 |
| PAHighMultiplicityX | trigHLT[4]==True | Events firing HLT_PAFullTracks_Multiplicity185_partX_v | Trigger index is 4 |
| PAMinimumBiasX | trigHLT[6]==True | Events firing HLT_PAL1MinimumBiasHF_OR_SinglePixelTrack_partX_v | Trigger index is 6 |

Muon Selection:

  • Kinematic Cut:
    • PADoubleMuon dataset:
| Muon pseudorapidity range | Muon transverse momentum threshold |
| $0.0 \leq abs(\eta) < 1.2$ | $p_{T} \geq 3.3$ GeV/c |
| $1.2 \leq abs(\eta) < 2.1$ | $p_{T} \geq 3.93 - 1.11 \cdot abs(\eta)$ GeV/c |
| $2.1 \leq abs(\eta) < 2.4$ | $p_{T} \geq 1.3$ GeV/c |

    • PAHighMultiplicityX and PAMinimumBiasX datasets:
| Muon pseudorapidity range | Muon transverse momentum threshold |
| $0.0 \leq abs(\eta) < 0.8$ | $p_{T} \geq 3.3$ GeV/c |
| $0.8 \leq abs(\eta) < 1.5$ | $p_{T} \geq 5.81 - 3.14 \cdot abs(\eta)$ GeV/c |
| $1.5 \leq abs(\eta) < 2.4$ | $p_{T} \geq 1.89 - 0.526 \cdot abs(\eta)$ GeV/c and $p_{T} \geq 0.8$ GeV/c |

  • Quality Cut: Run2 Muon POG Soft ID

| Cut | Description |
| OneStMuon1 == True | The candidate is reconstructed as a tracker muon and matched to at least one muon station |
| nTrackerLayerD1 > 5 | Number of tracker layers with hits included in the tracker-track fit > 5 |
| nPixelLayerD1 > 0 | Number of pixel layers with hits included in the tracker-track fit > 0 |
| HighPurityDaugther1 == True | The inner track passes the high-purity selection |
| -0.3 < dXYD1 < 0.3 | Transverse impact parameter of the inner track w.r.t. the primary vertex, abs(dxy) < 0.3 cm |
| -20.0 < dZD1 < 20.0 | Longitudinal distance of the inner track w.r.t. the primary vertex, abs(dz) < 20.0 cm |

  • Trigger Cut:
    • PADoubleMuon dataset:
| Cut | Description | Comments |
| trigMuon1[0][iCand]==1 | Muon matched to the trigger HLT_PAL1DoubleMuOpen_v | Trigger index is 0 |

Corrections:

Vertex: Z-Component

Reweight MC using the following function:

TF1* fWeight = new TF1("fWeight", "gaus(0)/(gaus(3))", -30., 30.); // ratio of two Gaussians: parameters 0-2 define the numerator (target distribution), 3-5 the denominator (MC distribution)
fWeight->SetParameters(0.0207, 1.5839, 4.8070, 0.0176, 1.5073, 5.6747);
double w = fWeight->Eval(vZ); // vZ is the Z-component of the MC vertex in a given event

Using the data pixel barycenter: TVector3(-0.0264, -0.0784, -0.5211); // Obtained from James Castle

Computing Luminosity


STEP 1 : Setup Brilcalc

On lxplus, do:

export PATH=$HOME/.local/bin:/afs/cern.ch/cms/lumi/brilconda-1.1.7/bin:$PATH
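You can quickly check that the setup worked (assuming brilcalc exposes the usual --version flag):

which brilcalc && brilcalc --version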

STEP 2 : Get the json files for pA runs

  • Official Certified Json:

The official json files can be found in: https://twiki.cern.ch/twiki/bin/view/CMS/PdmV2016Analysis

NOTE: The naming convention has been reversed in the PdmV2016Analysis twiki page

wget https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions16/13TeV/HI/Cert_285952-286496_HI8TeV_PromptReco_Pbp_Collisions16_JSON_NoL1T_MuonPhys.txt  # For pPb
wget https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions16/13TeV/HI/Cert_285479-285832_HI8TeV_PromptReco_pPb_Collisions16_JSON_NoL1T_MuonPhys.txt  # For Pbp

  • Crab Production Json:

wget https://raw.githubusercontent.com/stahlleiton/VertexCompositeAnalysis/8_0_X/VertexCompositeProducer/test/JSON/CRAB_PADoubleMuon_285952-286496_HI8TeV_PromptReco_Pbp_Collisions16_JSON.json # For pPb
wget https://raw.githubusercontent.com/stahlleiton/VertexCompositeAnalysis/8_0_X/VertexCompositeProducer/test/JSON/CRAB_PADoubleMuon_285479-285832_HI8TeV_PromptReco_pPb_Collisions16_JSON.json # For Pbp

STEP 3 : Run brilcalc to compute luminosity

The official information on how to compute luminosity is explained here : https://twiki.cern.ch/twiki/bin/view/CMS/TWikiLUM

You can compute the luminosity of a json file using the following command:

brilcalc lumi --normtag /cvmfs/cms-bril.cern.ch/cms-lumi-pog/Normtags/normtag_PHYSICS.json -u /nb -i JSON_FILE_NAME.json --hltpath "HLT_PATH_NAME"
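For example, for the double-muon trigger and the CRAB JSON of the p-Pb period downloaded in STEP 2 (the wildcard matches all versions of the trigger path; this should correspond to the PADoubleMuon p-Pb entry in the table below):

brilcalc lumi --normtag /cvmfs/cms-bril.cern.ch/cms-lumi-pog/Normtags/normtag_PHYSICS.json -u /nb \
  -i CRAB_PADoubleMuon_285952-286496_HI8TeV_PromptReco_Pbp_Collisions16_JSON.json \
  --hltpath "HLT_PAL1DoubleMuOpen_v*"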


In our case, using the CRAB json files, the results are:

| Index | Trigger | Dataset | Lumi p-Pb [/nb] | Lumi Pb-p [/nb] | Total Lumi [/nb] |
| 0 | HLT_PAL1DoubleMuOpen_v | PADoubleMuon | 110.78 | 62.64 | 173.42 |
| 4 | HLT_PAFullTracks_Multiplicity185_part | PAHighMultiplicity1-6 | 28.04 | 65.81 | 93.85 |
| 6 | HLT_PAL1MinimumBiasHF_OR_SinglePixelTrack_part | PAMinimumBias1-20 | 1.09 | 2.95 | 4.04 |

Computing PileUp


STEP 1 : Setup Brilcalc

You will first need to install brilcalc ( see http://cms-service-lumi.web.cern.ch/cms-service-lumi/brilwsdoc.html )

The easiest way is to log in to lxplus and do:

export PATH=$HOME/.local/bin:/afs/cern.ch/cms/lumi/brilconda-1.1.7/bin:$PATH

STEP 2 : Get the json files for pA runs

  • Official Certified Json:

The official json files can be found in: https://twiki.cern.ch/twiki/bin/view/CMS/PdmV2016Analysis

NOTE: The naming convention has been reversed in the PdmV2016Analysis twiki page

wget https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions16/13TeV/HI/Cert_285952-286496_HI8TeV_PromptReco_Pbp_Collisions16_JSON_NoL1T_MuonPhys.txt  # For pPb
wget https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions16/13TeV/HI/Cert_285479-285832_HI8TeV_PromptReco_pPb_Collisions16_JSON_NoL1T_MuonPhys.txt  # For Pbp

  • Crab Production Json:

wget https://raw.githubusercontent.com/stahlleiton/VertexCompositeAnalysis/8_0_X/VertexCompositeProducer/test/JSON/CRAB_PADoubleMuon_285952-286496_HI8TeV_PromptReco_Pbp_Collisions16_JSON.json # For pPb
wget https://raw.githubusercontent.com/stahlleiton/VertexCompositeAnalysis/8_0_X/VertexCompositeProducer/test/JSON/CRAB_PADoubleMuon_285479-285832_HI8TeV_PromptReco_pPb_Collisions16_JSON.json # For Pbp

STEP 3 : Create pileup CSV file

brilcalc lumi --xing --normtag /afs/cern.ch/user/l/lumipro/public/Normtags/normtag_HI2016.json -i JSON_FILE_NAME.json -o pileup.csv
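For example, for the certified JSON of the p-Pb period from STEP 2 (same output file name as used in STEP 4 below):

brilcalc lumi --xing --normtag /afs/cern.ch/user/l/lumipro/public/Normtags/normtag_HI2016.json \
  -i Cert_285952-286496_HI8TeV_PromptReco_Pbp_Collisions16_JSON_NoL1T_MuonPhys.txt -o pileup.csv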

STEP 4 : Estimate the PileUp

First create a CMSSW working area (cmsrel CMSSW_X_Y_Z, cd CMSSW_X_Y_Z/src, cmsenv), then run the following command:

./estimatePileup_makeJSON_2015.py --csvInput pileup.csv pileup_JSON.txt

STEP 5 : Compute the total integrated luminosity

The official information on how to compute luminosity is explained here : https://twiki.cern.ch/twiki/bin/view/CMS/TWikiLUM

You can compute the luminosity of a json file using the following command:

brilcalc lumi --normtag /afs/cern.ch/user/l/lumipro/public/Normtags/normtag_HI2016.json -i LUMI_FILE_NAME.json --hltpath "HLT_PATH_NAME"

STEP 6 : Run pileupCalc

xsection=14393600 # in microbarn: 69200 (pp minimum-bias cross section) x A = 208

pileupCalc.py -i ./LUMI_FILE_NAME.json --inputLumiJSON ./pileup_JSON.txt --calcMode true --minBiasXsec $xsection --maxPileupBin 5 --numPileupBins 100 ./MyDataPileupHistogram.root
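For example, for the p-Pb period, using the certified JSON from STEP 2 and the pileup_JSON.txt created in STEP 4 (the output histogram name is ours):

pileupCalc.py -i ./Cert_285952-286496_HI8TeV_PromptReco_Pbp_Collisions16_JSON_NoL1T_MuonPhys.txt \
  --inputLumiJSON ./pileup_JSON.txt --calcMode true \
  --minBiasXsec $xsection --maxPileupBin 5 --numPileupBins 100 ./PileupHistogram_pPb.root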


In our case, the results are (using the official json files):

  • p-Pb (2nd run): the official JSON gives an average pileup of 1.2558
  • Pb-p (1st run): the official JSON gives an average pileup of 1.0757

Fitter: Instructions to Set Up

Analyzer: Instructions to Set Up CMSSW

In order to set up your CMSSW area to use our analyzer, please follow the instructions below:

cmsrel CMSSW_8_0_32
cd CMSSW_8_0_32/src
cmsenv

git clone https://github.com/davidlw/VertexCompositeAnalysis.git -b 8_0_X --single-branch
scram b -j 20

Datasets:

To open the following files in ROOT:

  • From CERN prepend root://eoscms//eos/cms to the pathname. For example:
    • root://eoscms//eos/cms/store/...../FILENAME.root
  • From outside CERN prepend root://cms-xrd-global.cern.ch/ to the pathname. For example:
    • root://cms-xrd-global.cern.ch//store/...../FILENAME.root
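For example, one of the data trees listed in the table below can be opened interactively from outside CERN with:

root -l root://cms-xrd-global.cern.ch//store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_PADoubleMuon_PARun2016C_DiMuMassMin2.root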

2016 pPb Data

  • LumiMask :
    • 1st Run (Pb-p) :
      /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/HI/Cert_285479-285832_HI8TeV_PromptReco_pPb_Collisions16_JSON_NoL1T_MuonPhys.txt
    • 2nd Run (p-Pb) :
      /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/HI/Cert_285952-286496_HI8TeV_PromptReco_Pbp_Collisions16_JSON_NoL1T_MuonPhys.txt

Vertex Composite Trees

| YY/MM/DD | Path | Lumi [/nb] | Size [GB] | Dataset | Config | CRAB JSON |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_PADoubleMuon_PARun2016C_DiMuMassMin2.root | 173.42 | 62 | PADoubleMuon | cfg | [[][JSON]] |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_PAHighMultiplicity_PARun2016C_DiMuMassMin2.root | 173.41 | 44 | PAHighMultiplicity1-6 | cfg | [[][JSON]] |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_PAMinimumBias_PARun2016C_DiMuMassMin2.root | 173.42 | 27 | PAMinimumBias1-20 | cfg | [[][JSON]] |

2017 pPb MC (Official)

https://twiki.cern.ch/twiki/bin/view/CMS/MC_for_2016_pPb8TeV

NTuples: Embedded

  • CMSSW release: CMSSW_8_0_32
  • GlobalTag : 80X_mcRun2_pA_v4
  • Single muon selection : (isGlobalMuon || TMOneStationTight)

| YY/MM/DD | Path | Size [GB] | Dataset | Config |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_JPsiToMuMu_pPb-Bst_pPb816Summer16_DiMuMC.root | 3.0 | JPsiToMuMu_pPb-Bst | cfg |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_JPsiToMuMu_PbP-Bst_pPb816Summer16_DiMuMC.root | 3.0 | JPsiToMuMu_PbP-Bst | cfg |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_Psi2SToMuMu_pPb-Bst_pPb816Summer16_DiMuMC.root | 3.3 | Psi2SToMuMu_pPb-Bst | cfg |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_Psi2SToMuMu_PbP-Bst_pPb816Summer16_DiMuMC.root | 3.2 | Psi2SToMuMu_PbP-Bst | cfg |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_BToJPsiToMuMu_pPb-Bst_pPb816Summer16_DiMuMC.root | 5.0 | BToJPsiToMuMu_pPb-Bst | cfg |
| 2019/04/21 | /store/user/anstahll/RiceHIN/pPb2016/Tree/VertexCompositeTree_BToJPsiToMuMu_PbP-Bst_pPb816Summer16_DiMuMC.root | 5.0 | BToJPsiToMuMu_PbP-Bst | cfg |

How to edit the analysis note

  • NOTE 1: PLEASE USE THE SHORTCUTS DEFINED IN
    utils/trunk/general/ptdr-definitions.sty
  • NOTE 2: please dump the figures in the directory corresponding to your own section in figs/xxx

git clone --recursive ssh://git@gitlab.cern.ch:7999/tdr/notes/AN-19-120.git
cd AN-19-120
eval `./utils/tdr runtime -sh`

# edit the template, then build the document with:
tdr --style=note b AN-19-120

# The result is a PDF file; its location is printed at the end
# of the compile output.

# You can commit your changes with:
git commit -m "commit message"
git push

------------------

# to update your document (pull updates from the repository into your local copy):
# always start with that before you edit anything!

git pull

# to check the status (is your file different from those in the repository?):

git status

# if you modified something and want to upload it to the repository
# (do this frequently so that the repository stays up to date):

git add filename.tex
git commit -m "your comments"
git push

# to check the history of a file:

git log filename.tex

# to check differences between revisions (example):
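# (for example, against the previous commit; this is generic git usage, not project-specific)
git diff HEAD~1 -- filename.tex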

  • You can also track the changes using GitLab

How to use latexdiff
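A minimal sketch of one possible workflow (the directory, commit, and file names are illustrative, and the main tex file is assumed to be AN-19-120.tex): check out an older revision of the note next to the current one, run latexdiff, and compile the resulting file in the same way as the note itself to obtain a PDF with the changes highlighted.

git worktree add ../AN-19-120-old <old-commit>
latexdiff ../AN-19-120-old/AN-19-120.tex AN-19-120.tex > AN-19-120-diff.tex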

-- AndreGovindaStahlLeiton - 2019-07-16
