PPb working notes


After the centrality-dependence 3-particle analysis, the next steps are:

1. Compare to Hijing, to see if the signal still exists.

2. 2-particle correlation:

Ridge (2<|deta|<4) and jet (|deta|<1) yield vs trigger pt for a fixed assoc pt = 1-1.5 GeV/c. Use trigger pt bins of 1.5-2, 2-2.5, 2.5-3, 3-4, 4-6, 6-10 GeV/c.

Ridge (2<|deta|<4) and jet (|deta|<1) yield vs assoc pt for a fixed trigger pt = 3-10 GeV/c. Use assoc pt bins of 0-0.5, 0.5-1, 1-1.5, 1.5-2, 2-2.5, 2.5-3 GeV/c.
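The yield extraction above can be sketched as follows, assuming the 2-particle correlation is available as a 2-D (Δη, Δφ) array of per-bin associated yield (the array names, binning, and near-side window are illustrative, not the actual analysis code):

```python
import numpy as np

def yields(corr, deta_edges, dphi_edges):
    """Sketch: integrate a (deta, dphi) correlation over the near side
    (|dphi| < pi/3) in the 'jet' (|deta|<1) and 'ridge' (2<|deta|<4) regions."""
    deta = 0.5 * (deta_edges[:-1] + deta_edges[1:])   # bin centres
    dphi = 0.5 * (dphi_edges[:-1] + dphi_edges[1:])
    near  = np.abs(dphi) < np.pi / 3
    jet   = np.abs(deta) < 1
    ridge = (np.abs(deta) > 2) & (np.abs(deta) < 4)
    jet_yield   = corr[np.ix_(jet, near)].sum()
    ridge_yield = corr[np.ix_(ridge, near)].sum()
    return jet_yield, ridge_yield
```

Repeating this per trigger-pt (or assoc-pt) bin gives the yield-vs-pt curves described above.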

3. Trigger pt distribution with and without the back-to-back (b2b) trigger requirement.


On the ridge-paper side, Wei is comparing high-multiplicity pPb to the PbPb result: 55-60% central PbPb with 1<pT<3 GeV/c is compared to pPb with N>110.

In short, PbPb looks more hydrodynamic-like; apparently the almond-shaped initial geometry plays a big role in driving up the v2.


  • 1. Today we are still looking at AMPT tuning. From yesterday's result (multiplicity in different pt ranges) we know the pt spectrum has a problem. Currently Kurt is looking at
    • the multiplicity orthogonal to and aligned with the leading jet in each event,
    • so that we can conclude which is the leading cause of the disagreement. But both the orthogonal and the aligned multiplicity distributions are off, by about the same amount, so this is not the reason.
  • 2. Quan has a problem with the cumulant: v2{4} keeps giving him a negative number before taking the 4th root. Should look at that and compare. The error calculation for v2{4} is non-trivial.
  • 3. I compared the dihadron result with Wei's result. They agree quite well.


In the past 2 weeks, the main thing I did was compare the Hijing ridge to real data. After fixing the bug in the code, the results now seem reasonably close to Wei's plot.


I started the trihadron correlation. The procedure is ( https://twiki.cern.ch/twiki/pub/CMS/RidgePA/20121003_fqwang.pdf):

  1. Calculate the T-A(t) correlation with a t restriction.
  2. Calculate the background from T-A pairs that are not from the same dijet, or T-t pairs that are not from the same dijet, since there can be more than one dijet pair in an event.
  3. To do step 2, calculate the fraction of combinatoric T-t pairs and multiply it by the (T-A correlation + flipped t-A correlation).
After the MB correlation, we move on to the centrality dependence. Three bins are picked: (0,40), (40,80), (80,1000).
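A minimal sketch of the background combination in step 3, assuming the T-A and t-A Δφ correlations are already available as 1-D arrays and the combinatoric T-t fraction has been computed separately (all names are illustrative):

```python
import numpy as np

def trihadron_background(c_TA, c_tA, f_comb):
    """Sketch of step 3 above: the combinatoric T-t fraction `f_comb`
    weights the sum of the T-A correlation and the dphi-flipped t-A
    correlation to form the background (illustrative, not the real code)."""
    flipped = c_tA[::-1]           # flip the dphi axis of the t-A correlation
    return f_comb * (c_TA + flipped)
```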


  1. ECAL plot with cut on real data.
  2. Yue Shi: mid rapidity multiplicity and HF compare.
  3. AMPT tune.


In the past two days, the main thing I did was make MC ridge plots, including generator-level Hijing, generator-level AMPT, and reco AMPT. The reco plots only collect a few events when requiring N>110. So now there are 2 plans:

  • 1. Run Hijing gen-only and analyze the genParticles collection.
  • 2. Run AMPT with the High Multiplicity Filter cut and analyze the reco N>110 events.
Plan 1 is the suggested one and is very urgent.

Some notes these days:

  • 1. What is the soft/hard correlation? (from paper)
One selection is 3 < pT^trig < 6 GeV/c and 0.15 < pT^assoc < 3 GeV/c (we call these "soft" associated hadrons since soft particles dominate).

  • 2. Trihadron correlation (different from the method Fuqiang used)
In this method, two events with very close centrality are mixed into a new event, and the extracted Δφ distribution is taken as the respective background.

  • 3. ZYAM (zero yield at minimum)
  • 4. Mach-like correlation for soft or hard scattered associated particles was investigated in the framework of the AMPT model which includes two dynamical processes, namely parton cascade and hadronic rescattering.
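The event-mixing background of note 2 above can be sketched like this (illustrative, not the actual analysis code): pair the triggers of one event with the associated particles of a second, close-centrality event and histogram Δφ:

```python
import numpy as np

def mixed_event_dphi(phis_trig_evt1, phis_assoc_evt2, nbins=12):
    """Sketch: pair triggers from one event with associated particles from a
    second event of very close centrality; the resulting dphi distribution
    serves as the combinatorial background."""
    dphi = np.subtract.outer(phis_trig_evt1, phis_assoc_evt2).ravel()
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap into [-pi, pi)
    hist, _ = np.histogram(dphi, bins=nbins, range=(-np.pi, np.pi))
    return hist
```

Every trigger-associate pair lands in exactly one bin, so the histogram integral equals the number of mixed pairs.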

  1. Ridge: long range in eta; initial geometry

  • 1. How to define centrality? HF and MC.
  • 2. In the paper they compare the pp result with a -1 shift in eta. Why? Maybe just to make the plots comparable.



  • 1. Trihadron correlation?
  • 2. How is string melting used? Is the default AMPT with or without string melting?
Need to check the code to answer this.


1. Need to contact Fabio for the new beamspot

2. Need to look into Hijing. It is clear that some B, D particles which should decay in Hijing remain stable until they are decayed in Geant. However, if the decay is turned on in Hijing, then Pythia cannot handle all those particles, because Geant does not check the decay status (guess).


We ran more Pythia samples and will run more during the weekend. YenJie and Yue Shi are investigating Hijing STA. The STA output seems to contain rare B, D particles; when Nevts is increased to 100k, B, D become obvious.

particle id: http://pdg.lbl.gov/2002/montecarlorpp.pdf


We stopped running Hijing, because the Hijing sample has a problem with the efficiency at higher pt.

The efficiency is defined as:


The reason might be that there are more heavy-flavor particles in AMPT and Hijing.


Pythia assigns D, B as unstable particles, while AMPT and HIJING assign D, B as stable. That's why in the AMPT/HIJING samples D, B show up as simtracks. However, these particles would have already decayed (cτ ~ 300 µm) by the time they reach the tracker. So that's why we have a lower tracking efficiency at higher pt in AMPT/HIJING when the heavy-flavor contribution becomes sizable.
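A quick arithmetic check of the cτ argument above (approximate PDG-style numbers; the pixel-layer radius is an assumption used for illustration):

```python
# Rough check of the decay-length argument (numbers are approximate):
CTAU_DPLUS_CM = 0.0312   # D+ ctau ~ 312 um
M_DPLUS_GEV = 1.87       # D+ mass, GeV
PIXEL_R_CM = 4.4         # radius of the first pixel barrel layer, roughly

def mean_decay_length_cm(pt_gev):
    """Mean transverse decay length L = beta*gamma*ctau ~ (pT/m)*ctau."""
    return (pt_gev / M_DPLUS_GEV) * CTAU_DPLUS_CM

# Even a 20 GeV/c D+ flies only ~0.33 cm on average -- well inside the first
# pixel layer -- so treating it as stable in the generator is inconsistent
# with what the tracker actually sees.
```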



Plots for the D+- and B+- particle distributions: https://twiki.cern.ch/twiki/pub/CMS/RidgePA/particle_species_mc.pdf


1. We reran the Hijing samples because of the beamspot problem. The beamspot in MC should be adjusted to match the data, so we now use the real-data parameters for the simulation:

(x_MC,y_MC,z_MC) = (x_data, y_data, z_data) + (0.1475, 0.3782, 0.4847) cm

Beam type = 2
X0 = 0.080989 +/- 2.87774e-05 [cm]
Y0 = 0.0693616 +/- 2.87648e-05 [cm]
Z0 = -0.259745 +/- 0.042059 [cm]
Sigma Z0 = 6.889 +/- 0.0297394 [cm]
dxdz = 0.000102996 +/- 4.28001e-06 [radians]
dydz = -3.96565e-05 +/- 4.27393e-06 [radians]
Beam Width X = 0.0065216 +/- 3.36776e-05 [cm]
Beam Width Y = 0.00556408 +/- 3.36776e-05 [cm]

Before we get the official LHC number, I suggest we put:

emittance = (0.005*sqrt(2))**2/(1100) ~ 5e-8
beta star = 1100 cm
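The suggested number can be checked with one line of arithmetic, just reproducing the note's own estimate:

```python
import math

# Reproduce the estimate above: emittance = sigma**2 / beta*, with
# sigma = 0.005*sqrt(2) cm and beta* = 1100 cm (numbers from the note).
sigma = 0.005 * math.sqrt(2)        # cm
beta_star = 1100.0                  # cm
emittance = sigma ** 2 / beta_star  # ~5e-8, as quoted
```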

After running the MC, and before the reconstruction, we need to run a beamspot producer on the RAW data to get the correct parameters from the MC and create a *.db for the reconstruction step (analyze_d0_phi_fromRAW.py under 5_3_3_patch3). Though the parameters are very similar to our input, there is still some difference, e.g. in sigma Z0. One guess is that it comes from the vertex smearing module.

2. In the ridge meeting, the AMPT (5_3_3) trigger-efficiency error was presented. We don't know if this comes from the wrong beamspot or a wrong AMPT tune.



1. The cuts of the ridge analysis (including the skim tree and the cut on the primary vertex). Note the filter condition needs a negation, otherwise events passing the cleaning filters would be skipped:

if (!(c->skim.phfPosFilter1 && c->skim.pHBHENoiseFilter)) continue;
if (fabs(c->evt.vz) > 15) continue;
if (c->evt.run != 202792) continue;
if (c->track.nTrk == 0) continue;

2. The way to solve the "cannot check out package" problem on the MIT machine:

export CVSROOT=:pserver:anonymous@cmssw.cvs.cern.ch:/local/reps/CMSSW

cvs login


Q: 1. Associated yield?

Integrate over the correlation (over the full range, or from 0 to the minimum?)
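One standard answer is the ZYAM prescription from the notes above: shift the Δφ correlation down by its minimum ("zero yield at minimum") and integrate the near side from 0 up to that minimum. A sketch, assuming sorted Δφ bin centres from 0 to π:

```python
import numpy as np

def zyam_yield(dphi_centres, corr):
    """Sketch of a ZYAM-style associated yield: subtract the minimum of the
    dphi correlation and integrate the near side from 0 to the minimum."""
    corr = np.asarray(corr, dtype=float)
    imin = int(np.argmin(corr))
    sub = corr - corr[imin]                      # zero yield at the minimum
    mask = (dphi_centres >= 0) & (dphi_centres <= dphi_centres[imin])
    return sub[mask].sum()
```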


Event plane reconstruction: https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideHeavyIonEvtPlaneReco

Need to do the dihadron correlation with respect to the event plane.


First try to make a pyquen cfg:

cmsDriver.py Pyquen_DiJet_pt80to120_4TeV_cfi.py -s GEN,SIM,DIGI,L1,DIGI2RAW,HLT:HIon --himix --scenario HeavyIons --conditions START53_V11::All --datatier 'GEN-SIM-RAW-RECO' --eventcontent=RAWDEBUGHLT --processName 'HISIGNAL' --filein=inputfile.root --fileout=outputfile.root -n 1 --no_exec

Need to compare Pythia pp with real data.

Need to compare Hijing and AMPT with real data.


Problem while running PixelTracker:

Exception Message:

  No "TrackerDigiGeometryRecord" record found in the EventSetup.
   Please add an ESSource or ESProducer that delivers such a record.

Solved: include GeometryDB_cff.

hpt meeting:

  1. trigger menu needs preparation
  2. Rcp: what is the definition, and how does it differ from RAA?
  3. HF tower -- background -- cut (Yenjie's slide)
  4. difference between express reco and reco? algorithm!
  5. vertex cut -- 15
  6. HF eta +-5
  7. compare particle-flow jets
  8. double counting (K, pi?) -- would change the background shape
  9. hpt -- factor of 2 in AMPT
  10. anti-kT algorithm


PPD meeting: they have a problem reconstructing the pPb data. Runs 1 and 2 are reco'ed but shifted by 50 cm(?); the 3rd run cannot be processed.

  • 1. New beam spot (a shift is needed since the realistic situation is a little different from the prediction; the detector is offset from the beamspot below):
Beam type = 2
X0 = 0.080989 +/- 2.87774e-05 [cm]
Y0 = 0.0693616 +/- 2.87648e-05 [cm]
Z0 = -0.259745 +/- 0.042059 [cm]
Sigma Z0 = 6.889 +/- 0.0297394 [cm]
dxdz = 0.000102996 +/- 4.28001e-06 [radians]
dydz = -3.96565e-05 +/- 4.27393e-06 [radians]
Beam Width X = 0.0065216 +/- 3.36776e-05 [cm]
Beam Width Y = 0.00556408 +/- 3.36776e-05 [cm]

dx/dz correlates the x and z of the beamspot: in the x-z plane the luminous region is an ellipse, with x = (dx/dz)*z + c, i.e. the slope is dx/dz.

  • 2. command under 5_3_3
  cmsDriver.py Hydjet_Quenched_MinBias_2760GeV_cfi -s GEN,SIM,DIGI,L1,DIGI2RAW,HLT:GRun -n 10  --conditions  auto:startup_GRun --datatier GEN-SIM-RECODEBUG --eventcontent FEVTDEBUGHLT --scenario pp --no_exec
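The dx/dz tilt described above can be checked with quick arithmetic using the fitted beamspot numbers:

```python
# Quick check of the x-z tilt: x(z) = X0 + dxdz * z (linear beam-line model,
# numbers taken from the fitted beamspot above).
X0 = 0.080989        # cm
dxdz = 0.000102996   # radians (slope)
sigma_z = 6.889      # cm

def x_at(z_cm):
    """Beam-centre x position at longitudinal position z."""
    return X0 + dxdz * z_cm

# Over +-1 sigma_z the beam centre drifts by 2*dxdz*sigma_z ~ 14 um in x --
# small, but this is what makes the x-z ellipse visibly tilted.
```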

Q: What is emittance?(need to find the link)


1. Need to compare the MC code with the real data.

Done, pt: compared to pp and MC; ECal; Kurt: dijet.



  • 1. cvs co -d MitHig/PixelTrackletAnalyzer UserCode/MitHig/PixelTrackletAnalyzer
  • 2. How to cmsRun on input files listed in a txt file:
import FWCore.Utilities.FileUtils as FileUtils

mylist = FileUtils.loadListFromFile('pPb_Data1.txt')

readFile = cms.untracked.vstring(*mylist)


1. How to access CAStor data? The real data is stored there.

It seems it can be accessed directly.

2. During the meeting: pixel disks; beamspot adjusted to MC; high-pt jet spectrum.


1. Yetkin: how to find the machine info in the condor logs:

in .condor: 000 (450233.000.000) 07/11 03:44:06 Job submitted from host: <>



The result is:

   Address: name = T2BAT0050.CMSAF.MIT.EDU.

Though the problem remains unsolved.

The other way is:

   process.SimpleMemoryCheck = cms.Service('SimpleMemoryCheck',
                                    oncePerEventMode = cms.untracked.bool(False)
   )

so that the memory used is checked at every step.

2. How to fix the problem of not being able to run on multiple root files:

MC events have the same run number although they were produced with different seeds.

  process.source = cms.Source("PoolSource",
    duplicateCheckMode = cms.untracked.string('noDuplicateCheck')
  )

to disable the duplicate check.

During meeting:

Q 1.photon hadron correlation MIT

2.fragmentation function


The way to modify the code and tag it (from Yetkin). Adding a tag is the highest level; before adding a tag you need to validate the code.

Steps are :

cvs co GeneratorInterface /HiGenCommon

cd GeneratorInterface /HiGenCommon

(modify the code)

scram b


cvs ci -m"relevant information of the update"

(find a good name for the next tag, for example, from here:

the next seems to be V00-01-02)

cvs tag V00-01-02

go to:


Type the package name in the search box and find the package,

choose a tag,

add the tag,

and fill the form properly, providing as much information as possible.



Finally got progress on the code validation. Need to fix the matrix-inversion bug.


Following Colin's suggestion, some input parameters were changed:

ntmatter ->1000

stringFragB = cms.double(0.9),

stringFragA = cms.double(0.5),

Shengquan made plots to compare them.

during meeting:

1. Wei: need to run OpenHLT

(haven't run it yet)

2. Compare to Alice data


3. 0.5 million private

4. Question about the eta distribution (why did the number of tracks increase, 30 -> 35? -- because the centre-of-mass energy increased to 5 TeV)


1. Cut on tracks

From Wei: I suggest you first plot the distributions of d0 and dz relative to the primary vertex, as well as d0/d0error and dz/dzerror. d0error and dzerror are defined like:

    double dzerror = sqrt(trk.dzError()*trk.dzError()+zVtxError*zVtxError);
    double dxyerror = sqrt(trk.d0Error()*trk.d0Error()+xVtxError*yVtxError);

which is basically a combination of uncertainties on the track back pointing position and primary vertex position.

Looking at the distributions directly will be very informative and it is more obvious where you want to put a cut. Normally, we require abs(d0/derror)<3 and abs(dz/dzerror)<3 (within 3 sigmas of the distribution). We do not normally cut on the absolute values of d0 and dz because the uncertainties are bigger for high eta region, for example. A flat cut would not be fair.

To summarize, the minimum cuts we normally use for selecting good primary tracks in pp are:

- highPurity bit (you can do if(!trk.quality(reco::TrackBase::highPurity)) continue; to skip non-highPurity tracks)

- abs(d0/derror)<3, abs(dz/dzerror)<3

- pTerror/pT<0.1 (relative uncertainty on pT should also be not too large. Again, you can plot the distribution. The code is like trk.ptError()/trk.pt()<0.1)

if it is not clear to you why one should cut on dz/dzerror and d0/d0error, instead of dz and d0, you can plot their distributions vs eta (in 2D). Then, it will be obvious which one has more uniform behavior as a function of eta.

It turns out d0 vs eta is more uniform: d0 is not correlated with eta, but dz is.
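The selection Wei describes can be sketched on arrays of track quantities (the highPurity bit is omitted since it is a reco-level flag; the cut values are the ones quoted above):

```python
import numpy as np

def good_track_mask(d0, d0err, dz, dzerr, pt, pterr):
    """Sketch of the primary-track selection described above: significance
    cuts on the impact parameters plus a relative-pT-resolution cut."""
    d0, d0err, dz, dzerr, pt, pterr = map(
        np.asarray, (d0, d0err, dz, dzerr, pt, pterr))
    return ((np.abs(d0 / d0err) < 3)
            & (np.abs(dz / dzerr) < 3)
            & (pterr / pt < 0.1))
```

Cutting on the significances rather than the absolute d0, dz is exactly the point of the email: the uncertainties grow at high eta, so a flat absolute cut would not be fair.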

Why is htrkDzError1 always >= 0?

2. 5_2_X new GT to try

Data2/ new GT:START52_V11C

3. The way to print out files with a size smaller than some threshold, from Quan:

find /mnt/hadoop/cms/store/user/lingshan/pPb/5020withZdc/RECO/5_2_6/Data1/ -name "*.root" -size -5M -exec ls -l {} \;

Replacing the `ls -l` with `rm` deletes the files smaller than 5M.
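A Python equivalent of the `find ... -size -5M` command above, in case a dry run is wanted before deleting anything (function and parameter names are illustrative):

```python
import os

def small_root_files(topdir, max_bytes=5 * 1024 * 1024):
    """List .root files under `topdir` smaller than `max_bytes`
    (likely truncated or failed jobs)."""
    hits = []
    for root, _, files in os.walk(topdir):
        for f in files:
            if f.endswith(".root"):
                path = os.path.join(root, f)
                if os.path.getsize(path) < max_bytes:
                    hits.append(path)
    return hits
```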


For the reco: need cmsDriver to produce the new-version reco.py.


1. Found the way to solve the HLT problem of the cfg in 5_3_3:

  --globaltag auto:startup_GRun

so that several lines are added for L1.

Posted here: https://hypernews.cern.ch/HyperNews/CMS/get/hlt/3198/2/1/1/1.html

It also works under 6_1_X, with the line:

  process.GlobalTag = GlobalTag(process.GlobalTag, 'auto:startup_GRun', '')

Sometimes under 5_3_X you also get this line, which works fine, but I don't know if it can be used.

2. Some of the files have no content. Is this related to the different machines?

Q: What's the difference between addpkg and cvs co? addpkg -- it seems you can add a specific tag.


Start recording from this day, though the pPb work started back in March. During the summer the work progress was slow; part of the problem was a lack of communication with others, since I was still getting used to this kind of workflow -- you need to cooperate with other people.

To do the code validation, we are asked to get the code running on 5_3_X,

but using the old method (as in 5_2_6), a new error comes out:

[0] Processing run: 1
[1] Running path 'HLTriggerFirstPath'
[2] Calling beginRun for module EventSetupRecordDataGetter/'hltGetConditions'
[3] Using EventSetup component JetCorrectionESChain/'hltESPAK5PFL1L2L3' to make data JetCorrector/'hltESPAK5PFL1L2L3' in record JetCorrectionsRecord
[4] Using EventSetup component L1FastjetCorrectionESProducer/'hltESPL1PFFastJetCorrectionESProducer' to make data JetCorrector/'hltESPL1PFFastJetCorrectionESProducer' in record JetCorrectionsRecord

Exception Message:
No data of type "JetCorrectorParametersCollection" with label "AK5PFHLT" in record "JetCorrectionsRecord"
 Please add an ESSource or ESProducer to your job which can deliver this data.

Comparing MC_53_V6 (5_3_2) and MC_53_V9 (5_3_3), there is a difference in AK5PF etc.

under 5_3_2 pPbboost.py works

under 5_2_6 AMPT_Try.py works

under 5_3_3


-- LingshanXu - 09-Oct-2012

Topic revision: r6 - 2015-04-01 - LingshanXu