-- AustinBaty - 2019-08-07

Matt Herndon

Measurement of Z boson yields and azimuthal anisotropy in 5.02 TeV PbPb collisions with CMS

Organized by line number in the AN and where appropriate with the corresponding line numbers in the paper.

35) ( line 18 in the paper draft)
The discussion does not make clear whether the uncertainties are due to the assumptions about the nature of the heavy ion collision, or whether the uncertainties are due to the calculations done within the Glauber Model, or other issues. Please clarify.

52) (line 26 of the paper draft)
Please include a reference for the "scalar-product" analysis method which explains the method, defines the various quantities that are calculated, and explains the physics importance of these calculations and ideas in the context of heavy ion physics.
After some searching: https://arxiv.org/abs/1209.2323 seems to be a standard and comprehensive reference.

I have included this reference where appropriate in the paper draft.


Please include the reference above, or an equivalent reference, at this line and spend a line or two describing the physics of each of the quantities you measure. The discussion of the physics should make plain what the expectations are for Z boson production as opposed to particles that experience hydrodynamic flow.

Some discussion is given later, but this is really material that should be in the introduction.
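For readers unfamiliar with the method, the scalar-product extraction of v2 can be sketched with a toy Monte Carlo. Everything below is invented for illustration (a two-subevent simplification; the reference above and the actual analysis use more subevents with pseudorapidity gaps):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phis(n, v2, psi):
    """Sample angles from dN/dphi ~ 1 + 2*v2*cos(2*(phi - psi)) via accept-reject."""
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0, 2 * np.pi, size=2 * n + 8)
        keep = rng.uniform(0, 1 + 2 * v2, size=phi.size) < 1 + 2 * v2 * np.cos(2 * (phi - psi))
        out = np.concatenate([out, phi[keep]])
    return out[:n]

v2_true = 0.08
num, den = [], []
for _ in range(4000):
    psi = rng.uniform(0, 2 * np.pi)          # event-plane angle, random per event
    ref = sample_phis(200, v2_true, psi)     # reference particles (e.g. HF towers)
    z_phi = sample_phis(1, v2_true, psi)[0]  # azimuthal angle of the Z candidate
    qa = np.mean(np.exp(2j * ref[:100]))     # subevent A flow vector
    qb = np.mean(np.exp(2j * ref[100:]))     # subevent B flow vector
    u = np.exp(2j * z_phi)                   # unit flow vector of the Z
    num.append((u * np.conj(qa)).real)       # correlate the Z with subevent A
    den.append((qa * np.conj(qb)).real)      # subevent correlation for normalization

# Scalar product: v2(Z) = <u . Qa*> / sqrt(<Qa . Qb*>)
v2_sp = np.mean(num) / np.sqrt(np.mean(den))
print(round(v2_sp, 3))
```

The division by sqrt(<Qa.Qb*>) removes the dilution from the finite resolution of the reference flow vector, which is the bias-removal property the method is valued for.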

59) (in the paper starting at line 62)
There are some details of the MC generation that can be extracted from Table 2. However, you should explicitly document which MC samples are used, which PDFs (or equivalent nuclear distributions), and which underlying-event tune, with appropriate references.

A much greater level of detail is now given in the paper draft with appropriate references and MC versions/tunes.


68) (line 75 in the paper)
For my information.
I don't understand the origin of this bias in the MC. You are also simulating a hard process in the MC. Is there something non-physical about the way that the MC simulates events with Z bosons that leads to an incorrect centrality distribution? Please explain.

The way hard probes are simulated in our MC is not physical, because a single hard probe is injected into a separately generated underlying-event background. This means there is by definition exactly one hard probe per MC event, regardless of the event centrality. However, in data, hard probes such as jets and Z bosons are more likely to occur when the two nuclei collide in a head-on manner, because many nucleons collide with each other at the same time and give a higher probability of a hard scattering happening. I have tried to clarify this in the paper.
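As an illustration of the correction implied here, a toy reweighting that reshapes a flat embedded-MC centrality distribution onto a data-like one (the shapes below are invented; the actual analysis distributions differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Centrality percentile: 0 = most central (head-on), 100 = most peripheral.
# Embedded MC: one hard probe injected per event -> flat centrality distribution.
mc_cent = rng.uniform(0, 100, size=100_000)
# Data-like: hard probes prefer central events (more binary NN collisions).
data_cent = 100 * rng.beta(1, 4, size=100_000)

bins = np.linspace(0, 100, 21)
n_mc, _ = np.histogram(mc_cent, bins)
n_data, _ = np.histogram(data_cent, bins)

# Per-bin weight that reshapes the flat MC onto the data centrality distribution
w_bin = np.where(n_mc > 0, n_data / np.maximum(n_mc, 1), 0.0)
weights = w_bin[np.clip(np.digitize(mc_cent, bins) - 1, 0, len(w_bin) - 1)]

# The weighted MC now reproduces the data centrality histogram by construction
n_mc_w, _ = np.histogram(mc_cent, bins, weights=weights)
print(np.allclose(n_mc_w, n_data))
```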

79) (line 71 in the paper)
The details of all these filters are not given. For instance pclusterCompatibilityFilter is not described beyond general terms. A reference should be given where they are described in detail.

More detail on what these filters do has been included in the paper draft.


119) (line 78 in the paper)
A reference should be given that defines the nuclear overlap function TAA.

The standard HI reference for the Glauber model/TAA is now given in the paper draft.

190) (line 116 in the paper)
Typically an effect like this, especially if it is large, would be accounted for by some type of unfolding procedure. This would allow you to iterate the procedure in a controlled way to account for the fact that if the measured pT spectrum has a different slope than the predicted one, the correction factors would be different. At the very least, if you don't unfold the data, it is necessary to assess the bias in the measurement and assign a systematic uncertainty, though this is not the preferred solution. To do so, you would recalculate the correction factors after reweighting the MC to match the observed pT distribution in the data and compare to your current results.

We now use a matrix-inversion unfolding procedure (recommended by the statistics committee) to correct for this effect. The MC has also been reweighted to be more similar to the data distribution.
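A minimal sketch of matrix-inversion unfolding on a toy spectrum (the binning, response matrix, and yields are invented for illustration):

```python
import numpy as np

# Toy truth pT spectrum in 4 coarse bins (steeply falling)
truth = np.array([1000.0, 400.0, 150.0, 50.0])

# Response matrix R[i, j] = P(reconstructed in bin i | true in bin j),
# mostly diagonal with some bin-to-bin migration from resolution
R = np.array([
    [0.90, 0.08, 0.00, 0.00],
    [0.10, 0.84, 0.10, 0.00],
    [0.00, 0.08, 0.82, 0.12],
    [0.00, 0.00, 0.08, 0.88],
])

measured = R @ truth  # what the detector would record (no noise, for clarity)

# Matrix-inversion unfolding: solve R * unfolded = measured
unfolded = np.linalg.solve(R, measured)
print(unfolded)
```

With well-populated bins and modest migrations the inverse is stable, which is why no regularisation is needed here; with large migrations or low statistics the inverse amplifies fluctuations and regularised methods are preferred.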

206) (line 123 in the paper)
In collisions involving matter, specifically protons and neutrons and no antiprotons, you would not expect the number of same-sign and opposite-sign pairs to necessarily be the same. For instance, same-sign positive pairs may be preferred. However, whether they are the same could easily be checked, and corrected for if they are not, by running a large QCD MC through your selection.

Do these backgrounds have a specific shape in the variables you measure? If you are subtracting them as a function of a variable or variables you should include that information here. Otherwise you should state what you do more explicitly.

The subtraction is done as a function of whatever variable is being measured. The exact figures showing this are in the Analysis Note. We will try to clarify this in the paper draft.

Has it been checked whether W + random leptons contaminate the same sign sample and should be subtracted off or otherwise accounted for since you separately account for that background?
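For reference, the bin-by-bin same-sign subtraction described in the response above can be sketched on a toy sample (shapes and yields invented; only the mechanics are meant to carry over):

```python
import numpy as np

rng = np.random.default_rng(3)
bins = np.linspace(60, 120, 13)  # toy dilepton mass bins, GeV

# Toy samples: signal pairs are opposite-sign; combinatorial background
# populates opposite-sign (OS) and same-sign (SS) pairs roughly equally
# and is taken flat in mass here
sig_mass = rng.normal(91, 3, size=5000)
bkg_os = rng.uniform(60, 120, size=1000)
bkg_ss = rng.uniform(60, 120, size=1000)

n_os, _ = np.histogram(np.concatenate([sig_mass, bkg_os]), bins)
n_ss, _ = np.histogram(bkg_ss, bins)

# Same-sign subtraction, done bin by bin in whatever variable is measured
n_sub = n_os - n_ss

n_sig, _ = np.histogram(sig_mass, bins)
print(n_sub.sum(), n_sig.sum())
```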

212) (line 129 in the paper)
You design a selection to remove EM background but you make no comment on whether there is any expectation of residual contamination after the selection. If the remaining EM background contribution is expected to be negligible, you should state that and give some proof, in the form of a study or reference, to justify that statement. If it isn't negligible, then you should estimate the contribution of the background.

We now choose working points for the EM background cut that correspond to 90% background rejection, based on studies of EM processes in the STARLIGHT MC generator. The remaining 10% is a <0.1% contribution to the total yield.

238) (line 148 in the paper)
The procedure you describe here seems to be aimed at normalizing the MC-based background to a more data-driven estimate of the cross sections based on all the non-data-driven backgrounds. Is this procedure meant to account for issues like higher-order QCD corrections, specific issues in the heavy ion collision modeling, or reconstruction issues? You should give an explanation for why you do it this way. If the reason is higher-order QCD corrections, then normalizing ttbar production to a Z boson production process would not seem justified given that ttbar production is largely gluon initiated. Also, depending on what you are trying to correct for, treating the electron and muon channels separately doesn't seem like a good idea unless the main effect you are correcting for is a reconstruction issue.

257) (163 from the paper)
You've referred to the scalar product method as both modern and well established. Perhaps it's better to describe it by discussing what biases it eliminates and remove the adjectives that only describe it in general terms.

I have removed these terms from the paper draft.

line 270) (line 175 from the paper)
You state that the Z boson reconstruction efficiencies are accounted for by applying weights in the calculation of its Q vector. Shouldn't this also be done for the Q vectors calculated from HF and tracker activity?

It seems that this effect may be taken into account by the centrality calibration discussed in the next section. If so, it would be good to state what corrections and procedures are used in calculating the (non-Z) Q vectors here in this section.

You are correct that this is already accounted for with a calibration. I have added a short section in the data samples section specifying that the global event Q vectors are flattened and recentered (with some references to explain what that means).


line 306)
10% uncertainty seems fairly aggressive for the ttbar contribution. However, that depends on how its cross section was determined, which is not well described since the MCs are not described in detail.

This has been changed to 20%. These backgrounds are very tiny compared to the total yield in the analysis, and this uncertainty will remain negligible unless it is a >100% variation.

Figure 24) (Figure 3 in the paper)
Why do you only show the Glauber uncertainties rather than a Glauber model prediction with uncertainties?

The Glauber model itself is only used to calculate the value of TAA, the nuclear thickness function, in heavy ion collisions. It does not model or make any predictions about the yields of particles themselves. If one wants a prediction, the Glauber model must be convolved with some other MC generator model that calculates particle yields.
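To make the role of the Glauber model concrete, here is a minimal toy Glauber Monte Carlo (illustrative parameter values, not the official CMS Glauber configuration). It produces only geometric quantities such as the number of binary collisions Ncoll, from which TAA = <Ncoll>/sigma_nn follows; it says nothing about particle yields:

```python
import numpy as np

rng = np.random.default_rng(4)

A = 208          # Pb mass number
R_pb = 6.62      # Woods-Saxon radius for Pb-208, fm (a commonly quoted value)
a_ws = 0.546     # Woods-Saxon diffuseness, fm
sigma_nn = 7.0   # inelastic NN cross section at 5.02 TeV, ~70 mb = 7 fm^2
d2_max = sigma_nn / np.pi  # nucleons "collide" if transverse dist^2 < sigma/pi

def sample_radii(n):
    """Sample n nucleon radii from r^2 * Woods-Saxon via accept-reject."""
    out = np.empty(0)
    while out.size < n:
        r = rng.uniform(0, 3 * R_pb, size=8 * n)
        dens = r**2 / (1 + np.exp((r - R_pb) / a_ws))
        out = np.concatenate([out, r[rng.uniform(0, R_pb**2, size=8 * n) < dens]])
    return out[:n]

def transverse_positions(shift_x):
    """Transverse (x, y) nucleon positions of one nucleus, shifted by shift_x."""
    r = sample_radii(A)
    costh = rng.uniform(-1, 1, A)
    phi = rng.uniform(0, 2 * np.pi, A)
    rt = r * np.sqrt(1 - costh**2)  # transverse component of the radius
    return np.column_stack([rt * np.cos(phi) + shift_x, rt * np.sin(phi)])

def ncoll(b):
    """Binary NN collisions in one event at impact parameter b (straight-line paths)."""
    pa = transverse_positions(+b / 2)
    pb = transverse_positions(-b / 2)
    d2 = ((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1)
    return int((d2 < d2_max).sum())

nc_central = np.mean([ncoll(0.0) for _ in range(20)])      # head-on collisions
nc_peripheral = np.mean([ncoll(10.0) for _ in range(20)])  # glancing collisions
print(nc_central, nc_peripheral)
```

Convolving such geometry with a yield calculation (e.g. a pQCD cross section per binary collision) is what produces the "TAA-scaled" expectation the yields are compared to.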

Anne-Marie Magnan


Major concern: you are considering the lepton reconstruction and identification efficiencies as uncorrelated between the two leptons of the same type. To my understanding, when estimating systematic effects in the tag&probe measurements, we vary all events up and down in a correlated way. There is indeed a part of the systematics which comes from the "pass/fail" fit and that would make individual pT/eta bins independent. But I believe it is very difficult to separate the different contributions and in the end it is more conservative to take the resulting systematics on the data/MC SF varied in a correlated way for all leptons. The different sources can however be varied independently. I.e., the reconstruction eff up/down at the same time for both leptons, then independently the identification eff up and down, then independently the trigger eff up/down, and finally these 3 added in quadrature.

Given your final systematics is dominated by this lepton systematics - it could have a significant impact !

You are correct that there could be some correlations which are difficult to disentangle, so I have taken your advice and varied the systematic uncertainties in a correlated fashion between the two daughter leptons (but with the different sources varied independently). The statistical uncertainties come directly from the TnP fits. For these, the uncertainty should be fully correlated if the two daughters use the same fit (same TnP bin) but should be uncorrelated if they use a different fit (different TnP bin). Thus, for the Reco and ID statistical uncertainties I do the variation in a correlated (uncorrelated) way if the daughters are in the same (different) TnP bin. For the trigger statistical uncertainties I just assume they are correlated, to be conservative, as this uncertainty is negligible compared to other sources. The net effect of this change is that the uncertainties increase by around ~1% in the muon channel and ~2% in the electron channel.
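A small numerical illustration of why the correlated treatment is the conservative choice for a two-lepton efficiency (the scale-factor values are invented):

```python
import numpy as np

# Toy per-lepton data/MC efficiency scale factors and their uncertainties
sf1, err1 = 0.97, 0.02
sf2, err2 = 0.95, 0.03

nominal = sf1 * sf2  # pair scale factor

# Correlated treatment: shift both leptons up together
unc_corr = (sf1 + err1) * (sf2 + err2) - nominal

# Uncorrelated treatment: shift one lepton at a time, add shifts in quadrature
unc_uncorr = np.hypot((sf1 + err1) * sf2 - nominal,
                      sf1 * (sf2 + err2) - nominal)

print(round(unc_corr, 4), round(unc_uncorr, 4))
# The correlated variation always gives the larger pair uncertainty
```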


Comments on AN v5

l144 TAA values taken from 2015 PbPb: can you argue a bit about why you expect it would not have changed for the 2018 data?

l152 can you explain a little bit better what this vertex probability is and why this cut is applied?

Fig 16-18: wrong labels for the x-axis, "mumu" instead of "ee"

l321-323 what matters is the resolution compared to the bin size....

l338 mention here that you don't have any regularisation -> by the way, this is kind of "lucky", and I wonder if you tested with a small regularisation, just to check that it doesn't affect the unfolded/raw ratio much, nor the uncertainties? The "folding" exercise you do in the appendix is kind of guaranteed to work and is not really a closure test at all.

l677 you mention the maximum pT of the measurement: how was this value decided ?


Comments on paper v2

- you did not run a spell-checker, this is an easy way to catch easy typos... A few lines I could catch: 95, 102, 108, 109, 126, 222, 268, 279, 308, 309, 338, 345

- the draft reads very well up to section 5 and then it degrades quickly. Trying to give more specific examples below, but generally the sentences are much more "analysis-note-like" from section 5, when I felt the paper was quite well written up to there.

- please try to banish the word "cut", and expressions like "there is", "there are", and complicated sentences when a simple "subject-verb-complement" can do very well. More examples again below.

27 and 201 only two places in the paper with "we" -> replace with indirect sentences.

27 pair of electrons

28 "compare with predictions" --> do we ? Not at the moment....

31-32 used to constrain

66-73 you have a mixture of past and present tense. Make a choice: I'd stick to present.

85 twice "impact"

99 direction -> direct

109 first sample -> signal sample ?

114 similar samples -> rephrase as "Same generator and settings" ?

120 ME-level -> ME level

126 primary vertex's z position -> z position of the primary vertex (I believe the CMS guidelines say something like no more than 3 nouns...)

146 add the overall efficiency, like you quote l161 for the electrons ?

148-150 this calls for either saying why it is inefficient, or just removing this as "too detailed" for the paper. You correct for the missing acceptance in the end, so it is not strictly speaking mandatory to explain in the paper?

155 Cuts -> Selection criteria

169 data and MC -> data to the MC.

179 criterion -> criteria

180 window 60 < Mll < 120 * GeV*

182-184 these numbers do not correspond to numbers in tables 8,9 of AN, shouldn't they ?

194 for detector inefficiencies -> for these detector and selection inefficiencies

195 There are multiple .. that can --> Multiple backgrounds can create...leptons. These backgrounds are subtracted....

The whole paragraph could benefit from better writing.

199 "There are" --> rephrase.

202 equal of -> equal to

203 proxy of -> proxy for ?

205-206 rephrase with an explanation of why, like in the AN. One more paragraph which could benefit from better writing.

211 0.2% (1%) --> are these numbers for mumu (ee) ? To be added....

222 cut -> just selection

224 the an -> an

225 bosons decays by --> not clear. Just "bosons is 0.3%" ??

234 W bosons decaying to a single lepton being ...

240 opposite sign distribution -> opposite-sign distribution ... though should be opposite-sign events or sample .... from then on many sentences don't have the correct subject, analysis-note-like style ....

248 efficiency -> resolution ?

252 add "hence no regularisation needed" ?

254 spell out 10x

fig 1 add ratio plots, if possible with stat+syst uncertainties on backgrounds . Left should be Mee. Caption: Dilepton mass distribution...

258 add also the number for muon eta<2.1

267 at least three

273 For account -> To account

274 Eqn -> Eq. Or, actually, I would rather spell out "figure" and "equation" in full.

278 "There is a modeling..." -> "The modeling .... is related to..."

289 is varied -> are varied

308 remove, it repeats the previous line (or rather previous line is enough to understand what is done)

312 this is 0.5% +/- 0.5%, so a relative uncertainty of 100%, is that correct? Maybe confusing if it is quoted as relative like the others, or as absolute...?

313 cut -> selection criteria

313 working point*s*

315 cuts -> requirements

317 production -> bosons

318 The EM events -> The number of EM events ...is ...

319 cut variation -> remove or rephrase ... Another candidate paragraph for better writing.

324 addition -> additional

326 in quadrature is -> in quadrature to

335 measurement..measured.... rephrase

335-338 fairly standard procedure, maybe no need to explain ? just quote "uncertainty from MC stat in response matrix" ??

346-347 To combine .... combined --> rephrase

355 "form a single data point" --> rephrase, simpler sentence !

374-375 remove as ATLAS pp point is gone now in fig 3 ?

376, 380 Ref -> figure

377 cuts ...

378 there is ....

figure captions: improve with adding what are error bars etc....

384 from 2018 -> collected in 2018

386 "The rapidity and pT spectra constrain MC generators" --> first time this is mentioned, what do you mean ??

388 for <50% events --> mention also that number in the results section: the conclusion should just repeat things already said before....

Wei Xie

- abstract: "The yields in various centrality bins are compared to
Glauber model predictions of the production rates of hard probes not
modified by the presence of a hot medium.".

I couldn't find this prediction on the plots. Does it mean
something different?

- L19: "analysis" --> "analyses"

- L83: "The degree of overlap between the two lead ions (centrality)".
This is already defined in L9. Pick one of the definitions and replace
this sentence with just "centrality".

- L90: need a brief clarification why |eta|<0.75 is chosen

- L126: "beter" --> "better"

- L179: "criterion" --> "criteria" since it refers to all selection .....

- Fig.1: what's the reason for choosing a mass range of 60-120 GeV? At
least for the dimuon channel, the S/B ratio is very high at mass = 60 and 120
GeV/c.

- L185-194: related to the question on Fig.1 above, is there a cut on the
mass range in the numerator or denominator when calculating the
efficiency? There certainly shouldn't be any mass range cut on the
denominator.

- L207: "0.5%": does it have a large pT dependence? Naively, the higher the
pT, the higher the chance of mis-identification.

- Fig.1 left panel: x-axis title: --> $m_{ee}$

- Fig.1: Are the orange Z->ee and Z->uu from MC or from "Data -
background"? It seems to be the former. In that case, is the "MC Z->uu
and Z->ee" or the "Data - background" used for the yield calculation?
If the MC is used, then we need a pull plot under each panel to show the
quality of the description of the data. If the "Data - background" is used,
then the "MC Z->uu and Z->ee" does not need to be in the plot.

- L306-307: need a brief description of the justification for why 20%
is chosen.

- L355: need a reference or a brief description of the method used for
combining the data.

- L366-368: The discussion here is a bit too qualitative. It just
says "could be an indication". It could also be due to other
effects, e.g. the medium, or the PDFs. In that case, it is hard to justify its
role as a proxy for the T_AA calculation. Since this is probably the most
important physics message of the paper, the discussion needs to be
significantly expanded, for example by including model calculations,
and if possible, try to make the conclusion stronger than "could be an
indication" because this is already a precision measurement.

- L376: Ref.4 --->Fig.4

- L380: Ref.5 --> Fig.5

- Fig.3: missing x-axis title. Move y-axis title

- Fig.3: in the legend: "---- Glauber uncertainties", do you mean " []
Glauber uncertainties" ?

- L394: "This reinforced the conclusion....". This needs to be more
quantitative or needs to be toned down, because we only say "could
indicate" from the yield plot that the Z is not affected by the medium. If we
can make a stronger conclusion than "could indicate" from the yield
plot, we can adjust the conclusion here correspondingly. Can we have a
model for the Z v2? Could the small Z v2 be due to the fact that the Z is too heavy?

- all Figures: the legends and titles are too small. Their size needs to
be significantly increased.

Topic revision: r11 - 2019-11-22 - AustinBaty
 