-- AustinBaty - 2019-08-07

Matt Herndon

Measurement of Z boson yields and azimuthal anisotropy in 5.02 TeV PbPb collisions with CMS

Organized by line number in the AN and where appropriate with the corresponding line numbers in the paper.

35) ( line 18 in the paper draft)
The discussion does not make clear whether the uncertainties are due to the assumptions about the nature of the heavy ion collision, or whether the uncertainties are due to the calculations done within the Glauber Model, or other issues. Please clarify.

52) (line 26 of the paper draft)
Please include a reference for the "scalar-product" analysis method that explains the method, defines the various quantities that are calculated, and explains the physics importance of these calculations and ideas in the context of heavy ion physics.
After some searching: https://arxiv.org/abs/1209.2323 seems to be a standard and comprehensive reference.

I have included this reference where appropriate in the paper draft.


Please include the reference above or an equivalent reference at this line and spend a line or two describing the physics of each of the quantities you measure. The discussion of the physics should make plain what the expectations are for Z boson production as opposed to particles that experience hydrodynamic flow.

Some discussion is given later, but this is really material that should be in the introduction.
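
For reference, the quantity measured with this method is (schematically, in the notation of the reference above; the subevent labels A, B, C are illustrative, not necessarily those used in the AN):

    v_2\{\mathrm{SP}\} \;=\; \frac{\left\langle Q_Z \, Q_A^{*} \right\rangle}
        {\sqrt{\left\langle Q_A Q_B^{*} \right\rangle \left\langle Q_A Q_C^{*} \right\rangle / \left\langle Q_B Q_C^{*} \right\rangle}}

where Q_Z is the Q vector built from the Z boson, Q_A, Q_B, Q_C are Q vectors from reference subevents (e.g. the HF calorimeters and the tracker), the asterisk denotes complex conjugation, and the angle brackets denote averages over events. A particle species that does not participate in the hydrodynamic flow of the medium is expected to have v_2 consistent with zero.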

59) (in the paper starting at line 62)
There are some details of the MC generation that can be extracted from Table 2. However, you should explicitly document what MC are used, what PDFs (or equivalent nuclear distributions) and what tune of the underlying event with appropriate references.

A much greater level of detail is now given in the paper draft with appropriate references and MC versions/tunes.


68) (line 75 in the paper)
For my information.
I don't understand the origin of this bias in the MC. You are also simulating a hard process in the MC. Is there something non-physical about the way that the MC simulates events with Z bosons that leads to an incorrect centrality distribution? Please explain.

The way hard probes are simulated in our MC is not physical, because a single hard probe is injected into a separately generated underlying-event background. This means there is by definition exactly one hard probe per MC event, regardless of the event centrality. However, in data, hard probes such as jets and Z bosons are more likely to occur when the two nuclei collide head-on, because many nucleons collide with each other at the same time, giving a higher probability of a hard scattering. I have tried to clarify this in the paper.
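
For illustration, correcting for this amounts to reweighting the MC so that its centrality distribution matches that of Z boson events in data. A minimal sketch in Python, assuming hypothetical centrality histograms with identical binning (the names and binning are illustrative, not those of the AN):

    import numpy as np

    def centrality_weights(mc_counts, data_counts):
        # mc_counts:   centrality histogram of the embedded MC (roughly flat,
        #              since exactly one hard probe is injected per event)
        # data_counts: centrality histogram of Z boson events in data
        #              (biased toward central, high-N_coll collisions)
        mc_pdf = np.asarray(mc_counts, dtype=float)
        data_pdf = np.asarray(data_counts, dtype=float)
        mc_pdf /= mc_pdf.sum()
        data_pdf /= data_pdf.sum()
        # Per-bin event weight applied to the MC; bins empty in MC get weight 0.
        return np.divide(data_pdf, mc_pdf,
                         out=np.zeros_like(data_pdf), where=mc_pdf > 0)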

79) (line 71 in the paper)
The details of all these filters are not given. For instance pclusterCompatibilityFilter is not described beyond general terms. A reference should be given where they are described in detail.

More detail on what these filters do has been included in the paper draft.


119) (line 78 in the paper)
A reference should be given that defines the nuclear overlap function TAA

The standard HI reference for the Glauber model/TAA is now given in the paper draft.
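
For completeness, the usual Glauber-model definitions (following the standard heavy ion conventions) are:

    T_{AA}(b) \;=\; \int T_A(\vec{s})\, T_A(\vec{s}-\vec{b})\, \mathrm{d}^2 s ,
    \qquad
    \langle N_{\mathrm{coll}} \rangle \;=\; \sigma_{NN}^{\mathrm{inel}} \, \langle T_{AA} \rangle ,

where T_A is the nuclear thickness function of a single Pb nucleus and b is the impact parameter.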

190) (line 116 in the paper)
Typically an effect like this, especially if it is large, would be accounted for by some type of unfolding procedure. This would allow you to iterate the procedure in a controlled way to account for the fact that if the measured pT spectrum has a different slope than the predicted one, the correction factors would be different. At the very least, if you don't unfold the data it is necessary to assess the bias in the measurement and assign a systematic uncertainty, though this is not the preferred solution. To do so you would recalculate the correction factors after reweighting the MC to match the observed pT distribution in the data and compare to your current results.

We now use a matrix-inversion unfolding procedure (recommended by the statistics committee) to correct for this effect. The MC has also been reweighted to be more similar to the data distribution.
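
As a minimal sketch of what matrix-inversion unfolding does (assuming a square response matrix with identical truth and reco binning; the names are illustrative):

    import numpy as np

    def unfold_by_inversion(response, measured):
        # response[i, j]: probability for an event generated in truth bin j
        #                 to be reconstructed in reco bin i
        # measured:       background-subtracted reco-level spectrum
        # Solves  measured = response @ truth  for the truth-level spectrum.
        return np.linalg.solve(np.asarray(response, dtype=float),
                               np.asarray(measured, dtype=float))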

206) (line 123 in the paper)
In collisions involving matter, specifically protons and neutrons and no antiprotons, you would not expect the number of same-sign and opposite-sign pairs to necessarily be the same. For instance, positive same-sign pairs may be preferred. However, this could easily be checked, and corrected for if they are not the same, by running a large QCD MC sample through your selection.

Do these backgrounds have a specific shape in the variables you measure? If you are subtracting them as a function of a variable or variables you should include that information here. Otherwise you should state what you do more explicitly.

The subtraction is done as a function of whatever variable is being measured. The exact figures showing this are in the Analysis Note. We will try to clarify this in the paper draft.
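
Concretely, the subtraction is a bin-by-bin operation in the measured variable, along the lines of the following sketch (names are illustrative):

    import numpy as np

    def subtract_same_sign(opposite_sign, same_sign):
        # opposite_sign, same_sign: candidate counts binned in the measured
        # variable (e.g. Z pT, rapidity, or centrality); the same-sign pairs
        # estimate the combinatorial/misidentification background.
        return (np.asarray(opposite_sign, dtype=float)
                - np.asarray(same_sign, dtype=float))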

Has it been checked whether W + random leptons contaminate the same sign sample and should be subtracted off or otherwise accounted for since you separately account for that background?

212) (line 129 in the paper)
You design a selection to remove EM background but you make no comment on whether there is any expectation of residual contamination after the selection. If the remaining EM background contribution is expected to be negligible, you should state that and give some proof in the form of a study or a reference to justify that statement. If it isn't negligible, then you should estimate the contribution of the background.

We now choose working points for the EM background cut that correspond to 90% background rejection, based on studies of EM processes with the STARLIGHT MC generator. The remaining 10% of this background corresponds to a <0.1% contribution to the total yield.

238) (line 148 in the paper)
The procedure you describe here seems to be aimed at normalizing the MC-based background to a more data-driven estimate of the cross sections, based on all the non-data-driven background. Is this procedure meant to account for issues like higher-order QCD corrections, specific issues in the heavy ion collision modeling, or reconstruction issues? You should give an explanation for why you do it this way. If the reason is higher-order QCD corrections, then normalizing ttbar production to a Z boson production process would not seem justified, given that ttbar production is largely gluon initiated. Also, depending on what you are trying to correct for, treating the electron and muon channels separately doesn't seem like a good idea unless the main effect you are correcting for is a reconstruction issue.

257) (163 from the paper)
You've referred to the scalar-product method as both modern and well established. Perhaps it's better to describe it by discussing what biases it eliminates and to remove the adjectives that only describe it in general terms.

I have removed these terms from the paper draft.

line 270) (line 175 from the paper)
You state that the Z boson reconstruction efficiencies are accounted for by applying weights in the calculation of its Q vector. Shouldn't this also be done for the Q vectors calculated from HF and tracker activity?

It seems that this effect may be taken into account by the centrality calibration discussed in the next section. If so, it would be good to state what corrections and procedures are used in calculating the (non-Z) Q vectors here in this section.

You are correct that this is already accounted for with a calibration. I have added a short section in the data samples section specifying that the global event Q vectors are flattened and recentered (with some references to explain what that means).
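
For reference, recentering in its simplest form subtracts the average Q-vector components for a given event class and normalizes by their spread; flattening then applies an additional azimuthal (shift) correction, not spelled out here. A minimal sketch, with illustrative names:

    import numpy as np

    def recenter(qx, qy):
        # qx, qy: raw event-by-event Q-vector components for one detector
        # (e.g. HF) within one event class (e.g. a centrality bin).
        # Subtracting the means removes detector-acceptance biases.
        qx = np.asarray(qx, dtype=float)
        qy = np.asarray(qy, dtype=float)
        return (qx - qx.mean()) / qx.std(), (qy - qy.mean()) / qy.std()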


line 306)
A 10% uncertainty seems fairly aggressive for the ttbar contribution. However, that depends on how its cross section was determined, which is not well described since the MCs are not described in detail.

This has been changed to 20%. These backgrounds are very tiny compared to the total yield in the analysis, and this uncertainty will remain negligible unless it is a >100% variation.

Figure 24) Figure 3 in the paper)
Why do you only show the Glauber uncertainties rather than a Glauber model prediction with uncertainties?

The Glauber model itself is only used to calculate the value of TAA, the nuclear overlap function, in heavy ion collisions. It does not model or make any predictions about the yields of particles themselves. If one wants a prediction, the Glauber model must be convolved with some other MC generator model that calculates particle yields.
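
Schematically, the role of the Glauber model in a comparison of this kind is only to provide the normalization of the binary-scaling expectation,

    \frac{\mathrm{d}N_Z^{\mathrm{PbPb}}}{\mathrm{d}y} \;\approx\; \langle T_{AA} \rangle \, \frac{\mathrm{d}\sigma_Z^{pp}}{\mathrm{d}y} ,

so a band of "Glauber uncertainties" reflects only the uncertainty on the average TAA, while the pp cross section itself must come from data or from a separate calculation.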
