Tristan Du Pree
The main comments concern the fitting procedure and its description, and some lack of clarity about the main background, ttbar. There are also some comments about the background uncertainties (JES and
BTV) and the generators used. General analysis comments are given first; more detailed stylistic comments follow below.
= = = General analysis comments = = =
General comment: It is surprising that the analysis ends up with a ~20% uncertainty, of which a large fraction (13.0%) comes from b-tagging. It is expected that the b-tag data-MC scale factors have much smaller per-jet uncertainties in the relevant pT range. It appears that this analysis instead takes larger uncertainties from a control region, but the paper draft does not give sufficient detail to understand the reason behind this.
(Why is it needed at all? Wouldn't it prove the
BTV results wrong if it is serious (though even this is impossible to judge from the paper)? And similarly for the size of the JEC uncertainty used by all other Run-1 analyses? Why not perform a joint fit of the three regions? Why can we measure the ttbar cross section with 4-5% precision but have to put a 7.4% uncertainty here?)
- The procedure is needed because we see that the b-tag efficiency scale factors do not describe the data in our signal-region phase space. It does not prove the BTV results wrong, because we showed good agreement in the ttbar phase space measured by CMS. The JES uncertainty is also within the recommendation, and our analysis is particularly sensitive to the JES because of the jet veto. We do not perform a joint fit because it is less stable. We have updated the ttbar reference and now use a lower uncertainty.
General comment on backgrounds. Concerning the ttbar background estimate: why the 3-step procedure and not an all-in-one fit? The 3 steps may sound like a simplification, but the description is poor and it raises a number of questions:
--> what happens if after step 2 (multi-lepton JES fit) you go back to step 1 and fit again the b-tag scale factor? Does it converge? If you have this information please add a statement to the paper
- Yes it converges in closure tests at all steps.
--> what happens in step 3? How much do the JES and b-tag scale factor change in the final fit?
- They do not move from their new central values
--> why is the uncertainty on the JES factor fixed to 100% of the fitted deviation from unity? What was the actual precision from the fit?
- The uncertainty on the JES is now taken from the fit
General comments on JES:
--> It is not clear what 1.3 sigma_JES means: is it a uniform scale factor, or 1.3 times the eta- and pT- dependent envelope of the JES uncertainties? Or ...?
- It is the envelope, and we have added the corresponding percent normalization difference in the ttbar sample (3.3%)
--> It would be useful to indicate the approximate size of sigma_JES in the paper
--> It would be interesting to give the sign of the fitted JES variation: up or down? I hope it is in the same direction as the 1-sigma variation fitted in the Top mass analysis?
- They go in opposite directions which is why the approach of doing a simultaneous fit does not work.
General comments on ttbar:
--> In general I think the impact of ttbar systematics should be checked (also top pt...). Is the hadronization correction for
MCFM unreliable? Better use aMCatNLO and Powheg.
- We see no impact from top pt reweighting, which is expected given that we see no shape difference even when varying the JES. The hadronization correction is reliable, and we do not have a Powheg sample.
--> What cross-section was used for the main background ttbar? LO, NLO, or NNLO? This is relevant to make sense of the ttbar scale factors measured in section 6. The ATLAS-CMS recommended NNLO cross-section at 8
TeV is 253 pb, see
https://twiki.cern.ch/twiki/bin/view/LHCPhysics/TtbarNNLO
- We have updated the ttbar cross section reference (from 239 pb, the CMS-only result, to 241.5 pb, the joint CMS/ATLAS result), which has a lower uncertainty.
--> section 6: some comments on the procedure using ttbar. The experimental cross section uses a loose b tag; is it realistic to have this 15% discrepancy for 2 tight tags? The JES is "measured" only from the number of events passing cuts. Recommend checking all ttbar model uncertainties (scale, matching, mt, MG vs Powheg, fragmentation, ...).
- We see good agreement with the experimental value when using loose b tags. It is only when we move to our phase space that the disagreements mount, as was presented to the BTV POG and accepted as being related to the b tagging efficiency
--> Are the fitted scale factors compatible with the measured 8
TeV ttbar cross-section and known values for b-tagging SFs ?
- Yes, they are compatible, but with a central value that better describes our phase space
General comments on b-tagging:
--> l.192 : your fit of the b-tag efficiency is basically consistent with one, which is good news since you are already correcting the simulation with data-MC btag scale factors (l 74), which, presumably, come from the BTag recommendation. But your error is very large, 100% of the correction. In the subsequent fit, would you get a different result if you were simply using the results from the BTag group ? i.e. do not apply any additional scaling to the b-tag efficiency, but treat it as a nuisance parameter in the fit, with uncertainties given by the BTag group? My point is that, by applying the additional scaling of 1.15, you may easily bias the subsequent fits - and indeed your fitted JES is a bit off.
- This is along the lines of the first approach this analysis tried, to do simultaneous fits. The results are consistent within uncertainties, but less stable.
--> Is this after applying the standard (pT-dependent) b-tagging SFs prescribed by
BTV? Please clarify this in the paper.
- Yes, we have applied the SFs.
--> In fact, in ttH and tt+bb analyses one typically goes one step further and derives corrections to improve the data-MC agreement of the shape of the CSV discriminator distribution... has this been tried?
- Taking advantage of the pT dependence of the scale factors already derived by the BTV, we found that we only need to adjust this relative normalization.
--> l.166: here an efficiency scale factor is discussed and in line 167 a reweighted sample is discussed. Does this mean that the efficiency scale factor is actually a reweighting? Or what does "reweighted" mean in line 167?
--> l.203-205: can't we also fit simultaneously the signal region and the two control regions, with the Wbb signal strength as the fit parameter, treating the btag efficiency and the JES as nuisances that can vary within the uncertainties provided by the POGs? If the knowledge that we (the BTag group) have on the b-tag efficiencies is better than what comes out solely from the analysis of your control region, using that smaller uncertainty in the fit instead of your 13% would reduce the uncertainty of your measured cross-section.
- We have found that we need to separate the fit to isolate the effects of b-tagging and JES. We now take the uncertainty directly from our fits instead of inflating them to 100%.
General comments on fit
--> General comment: suggested to perform a simultaneous fit.
--> l.169: what does "jet energy scale adjusted mean"? Does this mean that a residual jet energy scale factor is determined? How much do these scale factors differ from the official b-tag and JES factors? Could the discrepancies not be due to other reasons? How do you know that it must be the b-tag and JES that creates the discrepancies? From Table 1 it looks like the effect is rather large (b-tagging is 13% off???). Has this been discussed with the b-tag POG?
- The text has been clarified. The JES is shifted, new templates are made, and these are used in the fit. The scale factors are consistent with the official b-tag and JES factors. We are confident that this is coming from the b tag, as discussed with the BTV POG, and the JES shift is expected (and also observed) because of our jet veto. Yes, this discrepancy has been discussed with the BTV POG.
--> l.154: it is written that b-tagging has an impact on the shape of distributions, but Table 1 lists b-tagging only as normalisation uncertainty. Which one is true? Where is the b-tagging shape uncertainty taken into account?
- b-tagging has an impact on the shape of distributions in principle, but in practice in this phase space, it only affects the normalizations.
--> l.189: is this "fit in the tt multijet region" the same that was already discussed in line 164-167? It's just 20 lines above, I don't think that the reader needs to be reminded.
- Yes, we start the Results section with a quick overview of the procedure
-->l.192: why does the relative uncertainty on the scale factor increase when combining the 2 channels?
- This was a result of our inflation to 100% and is no longer the case
Ttbar: It would be good to check the effect of the main systematic uncertainties on the ttbar prediction, to check how much of this is absorbed in the fit:
--> variation of renormalization and factorisation scales
--> top pT reweighting
- We see that the variation of renormalization and factorization scales give a final uncertainty of 10% and top pT reweighting has no effect
General comments on generator:
--> l.259: Is
c-had@LO really suitable for NLO
MCFM? Better use
aMC@NLO or Powheg.
- We compared aMCatNLO and Madgraph (+Pythia), obtaining similar results for the hadronization factor using Wbb samples.
--> l.262: DPS discussion unclear. According to Pythia8 manual, the default
MPI includes 2->g->bb (s channel). So only the t-channel is missing in the 4F samples?
- The problem is not Pythia8 but the hard interaction. The 4F sample produces a W+bb final state directly at the hard-interaction level. The contribution from events with WMuNu plus another interaction that gives the bb pair cannot be modeled simply with Pythia. Yes, the subsequent MPI interactions (more than 2 b quarks) are included, but they are negligible in comparison.
--> l.265 : i.e. DPS increases the Wbb xsection by 10% ??? that sounds a lot ! how exactly is this DPS correction calculated ?
- Why is this a lot? Do you have a reference in mind, showing it should be less, that you could give to us? The value is cross-checked using Madgraph samples (inclusive WMuNu + bb production, explicitly done with DPS), compared to an older estimate assuming fiducial XSec_WMuNu x fiducial XSec_BB / sigma_Effective.
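The older estimate referred to in this response is presumably the usual DPS "pocket formula", with sigma_eff the effective double-parton-scattering cross section; as a sketch:

```latex
% DPS pocket-formula estimate for the W+bb contribution,
% as implied by the cross-check described in the response above
\sigma_{\text{DPS}}^{Wb\bar{b}} \approx
  \frac{\sigma_{\text{fid}}^{W\to\mu\nu}\ \sigma_{\text{fid}}^{b\bar{b}}}
       {\sigma_{\text{eff}}}
```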
--> l.270: The scale uncertainties on theory predictions mentioned in line 270 are not shown in Figure 5. Or are they included in the "PDF" uncertainty? Please clarify.
- Figure 5 has been updated
--> Fig. 5: Where are
aMC@NLO and Powheg? Should be more reliable than
MCFM+had.
- We used the samples that were available
--> l.95+104: since you use an NLO generator (POWHEG) for single-top production, why can't you stick to the POWHEG cross-section for the NLO normalisation (l.104)?
--> l.88-95: the justification for using the four-flavour scheme sample for the shapes (larger MC statistics despite the fact that 5F is deemed more accurate) sounds poor - unless the shapes in 4F and 5F agree well. Is that the case indeed ? at GEN level the statistics should not be an issue for LO samples. Anyway, l 130-131 contradicts what was just said - since it now says that the shape is taken from the 5F W+j sample ?
- The text has been reworded - "For the signal distributions, the \Wbb component of an inclusive \Wjets sample is used, with the shapes of the distributions taken from a dedicated high-statistics generated sample of exclusive \Wbb."
Data/sim agreement:
--> Figure 4: It is mentioned l.217 that there is an agreement between data and simulation. But there seems to be a slope in the lepton pT ratio (the data seems softer than MC). Has this difference been quantified / studied?
- We see the slope and thought that others might find this distribution interesting.
--> 217: "Agreement between data and simulation is observed." How is this quantified? By eye, it looks like there is e.g. a downward trend in the slope of the ratio of the lepton pT distributions
- Statement removed. We will leave it to the reader to decide if they would call this agreement or disagreement.
= = = Specific comments (grammar/spelling/style/etc) = = =
There are still quite a few typos in the document that could easily have been caught with a spell-check. Please run one before the next iteration.
- Abstract: very long sentences
- Abstract, 2nd half: this tries to be a sentence but isn't. "agrees" --> "in agreement" or re-write
- Abstract: "the W boson" -> "W bosons" ? (also for consistency with line 4 in the Abstract)
- The first instance refers to the process itself, the second to our measurement.
- l.3: "experimental searches" -> "searches"
- l.3: "searches" -> "searches for ..."
- l.4+5: 2x "production"
- l.4+5: swap "vector boson in association with Higgs boson" -> "Higgs boson in association with vector boson"
- l.8: "an extension" -> "extensions"
- l.9: "lepton(s)" -> "leptons"
- l.8-10: what different models? It feels like a reference would be useful here.
- There are many searches ongoing in CMS in this general category and we did not want to mention a specific one at the exclusion of others.
- l.10: "jet(s)" -> "jets"
- l.11: "associated jet dynamics" -> "the dynamics of the associated jets"
- l.12: drop "to"
- l.18: "luminosity" -> "luminosity at the LHC"
- l.19: remove comma after 8
TeV
- l.28: "where pseudorapidity is defined as..." should be moved to section 2 (l.35)
- l.34: "loosely isolated" is not clear, and the isolation is described a few lines below anyway.
- We expect that most readers will be familiar with the concept of lepton isolation so it was worth mentioning specifically that the triggers have some isolation requirement. As you point out, isolation is described more fully a few lines below.
- l.34: this seems to be the momentum threshold of the trigger (and not the muon selection...). This could be said more explicitly.
- changed "triggers with a loosely..." to "triggers which require a loosely..."
- l.36: remove "then"
- l.42: "calorimeter" --> "the calorimeter"
- l.42: "Both the muon and electron" --> "Both the muon and the electron"
- Eq.1: most symbols in this equation are not explained. Do we assume that they don't need explanation?
- Eq.1 : need to say that \Sigma p_T^charged is restricted to charged particles that come from the primary vertex.
By the way, do you use only the charged hadrons, or also charged leptons in this sum ?
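For comparison, the delta-beta-corrected relative isolation commonly used in CMS has the form sketched below; whether Eq. 1 matches this exactly, and which particle classes enter each sum, is precisely what the paper should spell out:

```latex
% Standard CMS delta-beta-corrected relative isolation (for comparison;
% the paper's Eq. 1 should be checked against this form)
I_{\text{rel}} = \frac{\sum p_T^{\text{charged}}
  + \max\!\left(0,\ \sum E_T^{\text{neutral}} + \sum E_T^{\gamma}
  - \tfrac{1}{2}\sum p_T^{\text{charged,PU}}\right)}{p_T^{\ell}}
```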
- l.54: ...of the W boson...
- l.61: "with a distance parameter of 0.5" -> "with a parameter R=0.5"
BTW: Ref.20 mentions "radius parameter", maybe good to follow that convention.
- changed to "radius parameter"
- l.72: "that both jets" --> "two jets that"
- l.72: "which has a" --> "with a"
- l.73: rewrite as "probability of 0.1% (1%) for light (charm) jets"
- l.74-75 : the data/MC scale factors for the b-tagging efficiencies: besides pT, they do not depend on the eta of the jet ?
- They do not, they are the CSV Tight scale factors.
- l.77: "for the signal enhanced dataset" is not defined, so better to drop
- l.77 : "signal enhanced dataset" sounds weird. Why not simply "After all sel requirements described below are applied,.."
- l.80: comma after "simulation"
- l.82: drop "events with"
- l.82: "+jets or with ttbar" -> "+jets, and ttbar+jets events"
- l.85: qcut = 40 in ttbar
- l.86: you mention V+jets and ttbar+jets, but how about photon+jets?
- l.87: 'inclisuve' --> "inclusive" (please run spell-check)
- l.87: "The normalization" -> "The relative normalization"
- l.88: at this stage, you have not yet said that the analysis relies on a fit of the MT distribution, so the reader does not know which "shape" you are talking about; you could use "shapes" (plural) instead.
- l.90: "b-quark" versus "b quark" in line 247. Please make it consistent. I think "b quark" is the right spelling.
- l.93: 'mu_F' not defined
- l.95: "extraction." -> "extraction, MT."
- l.96: "Powheg 2.0" -> "Powheg~2.0" (Latex)
- l.97: "in the" -> "with the"
- l.99: Rewrite as "Diboson samples are generated and hadronized with Pythia 6.4 at leading order (LO) using the
CTEQ6L PDF set and the Z2* tune"
- l.101-103: which PDF?
- l.105: "colleced" -> "collected" (spell check)
- l.105 : something like a report number is missing in the reference [42]
- l.108: drop "as are"
- l.109: "additional" -> "additional simultaneous"
- l.119: "the control regions" -> "the data in the control regions"
- l.121-122: drop this whole sentence concerning the muon reconstruction/selection, it was already described before (l.40-50)
- We want to state all of the selection cuts concisely in one place.
- l.125: "in" -> "for"
- l.129: for the tt-multilepton region, do you require that both leptons have opposite charge ?
- We do not make this requirement. With the selection as is, the region is already pure ttbar.
- l.130-131: cf above, versus l 89 ?
- l.131+138: the notation 'W+udscg' leads to confusion because of the 'c'.
- W+c is included in the category (W+cc is not).
- l.132: "done at the truth generator level" -> "done using generator-level information"
- l.132+137: why at least 1 b jet for W+bb, but 2,4,... jets for W+cc?
- W+b is CKM suppressed while W+c is not.
- l.135+139: "generated" -> "generator"
- l.136: does this include the lepton from W? The last sentence of the paragraph, l.141, should be moved upwards, in l.136.
- Yes this includes the lepton from the W.
- l.137: you demand at least 2 charm jets for an event to be considered as Wcc. While at least one b-jet is required for an event to be flagged as Wbb. Why do you make this "even" (l.137) distinction for Wcc ?
- W+c happens while W+b doesn't.
- l.139-141: I did not understand where you need this ?
- How about where it is then?
- l.139: "for the final" -> "for final"
- l.144: "cross sections as" -> "cross sections and shapes as"
- l.145: "for each region" -> in each control and signal region"?
- l.147: "be anti-isolated" -> "not be isolated"
- l.145-153: make one paragraph (it's all about QCD)
- l.152-153: This sentence would fit better at the end of the previous paragraph (l 149). It is not obvious that the QCD mT shapes do not depend on the lepton isolation. Can you quantify that, and/or add a systematic uncertainty to account for it? By the way, how did you choose the sideband cuts I > 0.2 (0.15) (l.147)?
- This is covered by the 50% systematic uncertainty and was determined by looking at Isolation vs. MT. We chose our QCD isolation sideband by inverting the cut at the loose isolation threshold.
- l.158: "on the third" -> "on a third"
- l.161: "JES" -> "JES variations"
- l.162+163: Some lack of clarity about the W+bb region vs the process. Suggestion: "W+bb has" -> "W+bb process has"
- l.163: should contain jets from parton shower
- at leading order W+bb has exactly two jets
- l.164-171: why don't you fit the two ttbar control regions simultaneously? That would properly account for the correlations between the b-tagging efficiency and the JES.
- The shapes are too similar for the fit to distinguish between the systematics: both uncertainties are found to have essentially no effect on the shapes of the distributions, only on the normalization. We therefore use the three-step process to isolate the different effects. Note that this is a different statement from saying that our choice of fit variable does not discriminate between the signal and backgrounds.
- l.165: "reigion" -> "region" (spell check)
- l.166: "effeciency" -> "efficiency" (spell check)
- l.167: "averaged" -> "combined"
- l.166-167: "rescaling factor...reweighted samples" --> is it one factor, or per-event weights?
- l.166: "estimation" --> "estimate"
- l.165-167: Here only the b-tag efficiency rescaling factor is mentioned as parameter of the fit. It is not clear if there are some other nuisance parameters considered, or if the b-tag efficiency is considered as the only source of data/MC disagreement.
- The b-tag efficiency is considered as the main source of data/MC disagreement, though the other uncertainties are applied.
- l.167-169: Same thing for the second step, is it only the JES that is fitted? In particular is the b-tag efficiency fixed here?
- The b-tag is fixed here, and we confirm with a closure test on the post fit result (applying it in both the tt-multijet and tt-multilepton regions) allowing it to float. The result of these closure tests is unity.
- l.170: "properly" -> "better"
- l.171: "a fit in" -> "a fit to MT in"
- l.173: "major" -> "main"
- l.179: "affecting only" -> "in"
- l.180: "uncertainties affecting both the shapes and normalizations." -> "uncertainties in the shapes."
- l.182: "uncertainty does" -> "uncertainty further does"?
- l.186: PDF uncertainties not defined
- l.187: give more details of what you do for the PDFs. The reference to the
PDF4LHC recommendation should be moved here (l.231)
- l.191: "Fig. 1" -> "Fig.~1"
- l.191: "The measured" -> "The central values of the"
- l.192: "averaged to" -> "combined to"
- l.193: "allows the following fits to vary the rescaling factor between 1.01 and 1.29" --> Please remove, it is trivial (assuming this is a 1-sigma interval, not a step function)
- l.193: "where the uncertainty allows the following fits to vary the rescaling factor between 1.01 and 1.29". This sounds like there are cutoffs at 1.01 and 1.29, but it is probably meant that there is a Gaussian prior with width 0.14?
- line removed, but you are correct
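Schematically, the constraint implied by the numbers quoted above (a rescale factor of 1.15 with an uncertainty of 0.14) is a Gaussian prior on the nuisance parameter r, not a hard window:

```latex
% Gaussian prior on the b-tag efficiency rescale factor r
% (numbers taken from the comments above: 1.15 +- 0.14)
\pi(r) \propto \exp\!\left[-\frac{(r - 1.15)^2}{2\,(0.14)^2}\right]
```

so values outside [1.01, 1.29] are disfavored but not forbidden.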
- l.193-194: "uncertainty allows ... 1.29." -> "uncertainty is chosen to cover the individual ranges."
- procedure changed, now taking uncertainty directly from fit
- Table 1 caption: "major" --> "main"
- l.194: "rescaled", but before it was "reweighted"? Make consistent.
- Fig.1: Suggestion to improve the readability of the legend, possibly combine several backgrounds into "others" (for example diboson + DY + gamma/jet), and explain in the caption what "others" means.
- We show all of the independent samples which are allowed to float in the fitting procedure.
- Fig.1 Caption: "highest" -> "last" and "as output from" -> "after"
- l.196-202 and Fig 2: the e+mu events shown in Fig.2 left and Fig.2 right must be the same, the only difference being that MT(mu nu) is shown on the left, while MT(e nu) is shown on the right. I.e. you perform two separate fits, one to MT( mu nu) and the other one to MT(e nu)? And they happen to give the same result for the JES? Please clarify.
- Correct, and there is also the difference in the trigger required.
- Table 1 caption: "triggering" -> "trigger"
- Table 1 caption: "PDF and scale choices" -> "PDF uncertainties and scale choices" (it's not because of the PDF choice, or is it?)
- Table 1: capitalize "Uncertainty" and "Variation" and "Effect" and "Uncorrelated" and "Normalization" and "Norm." and "Correlated" and "Luminosity" and "Theory"
- Table 1: also align 'Correlated' to the middle of the table and change "b tag rescale" -> "b tag eff rescale"
- Table 1: why both "JES" and "JES rescale"?
- One was for the JES uncertainty and one was for the uncertainty on the rescaling. We now take the uncertainty from the fit, and the table has been updated accordingly.
- Tab 1: no uncertainty on the c --> b and on the light -> b mistag rates ?
- These are the b tag efficiency corrections provided by the POG and used as input in Step 1.
- Tab 1: you use an uncertainty of 7.4% on the ttbar cross-section? From TOP-14-016 (your ref 42 ?) we measure it better than that! Please check, since that is one of your dominant uncertainties in the end.
- changed source of ttbar cross section, and updated uncertainty
What are the uncertainties used in the fit of the control regions ? (since Tab 1 refers to the fit of the signal region) E.g. you account for the luminosity uncertainty here ?
- The same uncertainties, with the exception of the rescale factors, which have not been fit for yet. Luminosity is added separately.
- l.199: "multilepton enhanced data set" -> "multilepton-enhanced control region" ?
- l.200: "simulation with" -> "simulation and"
- l.201: "sigma_JES." -> "sigma_JES for subsequent fits".
- l.205: "The correlation between different sources of uncertainties is taken into account." Sources are not independent? Perhaps "correlations between channels"?
- l.205: "correlation between the different sources of uncertainties": you do not mean that the JES is correlated with the UES for example? You probably mean instead the "correlation across all simulated samples", as said in the caption of Tab 1.
- correlation across all simulated samples
- l.208: "boson" -> "boson would"
- l.208: "event yield." -> "event yield and are not considered."
- Fig.2: the caption refers to the "muon sample" and the "electron sample" as if they were different event samples, while I understand that these are the same events ?
- Largely the same events, but not exactly, and mT is calculated with a different lepton in each case.
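For reference, the transverse mass fit in these plots is presumably the standard definition, built from the chosen lepton and the missing transverse energy (the exact definition should be checked against the paper):

```latex
% Transverse mass of the lepton + missing-energy system
M_T = \sqrt{\,2\, p_T^{\ell}\, E_T^{\text{miss}}
      \left[\,1 - \cos\Delta\phi\!\left(\ell,\ \vec{E}_T^{\text{miss}}\right)\right]}
```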
- Fig.2 legend with 'c': 'W+cc' and 'W+udscg'
- W+c and W+cc are two kinematically different processes and we would like to distinguish them in principle since we have some W+udscg contribution but essentially no W+cc in the signal region.
- Fig.2 caption: "multilepton enhanced data set" -> "multilepton-enhanced control region"
- Fig.2 caption: "highest" -> "last"
- Fig.2 caption: "as output from" -> "after"
- l.210: "the three fits" -> "should be simultaneous"
- We do not understand this comment. Is the suggested sentence this? "...are also produced by applying the results from should be simultaneous to the simulated samples"
- l.212: "b tagged" -> "b-tagged"
- l.214: "sideband" -> "region" (except if
DeltaR plot is for MT>30)
- l.215: "Table 2" -> "Table~2"
- l.216: "in the combined lepton channel" -> "combining both lepton flavors"
- l.216: "Fig. 4" -> "Fig.~4"
- l.217 and Fig.4 : in contrast to the statement made in l.217, the agreement seen in Fig 4 is not so good !
- Table 2: Give total sum of backgrounds? Suggest to add line under W+bb (to visually separate sig&bkg).
- Table 2: Five uncertainties on W+udscg yields?
- The W+udscg normalization changes on the order of 5%.
- Table 2: "muon" -> "Muon" and "electron" -> "Electron" and "0.0" -> "-"
- Table 2: align numbers right (not left)
- Table 2: for "signal strength / combined ", which denominator? FEWZ?
- Table 2: "total uncertainty of the measurement" is not true, since that is (or should be) stat+syst+theo
- changed to "total uncertainty of the fit"
- Fig.3 caption: "highest" -> "last" and "as output from" -> "after"
- Fig.4 caption: "highest" -> "last" and "as output from" -> "after"
- Fig 3 and 4: it is really impossible to distinguish the Fit Uncertainty band in b&w
- l.219 and equation: "N_signal" -> "N_reconstructed" ?
- l.221: "from simulation," -> "from simulation, reconstructed in two fiducial regions"
- l.222: drop "correction factors"
- l.224: "in the following manner" -> "as follows"
- l.224+227: "Madgraph" not written in style consistent with previous occurrences"
- l.225: you should give here the fiducial cuts
- l.225-227: there is no reason why a K-factor computed for inclusive W production would also apply to Wbb in a given phase space. By the way you do not say what you use this k-factor for.
- The W+jets sample is used and the cross section taken from there - but the normalization of the Wbb should not matter. A higher initial cross section would lead to a smaller signal strength and the final measured cross section would be the same.
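The cancellation described in this response can be made explicit. If mu is the fitted signal strength relative to the assumed simulated cross section, then schematically:

```latex
% Why the assumed W+bb normalization cancels in the measurement:
% the fitted signal strength scales inversely with the MC cross section
\mu = \frac{N_{\text{data}}^{\,Wb\bar{b}}}{N_{\text{MC}}^{\,Wb\bar{b}}}
    \propto \frac{1}{\sigma_{\text{MC}}},
\qquad
\sigma_{\text{meas}} = \mu\,\sigma_{\text{MC}}
\quad\Rightarrow\quad
\sigma_{\text{meas}} \text{ is independent of } \sigma_{\text{MC}}
```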
- l.227: style of "FEWZ" not consistent with l.102
- l.228: "11 (13)%" -> "11% (13%)"
- l.229: drop comma
- l.232: I thought that the
PDF4LHC recommendation involved only the global QCD fits (i.e. CTEQ, MSTW and NNPDF)?
- We varied all those listed here, but HERA was not the largest anyway, so it does not set the bound on the uncertainty.
- l.233: is the variation (2,2) and (1/2,1/2)? Unclear from text.
- correct, added "simultaneously"
- Tab 3 : why is the syst uncertainty quite larger in the electron channel than in the muon channel?
- The electron channel has a gamma+jets background, and electrons are not as clean as muons.
- Table 3: drop comma before "pb" and add line under Muon+Electron, to visually separate from Combined
- l.242: "using
TuneZ2*" -> "using the Z2* tune"
- l.245-251: sounds weird. It starts with some general statement and ends up saying that actually, the phase space considered here won't bring any of the "important feedback" alluded to in l.246.
- Showing that two schemes give the same result is something we consider "important feedback".
- l.255: "analysis calculated as" -> "analysis, where it was found to be" (but if same method, fully correlated?)
- l.257: "the statistics" -> "the limited statistics"
- l.261: what is a "CMS simulation" ?
- changed to just "simulation"
- l.263: ", therefore" -> ". Therefore"
- l.268-271: replace by "same way as before for A x eff" (shorter)
- l.275-277: How do you compare 7&8
TeV?
- Fig.5 caption: no need to refer to colors here. "blue" --> "inner" (2x) and "black" --> "outer" (2x)
- Figure and caption updated
- Fig.5: statistical uncertainty invisible in B&W
- Figure and caption updated
- Fig.5: Drop "CMS 2012" in legend and maybe add fiducial region cuts?
- Figure and caption updated
- Fig.5 caption: "blue" not understandable in B&W
- Figure and caption updated
- Fig.5 caption, last sentence, rewrite as: "effects of DPS are included in the generated samples so the DPS correction is not needed."
- l.279: "the W boson in" -> "the W bosons in" (consistency with 281)
- Now the same as in the abstract.
- l.290: "within the level of 1 S.D." -> is this up or down?
- References: please check bibliography with titles as in Inspire
- References general comments: "Phys.Lett.B 99" -> "Phys. Lett. B99"and "Phys.Rev.Lett."->"Phys. Rev. Lett." and "gamma" -> "\gamma" and "sqrt"->"\sqrt" and "pbar" -> "\pbar" and "Phys.Rev.D 99" -> "Phys. Rev. D99" and "Eur.Phys.J. C 99" -> "Eur. Phys. J. C99" and "Comput.Phys.Comm." -> "Comput. Phys. Comm" and "Nucl.Phys.Proc.Suppl." -> "Nucl. Phys. Proc. Suppl." and "Nucl. Instrum. Meth. A 99" -> "Nucl. Instrum. Meth. A99"
- [18]: "JHEP (2011)" incomplete
- reference taken from inspire
- [27]: "1106" -> "06" (11 is already in the year)
- [42] Citing an unpublished ttbar cross-section measurement technical report in a paper?
- [49]: "1403" -> "03" (14 is already in the year)