Comments after CWR V2

Concerning the updates (slides attached):

  • the changes to the statistical error for the e/mu channels separately (slides 2-3) look ok to me (except that the bottom left plot in slide 3 is, I think, the one obtained with RooFit and not with TFractionFitter). In the new plots the central value is still the one obtained including MC errors, right?
  • on slide 4, if I compare the combined plots with those obtained with RooFit, I still find that the statistical error is much larger, especially in the c-jet pt distribution. Do you understand why? Does the statistical error still refer only to the statistical error in the data? Is Combine now run with systematic errors fully correlated? (as written in lines 244-246 of the new draft)
The Z pt statistical errors in the latest version are approximately equal to the statistical errors in the version obtained with RooFit. For the c-jet electron channel the statistical errors are larger, since no regularization is used in the unfolding.
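The statistical-error inflation from unregularized unfolding can be illustrated with a toy example (all numbers below are hypothetical, chosen only to show the effect, not taken from the analysis): propagating the data covariance through matrix-inversion unfolding, V_unf = A^{-1} V_data (A^{-1})^T, where bin-to-bin migrations in the response matrix A amplify the diagonal statistical errors.

```python
import numpy as np

# Toy 3-bin response matrix with sizable bin-to-bin migration
# (hypothetical numbers, for illustration only).
A = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
data = np.array([100.0, 80.0, 60.0])
V_data = np.diag(data)            # Poisson covariance of the observed yields

Ainv = np.linalg.inv(A)
unfolded = Ainv @ data
# Error propagation through the (unregularized) matrix inversion:
# V_unf = A^-1 V_data (A^-1)^T
V_unf = Ainv @ V_data @ Ainv.T

stat_err_data = np.sqrt(np.diag(V_data))
stat_err_unf = np.sqrt(np.diag(V_unf))
# Every ratio exceeds 1: without regularization the unfolded
# statistical errors are inflated relative to the data errors.
print(stat_err_unf / stat_err_data)
```

The larger the off-diagonal migrations in A, the larger the inflation; regularization (as in e.g. TUnfold) trades this inflated variance for a small bias, which is why removing it enlarges the quoted statistical errors.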

About the text, here are a few corrections (English wording of the suggestions to be reviewed by Joel)

  • line 5-6: Looking at CMS-SMP-15-009 this part on the intrinsic charm can be expanded: split the present sentence after "nucleon" and change it to "and possibly observing the intrinsic charm quark (IC) component in the nucleon. An IC component would enhance the Z+c production, in particular at large values of the transverse momentum of the Z boson and of the c jet." then add these references after "nucleon": <> <> <> Then we can start a new paragraph with "Associated production of a Z boson and a c jet is also an important background in searches for physics beyond the Standard Model (SM). For example, in supersymmetry …" fixed

  • line 9: do we really need reference [1] for SUSY models?

  • line 10-11: in the sentence "One of the main backgrounds for such a process is Z+c jet production with missing transverse energy" change "with missing transverse energy" to "with the Z decaying into neutrinos".

  • line 12-13: change to "will therefore enhance this and similar searches" in order to be less specific

  • section 3: avoid new paragraphs after lines 55 and 60 and split the sentence at line 61 after "PDF set". The rest of the paragraph, starting with "The program GEANT4…", should be moved to the end of the section.

  • lines 95, 175 and 176: "<" and ">" I think should not be used without a variable on the left; write "lower (greater) than" instead.
Changed at 95 to "greater than", but for 175 and 176 we are not sure it can be said this way; changed to "...with \PQc jets with generated \pt ${<} 30\gev$ but reconstructed \pt ${>} 30\gev$ ..."

  • line 107: "to correct for" -> "to remove" (to avoid repetition of "correction" and "correct")

  • line 110: "particle level jets" -> "generator-level jets"

  • line 131: in "presence of secondary vertices and tracks" there is no attribute to the tracks: maybe "displaced tracks" or "tracks with large impact parameter"
fixed, changed to "tracks with large impact parameter"

  • section 5: the first paragraph can be improved in my opinion. I suggest changing it from line 151 to "… in the c-tagged jet [5] can be used to discriminate between signal and background. Figure 2 shows the observed distributions of M_SV in the electron and muon channels, compared to the different signal and background contributions predicted by the simulation. Even if the discrimination power of this variable is reduced for c-tagged jets, each flavor component can be measured by fitting the distribution of M_SV."

  • line 165: "uncertaintiy" -> "uncertainty" fixed

  • line 167: "observed" -> "observed" fixed

  • line 168: "and data" (remove "observed") fixed

  • line 268-269: change to "while both MadGraph5_aMC@NLO and Sherpa at next-to-leading order"

  • One last thought: maybe we can add at line 271, just before the last sentence: "Since inclusive production of a Z boson associated with jets at next-to-leading order is in better agreement with data than at leading order [], this can be an indication that the charm PDF overestimates the charm content."

Changes after CWR

  • If two SVs are found inside a jet, the one with the larger significance is chosen.

  • The luminosity uncertainty is taken into account at the ee and mumu combination step.

Plots with a short summary of the updates are in updates.pdf


Style comments

Greg Landsberg

Abstract, L6: of a Z boson, and a jet consistent;


L2: The CERN LHC has delivered large samples; L6: parton distribution function (PDF); L9: add a comma before "leaving"; L13: to the charm quark PDF, is; LL14-15: of the Z+c jet production cross sections at sqrt(s) = 7 and 8 TeV can be found in Refs. [3] and [4], respectively. L16: pair of electrons or muons; LL16-17: to have originated by a charm quark by applying charm tagging criteria [5] (c-tagged jet). LL18-19: to the generator level. L21: add a comma before "as a function"; L22: add a comma before "are"; L23: a c jet enriched [or a c-jet-enriched] sample;

Data and simulated samples:

L47: Drell--Yan [en-dash, not a hyphen]; L50: (light-flavor jet). LL53,56: pp →Z+n jet [make the typesetting consistent with that on L50]; L55: In addition to the MG5\_aMC samples, the \SHERPA [11,12] event generator; L58: In addition to events with light-flavor and b jets, there is; L59: These samples were generated; LL68-69: the NNPDF3.1 [26] PDF set, and \GEANTfour [27] is used for the CMS detector response simulation. LL69-70: pp interaction in the same and nearby bunch crossings (pileup).

Object reconstruction and event selection:

L72: The particle-flow (PF); L74: add a comma before "while"; L85: the pT of all additional PF candidates; L98: \FASTJET [use the pen name!] package [31,32]. L104: nonlinear and nonuniform [CMS Style]; L105: particle-level jets; L122: measured using the ``tag-and-probe" method [37]; L126: and light-flavor jets; L128: and mistag rates for the b and light-flavor jets. L132: 1.2\% for light-flavor and 20\% for b jets. LL136-137: of the hadrons within the jet. L140: as Z+light-flavor jet (Z+LF) events.

Signal determination and unfolding:

L143: Signal extraction and data unfolding; LL149-150: for the Z+b and Z+LF jet backgrounds; L152: subscript "q" in Roman; L153: Z+q jet [q in Roman]; LL153-154: Sources of systematic uncertainties; L156: and the pT ranges in which; Tables 1-4 captions, L1: Values of the Z+LF, Z+c, and Z+b jet scale factors; L2: as functions of; L3: from the fit, while the second one is the systematic uncertainty. Also, move all the captions above the tables [CMS Style]. Tables 1-4 bodies, header line: SFLF [subscripts in Roman]; remove all the vertical dividers [CMS Style]; Fig. 2 caption, LL1-2: of the secondary-vertex mass (mSV) of the highest pT [superlative compound modifiers are not hyphenated!] c-tagged jet, for the electron (left) and muon (right) channels. L164: because of the finite detector resolution. L168: for the electron and muon channels. L172: \textsc{TUnfold}; L174: add a comma before "as a function"; L175: Z boson or c jet pT for the electron and muon channels.

Systematic uncertainties:

L178: Systematic uncertainties are; L184: μR and μF [subscripts in Roman]; L189: in the c-tagged jet rate is estimated; L191: mistagging b and light-flavor jets; L195: the secondary-vertex mass; SFb and SFLF measurements. L197: The JER uncertainty [can't start a sentence with an acronym!]; L198: coefficient in simulation up and down; L202: the identification (ID) and isolation (Iso) of electrons and muons are; L205: than 2 and 1\%, respectively. L206: Top quark pair production cross section; L207: of top quark pair production; L209: Integrated luminosity: The uncertainty is obtained by varying the integrated luminosity; Fig. 4 caption, L2: for the electron and muon channels, as functions of; Fig. 5 caption, LL1-2: for the electron and muon channels, as functions of; L2: and c jet (right).


L213: unfolded distributions as:; Eq. (1): end the equation with a comma; the numbering is not needed, so suggest dropping "(1)"; Table 5 caption, L1: uncertainties in the; move the caption above the table. Table 5 body, header line: QCD -> Scales; Top Pair -> Top quark; swap the electron and muon rows in the table; in the electron rows, "ee" in Roman in the first column. Remove all the vertical dividers [CMS Style]; L216: with ℓ = e or μ; L217: The results from the electron and muon channels are; \textsc{Convino} tool; L219: pileup, and top quark pair production cross section sources. L220: and c jet, after the combination, are shown in Fig. 6. LL222-223: (LO), and \SHERPA. The inclusive fiducial cross section value for the Z boson pT < 300 GeV; LL223-224: add spaces before every opening parenthesis and use "(theo)" [CMS Style];


LL228: using data collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. LL230-231: for the leading lepton, pT > 10 GeV for the subleading lepton, |η| for both leptons, and; L232: were Z+light-flavor jet, Z+b jet, top quark pair, and; LL233-234: for the signal with the Z boson pT < 300 GeV; LL234-235: add spaces before every opening parenthesis and use "(theo)" [CMS Style]; L239: to extract the parton distribution function of the charm quark;


L14: remove second "Sqrt(s)="

L15: "Here" -> "In this paper"

L19: the generator

L19: suggest to reword as "The analysis uses the data set corresponding to ..... , recorded by ..... 2016."

L27: Finally, the unfolded distributions in the muon and electron channels are combined and compared with ...

L55: [11][12] -> [11,12]

L123: line ends with a comma.

L125: tagging criteria as provided by the CSV algorithm [5].

L131: .. analysis corresponds to the tagging efficiency of ...

L152: "... scale factors SF_q, " may be confused with SF_{q'}, i.e. a scale factor of q' quark. Consider rewriting to avoid confusion: "The values of the scale factors SF_q are defined as the ratio ...... Z+q jet process. The measured values are given in Tables 1-4 and the sources of systematic .... "

L156: remove extra space before dot

L163-164: these lines do not read well, consider rewording as "primarily due to c jets with generated pt below 30 GeV having the reconstructed pt above 30 GeV because of limited detector resolution." or similar

L205: add a comma after 1%

L237: is it a misprint "@NLO" -> "@LO" ?

Guillelmo Gomez:

- Type A (English) - l2 The CERN LHC has delivered large samples - l6. parton distribution function (PDF) - l9. imbalance [2] (too long separated) - l9. add a comma before "leaving" - l13. to the charm quark PDF, is - l15 [4], respectively - l16. pair of electrons or muons - l16-17. to have originated by a charm quark by applying charm tagging criteria [5] (c-tagged jet). - l18-19. to the generator level. - l21/22. add a comma before "as a function" and add a comma before "are" - l23. a c jet enriched [or a c-jet-enriched] sample - production where --> production, where - l16. to have originated with a charm quark -> to have originated from a charm quark - l22. remove "the pT of the" - l114/115/117/... pt > X GeV check the distance between those words, they seem to be too large - l149. events as --> events, as - l150. backgrounds are --> backgrounds, are - l50. (light-flavor jet). - l55. In addition to the MG5_aMC samples, the SHERPA [11,12] event generator - l58. In addition to events with light-flavor and b jets, there is - l59. These samples were generated - l68-69. the NNPDF3.1 [26] PDF set, and GEANTfour [27] is used for the CMS detector response simulation. - l69-70. pp interaction in the same and nearby bunch crossings (pileup). - l104. nonlinear and nonuniform - l105. particle-level jets - l122. measured using the ``tag-and-probe" method - l126. and light-flavor jets - l128. and mistag rates for the b and light-flavor jets - l132. 1.2% for light-flavor and 20% for b jets - l140. as Z+light-flavor jet (Z+LF) events. - l152. subscript "q" in Roman - l153. Z+q jet [q in Roman] - l153-154. Sources of systematic uncertainties - l156. and the pT ranges in which - l189. in the c-tagged jet rate is estimated - l191. mistagging b and light-flavor jets - l195. the secondary-vertex mass SFb and SFLF measurements. - l197. The JER uncertainty [can't start a sentence with an acronym!] - l198. coefficient in simulation up and down - l202. the identification (ID) and isolation (Iso) of electrons and muons are - l205. than 2 and 1\%, respectively. - l206. Top quark pair production cross section - l207. of top quark pair production - l217. The results from the electron and muon channels are; \textsc{Convino} tool - l219. pileup, and top quark pair production cross section sources. - l220. and c jet, after the combination, are shown in Fig. 6. - l223-224. add spaces before every opening parenthesis and use "(theo)" - l228. using data collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. - l232. were Z+light-flavor jet, Z+b jet, top quark pair, and - l233-234. for the signal with the Z boson pT < 300 GeV - l234-235. add spaces before every opening parenthesis and use "(theo)" - l239. to extract the parton distribution function of the charm quark

Su Yong Choi:

L14: [3] and [4]. -> [3,4]. L17: "with a charm quark" => "from a charm quark" L27: "unfolded muon and electron channels distributions" => "unfolded distributions in muon and electron channels"

L55: [11][12] -> [11,12]

L98: [31][32] -> [31,32]

L123: events, -> events.

L187: wrong reference [59] here.

Fig 2 and 3: add uncertainty band.

Fig 6: it’s hard to see the text in the legend.

L224: "MG5_aMC (NLO) predicted…" => “the value of 524 +- 11.7 (th) pb predicted by MG5_aMC (NLO)"

Burin Asavapibhop:

L7-11: Sentence "For example, ..... into neutrinos" is too long. Propose to start a 2nd sentence from ", one of the ...".

L47: Is "MG5_aMC" an official/general abbreviation?

L55 : MadGraph models -> MadGraph generator (models are misleading)

L83 : What is “better than” 7%? less/lower than? or up to?

L115-120 : two sentences are redundant, suggest to merge them and use bracket for electron channel instead.

L120 : m(ee) with small m

L123: Replace "," by "." at the end of sentence.

L159 : M(ee) with capital M — please fix for consistency

L128 : as you are talking about c tagging algorithm in previous sentence, I would propose to rephrase to

“The c tagging rate for c jets, and mistagging rates for b and light jets are measured using data events from W+jets, TTbar, and inclusive jet production processes, and compared to the simulation, where the reconstructed jet flavour is known from its hadron content.”

Figure2,3 : aren’t they Z+c-jets, Z+b-jets and Z+light jets processes? The legends on plots are misleading.

Figure2,3 : suggest reordering the legend to match the stack of histograms, i.e. Z+c-jets, Z+b-jets, Z+light jets, Top and dibosons

Figure2,3 : please also add “(signal)” to the legend of Z+c-jets

L160 : shall this sentence about removing leptons from the jet cone come earlier, in L113-123 in Section 4?

Jean-Baptiste Sauvan:

- l. 1: LHC delivers large samples -> probably this line can be rewritten in the form that such events have high cross section at LHC - l. 3: "c jets" -> "c~jets" (use a non-breaking space to avoid having it split on two lines) - l. 10: Z+c jet -> Z + c-jet (check the notation) (also please correct this in the rest of the paper) - l. 17: with a charm quark -> from a charm quark - l. 58: "As well as events with light and b jets": Rephrase, e.g. "In addition to events containing a Z boson with light and b jets" - l. 58: b jets -> b-jets (please check the notation) - l. 59: Samples of these -> These samples were - l. 60: “There is also background …. simulated using PYTHIA 8” -> “Background contribution from vector boson pair production is simulated using PYTHIA 8” - l. 83: “… resolution in the barrel is better than 7% …” -> "below 7%", or "smaller than 7%", or "lower than 7%" - l. 98: [31][32] -> [31,32] - l. 117: “ …. |eta| < 2.4. One muon must have p_T > 26 GeV.” -> can be rewritten as - “ …. |eta| < 2.4, with at least one muon with p_T > 26 GeV.” - l. 123: comma -> . - l. 124: "c tagging criteria" -> "c~tagging criteria" (use a non-breaking space to avoid having it split on two lines) - l. 148: Then the number …. -> The number …. - l. 149: c jet, b jet, light jet -> c-jet, b-jet, light-jet (please check the notations and correct in all the text) - l. 153: q jet -> q-jet - Fig. 2: Variable M_SV is not mentioned in the text. - Tab. 1-4 should be grouped together. Maybe a single table could be made since they are never referred to separately. - Fig. 3 should be closer to the text where it is referred - Fig. 4,5 should be closer to the text where it is referred - l. 163-164: Space missing before 30 GeV. - l. 172: Add TUnfold version - l. 218: "c tag/mistag" -> "c~tag/mistag" (use a non-breaking space to avoid having it split on two lines) - l. 219: “The values of the cross section …” -> "The measured cross section ….” - l. 221: "predictions from the generators" -> "predictions from the MC generators" - l. 223: “… pb, significantly lower than …” -> “ … pb, which is significantly lower than the MG5_aMC (NLO) predicted value of …” - Fig. 6: the legends are barely readable. Would be good to have bigger text. - Fig. 6: more details could be given in the caption, e.g. the MC predictions used. - l. 236: "The predictions from several generators" -> "The predictions from several MC generators"

Fabio Ravera:

English, Style and Formatting comments:

PDFTitle: Use sqrt(s) = 13 TeV as explained in the original SMP-19-011.tex file. Line 5: Should pT be (pT) Line 10: Remove “invisibly” Line 14: ..can be found in Refs. [3] and [4] or change line 44 so as not to use “Ref./Refs.” for explicit references. Line 17: “originated from a charm quark” sounds better than “originated with a charm quark”. Line 18: “observed data” could be replaced with simply “data” with little loss in meaning. Line 20: pp collisions are no time reference. Maybe say that data are pp collisions which were recorded by CMS … Line 30: 6 m -> 6\unit{m} so inserting a thin space (\,) between the number and the unit. Check other number-unit combinations in the paper to be sure that they either use newcommands (e.g., 13\TeV) or the \unit construction. Line 48: Remove the comma after “(DY) processes” Line 56: The line starting with “The value of the cross section” is already written in line 53. Should it be repeated? Line 60: “There is also background from vector boson pair production” does not sound nice, consider rephrasing it Line 70: …in the same or nearby bunch crossings. One needs to understand what “same” refers to but this is still better than “current”. Line 99: vectorial sum -> vector sum. ”vectorial” is not wrong; it’s just that “vector” is shorter. Line 108: is “degraded” the correct word? maybe smeared or the resolution is degraded Line 128: “quark jets”-> “light quark jets” Tab 1-4: All captions: “... the second IS the systematic...” or “...the second - the systematic...” Line 146: compared to -> compared with. Unless you really want to emphasize differences. Line 156: Here, and for the other tables, the caption should go above the table. Fig2: Reverse the order of the muon and electron plots. observed data -> data. compared to -> compared with. data is -> data are Fig 2&3: Would prefer that electron channel and muon channel labels are included on the plots Fig3: Reverse the order of the muon and electron plots. 
It might be better to start the jet pt distributions at 0 as well even if entries are restricted to pt > 30 GeV. Line 166: to a generator-level highest -> to THE generator-level highest Figs4&5: Reverse the muon/electron order. These plots are not very appealing esthetically and don’t seem to use the default TDR style (so are missing tick marks in two of the four sides of the frames). Line 215: Eq. (1) (Number in parentheses per the Guidelines) Line 216: The electron symbol should be in Roman font (not italic or Math mode). Tab 5: Use the same number of significant digits for a given column. The electron symbol (e) should be in Roman font per the Guidelines. Line 222: (LO)[,] and Sherpa (serial comma) cross section (no hyphen per Guidelines) Line 223: equals -> is Line 224: ..lower than [the] MG5_aMC (NLO) predicted value Fig 6: Use larger text in the legend of the upper plots. The ratio plots are too large compared with the main plots and the right plot exceeds the right margin. Unfortunately, where green hatch marks (for MG5_aMC) are overlaid on red (Sherpa), it looks brown (which is the color of the theory uncertainty). Line 233: cross-section -> cross section Line 241: Remove or fill in Acknowledgements

Kimmo Kallonen:

L8: SUSY abbreviation here is unnecessary, because it is not used elsewhere L175: Please change "c-jet" to "c jet" L223: Please change “, significantly lower” to “, which is significantly lower”

Sijin Qian

begin --------------------------

In general

(g) L54 and L57: (two places, as the "NLO" has been just introduced on L51)

"calculated to next-to-next-to-leading order with FEWZ" -->
"calculated to next-to-NLO with FEWZ"

Also, the last sentences of the two paragraphs on L53-54 and L55-56 are identical; I'm not sure whether the duplication can be fixed, e.g. to change L55-56 from

"The value of the cross section used is calculated to next-to-next-to-leading order with FEWZ." -->

(i) L81-85: (several places)

"results in a relative pT resolution for muons
with 20 < pT < 100 GeV of 1% in the barrel and 3% in the endcaps.
The pT resolution in the barrel is better than 7% for muons with pT up to 1 TeV [29]. In order to reduce the misidentification rate, muons
are required to be isolated. The isolation of muons is quantified as the
sum of the pT of PF candidates ..." -->

"results in a relative pT resolution for muons
with 20 < pT < 100 GeV of 1 (3)% in the barrel (endcaps),
and is better than 7% with pT up to 1 TeV [29] in the barrel. In order to reduce the misidentification rate, muons
are required to be isolated. The isolation of muons is quantified as the
pT sum of PF candidates ..."

(j) L87:
"less than 25% of the muon pT." -->
"< 25% of the muon pT."

(k) L94-96: (two places, the 1st of two "%"s can be removed)

"ranges from 1.7% to 4.5%. The dielectron mass resolution for Z -> ee decays
when both electrons are in the ECAL barrel is 1.9%, reducing to 2.9% when
both electrons are in the endcaps." -->

"ranges from 1.7 to 4.5%. The dielectron mass resolution for Z -> ee decays
when both electrons are in the ECAL barrel is 1.9%, reducing to 2.9% when
in the endcaps."

Also, the subject corresponding to the verb "are" on L107 seems not very clear, and is confusing with several commas on L106 in this long sentence, but I'm not sure how to improve it yet.

(m) L113-118: (also, it'll look clearer if a comma is added after the "in the muon channel" at the end of L115)

"Events are selected online with
a single muon trigger requiring at least one muon
candidate with pT > 24 GeV (muon channel), or
a single electron trigger requiring at least one electron
candidate with pT > 27 GeV (electron channel). Offline, in the
muon channel two opposite-sign muons satisfying identification and isolation criteria are
required with pT > 10 GeV and |eta| < 2.4. One muon must have pT > 26 GeV. In
the electron channel, the offline selection requires two opposite-sign
electrons with pT > 10 GeV and |eta| < 2.4 ." -->

"Events are selected online with
a single muon (electron) trigger requiring at least one muon (electron)
candidate with pT > 24 (27) GeV for the muon (electron) channel. Offline, in the
muon channel, two opposite-sign muons satisfying identification and isolation criteria are
required with pT > 10 GeV and |eta| < 2.4. One muon must have pT > 26 GeV. In
the electron channel, the offline selection requires two opposite-sign
electrons with the same pT and |eta| ranges as muons."

(n) L126:
"of c jets from b jets and light jets," -->
"of c jets from b and light jets,"

(o) L149:
"for the Z+b jet and Z+light jet" -->
"for the Z+b and Z+light jet"

(d) L120: (can then be shortened by using "mZ" correspondingly)

"is required to be close to the mass of the Z boson," -->
"is required to be close to mZ,"

(6) L124-125, L128 and L131: it would look much better if a hyphen were added in the adjective "c tagging", e.g.

"satisfying tight c tagging criteria ..." -->
"satisfying tight c-tagging criteria ..."

L128 and L131 are similar.

(7) L127: the expression "a b hadron" looks quite odd, similar to "a b c d xxxx".

It may be improved by changing from
"a b hadron" -->
"a bottom hadron"

(10) Tables 3-4: the two captions are almost identical, differing by only one word at the end of the 1st lines; thus the two Tables may be combined with an extended 1st line of Table 3's caption, i.e.

"Table 3: Values of Z+light jet, Z+c jet, and Z+b jet scale factors measured
in the muon channel," -->
"Table 3: Values of Z+light jet, Z+c jet, and Z+b jet scale factors measured
in the muon (upper Table) and electron (lower Table) channels,"

Also, as the header rows of the two Tables are identical, when combining the two Tables the header row of Table 4 can be replaced by a double border line.

(11) L200, L204-205, L208 and L210: those systematic uncertainty percentages (e.g. 4.6%, 5%, 2%, 1% and 2.5%, etc.) should perhaps cite some reference articles, so that readers would not wonder why these percentages are chosen rather than any other arbitrary percentage numbers.

At least, the luminosity uncertainty of 2.5% should be given a Reference, to be consistent with all other CMS papers, i.e.

"+-2.5%." -->
"+-2.5% [xx]."

(12) L203-205 and L222-223 can be shortened:

(a) L203-205: (three places; also, it may sound better if two commas are added after the 1st two words and before the last word)

"For electrons the uncertainty is less than 5%, while for muons uncertainties
for identification and isolation are less than 2% and 1% respectively." -->

"For electrons, the uncertainty is < 5%, while for muons uncertainties
for identification and isolation are < 2 and 1%, respectively."

(b) L222-223: (two places)

"cross-section value for Z pT < 300 GeV equals 413.5 ..." -->
"cross-section for Z pT < 300 GeV is 413.5 ..."

All suggestions above have been carefully considered. In some cases they have been superseded by larger changes, in some cases they have been applied, and in some cases we prefer to keep the original.

end --------------------------

Riccardo Paramatti

- Title: of Z bosons -> of a Z boson Fixed

- Title: "Z bosons and charm jets" doesn't seem to imply associate production, propose to change to "a Z boson with charm jets", or "associated production of a Z boson and charm jets" Fixed

- lines 5-12: The physics motivation that you expand in the text is SUSY searches for the stop quark. Since most of the phase space accessible at the LHC for SUSY is now excluded, you should expand (e.g. adding some references) the other two motivations: the c-quark PDF and (QCD?) theoretical models. Removed "main background" for SUSY, added "...test the possibility of observing the intrinsic charm quark component in the nucleon..."

- Figure 1: the arrow of the virtual fermion is pointing in the wrong direction (unless this is meant to be a c-bar) Fixed

- l 61: You miss the contribution of misidentified leptons. Even though small, it should be cited at least, to say that it is negligible and neglected (if it is), since most of the Z->ll analyses consider it. As I can see in other papers, misidentification coming from W+jets and QCD multijets is not considered. Contributions from misidentified leptons are negligible.

- l 63 vs l 68: it is confusing since L63 says that NNPDF 2.3 is used, while L68 says that "Samples are generated using the NNPDF 3.1 [26] PDF set"! Yes, this is an error: ttbar uses NNPDF 2.3, while MadGraph LO, NLO and Sherpa use NNPDF 3.0 (not 3.1). Fixed

- ll 82-83: "The pT resolution in the barrel is better than 7% for muons with pT up to 1 TeV [29].” Is this really relevant for this analysis? The bulk of events has pT(l)<100 GeV, so we suggest to remove this sentence. Fixed

- ll 88-96 (and Table 5): From the description and the systematics table, it seems that there is no isolation for electrons. Why is this the case? All the EGamma WPs require isolation. If this is not applied, then the concern about the presence of fake background is even more important. If a standard WP of EGamma is used, including isolation, then you should
• add the isolation in the description of the electron selection
• specify in Table 5 that the systematics is for the combination of ID + isolation
Yes, the isolation criteria are included in the electron ID. Isolation and ID for muons are done separately; this leads to lots of questions, so ID and isolation were combined (mentioned in the tables as ID/Isolation). For electrons this is already combined; for muons we can sum them and present them as ID/Iso.

- l 101: "tracks" -> "charged particle candidates" Fixed

- ll 113-115. Why have the single-lepton triggers been used, instead of the double-lepton triggers? Especially for the electrons, the real pT threshold on the single electron is not, as you write, 27 GeV, but a variable threshold ranging from 45 to 25ish GeV, which is the L1 threshold. During 2016 that was higher than the nominal HLT threshold (27 GeV). So at least you should remove the 27 GeV and say that a higher threshold was used (the exact value varied during the LHC fills). But for the electron channel, using the DoubleElectron HLT path would have decreased the threshold a lot and gained statistics. Also, the offline threshold pT > 29 GeV on the leading electron is below the L1 threshold of the single-electron trigger (L29). Have you modelled well the trigger efficiency scale factor vs pT? The maximum of the leading lepton pt is higher than the trigger pt threshold, so there is no gain in statistics from lowering the pt threshold. On the other hand, it is easier to calculate trigger SFs for the single-lepton trigger.

- ll 115 and 118: do you really need the requirement of the two leptons being opposite sign? Since the combinatorial background is very small, especially in the electron case, you could have avoided that. It is a standard selection for processes with a Z boson.

- l 120: "71 < m ee/mumu < 111 GeV" -> "(71 < M(ee/mumu) < 111) GeV" or "71 GeV < m(ee/mumu) < 111 GeV" Fixed

- l 121: "small residual differences in the trigger". How small are they? See AN-2017/340 v17, Fig. 25. At pT = 30 GeV the electron trigger SF average in 2016 is 80% in the central barrel and down to less than 50% in the second half of the endcap. Please justify this better. Please note that 50% is not the SF but the efficiency in data. The efficiency can be low, especially for low pt or high eta, but the difference between data and MC is defined by the ratio of efficiencies in data and MC, which is much closer to 1.

- l 142: "deltaR < 0.1" -> "deltaR = 0.1" Fixed

- ll 144-145 and Fig. 2.
• If only the invariant mass of the secondary vertex is used to discriminate S from B, and the Z+jets events are only selected with the loose cut 71 < m(ll) < 111 GeV, then the background from fakes must be non-negligible. Can you comment at least on a systematic added for its normalization?
• The m(SV) does not seem to be very discriminant between the light/c-/b-jets. The shape seems to be very similar for the 3 types of quarks. Since this is the focus of the paper, this choice should be better motivated.
We added a comment that the SVM allows discriminating the flavor components, despite being used by the c tagger.

Related to this: with such low discrimination, have you checked that you can obtain a good fit also with very different combinations of the 3 SFs (light-, b- and c-jets)? I.e., have you checked that you do not have multiple minima in the likelihood? 1) According to other Z+jets analyses, fakes from W+jets and QCD are negligible; other backgrounds from top and dibosons are taken into account. 2) Indeed, the separation of different flavors by the SVM after c tagging is much worse than before c tagging; the correlation between the fit results SFb and SFc is ~-0.8. To check a large variety of possible SFc values, the fit was done in the range (0,2) for all SFs (SFl, SFc and SFb). Negative SFs and SFs > 2 look unphysical, so if there are any other solutions they are not taken into account; within the determined range (0,2) the solution corresponds to the maximum of the likelihood.
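The kind of check described above can be sketched with a toy bounded template fit (the M_SV template shapes, yields, and bin count below are hypothetical, not taken from the analysis): fit three flavor scale factors with a binned Poisson likelihood, constrain them to (0, 2), and restart the minimization from many random points to verify that all restarts land on a single minimum.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Toy M_SV templates for the three flavor components (hypothetical shapes,
# loosely mimicking light/c/b: b hadrons give heavier secondary vertices).
t_light = np.array([40, 30, 15, 8, 4, 2, 1, 0.5])
t_c     = np.array([20, 25, 25, 15, 8, 4, 2, 1.0])
t_b     = np.array([ 5, 10, 15, 20, 20, 15, 10, 5.0])
templates = np.vstack([t_light, t_c, t_b])

true_sf = np.array([1.0, 0.9, 1.1])          # scale factors used to build the toy data
data = rng.poisson(true_sf @ templates)

def nll(sf):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    mu = sf @ templates
    return np.sum(mu - data * np.log(mu + 1e-9))  # epsilon guards log(0) at the bound

# Restart the bounded fit from several random points in (0, 2)
# to check for multiple minima.
results = []
for _ in range(20):
    x0 = rng.uniform(0.05, 1.95, size=3)
    res = minimize(nll, x0, method="L-BFGS-B", bounds=[(0.0, 2.0)] * 3)
    results.append(res.x)
results = np.array(results)

# Spread of the fitted SFs across restarts; near zero means a unique minimum.
print(results.std(axis=0))
```

Because the expected yields are linear in the scale factors, this Poisson likelihood is convex, so a unique minimum is expected; in the real fit the near-degenerate light/b shapes show up instead as the large anti-correlation quoted in the response.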

- l 150: "by fitting the secondary vertex mass distribution": much more details are needed to explain the procedure.
Is it a template fit? are the templates taken from MC? do these different processes have different-enough mass shapes to be able to discriminate? maybe adding a figure of the templates (normalized to the same area) could help
We added that, despite the fact that the SVM is used by the c tagger, it still allows the flavor components to be discriminated.

- l 151: "fits are performed separately": does this really mean separately? so the scale factors obtained in the Z-pt bins
are completely independent from those obtained in the cjet-pt bins? Yes, these fits are done separately for the Z pT and c-tagged jet pT bins. A good cross-check is that the light, charm, and bottom pT integrals after these fits agree (within at most 4%) between the two variables.

- lines 148-156: the text should clarify the role that these scale factors have in the analysis, and in particular how they affect the cross-section measurement Fixed: the number of \zcjet events is calculated as the product of the number of \zcjet events predicted by the MC and the fitted normalization of the \zcjet component.

- tables 1 and 2: replace "pT interval" with "c-tagged jet pT" Fixed

- tables 3 and 4: replace "pT interval" with "Z candidate pT" Fixed

- tables 1-4.
• Why you want to show 4 tables of scale factors for (c-jet pT, Z pT) x (ele,mu)? You could show just one example for c-jet pT and Z pT, choosing one channel, since in principle they should be independent on the Z final state.
• since you show them for the 2 lepton flavors, large difference of a SF in the two channels (e.g. 21% difference for SF_c for pT(c) in [90-250] GeV, but there are many cases) could be an indication of the interchangeability of the three components to have a good fit. The systematic uncertainty is large 5-15% depending on the bin, but does it indicate that you don’t have sensitivity to discriminate the 3 flavors? The main goal is to discriminate the charm component from light and bottom, and since the discrimination is poor, light and bottom are anti-correlated, which can lead to large differences. We have also checked that fixing the light component does not have much impact on the final result. As a check of how large the difference is, the integrals of each component were compared for the Z pT and c-tagged jet pT.

- all figures: there are a number of cosmetic improvements that could be done. for example: increase the font size for all labels
(particularly needed for the legend and the CMS label); "DATA/MC" -> "Data / MC"; use consistent capitalization in the legend
(why is "light" lower-case while "Top" is not?); also in the legend replace "Observed -> Data" Changed, light, char, top etc changed to Upper case.

- table 2: why are the uncertainties larger than table 1? Although we vary the parameters to measure the systematic uncertainty, the result depends on the available statistics, so the systematic uncertainty includes statistical fluctuations. The larger the statistical uncertainty (electron channel, Table 2), the larger the quoted uncertainties.

- l 159: same as line 120 Fixed

- l 160: "within deltaR < 0.4" -> "within deltaR = 0.4" or "in deltaR < 0.4" Fixed

- l 163: I would rephrase: "primarily events with c jets with p T generated at < 30 GeV but reconstructed at > 30 GeV because of detector resolution"

- l 164: "because of detector resolution" -> "because of finite detector resolution" or "because of detector resolution effects" Fixed

- l 166: "within deltaR < 0.3" -> "within deltaR = 0.3" or "in deltaR < 0.3" Fixed

- ll 182-185: you should say that you don’t consider the anti-correlated combination of muR(up) and muF(down), according to the usual prescription of this systematic. Fixed

- l 209: add a reference on the luminosity uncertainty. Fixed

- Fig. 4 and 5: since the behavior of the purity as a function both pTZ and pt(c-jet) is determined by the jet pT resolution (since the Z->ll resolution is very good for both ele and mu), why do you need to show both? The two plots match almost by definition (because the Z pT balances the c-jet pT). So you could show only the Z pT one, since it goes lower in pT, and beyond 40 GeV should be the same (it cannot be said from the plots, since the binning is different, but it seems so).

Other plots are shown separately for muons and electrons. In this case the difference is negligible, but it may be worth showing both to the reader, so that it is clear which steps of the unfolding differ.

- l 213: maybe remove "Eq.1" from the text Fixed

- table 5: why some of the jet-related systematics are so different for the ele/mu cases? E.g. JER +0.5% (ptZ, mu), 1.2% (ptZ, ele) There is indeed a difference between the two channels: the Z pT and, as a result, the jet pT spectra differ because of different lepton selection efficiencies. A second reason is that the uncertainty depends on statistical fluctuations, which can also lead to differences between the channels.

- l 224: you measure a xsec that in principle is known at NLO, and it disagrees for more than 20% wrt the predictions.
Can you add a physics comment about the importance of this experimental input to the theory?
Also, since you justified this measurement with stop SUSY searches, can you comment on the impact of the precision of your measurement on those searches? We decided to add some interpretation: e.g., since the NLO prediction agrees well with the data when no c tagging is applied, this may indicate that the charm quark PDF can be tuned.

- l 231: same as line 120. Changed ?

Kimmo Kallonen

Type B

  • L3: “hadronic jets” -> “jets”, consider to drop the superfluous “hadronic”


  • L7-L11: Please consider rephrasing this, as it may be difficult to read

  • L15-L17: Please change “Here we present a study of events containing a candidate Z boson decaying to a pair of electrons or a pair of muons, and at least one jet identified as being likely to have originated with a charm quark (c-tagged jet) by applying charm tagging criteria [5].” to “Here we present a study of events containing a candidate Z boson and a candidate c jet. The Z boson decays into a pair of electrons or muons, and the c jet is required to pass charm tagging criteria [5].”


  • L27: What distributions? Are you still referring to the SV mass distributions or the unfolded differential (in pt) cross-sections?

  • L46-L70: Please consider putting this information in an easier-to-read table.
We believe it is better to keep the plain text with all the details, instead of adding a table.

  • L50: Are you using MLM matching with the NLO sample?
MLM matching cannot be used with an NLO sample.

  • L97: “Hadronic jets” -> “Jets” (see comment for L3)

  • L113 onwards : Please mention that the HLT objects are matched to the offline objects (if they are).

They are not matched; we use a combinatorial formula for the efficiencies in data and MC.

  • L200: Is there a proper reference (besides the TWiki-page PileupJSONFileforData) for the 4.6%? Please add.

  • L201-203: Here you also mention the “isolation of… electrons”, but you neither define isolation criteria for electrons in section 4, nor is an uncertainty on electron isolation quoted in table 5. Please clarify the situation about the electron isolation and associated uncertainties in the text.

Yes, isolation for electrons is included in their identification, while for muons the two criteria are separate. We propose to combine Iso and ID for muons, and to mention ID/Iso for both muons and electrons.

  • L201: Please mention the trigger efficiency scale factors here too
Uncertainties in the trigger efficiency scale factors were neglected, as the resulting uncertainty is smaller than 0.5%.
  • L239: Please change “These results can help to fit the PDF…” to “These results help with fitting the PDF...”, as you should be more assertive here
  • Fig. 1: Please redo arrow direction for the quark (flipped in the middle)
  • Figs. 2-3: No error bars visible in the stacked plots: Uncertainties so small that they are invisible or are they not plotted? Please clarify.
Only the statistical uncertainties of the data are plotted.

Fabio Ravera

Strategy, paper structure, emphasis, additions/subtractions, etc:

  • Abstract: The Abstract should include a sentence indicating the relevance of the results - are they the first at 13 TeV? Do they significantly improve on past results? Also, it is not clear why the fiducial cross section measurement is mentioned at the end. (The title refers to differential cross sections without fiducial restrictions.)
Fixed; maybe we should add a comparison with the 8 TeV result in the summary.
  • Line 5: “Test existing theoretical models…”. It is a bit too vague, there are many theoretical models including BSM ones. I think you mean to test SM predictions.
There may be other models beyond the SM that also predict Z+c, e.g. some SM extensions. We do not want to preclude constraining those models with these data.

  • Fig.1: The arrow along the quark line always has to point in the same direction, i.e. the middle arrow has to be reversed
  • Line 18, 26: “unfold the data to generator level” - the data are unfolded to “truth” (or model) and a generator is often used as a proxy but saying that data is unfolded to generator level is not correct; You should also cite something for Unfolding ( or else). Similar for line 26, because the ultimate goal of the real data measurement is not to describe “generator level”; It is worth rephrasing this part
We added a definition of what we call the generator level.

  • Line 56: Is the background Z+n jets with n>4 determined to be negligible?
One can see from SMP-20-009 that Z+3,4,... jets / Z+>=1 jet is ~6%, so this process can be neglected.
  • Line 59: Single top production is mentioned as a background here but it is not clear if this is used in the analysis. For example, line 127 does not include single top in measuring tagging and mistagging rates. In figures 2 and 3 it is not clear that single top is part of the “top” in “Top and dibosons”.
Yes, both single top and ttbar are considered in the analysis.

  • Line 68: Specify the QCD order of the NNPDF 3.1 set. NNLO?
Fixed; version 3.1 was a misspelling, changed to 3.0.
  • Line 76: Since the Abstract mentions “electrons or muons” and this is the usual order, you should keep this order whenever the two are paired together. So describe electron reconstruction first.
  • Line 78: The “primary vertex” has not been defined. If the standard definition (highest summed pt-squared of charged tracks) is used then the paper should clarify whether or not the leptons and jet are required to be associated with the PV.
  • Line 113: Describe the electron trigger first - before the muon trigger.
  • Line 117: “One muon...” (similar for electrons on the next line ) - and no requirement this to be the triggering muon? More generally, it is not specified if trigger matching is applied, which you need if using T&P trigger SFs.
Changed to matching the trigger object to the highest-pT lepton; the result is the same as with the combinatorial formula.
  • Line 129: “where the reconstructed jet flavor is known from its hadron content”. In simulation you could trace the jet to a parent quark or gluon, so this could be more simply “where the reconstructed jet flavor is known”.
This phrase can be crucial, because there are two definitions of jet flavor in CMS: hadron based and parton based. We use the former.
  • Line 131: Since the c tagging efficiency and associated mistag rates are crucially important for this measurement, it would be useful to indicate the approximate uncertainties on these numbers
Fixed, but further details on the scale factors are not discussed in the uncertainties section; there is only the general statement that the SFs are varied within one standard deviation.
  • Line 137-140: It may be useful to specify what happens if there are both b and c jets above 10 GeV (possibly the c jet much more energetic)
This would be repetitive: if there is one b jet with pT > 10 GeV, the event is classified as bottom, regardless of the pT and flavor of the other jets.

  • Line 140-142: Why do you have to correct gen-level leptons for photon radiation and not just use the momentum before any radiation (or after radiation depending on what you mean)? Also, it is not totally clear what is the message this paragraph should pass. Please, clarify if it is Final State Radiation you are talking about or any radiation.
At the generator stage, leptons are simulated together with the emitted photons, so to take these into account there is a correction (dressed leptons).
  • Line 144: A short description of how invariant mass of tracks used as secondary vertex mass is needed here
Added a short description of how the SVM is used.

  • Line 151: Are the fits done separately? Why not simultaneously across Z pT bins for each c-tagged jet pT bin and simultaneously across c-tagged jet pT for each given Z pT bin?
It is not possible to split the events simultaneously into Z and c-tagged jet pT bins: there would not be enough statistics in each bin to perform the SVM fit. In addition, it would then not be a differential but rather a double-differential cross section measurement.

  • Tab.1-4: How are these used? E.g. Tabs. 1/3 and 2/4 are different parameterization of the same events - which one do you apply? Would a 2D SF make more sense? Also, are the differences between the muon and electron channels expected? We are wondering if these tables are not too detailed given the fact the other SFs are not discussed in the paper. Please, consider to draw all of them on a single (or couple of) plot(s).
We decided to keep the present tables and text. The SFs for electrons and muons are independent measurements of Z+c, and as such there can be fluctuations. The correlations among them are small, so we cannot really comment on the difference between them.

  • Line 154: The sentence says the source of uncertainties for SF will be discussed in Section 6 but this section does not mention anything about uncertanties of SFs.
It does not say that the uncertainties of the SFs are discussed in Section 6, but rather the sources of uncertainties in general.

  • Line 158: The gen level is 26 GeV and your reconstruction cut for electrons is 29 GeV. It is worth clarifying in the paper how large an effect is that (effectively a small acceptance correction), given electrons and muons are treated differently.
Fixed: added a sentence that the acceptance takes into account the correction for the difference between the generator- and detector-level cuts.

  • Line 164-167: This sentence is a bit too long and makes it confusing; 1) “The fraction… is estimated from…” doesn’t clearly describe how it is estimated. In the caption of Fig. 4 you state what you plot “Fraction of selected events originating...” which appear clearer; please, follow what written in the caption. 2) You should specify if the lepton pair is matched or not; so follow the caption but split this sentence to add additional information. 3) As you are talking about “reconstructed” and also about “generated” quantities, clarify in the caption of Fig 4 what is on the X-axes (your caption implies “reconstructed” but “generated” may be a better choice)

  • Line 169: Somewhere in the text, not necessarily here but it is an option, you should clarify what is your “true” distribution (or your “generator level” as you call it). One of the most important points is to clarify if your “true” includes or not Final State Radiation, it makes a difference for theoreticians (electron and muon response matrices, at least for Z-boson pt, should be sizably different, because reconstructed electrons include rediated photons and reconstructed muons do not).
Changed the sentence to "The generator-level leptons are dressed by adding the momenta of all photons within $\Delta R = 0.1 $ around the lepton directions. ".

  • Line 173: It should be “efficiency” of something, probably “efficiency of event reconstruction” or “event reconstruction efficiency”, don’t use “(...) “ to define it. It is better to somehow relate it to fig 4 because the two together are most interesting and thus better have the same X-axes (now the implication for fig 5 is that you plot “generated” quantities). Please, consider swapping fig 5 and 4, showing first that you get ~20% of the actual events and only then that the fiducial corrections are small at high pT. Then the response matrices as the last piece.

  • Line 175: ..for electron and muon channels. Invert the order of Tables 3 and 4.

  • Line 177: Theoretical and model uncertainties should be present also in the theoretical models, so theoretical/model points on plots should have associated uncertainties. While there are some uncertainties on the figures it is not clear what those are. The PDF uncertainties seem to be playing a prominent role in your experimental measurements themselves too.

  • Line 175: If c tagging is the dominant source of signal loss, why is the muon and electron channels so different in total efficiency (by about x2)? Please specify in the text.
Added that both the c-tagging and lepton selection efficiencies contribute to the acceptance.

  • Line 187: The reference number has jumped to 59 - although the last entry appears to be 39. It may be unclear for the reader how the 90% uncertainty level is “scaled” to 68.3%
That was a citation error. Changed the whole phrase to a more common one.

  • Line 215: What is the value of BR(Z->ll) used?

  • Results: A description of how uncertainties of theoretical results are calculated is needed. Moreover, why are the uncertainties of MG5_aMC LO and Sherpa very small compared to MG5_aMC NLO?. Fig. 6 caption should explain whether “Total” uncertainty includes “Stat” or not?

PDF and QCD scale uncertainties were considered only for MG5_aMC NLO. In the new version, PDF uncertainties are also added for Sherpa.

  • Line 218: “their correlation” is an important point. Please, consider to specifying what is correlated, how you estimate it, how important the overall effect is
The correlated sources are now listed, and it was added that the correlation was assumed equal to 1.
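The rho = 1 treatment can be sketched as a BLUE-style weighted average (Combine does this via a profile likelihood, so this is only an illustration; all numbers below are invented):

```python
import numpy as np

# Hypothetical electron / muon channel results (arbitrary units).
x = np.array([10.2, 9.6])
stat = np.array([0.5, 0.4])   # statistical uncertainties, uncorrelated
syst = np.array([0.8, 0.7])   # systematic uncertainties, taken fully correlated

cov = np.diag(stat**2) + np.outer(syst, syst)   # rho = 1 in the systematic block
w = np.linalg.solve(cov, np.ones(2))
w /= w.sum()                                    # BLUE weights, normalized to 1
combined = float(w @ x)
err = float(np.sqrt(w @ cov @ w))
print(combined, err)
```

With rho = 1 the combined uncertainty does not shrink much below the smaller of the two channel uncertainties, which matches the conservative treatment described in the draft.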

  • Line 222-224: This sentence risks to lessen the results achieved in your analysis. Since it is a major achievement and it is not stressed enough in the conclusions, in this part you should elaborate what is only cited in the last two sentences of the summary.
  • Line 239: It is not clear what “PDF of the charm quark” refers to - is this a reference to the c-quark component of the sea quark portion of parton distribution function sets? If this idea is important it should have been developed in the Introduction. We changed it to a more general sentence, that the measurement can improve constraints on the parton distribution function of the charm quark.

  • Summary: The context of the results should be given here (and in the Abstract). For example, is this the first measurement at 13 TeV? Added that this is the first such measurement at 13 TeV.

  • Reference [35]: would JME-17-001 fit better? Fixed
  • Reference [36]: would JME-18-001 fit better?Fixed

Jean-Baptiste Sauvan

## Type B
- Abstract: Add the fiducial region used and the measured value of the inclusive cross-section.

- l. 5: "can test existing theoretical models": which theoretical models are referred to here? At least some examples should be given
We present the list of MC event generators (MG5_aMC LO/NLO and Sherpa) later in the text.

- l. 18,19: "we unfold the data to generator level". The term "generator level" is too vague. It is for instance better described in [4]
We clarified at the beginning what we call the generator level.

- A short discussion on the backgrounds could be added in the introduction before the "simulated samples" section

As far as I can see in other papers, it is usual to discuss backgrounds in the simulated samples section and to skip this information in the introduction (see e.g. SMP-19-004).

- l. 50: is the NNLO inclusive cross section rescaling used here as in the other MCs? It seems strange that this one is not rescaled while the other two are rescaled to NNLO
The MC weights are +/- 1 for the central values, and weights different from 1 for the QCD/PDF variations. These do not take the cross section values into account, leaving that choice to the user (at least as I understand it).

- l. 53 and 56: "The value of the cross section..." -> "The value of the inclusive cross section..." (to make it clearer that the NNLO cross section is inclusive in the number of jets, >=0)
Changed to "inclusive cross section".

- l. 118: mention identification criteria, as for muons

- l. 135: Are jets really made from final state hadrons only? This should rather be all visible final state particles.
changed to "all stable particles resulting from hadronization"

- l. 158: "(at least one with p T > 26 GeV)". This is not consistent with the reco level selection in the electron channel (29 GeV). It means that there is an extrapolation to a wider fiducial region in the electron channel. That should be more explicitely written (in particular this is visible in Fig. 5, with lower efficiency for electrons, but no comment is made). This kind of extrapolation can also be subject to uncertainties, which would affect the efficiencies shown in fig. 5. Though it seems not uncertainty is associated to that.
The acceptance takes this extrapolation to the wider region into account, just as is done for the c-tagging efficiency.

- Fig. 2,3: systematic uncertainty bands should be added.

- Tab. 2-4: It is not clear whether these scale factors uncertainties take into account differences from different MC generators used to extract the relative fractions of signal and backgrounds. The fit may give different values for the numbers of signal events using different MCs. But it is not mentioned anywhere. If it has no impact, it should also be mentioned.
The SFs depend on the MC choice, so it is not meaningful to compare them directly; however, the product of the SF and the MC prediction is the measured number of charm events, which is compatible within statistical errors between LO and NLO.

- Fig. 4,5: it seems that no uncertainties are associated to these values. While they should depend on the choice of MC used to derive them. Something related to that should be mentioned in the text.
We have checked that unfolding the measured charm events using the LO response, background, and acceptance gives results compatible with the baseline NLO values.

- Fig. 4,5: Comments on the binning choice should be made. In particular it seems that the same binning (apart from the different starting point) has been used for pT(Z) and pT(c), while it is expected that the migration matrices will be very different for pT(Z) and pT(c) (lepton vs jet-based quantities), and would require different optimal binning for the unfolding.
The binning at the reco level was chosen so that the number of events is approximately the same in each bin and an SVM fit is possible in each of them.
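The equal-population binning criterion described here can be sketched with quantiles of a toy spectrum (a hypothetical falling pT shape; the real edges come from the analysis samples):

```python
import numpy as np

# Toy c-jet pT spectrum in GeV (invented exponential shape above a 30 GeV cut).
rng = np.random.default_rng(1)
pt = 30.0 + rng.exponential(scale=40.0, size=100_000)

# Quantile-based edges give bins with roughly equal event counts,
# so each bin has comparable statistics for an M(SV) fit.
n_bins = 6
edges = np.quantile(pt, np.linspace(0.0, 1.0, n_bins + 1))
counts, _ = np.histogram(pt, bins=edges)
print(edges.round(1))
print(counts)
```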

- Uncertainties on the purity, efficiency and unfolding related to the choice of MC and to the choice of the unfolding method should be mentioned (or it should be mentioned if they are negligible). Constructing the response matrix and deriving purity and efficiencies relies heavily on simulation. Different MC might give different results and this comparison should be done and mentioned. Also inverting the response matrix can be done in many different ways, and the unfolding methods have in general parameters that need to be tuned; which means that there can be a dependency of the unfolded results on the way the unfolding has been done. Unfortunately no discussion at all is presented here, and it would be good to have some details on the various checks done related to the unfolding.
We have checked that unfolding with the background, acceptance, and response sets from different generators (LO and NLO MadGraph samples) provides compatible results (within statistical errors).
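This generator-variation cross-check can be illustrated with a toy matrix-inversion unfolding (invented 4x4 response matrices standing in for the LO and NLO MadGraph ones):

```python
import numpy as np

# Toy truth spectrum and a hypothetical "NLO" response matrix
# (rows: reco bins, columns: gen bins; numbers are invented).
truth = np.array([1000.0, 600.0, 300.0, 120.0])
R_nlo = np.array([[0.80, 0.15, 0.02, 0.00],
                  [0.18, 0.70, 0.15, 0.02],
                  [0.02, 0.14, 0.70, 0.18],
                  [0.00, 0.01, 0.13, 0.80]])
# "LO" response: same matrix with a small migration perturbation.
R_lo = R_nlo + 0.01 * np.array([[ 1, -1,  0,  0],
                                [-1,  1,  0,  0],
                                [ 0,  0,  1, -1],
                                [ 0,  0, -1,  1]])

reco = R_nlo @ truth                    # pseudo-data folded with the NLO response
unf_nlo = np.linalg.solve(R_nlo, reco)  # baseline unfolding (recovers truth)
unf_lo = np.linalg.solve(R_lo, reco)    # alternative response: small shifts only
print(unf_nlo.round(1))
print(unf_lo.round(1))
```

For a well-conditioned response, swapping the matrix changes the unfolded spectrum only at the level of the migration mismodeling, which is the spirit of the LO/NLO compatibility check quoted in the reply.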

- l. 173-176: The differences between the electron and muon channels should be explained. Added that the loss is due to both the c-tagging efficiency and the lepton identification efficiency; this should explain the source of the difference.

Burin Asavapibhop

Section 3: which PDs have been used in the analysis? This seems missing from Section 3; only the MC is given. Fixed

L14: Why do you use Z+b as a reference for 7 TeV, [3] The main goal of [3] is Z+b, but since Z+c is one of its main backgrounds, it is also studied in that work

L79 : Impact parameter is a jargon, please give some clearer physics meaning It is jargon, yet it is used in almost all papers, so choosing another term would only mislead the reader

L120: How is the mass range of Z chosen? It is the standard choice for Z+jets analyses

L144-145 : are "the invariant mass of tracks associated with the secondary vertex” and "the secondary vertex mass” same thing? I would guess they are the same thing but it’s not so clear to reader. Changed everywhere to M_{SV}

L144-147: It is unclear how Fig 2 is used to discriminate between signal and background. Could you please clarify? It was added to the paper that the different flavor components of DY have different SVM distribution shapes, which is used in the fit

L150 : it is not clear from the text how the fitting is done to get normalisation of each Z+q jet processes. It is a maximum likelihood template fit; added this to the text

L158 : Why the electron pT is 26 GeV for simulation but 29 GeV for Data? The difference is between detector level and generator level, not data and MC. The 29 GeV cut is applied at detector level because it has to be above the trigger threshold. On the other hand, we wanted to keep the same generator-level signal definition for both channels, so 26 GeV is the cut for both electrons and muons at generator level. The difference between 29 and 26 GeV for the leading electron is small, but it is taken into account in the unfolding

L189: c-tagging efficiency, is it recommended by JetMET group or you derive it? If from recommendation, please provide a reference. Same comments for JES/JER, Lepton ID, Luminosity. These tag/mistag SFs and their uncertainties were recommended by the BTag POG. They are available only on the TWiki, so there is no citable reference for them

Summary : do we know why the LO shows better agreement to data results than NLO? One possible explanation is that the charm component is overestimated in both the LO and NLO MC, but LO predicts fewer Z+jets events than NLO (which agrees well with data for inclusive Z+jets), so the two effects compensate each other.

Su Yong Choi

* Type B ***

L11: are you sure that the search for t~ -> c + LSP is limited by the Z+c production cross section? The largest uncertainty on the Z->vv prediction is the size of the dilepton control sample, so it’s not clear how a better measurement of Z+c in visible decay modes helps the search. Please clarify.

Left the reference but changed the text to make a more general statement that Z+c is an important background for new physics searches with a charm quark and missing energy in the final state.

L96: "reducing to 2.9%" => why is it reducing? 1.9% to 2.9% is not reducing. Please clarify.

fixed to "...degrading to..."

L148: please justify this approach. What is the level of your trust?
Mismodeling of dibosons and single top can be neglected, as the sum of these components is ~1% of the DY events. From varying the normalization of ttbar by 10%, we see that the contribution to the result uncertainty is smaller than 1%.

- where is the template taken from? If they are from MC, what is the uncertainty on the shape?
- according to Fig 2, the M_SV shapes look similar (maybe it’s because the templates are stacked…). So, it’s not clear how the similar-shape templates are constrained by the fit. Please clarify.
The templates were taken from MC. To validate how well the MC describes the SVM shape after applying the c tagger, it was compared with data in ttbar and W+c events. Although there is a small mismodeling of the SVM shape, the difference is small compared to the other uncertainties. The shape differences after applying the c tagger are smaller than without any HF taggers, but b jets tend to have larger tails at large SVM values, which allows charm to be discriminated from bottom. The anti-correlation between the charm and bottom components is about -0.7 to -0.8.
L175-176: if the dominant effect is due to c-tagging efficiency, why are e/mu numbers so different? I guess it’s due to lepton efficiency and you might want to mention that.

Fig 3: the first bins of p^{ee/mm}_T distributions show discrepancy of 10%. Is this understood?

It looks like residual mismodeling of the Z pT spectrum: when no c tagger is applied, a "wave" in the data/MC ratio is observed at small Z pT values. These plots are shown in the AN for both the ee and mumu cases.

Guillelmo Gomez Ceballos Retuerto

- Type B (physics, clarity)
- abstract. in the fiducial --> in a fiducial (since there are many possible fiducial regions, you just use one)

- l5. can test --> test

- l13. We would remove "sensitive to charm quark PDFs", it doesn't add anything

The sentence was changed, though we still mention that the study can provide some information about the c quark PDF.

- l16. Since the whole paper is muons/electrons, we suggest to swap it here too. Make sure you are consistent in the whole paper
The whole paper now follows the electron/muon order.

- l20. The data used, corresponding to an integrated luminosity of 35.9fb-1 at sqrts - 13 TeV, were recorded by the CMS experiment in 2016

- l25. The secondary mass hasn't been defined, what is it? you need to explain it
Fixed: removed.

- l28. physics generators --> event generators


- l28. and to compare with predictions->and to compare them with predictions


- l48. To which oder do you generate Z+c?


- l53. We assume you use FEWZ to obtain the full inclusive generator, and then the split in nJets is assumed by the generator, right? say it

Fixed, added: "...for the inclusive Z + jets process."

- l55. You need to mention the SHERPA version


- l60. You need to mention the POWHEG version


- l61. You need to mention the PYTHIA version


- l65. Do we really use 19?

We decided to keep it.

- l70. We think it would be good to add the average pileup in data in 2016

Added ", which in 2016 equals 23 on average."

- l72. The particle --> The CMS particle (notice there are many possible PF algorithms, that's why you need to define as the "CMS" one)
- l77. You should add a reference here
We used the standard sentence about muons in CMS papers.

- l78. How is the PV defined?

- l83. Actually, there are very few "fake" muons, but rather nonprompt muons. It's better to write it in that way

- l86. It is not clear how the pileup is substracted by just reading the text.
It is a standard phrase used in all papers about correcting the isolation variable.

- l88-96. We are surprise there is no mention about electron isolation, we assume you use it, don't you?
Added text about electron isolation

- l97 and that paragraph. Are you applying a lepton-jet overlap removal? this is not mentioned anywhere
We used the standard description of jets in CMS papers.

- l142. Shouldn't you mention this is usually call dressed-leptons?
We do not mention dressed leptons in other parts of the text, so there is no need to introduce this standard term.

- Tables 1 and 2 (major point). We don't understand the results, and why you use two different SFs for muons and electrons. There is no reason why the jet SFs should be different for the muon and electron channels, did you check it in MC? do you see different performance between them? it should not be the case. In particular, there are some bins with significant differences in the SFs, is it understood? For instance (but not only) pt30-37 SFc (0.85 vs. 0.69)
We do not understand the difference between the two channels: there are large statistical uncertainties on these values, within which they overlap. The same difference was observed in other analyses, e.g. SMP-19-004.

- Figs. 2 and 3. Could you make slightly larger legends?

- Fig.2 Muon and electron channels are not indicated in the two figures

- Tables 3 and 4. Same comment as before, we are not expecting sizable differences. In particular, the SFl trends for muons and electrons are exactly the opposite, is it understood?
It is not well understood: since the separation between the flavor components after tagging is small, the uncertainties are large, especially for the light component, which is the smallest compared with the Z+b and Z+c components.

- l170. What is the default MC used for the response matrix for the unfolding?
MadGraph NLO; added to the text. We have also cross-checked using the LO sample, since the background, acceptance, and response should not depend on the generator choice.

- l185. Do we assume you don't take the extreme cases? say it!

- l193. Can you quote the rough input uncertainties?
Added SFs variation estimates

- l198. Can you quote the rough input uncertainties?
Added SFs variation estimates

- l199. Add a reference about it

- l204. The 5% electron uncertainty is a very large overestimation, how did you get that?
Yes, this is a too-conservative upper limit; changed to 3%.

- l208. Can you justify that value? is there any reference?
Yes, a reference was added. This approach was used in other analyses with a Z boson and jets.

- l210. Give a reference about the luminosity
Fixed

- Fig.6 Z and c jet is not indicated in the two figures

- Section 7. In general, we don't understand why you split the muon and electron channels, and then you perform a combination afterwards. The jet results should not depend on the lepton flavor. Nevertheless, since you do so, you must report both results, and their compatibility

It is not clear how to select muons and electrons simultaneously (there are samples enriched with either muons or electrons). This approach is used in other studies that include a Z boson.

- Table 5. You need to add two significant digits in several cases (e.g. 4 --> 4.0, 0.4 --> 0.40). In addition, you are missing the luminosity uncertainty (constant, maybe mention it in the caption?), but more importantly the total uncertainty should be quoted. We quote the total theoretical and experimental uncertainties in the Results and Summary sections, after combining the two channels.

Unlike the other uncertainties, the luminosity uncertainty is applied after unfolding, when the unfolded distributions are divided by the total luminosity. This leads to a simple overall variation within 2.5%, so it was not added to the table.

- l172. Which unfolding method do you use?
We use TUnfold with no regularization, but cross-checked that using the built-in option of Tikhonov regularization gives the same result. We also substantiated the choice by calculating condition numbers, which are small; in this case no regularization is needed.

- l185. Do we assume the extreme variations are excluded?
Yes, added this to the text.
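The condition-number check mentioned in the l172 response can be sketched as follows. This is an illustration only: the response matrix below is invented (using the 5 reco and 4 gen bins quoted elsewhere in these comments), not the one from the analysis.

```python
# Hedged sketch: deciding whether unfolding needs regularization by
# computing the condition number of the response matrix (the ratio of
# its largest to smallest singular value). A small condition number
# (order of a few) means the effective matrix inversion is well
# behaved and no regularization is needed. Matrix values are made up.
import numpy as np

# Illustrative response matrix R[reco_bin, gen_bin], 5 reco x 4 gen,
# roughly diagonal-dominant migration probabilities:
R = np.array([
    [0.80, 0.10, 0.00, 0.00],
    [0.15, 0.75, 0.10, 0.00],
    [0.05, 0.10, 0.75, 0.10],
    [0.00, 0.05, 0.10, 0.75],
    [0.00, 0.00, 0.05, 0.15],
])
cond = np.linalg.cond(R)  # 2-norm (SVD-based) condition number
```

A condition number close to 1 indicates a well-conditioned unfolding problem; values of order 10^3 or more would instead call for regularization.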

- l217. We find it strange that you use that tool
This is the recommended tool for combining distributions from different channels.

- l223. How did you define that maximum value? Did you apply that cut-off to the jet pt values? You should say something about it.
We added the formula for calculating the fiducial cross section to the Results section. There we clarify that the measured number of charm events is used.

- l223. How do you split the experimental and theoretical uncertainties? We assume the values come from Table 5, but you don't say it. In fact, it would be great to add the total ll uncertainty in Table 5, since this is what you quote in the main text. Added: experimental systematic uncertainties include those related to \PQc tag/mistag rates, JER, JES, identification and isolation, pileup, and luminosity. The rest are theoretical systematic uncertainties.

- Table with data and fitted yields is missing. You need to add non-Z background, Z+LF, Z+b, and Z+c, both muons and electrons; this is absolutely needed. We keep the SFs to show the postfit results for MC.

- l236. You mention 'several' generators, but you have already mentioned one in the previous sentence. Maybe you can say 'other' generators. Or swap the two sentences. Summary changed: we keep it this way because we then add comments about all three generators after that sentence.

- l239. Do you really need the last sentence? Since you are not doing it, it's not clear why you need to mention it. If you want to keep it, we would say something like "These results are sensitive to the PDF of the charm quark and can improve the existing constraints". Summary was changed, fixed.

- The summary is unclear to me, what's our take with the large data/prediction disagreement?
Summary was changed.

- Reference 15. Don't we have something better than that reference? This is the usual reference for ttbar.

Sergey Polikarpov

Type-B (physics questions) + Figures + Tables:

General: Do you think a measurement of Z+charm meson production would be useful for the area of research of your paper? CMS is able to reliably reconstruct D*+, D0 and D+ mesons in pt range from as low as 3-5 GeV and up to ~100 GeV. Charm mesons would likely have smaller signal yield compared to charm jets, but, on the other hand, it is significantly easier to separate them from light and beauty contributions: it can be done by a fit to invariant mass distribution. Such an analysis could be less dependent on MC predictions. As an example of what I am talking about, you can look at this early LHCb paper done with 2011 data set: (we have much more data and better acceptance than this LHCb paper). (Consider citing it in the introduction). Another example is the ongoing Z/W + Upsilon production analysis BPH-19-004, where a 2D fit to m(Z)&m(Y) candidates is used to get rid of non-Z and non-Y backgrounds.

This measurement is out of the scope of this study

General: (just a curiosity) Given that you have reconstructed a large sample of Z+c events, have you looked at the M(Zc) distribution, to see any indication of a new particle decaying into Z+c? (Of course I understand that a proper search for resonances decaying into Z+c is a completely different analysis, out of the scope of this paper)

This measurement is out of the scope of this study.

General: Is only the single parton scattering (SPS) process considered for production of Z+c? Do you have any estimates of the magnitude of the double parton scattering (DPS) contribution? Presumably it is very low due to the tight requirement on the c jet momentum. Maybe this could be mentioned in the introduction. Do all MC samples described in section 3 only use SPS? Can e.g. the deltaR(Z, c jet) distribution help us to distinguish between the SPS and DPS contributions?

This question requires a separate study.

L25: "secondary vertex mass" was not defined; not sure if this is a well-known enough term. Consider adding a definition or a reference. Added a reference to the IVF algorithm for the SV, before "The invariant mass of tracks associated with the secondary..."

L59-61 In my opinion, it would help to clarify which (at least dominant) processes with top quarks and double boson production contribute to the background in your analysis. For instance for double boson, I imagine, it is ZW, where the W decays into charm and strange jets. There is a short description of the backgrounds: top (single and ttbar) and dibosons. Given that their contribution is small, we feel that we don't need to go into the details of how these different processes may mimic the signal event.

Such a process is possible, but its contribution is too small: (single top + dibosons) / DY ~ 0.01 (after c tagging).

L78: quality of the fit - which fit ? track fit ? (consider adding a clarification)

L78: please specify how the primary vertex is selected. Fixed: the fit quality -> the track fit quality


L102: how are the pileup vertices defined? All the other vertices except the "main" one? If yes, is the main vertex selected as the one associated to the Z->ll candidate? Yes, the lepton ID includes a cut on the distance from the primary vertex, which is defined as the one with the highest sum pt.

L112: same question. How do you ensure the charm jet originates from the same PV where the Z boson is produced? Is there any requirement on dz between the Z decay vertex and the charm jet production vertex? Pileup ID is applied to select jets associated with the primary vertex.

L128: "c tagging rate" is not clear: do you mean efficiency ? Fixed

L133: does this selection affect the reported measurement? (Is it corrected for?) For example, I could imagine an artificially increased cross section measured in high-pt(c) bins due to this requirement. What is the average number of c-tagged jets before this selection is applied? Selections at reco level can only change the acceptance/background; since the measurement is divided/multiplied by the acceptance/background, selections at reco level can only increase/decrease the measurement uncertainties.

L144: suggest to define in this sentence the M_SV variable that is used in the X axis title of Figure 2. For example, "... vertex (M_SV) in the ..." Fixed: "The invariant mass of tracks associated with the secondary vertex ($M_{SV}$)"

L145-146 if the previous comment is applied, you can simplify the sentence as "... observed M_SV distributions in the muon ..."

L150-151: please provide more details on the fits. Are they binned chi2 fits? What parameters are free and which are floating in the fits? Presumably all the shapes are fixed to MC and only the 3 scale factors are floating. Added at the beginning of the sentence: "Binned maximum likelihood template fits are performed..."

L152: It is not clear to me why the SF_q are floating in the fits, while the "top and dibosons" contribution is fixed. Is there any reason to trust the MC prediction for top and dibosons more than we trust the MC for light, c- and b-jets? The SVM distribution alone does not allow one to discriminate all the components; on the other hand, the contribution from top and dibosons is small compared with DY, so shape mismodelling of these components can be neglected.

L148-154: If I understand correctly, this procedure assumes there is no background under the Z boson, so that all the selected Z+jet candidates have a true Z boson and some jet, which may be c, b, or light. Is it true? Has it been checked? In general, I think the reader would be curious to see the Z->mumu and Z->ee invariant mass plots, showing a clear Z peak on top of (presumably) very small background, in a paper that studies Z associated production. I encourage you to add such a plot(s) into the paper or supplementary material. Actually, no assumption about the absence of background under the Z boson was made. It is stated at the beginning of the paragraph that "The top quark and diboson background predictions are taken directly from simulation." and we believe this sentence is clear about how the background is treated.

L156: I think it would be useful to underline here that after accounting for the measured SFs, all the M_SV distributions are in good agreement with the sum of MC templates, as Figure 3 shows. Fixed

L162: may be not clear what "this signal phase space" refers to, as the previous sentence describes removal of double counting. Consider moving the sentence in L160-161 somewhere else, or repeating your region definition in L162. Changed to "the signal phase space".

L175-176 and Figure 5: why are the efficiencies so much different between electrons and muons? L176 says that the efficiency loss is due to tagging. Why does the tagging efficiency depend on the Z reconstruction channel? Muons and electrons have different reconstruction efficiencies.

L178: suggest starting the section with a more general sentence. For example: "This section describes the sources of systematic uncertainties considered for the measured Z+c jet production cross sections. Uncertainties are estimated by varying input parameters and repeating ...." We prefer to keep the sentence as it is.

L180-181: this is not clear. If I understand the procedure correctly, I would suggest to write something like "Uncertainty related to each source is estimated as the difference, in each bin, between the obtained cross section and the baseline result obtained in Section 5." The current text "the differences between the unfolded distributions" may be interpreted in several ways.

L181: it is not clear which of the uncertainties below are pt(Z)-and pt(c)-dependent (and calculated for each bin), and which ones are independent on pt(Z) and pt(c). Consider adding a clarification. Table 5 summarises uncertainties for different variables/channels.

L182-183: Maybe these are standard and well-known scales and procedures, but they are not clear to me. The procedure also needs more explanation. If possible, give a reference. Are these scales used in the MC generation parameters? Would it make sense to introduce them in section 3? Do you produce additional MC samples with different mu_R and mu_F to calculate this uncertainty? (Or do you somehow reweight the existing MC samples?) Are the mu_R and mu_F scale uncertainties considered uncorrelated and summed in quadrature? Or are they correlated? This is the standard description of QCD uncertainties.

L186-188: Maybe I am misunderstanding something, but if your measurement is based on PDFs, there may be a vicious circle: in Line 6 you say that your measurement provides information on PDFs, but (presumably) you use the predictions for these PDFs in your measurement and discuss the related systematic uncertainty in L186-188.

This problem is common to most measurements that constrain the PDFs. First of all, the PDF uncertainties affect both signal and backgrounds in this measurement. For the background, both the uncertainty in the total cross section and in the shape of the observables can be relevant. For the signal and the Z+b and Z+uds backgrounds, which are fitted to the data, the PDF uncertainty can affect only the shape of the distributions and, indirectly, the result of the measurement. However, the latter can have a smaller uncertainty than the PDF used to extract it, in which case it provides useful information to further constrain it.

L192: Please clarify where you take this standard deviation from. Is it from MC? From some reference with a study of CSV tagger performance (give a reference in this case)? Or are these the uncertainties obtained in the fits described in the beginning of Section 5? In the latter case, the uncertainties on SF_l, SF_c, and SF_b are correlated with each other, and the systematic uncertainty obtained by varying each SF by +-1sigma may be underestimated or overestimated. The magnitudes are added. The description of these uncertainties is available only within CMS, so it cannot be cited.

L194-198: It is not clear where the magnitudes of the tested variations are taken from (both L197 and L198). Please clarify, e.g. add a reference. The magnitudes are added. The description of these uncertainties is available only within CMS, so it cannot be cited.

L199-200: It is not clear how this cross section variation accounts for the pileup-related uncertainty. Please explain what the uncertainty related to pileup is (where exactly does pileup enter your measurement?). Why is the variation of 4.6% taken? (Give a reference?) Fixed

L203-205: Again, give a reference to motivate the chosen variations. The resulting uncertainties were obtained in this measurement and can vary between analyses.

L208: add a reference for this 10% uncertainty or explain why it is a reasonable value. Why is only the uncertainty in the top pair cross section calculated? Are the uncertainties in single top and dibosons negligible? If so, clarify it in the text. Fixed

L209: Add a reference for this uncertainty in the luminosity. Consider adding a clarification that the luminosity is used in section 7 to calculate the results. Fixed

L211: Are all uncertainties in Table 5 independent of Z or c jet pt? If yes, clarify it in the beginning of section 6; if not, clarify what Table 5 gives. It is added to the text that the table summarizes the uncertainties for the fiducial cross section.

L219: why are the luminosity uncertainties considered to be uncorrelated ? Same question for QCD renormalization scale and PDFs.

The luminosity uncertainty is correlated and taken into account as a change in the normalization of the final results after unfolding and combining (added to the text). The QCD and PDF uncertainties are calculated in such a way that it is impossible to track the separate variations of the QCD renormalization scale and the PDF choice.

L223: how is this inclusive cross section calculated ? Please explain in the paper. What is the last uncertainty "th" ? It has not been discussed in the text/defined.

L222-223: why is only the Z pt<300 GeV requirement given here? No pt(c jet)>30 GeV requirement is listed; no requirements on the Z decay products are listed. It is mentioned that the fiducial cross section is measured, so the cut on Z pt is additional to the fiducial volume selections.

L222-223: suggest to reword as "inclusive fiducial cross section of Z and charm jet production for .... is measured to be 413+-...., which is significantly below the prediction of MG5... generator of 524...."

L230: according to L119, a pt(e)>29 GeV requirement is applied. Please clarify. The cut was chosen so that it was ~2 GeV above the corresponding trigger threshold.

L234: why is only the pt(Z)<300 cut listed here among all the applied kinematic cuts? It is mentioned that this value is for the fiducial cross section (the fiducial volume is defined); this cut is additional to the fiducial volume definition.

L239: "fit the PDF of the charm quark" - not clear what is meant by "fit". Maybe better "to determine" or "to estimate" or "to evaluate"? Changed to "...can improve the existing constraints on the parton ..."

Tables 1-5:
- In general, are tables 1-4 helpful to the reader? Would it not be better for readability to present these tables as figures? (like you do in Figures 4-5 for acceptance and efficiency)
The plots were changed to tables, as this is the common way to present flavor-component SFs.

- table captions should be above the table
Fixed

- remove outer table lines and most of the inner lines, according to the guidelines.
--- E.g. for tables 1-4, leave only the horizontal line after the 1st row and three vertical lines separating the columns
- Use double hyphen in latex to indicate the range in the 1st column of tables 1-4
- use proper minus symbol (not dash) in table 5
- use triple dash in the last two cells of Table 5
- There is no point in repeating (%) in each column title of Table 5, remove those and change the table caption to "Systematic uncertainties in percent in the measured inclusive fiducial cross section."
- Tables 1-4: increase the line spacing so the numbers don't overlap.

- all axis labels, axis titles, and, especially, legends, are too small. Please increase their size for better readability.
- Figure 2-6: suggest to reduce or eliminate space between top and bottom panels in each figure
Fixed

- Figure 2-6: the plot on the right seems to stick out from the textwidth
- Figure 2-3: marker in the legend for data points does not match the plots: there should be no line in the legend
Fixed, vertical line kept: other papers have such markers in the legend.

- Figure 2-3: green box in the legend for "top and dibosons" has a small black dot in its center, which should be removed (as there is no such marker in the plot itself)

- Figure 2-3: to be consistent with text, remove dashes in the legend (c-jets and b-jets -> c jets, b jets)

- Figure 4,5: make sure you use CMS TDR style script to produce the plots. Axes labels and titles are too small.
- Figure 4: figure appears to be quite empty and take a lot of space. Consider reducing Y axis range.

- Figure 5: Suggest to reduce Y axis range and put legend below points.

- Figure 4,5,6: figures seem to be wider than textwidth.
- Figure 4,5: are vertical uncertainties zero or too small to be visible on the plots ? Please clarify in the caption.
Yes, these uncertainties are small.

- Figure 4,5: consider using different line colors for electrons and muons (in addition to different markers and marker colors)

Do we need different colors for bars? Error bars are usually black.

- Figure 4: some points are overlapping between muons and electrons, so they are hardly distinguished.

If values are close, it is hard to distinguish them and their plots overlap

- Figures 3-6: p in symbol for p_T should be in italics to be consistent with text. (X and Y axis titles)

We use the standard symbol for pt in the plots.

- Figure 6: (top panel) legend entry for "Measurement" has a horizontal line, which is not there on the points. Remove it from the legend.

- Figure 6: (top panel) are vertical uncertainties too small to be visible on the plots ? Or they are given by the height of the yellow band ? Consider clarifying in the caption. Consider also giving the results also in a table (e.g. in supplementary material), so that one can precisely know the measured value and uncertainty in each bin (e.g. for comparison to other experiment data or new theoretical predictions).
Yes, these uncertainties are small.

- Figure 6: (top panel) The blue points appear darker than the respective blue legend entry.
- Figure 6: (bottom panel) legend entry for "Stat" should have vertical uncertainty line.
They have the same color

- Figure 6: (bottom panel) legend: Stat -> Stat. uncertainty
Fixed; removed.

Figure 6: (bottom panel) legend: Total -> Total uncertainty

Fixed - removed

- Figure 6: (bottom panel) legend: Th Syst -> Theor. syst. uncertainty
Fixed

- Figure 6: (bottom panel) legend: Exp Syst -> Exp. syst. uncertainty
Fixed

Albert De Roeck

General Comments

There is not much mention in the paper of how it compares to the earlier 8 TeV analysis in ref 4, e.g., on improved quality/improved understanding of the data/process. We should comment on that in the paper.
- by how much is the precision of the data (stat error) improved?

Comparing with the results in SMP-15-009: total xs x branching = 8.8 ± 0.5 (stat) ± 0.6 (syst) pb, or 5.5% stat and 6.8% syst at 8 TeV, compared with the current result of 1.9% stat and 5.5% syst.

- Madgraph LO and NLO seemed to agree with the data in ref [4]; now the NLO one is
20% too high. MCFM at the time was about a factor two too low (we do not test it here).
Is that all consistent with the present results? The NLO prediction is off in this paper.
- Do our measured values in the two papers agree (modulo an expected CM energy dependence)?
- Given the discrepancies we see between data and calculations, I am not sure we actually
understand exactly what we are concluding. Do we consider these measurements + the theory
to describe them satisfactory and understood enough to suggest using these data for PDF charm fits?
We conclude the paper saying that these results can help to fit the PDFs, but we also say before that the
best agreement is reached with a LO QCD program. The way it stands there now, it looks
like a contradiction to me. LO predicts fewer jets than NLO, but its prediction for the charm component may be lower than it actually is, just as with NLO, so these two effects compensate each other for LO.

- line 53: matching: the matching is again discussed for all MC programs on line 66. Is that not redundant?

- line 54: So the madgraph (LO) cross sections are not used, but we use the FEWZ
ones, correct? This is used for the curves in the comparisons in Fig 6 as well, I presume?
This may not be clear when just looking at Fig 6, where it looks as if these
are madgraph predictions from the legend. As people may just
see this figure in talks, one should explain this better in the legend (or at the
minimum in the caption). Similarly for Sherpa. We mention that all 3 signal models are normalized to NNLO with FEWZ in the section where the MC models are described.

- line 65: is this 19 GeV the default value for MLM? Or why was this value chosen and
not say, a round number like 20 GeV?
There is a standard procedure to choose the matching scale for a given process. Even if not so important, it is a useful number and should be quoted for completeness. There is no need to explain how this is chosen, which is a rather technical detail, well known to generator experts.

- line 69: Geant4 is written in the wrong format, for what we usually do in CMS papers.
ps: we use: \newcommand{\GEANTfour}{{\textsc{Geant4}}\xspace}

- line 76: add a ref to the muon reconstruction paper (in order to make it symmetric with
what we do now for the electron paragraph following this one).
Added the same reference as for the muon pt resolution; the same paper describes the muon measurement parameters at CMS.

- line 78: Say somewhere in this section how the primary vertex is defined in this analysis.

- line 86: How do we compensate for pile-up contribution? Add a reference for the method used.
I can't find a reference for this part in other papers; it seems there are some technical details which are available only on a special POG twiki.

- line 92: are the electrons required to originate from the primary vertex with the
same conditions as above imposed on the muons? Please specify what is used.
Fixed.

- line 98: [r1][r2] -> [r1,r2]

- line 125: since c jets are very central to the physics measurement in this paper,
I would have expected a few more lines explaining how the c tagger that we use
in this analysis works, not just a reference. Some information is added, namely that it uses different parameters of the jet, such as the SV or its impact parameter.

- line 133: So we don't take the c-quark with the highest quality as given by the
c-tagger? Would that not be the better choice? I assume you checked that?
Not sure what highest quality means here; the tagger only shows whether a jet passes the criteria or not. Since we select the highest-pt c jet at gen level, there is a better chance to match it if we select the highest-pt c-tagged jet.

- line 170: what used as standard MC for the response matrix for the unfolding?
What systematic uncertainty is taken for the unfolding, eg the response of alternative
MC models? There is no systematics in table 5 for the unfolding.
Please spell out what is done.
We performed the unfolding of the measured Z+c jet events using the acceptance, response, and background from LO MadGraph; the results agree within statistical errors with those obtained using the same inputs from NLO MadGraph, so the unfolding with LO MadGraph was not used as an uncertainty.

- line 185: The typical question on how the scale uncertainty was determined.
Do we avoid including in this process combinations where the change of the ratio of
the scales is more than 2? It is strongly recommended NOT to use these extreme
variations, as otherwise you get a too pessimistic systematic uncertainty from that.
Yes, the anticorrelated combinations (2muF, 0.5muR) and (0.5muF, 2muR) are not taken into account.
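The envelope construction described in this exchange can be sketched as below. The per-variation cross sections are made-up numbers and `scale_envelope` is a hypothetical helper, not code from the analysis.

```python
# Hedged sketch of the 7-point QCD scale-variation envelope: vary
# (muR, muF) by factors of 0.5 and 2, drop the two anticorrelated
# extremes (2, 0.5) and (0.5, 2), and take the max/min deviation from
# the nominal prediction.
from itertools import product

def scale_envelope(predictions):
    """predictions: dict mapping (muR_factor, muF_factor) -> value."""
    nominal = predictions[(1.0, 1.0)]
    variations = [
        predictions[(r, f)]
        for r, f in product([0.5, 1.0, 2.0], repeat=2)
        if (r, f) != (1.0, 1.0) and (r, f) not in [(2.0, 0.5), (0.5, 2.0)]
    ]
    up = max(v - nominal for v in variations)
    down = min(v - nominal for v in variations)
    return up, down

# Illustrative (invented) cross sections in pb for one bin:
preds = {(1.0, 1.0): 100.0, (2.0, 2.0): 106.0, (0.5, 0.5): 95.0,
         (2.0, 1.0): 104.0, (0.5, 1.0): 97.0,
         (1.0, 2.0): 103.0, (1.0, 0.5): 96.0,
         (2.0, 0.5): 120.0, (0.5, 2.0): 80.0}  # extremes are ignored
up, down = scale_envelope(preds)
```

Note how the two extreme anticorrelated points (120.0 and 80.0 here) are excluded from the envelope, exactly as recommended in the comment above.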

- line 187: what is "CT14 [59]"? There are only 39 references. Is that something
left over in the text?
This is an error; this PDF set shouldn't have been mentioned. Fixed.

- line 188: You use the NNPDF PDF distributions, so why do you use here the
CT14 description for the uncertainty and not the NNPDF one (or the PDF4LHC one)?
- line 200: one could add a reference here for this number (as we do in other papers)
This is an error; this PDF set shouldn't have been mentioned. Fixed.

- Table 5: +/- uncertainties for one channel/quantity are not always given to the same
numerical precision. This looks sloppy. Fixed.

- line 217-224: The discussion at the end of this paragraph is very terse. See also my general
comments above. Is it not weird that the NLO calculation is so significantly off?
Or is it because of the FEWZ normalization that the LO one works better?

It might be more complicated: for Z + any jets, NLO provides better agreement with data than LO normalized to NNLO, but it overshoots Z+c jet. LO might overestimate Z+c jet, but since the total Z + any jets rate is underestimated, this effect is compensated and LO provides better agreement.

Or can we blame the charm PDFs? (probably not). The reader is a bit left wanting here.
- line 237: ... but this is with NNLO cross sections, right? So this is a bit of
an unfair comment in that case. At least this needs to be explained. Added that the predictions are normalized to the next-to-next-to-leading-order cross section.

Olaf Behnke

- l.150 could you specify what you mean by fitting.
Is this a Poisson based maximum likelihood template fit or
is it a chi2 fit?

The fitter was changed to TFractionFitter, which uses a likelihood fit.

- l.150 You are not discussing the problem of limited MC statistics
for the Z+c, Z+b and Z+l tempates that are fitted to the data Msv distributions.
Can it be safely neglected? Normally one uses the Barlow Beeston method that
is implemented in the TFractionFitter code. Or one uses the combine tool
where the so-called Barlow Beeston lite method is used, approximating
the uncertainty of the sum of templates by a gaussian distribution.
In particular for the fits in the differential bins one could easily
imagine that the MC statistics could play a role in the fits.

Yes, the RooFit setup did not take into account the statistical errors of the MC; the analysis was redone using TFractionFitter.
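A minimal sketch of what such a binned Poisson maximum-likelihood template fit does is given below. This version deliberately omits the Barlow-Beeston treatment of finite MC-template statistics that TFractionFitter adds on top, and all template and data yields are invented for illustration.

```python
# Hedged sketch of a binned Poisson maximum-likelihood template fit,
# the kind of fit TFractionFitter performs (here WITHOUT the
# Barlow-Beeston handling of limited MC statistics). Toy numbers only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def nll(scales, data, templates):
    """Negative log-likelihood of Poisson data given scaled templates."""
    mu = np.clip(scales @ templates, 1e-9, None)  # expected yield per bin
    return float(np.sum(mu - data * np.log(mu) + gammaln(data + 1.0)))

rng = np.random.default_rng(0)
# Three toy M_SV templates (light, c, b jets) over 10 bins:
templates = 10.0 * np.array([
    [80, 60, 40, 25, 15, 10, 6, 4, 2, 1],     # light jets: peaks low
    [30, 45, 55, 50, 40, 25, 15, 8, 4, 2],    # c jets: intermediate
    [5, 10, 20, 30, 40, 45, 40, 30, 20, 10],  # b jets: peaks high
])
true_scales = np.array([0.9, 1.1, 1.0])
data = rng.poisson(true_scales @ templates).astype(float)

res = minimize(nll, x0=np.ones(3), args=(data, templates),
               bounds=[(0.0, None)] * 3)
sf_light, sf_c, sf_b = res.x  # fitted flavor scale factors
```

With plentiful MC statistics the Barlow-Beeston correction changes little; it matters when template bins have few MC events, which is the concern raised in the comment above.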

- Fig. 2: as discussed in recent months, the SV mass is already sculpted/distorted
by the deep CSV tight c-tagging criteria, which already use
the mass in order to enhance the c component. I think this distortion
should be mentioned in the text. Otherwise it looks like we have
much worse SV separation for the various flavours than other experiments.
Normally the b jets are the only source that has a large contribution
between 2.5 GeV and 6 GeV, but here, after the deep CSV cut, the situation
has changed. We can still say that after all we still have reasonable
separation power. In this context I think it would be very much worthwhile
to also mention in the paper the cross-check you performed using
the deep CSV b medium point, after which the SV mass looks much different
and less sculpted. If I recall correctly, the scale factor results
(tables 1 and 2) come out consistently.

Yes, we have checked that, after using the medium b tagger instead of the tight c tagger, the unfolding results in both cases are consistent within errors (stat and tag-SF uncertainties were taken into consideration); we are not sure what the best way is to put this in the paper. We added the sentence that MSV is used in the c tagger, though it can still discriminate the flavor components. We also decided not to add the results of the cross-check using the b tagger.

- Fig. 2: it would be really nice to show how the post-fit SV mass distribution looks, i.e. how the template fit really describes the SV mass discriminator spectrum after the fit. One could do that using the total sample like in Fig. 2, or after summing up all the fits in five bins of Z candidate pT, separately for the electron and muon channels. Such a figure is missing in the paper. One could also see better whether some residual shape problems that are visible in Figure 2 vanish with the fit-adjusted templates. After all, you have not really demonstrated that the data are described well by the fit. (One step further would be some chi2 GOF test.) It is nice to see the control plots in Figure 3, but still a post-fit plot of M_SV would be good to see. The final purpose of the fit is good data/MC agreement not for the SVM but for the pt spectra; do we really need to include these plots? The MSV distributions after applying the k-factors, as functions of the Z or c-tagged jet pt, are added as supplementary material.

- Table 1 and 2: I would add some information about the level of fit correlation
between the sources, as usually expressed by the correlation coefficients rho.
State some typical value for rho_bc, rho_clf and rho_blf in the text.
I would guess that rho_bc comes out to be negative and somewhere between
-0.5 and -0.9. Of course, if it were -0.99 it would become a problem.

The correlation coefficients between the charm and bottom components are shown in the plots in the attachment (correlations.pdf) as a function of pt. As expected by Olaf, these are ~ -0.75 to -0.85.
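Extracting such correlation coefficients from a fit covariance matrix can be sketched as follows; the covariance values are illustrative placeholders, not the analysis numbers.

```python
# Hedged sketch: turning a fit covariance matrix (as returned by
# TFractionFitter or any minimizer) into correlation coefficients
# such as rho_bc. The covariance values below are invented.
import numpy as np

def correlation_matrix(cov):
    """Convert a covariance matrix into a correlation matrix."""
    sigma = np.sqrt(np.diag(cov))
    return cov / np.outer(sigma, sigma)

# Illustrative covariance for (SF_l, SF_c, SF_b):
cov = np.array([
    [0.0100, -0.0020,  0.0010],
    [-0.0020, 0.0064, -0.0048],
    [0.0010, -0.0048,  0.0081],
])
rho = correlation_matrix(cov)
rho_bc = rho[2, 1]  # correlation between SF_b and SF_c
```

A large negative rho_bc, as anticipated in the comment, simply reflects that the b and c templates can trade yield against each other in the overlapping M_SV region.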

- l.169-172: the unfolding description is missing the information
that you unfold the fitted Z+c event numbers (which you get
from the product of SFc times the number of Z+c signal MC events
at detector level, per bin). Please add such a sentence.
Are you really using the same number of bins at detector level
and generator level, or are you using more bins at detector
level, which is what TUnfold normally expects? We have
probably already discussed this recently, but I forgot what you did.
If you use more bins at detector level, I think this should
be clearly mentioned in the text. If you use more bins, it
is a generalised matrix inversion (from chi2 minimization); a plain matrix inversion
would be used for the same number of bins at both levels. Added a phrase that TUnfold is used to unfold the measured Z+c jet event distribution, with N_reco = 5 and N_gen = 4.
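As an illustration of the unregularized case (more reco bins than gen bins, tau = 0), the chi2 minimization reduces to a generalized matrix inversion; a minimal numpy sketch with a hypothetical 5x4 response matrix, not the actual analysis response:

```python
import numpy as np

# Hypothetical 5x4 response matrix A (reco bins x gen bins); each column
# sums to 1, i.e. full efficiency, purely for illustration.
A = np.array([
    [0.80, 0.10, 0.00, 0.00],
    [0.15, 0.75, 0.10, 0.00],
    [0.05, 0.10, 0.75, 0.10],
    [0.00, 0.05, 0.10, 0.75],
    [0.00, 0.00, 0.05, 0.15],
])

x_true = np.array([1000.0, 600.0, 300.0, 100.0])  # toy gen-level yields
y = A @ x_true                                    # reco-level expectation
V = np.diag(y)                                    # Poisson covariance of the data

# Unregularized chi2 minimization: x = (A^T V^-1 A)^-1 A^T V^-1 y.
# This is the "generalized matrix inversion": well defined even though
# A is 5x4, i.e. more reco bins than gen bins.
Vinv = np.linalg.inv(V)
x_hat = np.linalg.solve(A.T @ Vinv @ A, A.T @ Vinv @ y)
```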

l.127 and l.189-193: the information on the flavour tagging calibration
and its uncertainty is too brief in my view, or there should be
some references. It seems that the information/plots about this SF calculation are available only within CMS, so we decided to leave it as it is.

Fig. 3: is there a problem in pT^ll in the first bin? It looks like residual mismodelling of the Z pT spectrum: when no c tagger is applied, a "wave" in the data/MC ratio is observed at small Z pT values.

l.217 "The results" which results??? The cross section results, right? Fixed

l.224 How is the theory uncertainty 11.7 (th) pb calculated?
This should be described in the text. Fixed

l.223 how was the inclusive fiducial Z+c cross section measured? Is it
just from summing the differential cross sections vs pT^Z?
I suppose so, but where is that stated in the paper? Added a formula to the results section
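If the inclusive number is indeed the bin-width-weighted sum of the differential cross sections, the formula amounts to the following sketch (all numbers hypothetical, not the measured values):

```python
import numpy as np

# Hypothetical differential cross sections dsigma/dpT (pb/GeV) and the
# corresponding bin widths (GeV); the numbers are made up for illustration.
dsigma_dpt = np.array([1.20, 0.55, 0.18, 0.04])
bin_width = np.array([20.0, 30.0, 50.0, 100.0])

# Inclusive fiducial cross section as the bin-width-weighted sum of the
# differential cross sections.
sigma_incl = float(np.sum(dsigma_dpt * bin_width))
```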

l.223-224: there is really a significant discrepancy between data and an NLO
calculation; isn't the large level of discrepancy a bit surprising,
and should we think further about it?
In this context: do I understand correctly that for
the signal MC the NNPDF3.1 PDF set is used, at NLO,
and in the 5-flavour scheme, including c and b quarks
in the proton PDFs? Does the simulation also include processes with an initial gluon in one of the protons that splits into a ccbar pair, with one of the charm quarks radiating a Z and interacting with a gluon from the other proton? Yes, c jets can originate from gluon splitting, but this measurement is inclusive Z+c jet and takes such jets into account.

Greg Landsberg


  • Title: ... and charm jets in proton-proton collisions at √s = 13 TeV. Fixed

  • - LL7-12: please split this very long sentence in two - it's hardly parsable otherwise. Fixed

  • - L40: within a fixed time interval of about 4 μs. Fixed

  • - L47: ... \MGvATNLO 2.2.2 [8,9] (MG5\_aMC) ....

  • - L48: please, specify the perturbative order at which the Z+c jet process is simulated. Is it NLO, just as for the Z+jet backgrounds? Fixed

  • - L54: give the full version of FEWZ here - is it 3.0? Fixed

  • - L55: ... (n=0--4) processes. The value ... [no need to specify the MLM matching here, as it is already specified on L85, which should be moved before the present sentence [see a comment on LL2-67 below]. Also, give the full SHERPA version here; is it 1.0? Fixed

  • - L60: give the full \POWHEG version here, 2.0. Fixed

  • - LL62-67: it makes little sense to discuss the matching schemes until you describe the parton shower generator. Therefore, it would be logical to move this paragraph right after the first sentence of the section and then start "The \MGvATNLO version 2.2.2 ..." as a new paragraph. Also, is it really true that you use PYTHIA to shower the SHERPA samples, given that SHERPA has its own parton showering and fragmentation built in? Also, please give the full PYTHIA version here, 8.212? On LL62-63, say: "... showers, hadronization, and the underlying event, with the CUETP8M1 tune [21] that uses the NNPDF2.3 [22] LO PDFs ...".
Fixed: MadGraph and SHERPA use NNPDF 3.0; PYTHIA then uses CUETP8M1 or CUETP8M2T4 with NNPDF 2.3.

  • - L78: define the primary vertex here. Fixed

  • - L86: either give a reference to, or explain how the isolation is corrected for the effects of pileup here. Fixed

  • - LL76-96: you constantly switch the order in which you describe electrons and muons in the paper, which is pretty annoying. Please use the same order - electrons before muons - throughout the paper. Here you need to swap the order of the two paragraphs. Now, the muons are required to be isolated, but you say nothing about the electron isolation. Surely you require electrons to be isolated too, so please specify the isolation criteria in the corresponding paragraph. Similarly, you talk about the dielectron mass resolution, but the muon pT resolution. This makes little sense, so please specify the dilepton mass resolution for both channels, as this is the relevant quantity for this paper. Fixed

  • - LL113-114: again, swap the order of the electron and muon trigger description here. Fixed

  • - LL115-119: ... and the order of the electron and muon paragraphs here. Fixed

  • - Tables 1-4: swap the order of Tables 1,2 and 3,4 [i.e., 2,1,4,3], so that you talk about electrons before muons. Also, I suggest identifying which pT is used in each table for clarity [i.e., use pTj in the headings of Tables 1-2 and pTZ in the headings of Tables 3-4]. Fixed

  • - LL158-159: I don't understand why the generator-level phase space is defined differently from the reconstructed one. In the reconstruction, you use pT>29 (26) GeV for the leading lepton in the electron (muon) channel, while the generator-level phase space is defined as pT>26 GeV. You can't be that sloppy in a precision measurement paper, particularly since you claim that the fiducial cross section you measure is significantly higher than the theoretical one within the same phase space! Surely you will measure a lower cross section in the electron channel given the harder lepton pT requirement! I therefore believe that the measurement should be redone in the phase space defined by the leading lepton with pT>29 GeV, both at the reconstructed and generator levels; only then would the comparison make sense. The phase spaces at gen level and reco level don't have to be the same: the difference is taken into account by the acceptance. The maximum of the leading-electron pT distribution is above 29 GeV, so the gap does not have a big impact on the acceptance, which is dominated by the c-tagging and electron-ID efficiencies. On the other hand, the muon sample has larger statistics, so the threshold was set as low as possible.

  • - Figure 2: swap the order of the left and right panes and update the caption accordingly. It would really be useful to also show the same distributions after the scale factors are applied, either as a second set of plots, or as the second Data/MC distribution in each lower pane of the present plot. In the lower pane y axis legends, use "Data/MC". In the legends, use: "LF jets", "c jets", "b jets", "t quark and dibosons".

  • - L172: specify what unfolding method was used - d'Agostini, SVD, ...? Fixed

  • - L182-183: Renormalization and factorization scales: "The ambiguity in the choice of the renormalization (μR) and factorization (μF) scales leads to an uncertainty ..." [These are not QCD scales! While the renormalization scale is indeed the scale of a QCD renormalization group equation evolution, the factorization scale has to do with PDFs, not QCD matrix elements.] Not sure; it is not stated that these are QCD scales.

  • - L185: specify the default μR=μF scale used in the calculations. Fixed

  • - LL186-188: first of all, Ref. [59] is nowhere to be found; the paper only has 39 references, none of which resembles the CT10 one. Second, why do you use a CT10 uncertainty set rather than the full PDF4LHC prescription for Run 2?
The whole paragraph seemed wrong; it was changed to something more common, as in other papers.

  • - Figure 2: swap the order of the two rows of plots - electrons before muons - and update the caption accordingly. Fixed. In the lower pane y axis legends, use "Data/MC". In the legends, use: "LF jets", "c jets", "b jets", "t quark and dibosons". Move the x axis labels a bit lower, so that they do not overlap with the axis values.

  • - L200: give a reference to our total inelastic cross section measurement to justify the 4.6\% number. Fixed

  • - L208: why the cross section is varied by 10\% - where does this number come from? Please, provide a reference. Fixed

  • - L210: give a reference to LUM-13-001 for the 2.5\% number. Fixed CMS-PAS-LUM-17-001

  • - Figures 4-5: swap the order of the electron and muon keys in the legend. Fixed

  • - Table 5: the caption states that the uncertainties are in the inclusive cross sections, whereas the entries clearly show differential results, as the uncertainty depends on whether the pT of the c jet or the Z boson was used. Please correct the caption accordingly. Changed "inclusive" to "integral".

  • - L223: explain "(exp)" as experimental systematic uncertainties. Fixed

  • - L224: how was the theoretical uncertainty estimated? Please explain in the Systematic uncertainties section. Does it include the scales and the PDF? Fixed

  • - Figure 6: the legend is illegibly small - please make it bigger. Move the x axis labels lower so that they do not overlap with the axis values. In the lower pane legends use "Theo Syst.".

draft V6 comments

Main updates after V5

Main changes were made in Sections 5, 6 and 7: instead of multiplying the bottom and light components by the corresponding k-factors and then subtracting them from the data distribution, the number of charm events is now found in each pT bin from the fit of the SVM, along with the k-factors, which are now used only for validation (data/MC agreement after applying the k-factors). This procedure is described in Section 5. The light and bottom k-factor uncertainties were removed from Section 6, since these k-factors are used only in validation and not in the calculation of the differential cross section. Section 7 contains the updated results and plots.


  • I always find it cumbersome when systematic uncertainties are quoted before being described in the text. It happens here for the SFs in Tables 1-4. I just want to make sure that the systematic uncertainties in those tables correspond to the sources described in Section 6.
    • Yes, these are the same uncertainties.

  • Section 6: can you add references (if journal publications are available) for the systematic uncertainties you use? (for leptons, Jets, ttbar). In particular, where do you take the +-10% uncertainty on ttbar from?
    • The systematics were taken mostly from TWiki pages of the corresponding POGs. The ttbar uncertainty was taken from SMP-16-018 AN2016-379.

  • There are some places in your current paper version (v6) where the names of the generators do not appear, e.g. on line 221 or lines 233-235. This was not the case with the previous version of the paper (v5). Also, please state/quote where the predicted cross section of 524.9 pb (line 223) is coming from.
    • Some bug from CADI, which will be fixed: in the git version of the paper the generator names are shown properly. The same holds for the 524 pb from MG NLO, which is not shown after uploading from git to CADI.

Approval questions

  • - double-check the implementation of the QCD scale uncertainties as we discussed
    • We use the weights with IDs 0-8 for the QCD scale variations: excluding the two combinations (2\mu,0.5\mu) and (0.5\mu,2\mu), there are 7 weights. The weight with ID = 0 corresponds to the (\mu,\mu) combination and coincides with the default event weight, which is used as the central value, so after unfolding there are 6 copies of the distributions, corresponding to the variations.
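A minimal sketch of the envelope construction described in the reply (all yields hypothetical; the analysis takes the deviations per bin from the 6 retained variations around the nominal weight):

```python
import numpy as np

# Hypothetical per-bin yields for the 7 retained scale choices:
# row 0 is the nominal (muR, muF) = (mu, mu) weight; rows 1-6 are the
# variations after dropping the (2mu, 0.5mu) and (0.5mu, 2mu) combinations.
yields = np.array([
    [100.0, 80.0, 40.0],   # nominal
    [103.0, 82.0, 41.0],
    [ 97.0, 79.0, 39.0],
    [105.0, 83.0, 42.0],
    [ 95.0, 77.0, 38.0],
    [102.0, 81.0, 40.0],
    [ 98.0, 78.0, 39.0],
])

nominal = yields[0]
up = yields[1:].max(axis=0) - nominal    # envelope upward deviation per bin
down = nominal - yields[1:].min(axis=0)  # envelope downward deviation per bin
```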

draft V4 comments


  • line 31: “pseudo rapidity”
    • Fixed

  • line 143: “Estimates”
    • Fixed

  • line 244-245: in the results put the statistical error first, then “exp” followed by “th” as the last one. The whole sentence can be moved to line 238 just before “The predictions from …” and can be also copied just after line 229 at the end of the Results secton.
    • Fixed


  • Tables 1-4: Can you format the systematic uncertainty as ^{+…}_{-…}?
    • Fixed

  • l157: SFc, and SFb
    • Fixed (the comma before "and" in an enumeration is optional?)

  • l160 and elsewhere: generator-level (when followed by a noun)
    • Fixed

  • Fig 2 and others: Horizontal error bars should be removed for constant-width bins. In the legend and axis title, change DATA to Observed.
    • To be fixed

  • Fig 4: The y axis title is not accurate (the caption indicates it should be a fraction). The CMS simulation and luminosity headers are too small. The labels of both axes are too small.
    • To be fixed

  • Table 5: the table is too wide
    • Fixed

  • l221 and elsewhere: branching ratio -> branching fraction
    • Fixed

  • l244: remove italic font
    • Fixed

  • References: remove no. (eg in [1] and [2]).
    • To be fixed: no. is generated automatically

  • The letter of the journal should not be bold and attached to the number (e.g. in [20] and [22])
    • Fixed

ARC questions


  • I am having a hard time following exactly what was done with the fits to obtain SFc and SFb and how they are applied. I think some more details on how it was done exactly need to be provided.

  • What selections are applied? Is it the full signal region selection? Or is this some orthogonal dataset?
    • No orthogonal dataset can be defined for Z+c jet, so the normalization for the Z+b background is obtained from the same sample from which it is subtracted.

  • In what bins are SFc and SFb measured? (is it one per ptZ / ptC bins? e.g. x < ptZ < y && z < ptC < a?)
    • The SFs are measured either as a function of pTZ or of pTJ. There are not enough statistics to split in Z and c-jet pT simultaneously.

  • Why is the light component kept at 1? Could you provide some justification for this?
    • There are several reasons for that: for Z+jet (without c tagging) there is good agreement between data and MC, and since most of the events are light, the k-factor is ~1. The problem with retrieving this k-factor from the SVM fit is the small number of Z+light events, so the fit doesn't converge or has big errors. Other analyses with flavor k-factors made the same assumption.

  • For visual aid I think if the histograms are stacked in light -> b -> c order the shape difference would be more easily noticeable.
    • Fixed in draft paper.

  • It's probably my ignorance: what is meant by "in this analysis the secondary vertex mass was corrected for the presence of neutral particles"? What exactly is the correction, and is there an uncertainty associated with it?

  • I would like to see the individual pre-fit distributions in the AN as well.
    • added

  • Figure 51. the bottom right plot seems to have qualitatively different composition is this understood?
    • According to the pre-fit plots, the data/MC agreement differs between the muon and electron channels for c-tagged jet pT > 90 GeV.

  • The shape of the secondary vertex mass seems to be one of the most important quantities which seems to be taken from MC directly. Is there some orthogonal dataset where the modeling of this is verified?
    • Yes, these studies were done by Juan Pablo; he also derived corrections for the shape of the different flavor components.

  • If the b template or the c template has a shape mis-modeling by say ~10% what is the impact on the final SFc and SFb value? Is there a justification for using the MC shape?
    • There are many studies on the modeling of the secondary vertex mass, usually reported in SMP V+J by Juan Pablo. Another way to validate the obtained k-factors is to plug them back into the MC and check the data/MC agreement.


  • Title: I think you are measuring the "Z+c-jet differential cross section not the inclusive one

  • Please write a complete abstract so that it's clear to the reader the purpose of the search and the general key features of the analysis.

  • Section 1.1, line 53-54: To count the b jets and c jets at the generator level you have to define what a b jet and a c jet are at the generator level. Which method are you using to define them? Are you using hadronFlavour (5 for b jets and 4 for c jets) or another method? If you are using hadronFlavour or any other CMS "centrally maintained" method, I think it is worth at least citing it. If instead you define b and c jets yourself by looking for the presence of B and D mesons in the gen jet, then I suggest explicitly describing your algorithm in more detail.
    • It is the hadronFlavour method definition.

  • Section 1.1, line 55-56: You define the "bottom" and "charm" MC components by counting the number of b jets and c jets with pT > 10 GeV at the generator level. Also, you define the "light" MC component if there are no heavy-flavor jets. How do you treat events in which the b jets and c jets have pT < 10 GeV? Are you still considering them in the light MC component, or are you discarding them? I suggest adding a line specifying this.
    • Yes, objects with pT < 10 GeV are not treated as jets at all.

  • Section 3, line 70: I think here the reference to Table 12 is wrong. Table 12 is on page 54 and is comparing the NLO and LO generators. I guess that the Table you want to refer to is Table 1.
    • Fixed

  • Section 3, Table 2: From reading Table 1, I guess that you used MINIAOD format also for the MC sample. I suggest making it clear in Table 2 as well.
    • Fixed

  • Section 4.1, muon selection: Since you ask for two offline muons to reconstruct the Z boson, have you checked the difference in trigger efficiency using a di-muon trigger vs the single-muon trigger you are currently using? Same as for electrons.
    • We didn't use double-lepton triggers: lowering the leading-lepton pT threshold wouldn't add much statistics according to the control plots.

  • Section 4.4, line 138. The CvsL and CvsB variables you are using represent a key feature of your analysis. I think that you should go a bit more into the details. Your analysis is among the first in CMS to make use of a dedicated charm tagger based on advanced machine learning techniques (I am assuming you are using CvsB and CvsL evaluated from DeepCSV). I suggest a) describing a bit why there are two discriminators (one dedicated to separating charm from light and the other to discriminating charm from bottom). b) Also, given that these two discriminators are ratios of multiclassifier outputs (CvsL = p(c)/[p(c)+p(l)] and CvsB = p(c)/[p(c)+p(b)], where p(c), p(b) and p(l) are the per-jet multiclassifier scores, interpreted as the probability for the jet to originate from a b quark, a c quark, or a light quark or gluon), please specify the algorithm used by the multiclassifier. In particular, state whether it is DeepCSV or DeepJet based, and maybe give a short description of their architecture, or just explain that they are multiclassifiers based on machine learning techniques/DNNs, referring to BTV-16-002 if it is DeepCSV.
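The two discriminators quoted above follow directly from the per-jet scores; a minimal sketch (the function name and example scores are ours; the working-point cuts are the ones quoted elsewhere in this review):

```python
def charm_discriminators(p_b, p_c, p_l):
    """Build CvsL and CvsB from per-jet multiclassifier scores, following
    the formulas quoted in the comment; the function name is ours."""
    cvsl = p_c / (p_c + p_l)
    cvsb = p_c / (p_c + p_b)
    return cvsl, cvsb

# Hypothetical scores for one jet; the working-point cuts are the ones
# quoted elsewhere in this review (CvsL > 0.59 and CvsB > 0.05).
cvsl, cvsb = charm_discriminators(p_b=0.10, p_c=0.70, p_l=0.20)
passes_ctag = cvsl > 0.59 and cvsb > 0.05
```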

  • Section 5.5: line 185-186: here you wrote: "Three different weights were used, depending on the flavor of the c-tagged jet". In my opinion this sentence is not very clear since, if I understood correctly, you are applying a fixed cut on the pair of charm taggers (CvsL > 0.59 && CvsB > 0.05), so all events that pass these cuts contain at least one c-tagged jet. I would rephrase as "Three different weights were used, depending on the true flavor (or flavor at generator level) of the jet passing the charm tagger working point (or the charm tagger selection)".
    • There is a separate pair of CvsL and CvsB values for each jet. We require at least one jet to pass the c tagging, then use the leading c-tagged jet.

  • Section 5.5: I guess that the per-jet scale factors are evaluated inclusively in pT and eta of the reconstructed jet, is that correct? If yes, please specify it; otherwise point out that the scale factors have been evaluated differentially in pT and eta of the reco jet (which is the case).
    • These scale factors were calculated as functions of pT; this was added to the AN.

  • Section 5.5: You describe the data-to-simulation event reweighting to account for the difference in the mistag rate between data and MC; however, you don't provide the corresponding formula for the charm efficiency. You also write how you have estimated the light and b-jet scale factors (inclusively, T&P, etc.), but not the charm ones. Moreover: are you applying the scale factors to all the MC samples, signal and backgrounds? Please specify whether that is the case.
    • There is no dependence on the data samples; one formula is applied for the c-tag case. These SFs are applied to all MC events.

  • Section 6, Control Plots: What is the "take-home" message from these control plots? What have you learned/noticed by plotting these distributions? I think more detailed comments on the control plots would be useful and are expected here.
    • The control plots may not contain information important for the analysis; however, in my experience it is always useful to keep control plots at each intermediate stage, so that anyone could reproduce them if needed.

  • Section 7, Monte Carlo k-factors: If I understood correctly, the k-factors are correction factors that you apply to the MC to restore the agreement with data: a) Are the plots in figures 27 and 28, showing the "disagreement" between data and MC as a function of the Z pT and c-jet pT, obtained after applying the c-tagger efficiency/mistag rate scale factors? (I guess so.) If that is the case, please make it explicit in the text and in the captions of Fig. 27/28. b) You show a residual data/MC disagreement vs Z pT and c-jet pT, but the k-factors are estimated through a fit in RooFit to the secondary vertex distributions of Drell-Yan events, leaving the c and b components free to float and fixing the light component at 1. Could you provide some plots in the AN showing the secondary vertex distributions before/after the fit, highlighting the three components, light, b and c? c) Has the fit to the secondary vertex distribution been carried out in bins of Z pT or c-jet pT? d) Do you know why, even after the application of the c-tagger scale factors, you observe a residual data/MC disagreement?

  • Section 8.1: I see that in lines 238-241 you provide a short description of the method. This method represents the other key feature of your analysis, so you may want to spend a few more words on the method itself. For example, you could add a description of what exactly a "response matrix" is and how you obtain it.

  • Section 9: line 264-267: I don't understand this sentence: it seems that you are applying systematic uncertainties only to the MC DY samples, is that correct? What is the MC composition of the remaining 10% of the events? I think it would be useful to assess at least the main systematics for the remaining 10% of events as well.

  • Section 9: Fig. 41 to 49: Would it be possible to reproduce the plots coloring/filling the inner part of the histograms with dashed lines/light colors? It would help the readability, especially of those plots that show very small up/down variations.

  • Section 9.3: Make a proper section for Final results, not just a subsection. This is an important section.

  • Section 9.3: You state that the systematic uncertainties from the different sources have been added in quadrature in each bin. Have you checked that these uncertainties are uncorrelated or only weakly correlated? If there is a substantial correlation among some of the uncertainties, then the sum in quadrature could lead to an overestimate of the total uncertainty. Also, when you say that the sum in quadrature is done separately for deviations up and down from the central value, what exactly do you mean? Does up/down refer to the variation of the systematic source itself, or to the up/down variation of the event yield in the corresponding bin? (In other words, how do you treat uncertainties, if any, for which an up variation of the source leads to a decrease of the event yield?)
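One common convention for the question above sorts each source's deviations by their sign on the yield rather than by the up/down label of the source; a sketch with hypothetical numbers, not necessarily what the analysis does:

```python
# Hypothetical signed deviations (variation - nominal) of the event yield
# in one bin, per systematic source, as (up-variation, down-variation).
# Note the "up" variation of a source can lower the yield (ctag here).
deviations = {
    "jes": (+2.0, -1.5),
    "ctag": (-0.8, +1.2),
    "lumi": (+2.5, -2.5),
}

# Sort by the sign of the yield shift, not by the up/down label of the
# source, then add in quadrature separately for each direction.
up_sq, down_sq = 0.0, 0.0
for d_up, d_down in deviations.values():
    up_sq += max(d_up, d_down, 0.0) ** 2
    down_sq += max(-d_up, -d_down, 0.0) ** 2

sigma_up, sigma_down = up_sq ** 0.5, down_sq ** 0.5
```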

  • Section 9.4: Please be more specific and add details on how the combination is carried out, at least pointing out the main features of the chosen approach (and then it is OK to refer to Convino).

  • Section 10: Please provide a summary that will conclude the analysis note highlighting your very nice results and pointing out what could be done in the future to even improve these results!

  • References: Clearly need be added. Please remember to put the doi and arXiv or url as well.

  • Appendix: for my understanding: what are the upsilon variables defined on page 58? Are
    • For the next analysis we plan to also split events into the different Yb and Ystar variables, which are sensitive to the c-quark PDF. At some stage we understood that there are not enough statistics for this partition, so it just shows how the results can be improved in a future analysis.

  • Check that all the acronym are defined in their first appearance through the analysis note;

  • Please make sure that you are using a consistent notation throughout the whole analysis note, e.g. line 60 pT>30: it should be \text{p_{T}}, or you could start using the CMS variable definitions (you would have to do so for the paper anyway) and just use \pt. Another example is in line 64: you specify that eta is for the electron or muon, but you don't when you write pt-leading > 26 ... so keep the same convention (also, "pt" here is different from line 60). Finally, it would be good to have the labels of all the plots consistent with the notation convention you choose.

  • line 40: run 2 --> Run-
  • line 46: I would use pp instead of p-p. However, define what p-p is in "p-p collision", maybe in line 40 at the beginning of the sentence: "During proton-proton (pp) collisions"

  • line 95: I would just remove "off-line" in front of "analysis". There is no offline and online analysis; there is just ONE analysis, which relies on data collected through a hardware (online) trigger and further analyzed with software algorithms.

  • line 167: Title: "Muon identification and isolation" --> "Muon identification and isolation reweighting"

  • line 254: remove comma before "passing both"

  • line 255: remove comma after "and events"

  • line 288: "modeling of of Z+c process" --> "modeling of the Z+c process"

  • line 292: Add space before "Figure 42". Also change "shows dependance" into "shows the dependence"

  • line 337: Correct "Normalizing of Monte Carlo events..." into "Normalization of Monte Carlo events..." also in line 339, use "normalization"


  • Why don't you use double-muon and double-electron triggers to increase the statistics?
    • Lowering the threshold for the leading lepton won't significantly increase the statistics: in the control plots the maximum of the leading-lepton pT distribution is much higher than the threshold. For this threshold, the efficiency of the single-lepton trigger is higher.

  • Why do you select electrons with abs(eta) < 2.4? The threshold is usually 2.5 for reconstruction, and 2.1 for some triggers.
    • In other analyses the eta threshold for electrons was set to 2.4; we will check for the next iteration whether there are any updates to the recommendations.

  • l148-151: I cannot really understand how you extract the scale factors SFc and SFb. Are they extracted using exactly the same data in which you do the subtraction to get the Z+c component? Or do they correspond to an orthogonal dataset? If the dataset is not orthogonal, how do you treat the fact that you use the same data twice (to determine the SFs and to do the subtraction)? If the dataset is orthogonal, can you please clarify the selection?
    • There is no orthogonal dataset for Z+c jet, so we use the same one for unfolding and for calculating the SFs. The SFs are an intermediate step, so we don't propagate the errors obtained at this step to the total uncertainties.

  • Why don't you extract an SF for Z+light?
    • There are two reasons for that: for Z+jets without c tagging there is good agreement between data and MC, and since most of the events in that case are Z+light, the SF for this component is close to 1. If one tries to estimate this SF from the fit after adding c tagging, it will have big errors because of the low statistics of the Z+light component. Most of the events after c tagging are Z+charm and Z+bottom.

  • Tables 1 and 2: there are differences between ee and mumu larger than the uncertainties. Do you understand why some of the SFs depend on the Z boson decay?
    • The k-factors don't depend on the Z boson decay. However, they are calculated as functions of the reco-level object pT, so they depend on the reconstruction. The shapes of the Z and c-jet pT spectra are different for muons and electrons (see attachments Ratio_j.pdf and Ratio_z.pdf), which leads to different fit results.

  • l207: I believe the uncertainty in the ttbar cross section is lower than that
    • I saw different values in different analyses; the most conservative, 10%, was used e.g. in SMP-16-018.

  • l208: I do not understand how the luminosity is an uncertainty if you fit the normalisation of all background components to data instead of estimating them based on the cross section. Is it related to Eq. 1, which has not been introduced yet?
    • Yes, the luminosity uncertainty was taken into account through Eq. 1, by varying the luminosity used for normalizing each bin.

  • Figure 3: can you add uncertainty bands for the predictions?
    • Added for DY NLO

  • Figure 2: can you show the predictions if you use SFb and SFc derived from data?
    • They are shown separately for each pT bin in the appendix, in the section "Post-fit secondary vertex mass distributions". They will be combined at the next iteration.


Section 1

  • L7-12: remove the paragraph "For example, … + LSP.": the measurement of Z+c is interesting per se; don't put too much emphasis on one particular search it is a background for

Section 3

  • An important point in the associated production of vector bosons and heavy quarks is the number of flavours included in the PDFs. I think they are all 5 flavours (i.e. you can have b's in the initial state), but this must be stated. In particular, check what is done with SHERPA, because I believe there are more complicated options than in MadGraph for dealing with heavy flavours in the PDF.

  • Try to state the generator version (including PYTHIA's, if used for the PS) for each process!

  • L53: The rescaling to the NNLO cross section value applies only to LO generators, right? If not, I think it should, otherwise we lose the correct order on jet observables and the proper scale uncertainties. Then I suggest writing once and for all, after the SHERPA description:
    • "All LO event generator samples are scaled to the cross section calculated at next-to-next-to-leading order with FEWZ [10]." If I understand correctly, it is common practice to normalize both LO and NLO MadGraph to NNLO. This is done in other analyses.

  • L53-54 move here the sentence at lines 63-65, describing MG5_aMC ME-PS matching details

  • L55: which version of SHERPA is used? The cited paper is for version 1.1, but this is pretty old. You might want to quote instead. Also, is SHERPA LO for all jet multiplicities, or NLO up to 2 jets and then LO? These are the most usual configurations. It could be something different, but in any case it must be clearly stated.

  • L57-59: Start the paragraph with "The POWHEG [16-18] event generator is used to simulate backgrounds from top quark pairs …". However, the ttbar sample you have in Table 2 of the AN has been produced with MG5_aMC (with tune CUETP8M2T4, a specific tune for top) and not POWHEG. Please clarify what you have used and modify the text accordingly. In any case, I don't see why we should quote references 12, 13 and 14. For single top, in addition to reference 15, I think we should quote "Single-top production associated with a W boson, E. Re, Eur. Phys. J. C71 (2011) 1547, arXiv:1009.2450" (for more information please check ...). Add that the parton shower used with POWHEG is PYTHIA 8, version xx.

  • L59-60: "The background from vector boson pair production is simulated with PYTHIA 8.xxx"

  • L61: "The CUETP8M1 [20] tune is used for all samples with PYTHIA 8 as the parton shower MC, with the NNPDF 2.3 [21] LO PDF and … = 0.119." Is this true also for the DY samples? Why do you also mention NNPDF 3.1 in the last sentence?

  • L66: give details on the PDFs for each generator, remove "Samples are generated …, and", and start the sentence with "GEANT 4 …"

  • L78: for which fiducial region is the efficiency of muon reconstruction 96%?

  • L80-81 is the resolution for 1 TeV pt muons relevant for this analysis? If not, remove this sentence (but keep the reference to the muon performance paper)
    • It is not relevant; the sentence will be removed in the next version.

  • L86-94 why isn't isolation required for electrons? In figures 17 and 18 (before requiring the charm tag) of the AN there is a clear excess of electrons in data at low rapidity values. Is this effect understood?
    • The electron ID definition includes cuts on isolation, so ID and isolation are not separate, as they are for muons.

  • L111-121 Why are dilepton triggers not used? What is the efficiency of the trigger for the fiducial region defined by the offline selection? If it is relevant, you should mention that it is measured using tag-and-probe and add a few words about systematics later.

  • L130 Just to be sure, since this question came up during the pre-approval: the c-jet does not have to be the leading jet in the event, right? If so, the text is clear.
    • Yes, in the latest version first the c-tagged jets are selected, then the leading one among them is used in the analysis. So the selected c-tagged jet may not be the leading jet in the event.

  • L135 what is the rapidity cut applied to jets when classifying the event as b-, c- or light-flavour?
    • No rapidity cut is applied to these generator jets; it is applied at the signal definition stage, which is separate from the flavour definition.

  • L144 is the "jet secondary vertex mass" always available for a c-tagged jet? Do the events in figure 2 correspond to the whole sample of events selected as signal or to a subsample of it?

  • Section 5

  • This is in my opinion the most important part of the analysis, but there is no detailed explanation of what you have done in the paper, and there is even less in the AN.

  • If I understand correctly, you are doing a template fit to the secondary vertex mass in bins of pTZ or c-jet PT. Some general questions/comments:

  • except for Z+c and Z+b, are all the other components kept fixed in the fit? Or are they included as nuisances with some constraint? The fit is performed on DY and (data - top/dibosons); the light component of DY is fixed.
    • This is a usual approach, used also in other analyses, for several reasons: there is good agreement between data and MC for Z+jet events without requiring c-tagging. In that case most of the events are Z+light events, so their normalization is very close to 1. When c-tagging is applied, however, only a small fraction of Z+light jets is left, so the fit has large errors.
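The per-bin template fit discussed here (Z+c and Z+b floating, light fixed) can be illustrated with a minimal chi-square sketch. This is a toy stand-in for the actual RooFit/Combine setup; all template and data numbers are invented:

```python
import numpy as np
from scipy.optimize import minimize

# Toy M_SV templates (invented numbers) for Z+c, Z+b and Z+light
# after c-tagging, in a single pT bin. The light normalization is fixed at 1.
tpl_c     = np.array([40.0, 60.0, 30.0, 10.0])
tpl_b     = np.array([10.0, 30.0, 50.0, 40.0])
tpl_light = np.array([20.0,  5.0,  2.0,  1.0])

# Pseudo-data built with known scale factors SF_c = 1.2, SF_b = 0.9
data = 1.2 * tpl_c + 0.9 * tpl_b + tpl_light

def chi2(sf):
    sf_c, sf_b = sf
    model = sf_c * tpl_c + sf_b * tpl_b + tpl_light  # light component fixed
    return np.sum((data - model) ** 2 / np.maximum(model, 1.0))

res = minimize(chi2, x0=[1.0, 1.0])
sf_c_fit, sf_b_fit = res.x
```

With Z+c and Z+b floating independently per bin, the statistical errors between bins are uncorrelated, as stated in the answer below.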

  • is each bin fitted independently of the others? In other words, are the statistical errors in Tables 1-4 of the paper uncorrelated between different bins and different channels?
  • Yes, the fit was done independently in each pt bin.
  • if so, the overall Z+c and Z+b yields are not forced to be the same, so you can get a different inclusive cross section (in the same fiducial region) from the fit to pTZ and the fit to c-jet pT. Have you checked this result and compared the two?
    • Yes, the integral difference between the cross sections as functions of the pt of the Z boson and of the c-jet is <2% for muons and ~4% for electrons.

  • systematics among the bins are correlated. How is this taken into account?
  • the main problem I see, however, is that, if I understand correctly, you use these values as a per-event weighting factor, and this introduces a statistical correlation among the uncertainties of nearby bins in figure 3. How do you propagate the uncertainty on the SF? When unfolding, later on, is the statistical error on each bin also treated as if it were a counting error? It seems to me that this way you would count the same statistical error twice.
  • why didn't you unfold the results in Tables 1-4 instead? The parameter you measure in the fit would then simply be the strength of the signal, i.e. the ratio of the cross section in the data to the cross section in the MC. Doing so you cannot unfold to a larger number of bins than the ones you measured, but I am not sure that the treatment of the statistical uncertainty is correct when applying the result of the fit as a scaling factor. We need some feedback from the statistics committee on this point. And if you don't count the statistical error twice, since you apply the same scale factor to events in several bins, it looks to me as if you are doing some kind of regularisation (which might explain why you don't need regularisation in the unfolding).
Additional comments:

  • Figures 27 and 28 of the AN: they are supposed to be for muons and electrons, respectively, but they seem to be exactly the same.
    • Fixed. These are also outdated, obtained before changing leading c-jet requirement. New plots were added.

  • Table 2: the first line is repeated twice. Fixed. Tables 1-2: there are large differences between the values measured with muons and electrons, e.g. for SFc in pt bins 30-35 and 110-200 and for SFb in pt bins 30-35, 50-110, and 110-220.

  • The statistical error cannot explain the difference at all. Which are the most relevant systematics and how much are they correlated between the measurements in the two channels?

  • L161-167: this paragraph is a bit confusing, but I finally understood that you are dealing here with out-of-fiducial Z+c events that are selected as signal because of detector resolution. I am not sure how the text can be improved. I suggest at least to add "This fraction is estimated [on the simulated Z+c sample] from the number of events…".

  • L168: which "simulated DY sample" has been used to calculate the response matrix?
    • The main event generator is MadGraph aMC@NLO, which was used to calculate the response matrix, acceptance and background. The presented final plots are obtained using this generator. However, there are cross-checks using MadGraph MLM.

  • AN L256-7: by the way, the AN does not help much on the same point. The sentence "In order to take into account the pt migration effect, the data distribution of the variable which is to be unfolded is bin-by-bin multiplied by the (1 - background) distribution." is confusing, because here by background you mean out-of-fiducial Z+c events, while usually you referred to background as events coming from other processes.

  • Figures 4 and 5: do we need these plots in the final paper? They are done on simulation and do not convey much information: the content can be described with additional words in the text (especially for the acceptance, which is quite flat).

Section 6

  • L183-184: are all variations considered? Usually those where muR and muF change in opposite directions are excluded.
    • No, combinations like (0.5μ, 2μ) and (2μ, 0.5μ) were not taken into account.
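The scale-variation envelope implied by this answer (dropping the anti-correlated (0.5, 2) and (2, 0.5) combinations) can be sketched as follows; all cross-section values are invented, purely for illustration:

```python
# Toy 6-point QCD scale variation: (muR, muF) factors in {0.5, 1, 2},
# skipping the anti-correlated pairs (0.5, 2) and (2, 0.5).
sigma_nominal = 100.0
sigma_varied = {                         # invented toy cross sections
    (0.5, 0.5): 92.0, (0.5, 1.0): 95.0, (0.5, 2.0): 111.0,
    (1.0, 0.5): 96.0, (1.0, 2.0): 104.0,
    (2.0, 0.5): 88.0, (2.0, 1.0): 106.0, (2.0, 2.0): 109.0,
}
kept = {k: v for k, v in sigma_varied.items() if set(k) != {0.5, 2.0}}
shifts = [v - sigma_nominal for v in kept.values()]
unc_up = max(max(shifts), 0.0)    # upward envelope
unc_down = min(min(shifts), 0.0)  # downward envelope
```

The envelope of the six kept variations then gives the quoted Up/Down scale uncertainty.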

  • L186: why is the prescription taken from CT14, which is not used? NNPDF usually provides both a Hessian matrix and the replica method. I guess you are using the Hessian matrix, but I would say it explicitly and quote NNPDF.

  • L188-192 I did not check the exact details of the c-tag/mistag weights described in section 5.5 of the analysis note. Has this been signed off by the BTAG POG contact?
    • There wasn't an official request for sign-off, but there were consultations with Caroline Collard and Kirill Skovpen, as well as Duong Nguyen, who also used b-tagging/mistagging weights.

  • Table 5: Are the values shown the average over all the bins or the maximum? Do you have an explanation of why the PDF error for c-jet pT is larger for muons than for electrons? Also the JER (up variation) for electrons looks a bit strange. I would use the same number of digits after the decimal point for all results, e.g. 4.0 instead of 4 and 0.6 instead of 0.58

Section 7

  • Figure 6: will the uncertainty band for the NLO MC (scale and PDF variations) be added? What is the status?
    • Yes, these are going to be added; this requires downloading new ntuples.

  • I am wondering if we shouldn't also quote an inclusive number for Z+c in a given fiducial region. That could perhaps be compared with full theoretical calculations (MCFM?) and not just generator predictions.


  • As it is now, it is mostly a recollection of what has been done. The conclusion that the results are in better agreement with MG5_aMC MLM is a bit weak. There is no comparison with NLO predictions that takes into account the scale and PDF uncertainties on the predictions. We need to work on it, and I also think we should seriously consider adding the inclusive result.

Style/other comments


  • "consistent with" -> "identified as"
  • add the text in square brackets: "The measured [differential] cross sections [with respect to the transverse momentum of the Z boson and the tagged c jet] are compared"

  • Figure 1: I cannot see it on my Mac, either using Preview or Acrobat. Can everyone else see it? This figure is visible on Mac/Acrobat, but when the pdf is attached to CADI it turns into something like a barcode. It is not clear why this happens.

  • L19: I suggest to change it to "In order to compare the data with different theory predictions, we unfold the measurement to the level of observables defined on stable particles ({\it generator level} in the following)"

  • L23 remove "estimates of"

  • L45 "Z+jets signal and background processes" (signal first!)

  • L71 remove â?oin an eventâ?

  • L106 I suggest "The jet energy resolution (JER) in simulation is degraded to match the resolution in the data: about 15% at 10 GeV …"

Answers (CMS Statistics Questionnaire)

  • you write that for the PDF systematics you use some RMS of variations; is this really correct, or are the variations being added in quadrature?
    • The PDF uncertainty was calculated as suggested. For each bin of the histogram there are 100 entries for the different PDF options plus one central value. All 100 are then divided into those above the central value and those below it. Then in each set the RMS (sqrt(sum of squares / n)) was calculated and used as the Up/Down PDF uncertainty. These calculated PDF uncertainties were then summed in quadrature with the other sources of uncertainty.
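The Up/Down RMS prescription described in this answer can be sketched as follows (toy replica values, not the analysis histograms):

```python
import numpy as np

rng = np.random.default_rng(0)
central = 100.0
# 100 replica predictions for one histogram bin (toy numbers)
replicas = central + rng.normal(0.0, 3.0, size=100)

# Split the deviations into those above and below the central value
above = replicas[replicas > central] - central
below = replicas[replicas < central] - central

# RMS of the deviations in each set: sqrt(sum of squares / n)
unc_up   = np.sqrt(np.sum(above ** 2) / len(above))
unc_down = np.sqrt(np.sum(below ** 2) / len(below))

# Combine with another (toy) uncertainty source in quadrature
total_up = np.hypot(unc_up, 2.0)
```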

  • Chapter 5: The treatment of the light jets background in the fit to the SV mass shown in Figure 2 is obscure. Is this component (fraction) also fitted, or somehow just subtracted using the MC prediction? If it is subtracted, then what is the uncertainty on this component? If it is fitted, it might be hard to separate it in the fit from the Z+c component, since the Msv spectra look not so different in Figure 2. Why is the light background not a systematic uncertainty source for the measurement?
    • The light component was subtracted from the data distribution, with its normalization kept equal to 1 (k-factor for the light component = 1). It wasn't obtained from the fit due to the low statistics of the light component. Normalization of the light component will be added at the next iteration.

  • Table 1/2: what is the level of anti-correlation of the fitted SFc and SFb? Why don't we measure Z+c and Z+b production simultaneously?
    • After applying the c-tagger, the number of Z+b events is ~35% less than Z+c, while after applying b-taggers most of the events are Z+b events; Z+b can be studied with better precision with other taggers.

  • Table 1/2: Can we really unfold the 30-35 GeV and 35-40 GeV bins well? Perhaps it is better to merge these bins in order to have a bin width that is clearly larger than the jet pt resolution.
    • Will be merged at the next iteration.

  • Table 1/2: It would be good to see for each differential region the fits to Msv like in Figure 2, to check that the fits work fine in each of the different kinematic regions.
    • Post-fit Msv distributions were added to the new AN version.

  • Fig. 3: is there a problem with the fit in the upper left panel near zero?
    • Figure 3 shows the Z and c-tagged jet pt distributions; these variables were not fitted for the different flavour components. The fits were done for the secondary vertex mass distributions. The disagreement between data and MC at small Z pt values is seen for inclusive Z and Z+jet without HF tagging, so it is perhaps not related to the HF normalization.

  • l.166 TUnfold is usually used with more bins at detector level than at unfolded level and also with Tikhonov regularisation applied. Don't you use it this way?
    • There are twice as many bins at detector level as at gen level. The decision on whether to apply regularisation was based on the condition number, the ratio of the largest and smallest eigenvalues of the response matrix. This number was ~200-300 for the different pt variables and channels. According to the recommendations on the twiki, this number is closer to ~10 (no regularisation needed) than to ~10^5 (regularisation required), so no regularisation was applied.
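The condition-number check described in this answer can be sketched like this; the response matrix below is a toy, and the ~10 vs ~10^5 thresholds are just the rule of thumb quoted above:

```python
import numpy as np

# Toy response matrix: mostly diagonal (good resolution), small migrations.
R = np.array([[0.90, 0.08, 0.00],
              [0.10, 0.84, 0.07],
              [0.00, 0.08, 0.93]])

# Condition number = ratio of largest to smallest singular value
s = np.linalg.svd(R, compute_uv=False)
cond = s.max() / s.min()

# Rule of thumb from the answer above: cond ~ O(10) -> no regularisation
# needed, cond ~ O(1e5) -> regularisation required.
needs_regularisation = cond > 1e4
```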

Answers (pre-approval)

fill the stat questionnaire Done (currently being processed by the stat committee)

define a Journal Target JHEP

notify pub comm (Boaz) that Joel is serving as CCLE done

Update & upload new AN and paper drafts documentation with most recent status and fixes

fill the data tier survey in done

get in touch with Pietro Vischia to obtain GL for the k-factors fit with combine as detailed here we ask you to upload the fit setup to gitlab, while Pietro's review and GL can go in parallel with the ARC review. We don't use combine for the k-factors fit, since this tool doesn't work for our case. K-factors are calculated for different pt bins/MC weights/uncertainties, so there are hundreds of k-factor pairs. The combine tool is too slow to calculate that number of k-factors, so we use RooFit. It was shown here that the results from combine and RooFit are the same.

What if the c-tagged jet is not the leading jet? It turned out that there was a fraction of events (~20%) which contained a c-tagged jet that was not the leading jet. The selection was changed so that only c-tagged jets are considered, and among them the leading-pt jet is chosen. The result didn't change much, but the stat errors became smaller.


Do you exclude jets that overlap with an isolated lepton (deltaR cut of 0.4)? Yes

Unfolding: why the delta_R(cjet-reco, cjet-gen) cut of slide 14? To make sure we unfold the same object.

Why is there a trend in the slide 15 bottom plot? The last slide, 27, helps to understand: the charm tagging efficiency has a flat ratio vs jet pt.

Do the acceptance numbers of slide 17 make sense? It is mostly c-tagging that drives the numbers.

In slide 18, the closure test uses the same events that produce the unfolding matrix [clarified]

Answers (paper draft v1) Juan Pablo

have you applied the c-tagging SFs you mention on L123 to figure 2? I understand that this SF is applied to the MC before the unfolding procedure. Yes, c-tagging/mistagging SFs are calculated separately for each event and the whole event weight is multiplied by them before filling any histogram.

L10: decays into neutrinos -> decays invisibly into neutrinos Fixed

L21: There is no need to say that measuring the differential cross section is the main goal (the abstract is already there to mention it). So I would change "The goal of this analysis is the measurement of the differential cross section of Z+c jet production as a function of pT of the Z boson and c jet. This is done in several steps by" to "The measurement of the differential cross section of Z+c jet production as a function of pT of the Z boson and c jet is done in several steps." Fixed

L64 : ppinteractions -> pp interactions

L121: I am fine with the efficiency you quote. In I see eff-c 19.3%, b-mis-id rate 21.7%, light-mis-id rate 0.5%, but when I compute the efficiency myself (AN-18-324, table 4, 3rd row from the bottom) I get 30% more than the 19.3%. Again, I am fine with your numbers.

L127: are neutrinos excluded in the gen-level jets? If that is the case, may I suggest mentioning it in the paper draft? Here is an example of the way I would mention it: "Generator level jets are built from all showered particles after fragmentation and hadronization (all stable particles except neutrinos) and clustered with the same algorithm that is used to reconstruct jets in data". I have to check this. In the analysis I don't check the overlap of gen jets with generator neutrinos, so if they are excluded, this is done by the clustering algorithm, not manually in the analysis. This should be checked before mentioning it in the text.

L 137: which corrects normalization of bottom -> which corrects for the normalization of the bottom Fixed

L 140 -> along with normalization of charm -> along with the normalization of the charm Fixed

L156 : last sentence is the same is in L133-134. I suggest not to repeat it. Fixed

L169: Efficiency of selections is taken into account by acceptance -> The efficiency of the selection is taken into account by the acceptance Fixed

L170: Nominator distributions is -> The nominator distributions corresponds to the Fixed: -> The numerator stands for the ...

L171: For denominator stands generator... -> The denominator corresponds to the generator... Fixed

L172: Fig 5 shows acceptance ... as a function of ...-> Fig 5 shows the acceptance ... as a function of the ... Fixed

L173 : Efficiency of c-tagging -> The efficiency of c-tagging Fixed

L177 : and then repeating the unfolding procedure, acceptance... -> and then repeating the unfolding procedure. The acceptance ... Fixed

L191-196 sound quite general to me and I am not sure this is what we are looking for in this particular paragraph, but I understand this goes along with Joel's comments/suggestions (Joel's comment: "Described as it is done on the corresponding b-tagging twiki page: methods used for measuring SFs for different types of tag/mistag"). Maybe this part, which describes the methods for measuring the SFs, should be moved to the object reconstruction and event selection section? There is a paragraph which describes the DeepCSV algorithm; there we can mention that there are scale factors which take into account efficiency etc. Then in the uncertainties section just mention that the efficiency scale factors are varied within their systematic uncertainties.

L191: Depending of the type of jet ... -> [again I am not sure this is appropriate for the paper draft] Different measurements (each of them enriched in the particular flavour of interest) were performed to estimate the data/MC efficiency difference for each flavour of jet passing the c-tagging requirement: for b quarks a tag-and-probe technique was used on ttbar events, a W+jets sample was used for c quarks, and an inclusive jet measurement for light jets. Depending on the jet flavour, the corresponding tag/mistag scale factor was varied with respect to the nominal value within the recommended range given by each performance measurement.

L 212 missing table number Fixed

L 236 Obtained results -> The obtained results Fixed


The abstract should be written in proper LaTeX (fix fb-1, mll). In the first line, would it be better to say "a Z boson and at least a jet..."? Fixed

swap references [3] and [4] in References (both in time and sqrt(s) it would fit better) Fixed

line 27: I would not mention here details on Convino, just replace the last line "and to compare with predictions from QCD". Fixed, added "...and to compare with predictions from different MC generators"

64: fix the space in pp interactions. Not sure what is wrong here; Joel added the special character \pp.

155: "overlapping": is there a DeltaR cut, or how is it done? Fixed: Jets overlapping with one of the two signal leptons from the Z boson within a cone $\Delta R < 0.4$ are not taken into account.

170: "The nominator distribution is the generator..." changed to Numerator stands for generator level Z-boson or c-jet $p_T$ distribution ...

Table 4: too many digits, do you need the 3rd digit after the comma? Fixed

191: I do not see an "uncertainty" here, only a description of the method.

193: fix ttbar Fixed

196: also here I do not see an uncertainty, but just a description of the correction

204: how large are these uncertainties? Added the uncertainty values: 5% for electrons, and 2% and 1% for muons. Maybe we should add another table with an uncertainties summary, which shows the maximum and minimum deviations up and down (previous version of the table)?

208: add a reference; why 10%? Added a reference, as in SMP-16-018.

212: fix Table number Fixed

219: add reference to Convino here Done

223-224: it still needs more physics and comparison. The PDF used in the MC should be quoted. Is there a problem to add MCFM with different PDFs, at least at parton level, like in SMP-19-004 (Duong's paper)? This can be done for LO MadGraph; the NLO sample contains only NNPDF 2.3, while the LO one has NNPDF 2.3, NNPDF 3.0, CT10nlo, different flavour schemes, etc. But it has a somewhat unusual reweighting (reweighting strategy 3). This will take more time to add accurately.

Somewhere in figure 6 there should be the kinematic cuts, but we can rediscuss this at the pre-approval, as everybody has different opinions on this. I still did not understand, from the paper and from your explanation in the twiki, to which gen jets you correct; for instance, do they have some kinematic cuts in eta or not? And the gen leptons, do they have eta cuts? Your acceptance around 20% makes me think that you have more kinematic cuts on both jets and leptons than what is written at lines 151-157. I think it should be clear, both in the pre-approval presentation and in the paper, what you correct to. Yes, the cut on eta for gen jets wasn't mentioned in the draft; now fixed. The kinematic cuts for leptons and jets are close to those used for the detector-level selection; the small acceptance is caused by the small fraction of c-jets passing the tight c-tagging.

Figure 6: the caption should be more extensive and explain better the lines, uncertainties, kin cuts, etc.

ref. [10]: the author names are still written in a different style. Fixed

fix Sj\"ostrand name in ref. [19] Fixed

Answers (paper draft v0) Elisabetta

Abstract: it has to be longer. I suggest that you start like the first paragraph that you have in the conclusion now, and you end saying that the resulting differential cross sections are compared to predictions from various Monte Carlo models. updated

page 1: something happened to Fig.1, last time I saw it it was ok. compiled without problems, could be some temporary bug

line 53: is alphas(mZ) really 0.130? Also, what do I learn from the two matching scales of 19 GeV and 30 GeV? According to NNPDF 2.3 [21] it seems that the central value is alphas(mZ) = 0.119. Fixed

Section 5: I would leave lines 132-138 as they are now, but change "6 Background subtraction" into "5.1 Background subtraction" and "7 Unfolding procedure" into "5.2 Unfolding procedure". These three sections (5, 6 and 7) were changed a bit: chapter 5 was small and was turned into part of the introduction; background subtraction and unfolding are chapters 5 and 6, respectively. In my opinion background subtraction looks like a separate step, independent of the unfolding procedure, and thus is in a separate chapter. What do you think?

178: I think that there is some confusion between acceptance and efficiency. Acceptance for me would mean correct to the overall kinematic region, while efficiency is the correction inside the kinematic region, but it is a question of taste. I think you mean the correction to your kinematic region at gen level. Anyway:

1) here it is signal at reco/signal at gen

2) In the note at lines 250, 251, it is: signal reco+gen/signal gen

3) and in Figure 34 of the note again another definition which I do not quite understand

So what was done exactly? Independently of how you call it, it should be correct. So the unfolding takes into account resolution effects from one bin to another one and so migrations. But still I would say that rec/gen is still the correct definition, or not?

There are 4 possible pt distributions: 1) signal gen pt, without any reco-level requirements; 2) signal gen pt for events matched with corresponding objects at reco level which pass our reco-level selection criteria; 3) reco-level pt of the Z or c-tagged jet passing the reco-level criteria, without any gen-level requirements; 4) reco-level pt of the Z or c-tagged jet for events that are not matched with signal events at gen level. The fraction 2)/1) is defined as the acceptance. The fraction 4)/3) is defined as the background. The reco-level selected events are multiplied by (1 - background), then transformed with the unfolding, using the response matrix, then divided by the acceptance.
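The correction chain described above (multiply by (1 - background), unfold with the response matrix, divide by the acceptance) can be sketched with toy numbers; plain matrix inversion stands in for TUnfold here, and all inputs are invented:

```python
import numpy as np

# Toy detector-level spectrum and corrections (invented numbers)
data_reco  = np.array([120.0, 80.0, 40.0])
background = np.array([0.10, 0.05, 0.05])  # out-of-fiducial fraction, 4)/3)
acceptance = np.array([0.20, 0.25, 0.30])  # fraction 2)/1)

# Toy response matrix mapping gen bins (columns) to reco bins (rows)
R = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])

signal_reco = data_reco * (1.0 - background)  # background subtraction
unfolded    = np.linalg.solve(R, signal_reco) # unfolding via inversion
result_gen  = unfolded / acceptance           # acceptance correction
```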

Figure 5: I think this is all simulation, so it should be marked as CMS simulation Fixed

184-185: if I understood from the note, you not only repeat the unfolding, but before that also the extraction of the scale factors for charm and beauty, and this should be written. Fixed

Table 5: I am also confused by table 5; I thought I understood it, but now I am not sure. Could it be that you vary something up or down and the numbers indicate the range of variations in the bins of that variable? Whatever it is, it should be clarified and maybe reformatted, for instance exchanging columns with rows and putting as rows e.g. QCD down variation, QCD up, etc. Changed to the integral difference from the central value.

216: here N_i is the number of corrected events, but where is the acceptance correction here? Why not define an A_i symbol and add it in the formula? The definition of N_i is fixed: now it is the number of events in bin i of the unfolded distribution, which already takes into account the acceptance.
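For illustration, the per-bin differential cross section implied here (unfolded, acceptance-corrected yield N_i divided by luminosity and bin width) can be sketched with invented numbers; the 2016-like luminosity value is only illustrative:

```python
# Differential cross section per bin: sigma_i = N_i / (L * width_i),
# with N_i the unfolded, acceptance-corrected yield (toy numbers).
lumi = 35900.0                 # pb^-1, illustrative 2016-like value
yields = [1000.0, 600.0]       # unfolded events per bin (invented)
widths = [10.0, 20.0]          # bin widths in GeV (invented)
dsigma = [n / (lumi * w) for n, w in zip(yields, widths)]
```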

219: "The results are extracted separately for the muon and electron channels..." maybe add that they are compatible? "...and combined by a fit using the Convino [33] tool..., taking into account the statistical and correlated/uncorrelated systematic uncertainties..."

Being not familiar with the Convino tool, it would be good to have some details in the note. For instance some uncertainties are correlated (e.g. the c-jet energy scale), some not (leptons, ...), and I guess that this has been taken into account.

225: Of course more discussion is needed to complete this part.

227: "production" mispelled fixed

237: It would be good to have a final sentence saying that these data will be useful to constrain the charm PDF or something like this. Added a general sentence that the existing constraints can be improved.

Acknowledgments missing

References: [9] some problems in the names; it is not clear why the first names appear. That's how the authors are listed on arXiv, but it can be changed if it is better for the paper style.


  • (l.119) I have guessed a bit here: still needs some work, e.g. do you actually check the quark originator or just go by hadron flavour? Yes, we use the hadron flavour definition; that's what was recommended at the SMP VJ meetings.
- (l.131) Could this go as a paragraph in the introduction? That's a good idea; this section is very small, so I put it at the end of the introduction section.

- (l.140) It is not clear what you do with $SF_c$. Is it just for display purposes, e.g. Fig 3? The charm SF is used to show the agreement between data and MC after applying it, along with SFb, to the corresponding flavour components. For the unfolding, only SFb is used, to normalize the bottom component.

(l.162) I think you probably need to mention the matching here. Fixed.

- (l.163) What happens if you have multiple c-tagged jets? We take into account only the leading central c-tagged jet at detector level, and only the leading central c-jet at generator level.

- (l.170) Is this (background) calculated from data or just from simulation? The background definition uses generator-level information, so it can be calculated only from simulation (MC).

(l.171) The next two paragraphs still need work.

(l.176) Is the efficiency incorporated into the response matrix? Not in our case; the response matrix shows how a spectrum is changed because of the detector resolution: it takes one distribution and changes it into another, keeping the integral unchanged. Efficiencies are taken into account in the acceptance.

- (l.177) Do we really need to define both acceptance and efficiency? The acceptance is the part which takes into account the different efficiencies (selection, c-tagging, etc.).

(l.189) Should mention how much the scales are varied. Fixed: mu_r and mu_f are varied within 0.5 - 2.

(l.190) Please complete - the important thing is what prescription is used, not the technical detail that this is done via weights. Described the way it is done in other papers: The PDFs are determined using data from multiple experiments. The PDFs therefore have uncertainties from the experimental measurements, modeling, and parameterization assumptions. The resulting uncertainty is calculated according to the prescription of CT14 at the 90% confidence level and then scaled to the 68.3% confidence level.
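The 90% -> 68.3% CL scaling mentioned in this prescription amounts to dividing by 1.645. A toy Hessian-style sketch, with invented eigenvector shifts (not the actual CT14 sets):

```python
import math

# Invented per-eigenvector deviations of the observable, quoted at 90% CL
shifts_90cl = [1.2, -0.8, 0.5, -1.5, 0.9]

# Add the eigenvector shifts in quadrature, then scale 90% CL -> 68.3% CL
unc_90 = math.sqrt(sum(s * s for s in shifts_90cl))
unc_68 = unc_90 / 1.645
```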

(l.193) Again, this needs a description of how the values were estimated, not the technical detail of the weighting. Described as it is done on the corresponding b-tagging twiki page: the methods used for measuring the SFs for the different types of tag/mistag.

(l.212) I have to admit I can't work out what is going on with the table - perhaps a different format is needed? A new table was added, as suggested by Elisabetta and Juan Pablo; it shows the integral deviation from the central value in %.

- (l.225) Needs a discussion of the results/comparisons.

We're still waiting for the Sherpa sample to be added; maybe we should add this discussion after all 3 signal models are there.

- (l.232) Isn't there a different cut on the lower lepton pt? Yes, that's a mistake: the subleading lepton pt > 10 GeV.

Answers (paper draft) Elisabetta

Title and abstract are missing

Introduction: in my opinion it should be structured in 3 paragraphs: - why Z+c is interesting (you have it already) - previous measurements - this measurement, what is new. Also I am not sure that you need all the details of all kinematic cuts at this point. Fixed

Fig. 1, there is a strange gray background. I like this diagram when it is drawn more "rectangular" . Fixed

Detector: did you use the standard description, as in the guidelines? I took the detector description from Duong's paper, and it seems that it contains the standard sentences from

Lines 108-109. when you talk about c- b-tagging, this part should be expanded and the "tight" point should be defined, usually this is given in terms of the fake rate. Fixed

113-121: I would move this part on the generator level to later, when you talk about unfolding. Please also specify that the leptons are dressed, and whether the jets are parton- or particle-level. Fixed. It was specified that the generator lepton pt is corrected to take into account radiated photons in a cone of radius dR = ...

I would make 5.1 its own section and describe in more detail how you extract the c component, i.e. from a fit to the M_SV distribution, which btw also has to be defined precisely. The name k-factor at line 134 is confusing and you probably do not need it; you can avoid it or call it by another name. Fixed. K-factors were replaced by SF_c and SF_b.

Fig. 2: you show the pre-fit distribution, I guess. Why not the one after the fit? Also: use fewer bins; use the CMS style for figures (see guidelines); all labels must be bigger; CMS is missing on the plot, same for the lumi and sqrt(s), i.e. follow the guidelines. Fixed

Then after explaining the fit to extract the c-jet contribution, you can go back to the beginning of Section 5 and explain the cross section that you want to extract (lines 123-126) and that you do everything in bins of ptZ and pt-cjet. This is now explained twice: there is a short chapter which gives an overview of the analysis strategy, then the following chapters describe the process of subtracting backgrounds, unfolding, and measurement of the cross section using the unfolded distribution.

Lines 126-129 I would move them to a new section and there also explain the gen cuts you have now at lines 113-120. Fixed

In summary: - section: first explain the fit - section: explain which cross section you measure in bins, eventually other backgrounds like top etc. - section: then section on unfolding to gen level and explain what gen level is Fixed

Your captions are also all not CMS style, there should be only 1 caption explaining all (a) (b) (c)... Fixed

Figure 3: do you need it? Can these numbers and uncertainties be in a table? Plot removed; k-factors are presented in a table.

Figure 4: too many bins, please reduce; use CMS style etc. Fixed

Figure 5: do you need it? Why not put the numbers in a table? It is also clear that the background shoots up at low ptZ; this needs some explanation in the text. Figure 6: do you need it, or can the numbers be in a table, i.e. a combined table with k-factors, background and acceptance? The shape of the acceptance needs an explanation in the text. There are too many bins for this plot, so showing it on a plot may be more compact than a table in this case.

Section 5.3: make it its own section. Do not make a subsubsection for each systematic, just paragraphs. For the c-tagging efficiency, scale factors are mentioned, but they were not mentioned before; they must first be mentioned in the selection. The same applies to the lepton and b-jet scale factors, and it must also be written how they are determined (you can find it in many other papers). The ttbar background is mentioned here for the first time; it should also appear earlier. Fixed

Result: should be its own section. Formula (1) should have N(p_tbin) and not dN/dpt in the numerator, and the whole formula could be written better. What about distributions in eta as well - no intention to produce them? Fixed

Physics is missing! A comparison to MC with a couple of PDFs is needed, with details on them, especially on the HF scheme. PDF uncertainties will be added to the two madgraph models. The Sherpa event generator is to be added. Figure 7: again not in CMS style, it has to be redone. In addition, I find it confusing that in the ratio the dots indicate MC. Fixed

Juan Pablo

Fig. 6: I think you will be asked to add some uncertainties to the predictions (typical ones are statistical, PDF and scale variations [but Z+c at 8 TeV has no scale in the LO calculations]), so start working on it (whenever you have spare time... not a priority for now... the priority is just adding/modifying the text). In progress

Fig. 6 again: do you guys have an idea why the LO gives a better normalization (maybe not in shape) while we see that NLO always performs better in our Z+jets? (You do not have to know the answer of course, just an open question.) Is this something we do not understand at GEN level, maybe gluon-splitting related? In progress

L16: Jets with charm quark content are identified using (standard?) charm tagging methods developed in CMS [reference], where the presence of c quarks is inferred from the characteristics of jets (denoted as c jets) that originate from their hadronization products and subsequent decays. Fixed

L 50. This generator calculates LO matrix elements for five processes: pp -> Z + Njets with N = 0...4. Fixed

Section 3. Forgot to mention that the predictions use PYTHIA for the hadronization. Fixed

L113: maybe add a reference (see reference 37 in Dan's paper above): CMS Collaboration, "Measurement of the Inclusive W and Z Production Cross Sections in pp Collisions at $\sqrt{s}$ = 7 TeV", JHEP 10 (2011) 132, doi:10.1007/JHEP10(2011)132, arXiv:1107.4789. Fixed

L116: "algorithm [25], using tight working point, which ... passing this criteria". "Working point" is jargon. Remove it and instead put: "algorithm [25]. The threshold applied to discriminate c jets from b jets and light jets gives a c-tagging efficiency of about 30% and a misidentification probability of 1.2% for light jets and 20% for b jets." Fixed

L128: please mention that the generator-level leptons are dressed. Fixed

L218: "feducial" should be "fiducial". Fixed

L219: comment a bit on the agreement/disagreement seen in shape/normalization with the different predictions. I think NLO is better in shape than LO, but LO is better in normalization than NLO, right? In progress. There will also be the Sherpa event generator; once all 3 generators are compared, we'll add a conclusion on which one describes the data better. We are also checking the predicted numbers of jets at gen level for the different flavors to find out what could cause the difference. Will be added to the AN soon.

Did you normalize the LO cross section to the next-to-next-to-leading-order (NNLO) calculation computed with FEWZ [*]? If so, mention it. The NNLO cross-section value was used for both generators (5765 pb). Fixed.

L17-19: you define your fiducial region here; can Z->ee and Z->mumu be combined when having different pt_lepton cuts? I guess so, because in L120 the fiducial pt cut is 26 GeV. The same cuts were used for the leptons at generator level in both channels.

L111, 112: "different properties of the jet, such as secondary vertex and tracks" - state here that it accounts for the displacement and long lifetime of the particles, larger than for light jets but not as large as for b jets (I might come up with a suggestion if I do not forget about it).

Fig. 4: the binning is too fine here. Use fewer bins (in fact I would just use the same number of bins as in Fig. 3). Fixed

L161: "at detector level" - I would say "at reconstruction level" (sometimes I use "detector level" to refer to gen level, but maybe it is just me). Fixed

L195: this is the first time you talk about lepton scale factors. Mention in Section 4 what they are (lepton identification, isolation, trigger, etc.), how they are computed (tag-and-probe with the Z), and how they are used in your analysis (via weights), and add a reference. Fixed

L208: "channels were combined by a fit" - which fit? I guess Convino, as you mention in L129. Can you put Convino as the reference there in L208? Fixed

L209: "taking into account statistical and theoretical uncertainties" - you should also consider systematic uncertainties in the combination (as recommended by the statistics committee, if I am not wrong). Did you get in contact with the statistics committee already? In other words, did you fill in the statistics questionnaire? Ask them in case you have doubts. We talked about this at your last presentation. The statistics committee recommended using Convino, which takes into account both statistical and systematic uncertainties. The statistics questionnaire will be filled in soon.

Fig. 7: I do not like the fact that your k-factor binning is not the same as the final binning. The bottom k-factor does not seem to be flat with pt(c-jet) in Figure 3 (b, top plot). The k-factors can't have as fine a binning as the pt distributions, because fitting the SVM distribution for the k-factors requires more statistics.

Answers (AN) Elisabetta

At line 55 there is a cut of 20 GeV on the jets, while it is 40 GeV at line 49 - any reason for that? 20 GeV is the threshold for muons, which are checked for presence inside the jet.

For the cuts on lines 66, 67 for the discriminants, is there any study or justification of how they were chosen? These values were taken from the BTag POG page.

I am not sure that I understand the data-MC comparison in Fig. 9b and in Fig. 10b. In Fig. 9b the data agree well with the overall sum of MCs. In Fig. 10b they agree less, and the figure caption does not help. Is Fig. 10b after applying the kMC factors? Because the agreement looks worse. In fact 9b and 10b are two different plots: 9b shows the comparison between all data and MC, and 10b compares (data - Top/Dibosons) with Drell-Yan. However, the plots in figure 9 were produced with the wrong Drell-Yan normalization: for these two figures I used NNLO Drell-Yan cross-section values - 4578 pb, 851 pb and 335 pb for DY 0, 1 and 2 jets (I was trying to reproduce Duong's results with NNLO and forgot to change the cross-sections back to NLO while making these two plots), while for the rest of the plots in the AN the standard (NLO) cross-section values were used - 4754 pb, 888 pb and 348 pb for DY 0, 1 and 2 jets. I'll replace the plots in figure 9 in the new version. Two versions of Ystar with no tag and c-tag, with NLO and NNLO cross-sections, are in the attachment.

In general, the method to extract the kMC factors is based only on numbers of events. It would be much better to take a distribution which is sensitive to c- and b-tagging and fit it as a sum of the 3 components to extract these kMC factors. I would recommend trying it; it should not be very complicated. We used RooFit to obtain the k_MC factors from a shape fit (see kFactorsFit.c in the attachment). It was done as a simultaneous fit of two distributions - Ystar with b- and c-tags - with the k_MC factor for the light component fixed to 1.
As a result, the k_MC factors for the b and c components were equal to 0.78 and 1.03 respectively, which is consistent with the results obtained by solving the equations with numbers of events.
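The equation-solving cross-check mentioned above amounts to a small linear system: the background-subtracted data yields in the b-tagged, c-tagged and untagged categories are written as sums of the per-flavour MC yields scaled by k_light, k_c and k_b. A minimal sketch; the yields below are illustrative numbers (not the ones in the AN), constructed so that the solution reproduces k_b = 0.78 and k_c = 1.03:

```python
import numpy as np

# Illustrative per-flavour MC yields; rows are tag categories
# (b-tagged, c-tagged, untagged), columns are flavours (light, c, b).
# The real numbers come from the Ystar distributions in the AN.
N_mc = np.array([
    [  120.,  900., 4100.],   # b-tagged yields
    [  800., 3500., 1200.],   # c-tagged yields
    [50000., 9000., 3000.],   # untagged yields
])

# Data yields after subtracting top/diboson backgrounds, built here so that
# the solution is exactly (k_light, k_c, k_b) = (1.00, 1.03, 0.78).
N_data = np.array([4245., 5341., 61610.])

# Solve N_data = N_mc @ k for the three k_MC scale factors
k_light, k_c, k_b = np.linalg.solve(N_mc, N_data)
print(k_light, k_c, k_b)
```

The shape fit constrains the same three coefficients but uses the full Ystar distributions instead of a single yield per category, which is why it can afford to fix k_light to 1.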

Figure 19: it would be good to understand the shape of the acceptance cut by cut. At low pt this is probably due to the lepton pt cut; the drop at higher pt may be due to the c-tagging - maybe you could try to understand it. It seems that the shape is typical for the c/b-tag efficiency; we found a similar shape here, slide 22:

In the closure test, are the same events used, or which events are used? Yes, one sample was used to calculate the response matrix, background and acceptance, and in the closure test. The result of applying the unfolding procedure to another sample in the closure test will a priori not coincide with the generator-level distribution from the original sample, so it would be impossible to say whether the difference is caused only by statistics or by some error in the unfolding procedure.

I am surprised that the pileup has such a large effect on the last 2 bins for the c-jets, unless it is just statistics. This effect is seen for most of the uncertainties, not only pileup, because of the statistics.

- What would happen if, in the first formula on page 14, you took N_data-Top/Dibosons,light-tagged = k_light*N_DY,light,light-tagged + k_c*N_DY,c,light-tagged + k_b*N_DY,b,light-tagged? Light tagging requires anti-b and anti-c tagging. However, there are no anti-tag SFs, so the numbers of events in this modified equation could be incorrect, and the result of solving the equations would then not be correct.

It would be good to have comparisons before and after the k-factors, similar to Figs. 14-16 of the note, for all the possible cases; for the moment, for instance, I do not understand what Fig. 19 is - are the k-factors applied? These figures show the SVM distribution before applying the k-factors; additional plots and descriptions have been added to the AN.

It is indeed not good that, using the SV-fit mass, the k-factors come out so different including or not including SV jets. Concerning the tagger, did you get any feedback from BTV on the best tagger to use for c jets? There is feedback from Juan Pablo, who suggested that there are problems with the modeling of the SV reconstruction; he also has no objections to method 1, the equation-solving method.

Make the selection similar to Duong's selection and compare to their k-factors. I used almost the same selections as Duong used here (different triggers and muon ID and isolation), with the flavor defined by the hadron flavor of the leading medium CSVv2-tagged jet. The results of the fit from the combine tool (combine -M MaxLikelihoodFit) are the following: charm k-factor = 0.777 ± 0.023 and bottom k-factor = 0.839 ± 0.007, while Duong's results for the muon channel are charm k-factor = 0.81 ± 0.02 ± 0.06 and bottom k-factor = 0.88 ± 0.01 ± 0.02, so the results are almost consistent within the errors.

At line 209 you write that the light fraction has a normalization fixed to 1; I can't recall what Juan Pablo does. What happens if you leave it free - is it not possible to constrain it, maybe due to the different shape? Do you have plots in the note showing the results of the fit? The result of the fit for the light component is close to 1 (if one doesn't fix it), so this component was kept fixed. In the other analysis it was done the same way; as I understand, Duong also finds k-factors only for charm and bottom. Figures 27 and 28 show the agreement between data and MC after applying these k-factors. There are also plots 25 and 26, which show the measured k-factors as functions of the pt of the Z boson or of the c-tagged jet.

I understand from the answer to Juan Pablo that you do not have a pt cut on the generated leptons. First of all, I guess now you are correcting back to dressed leptons. Then it is always better to correct back to a fiducial region which is close to the experimental one, so as not to have huge unknown acceptance corrections. It would be good to add the exact fiducial region to the note, and in principle genjets and genleptons should have kinematic pt, eta cuts similar to the reco ones, plus the gen-level invariant mass Mll cut is needed. We have added lepton pt and eta cuts close to those used at reco level: leading pt > 26 and subleading pt > 10. In the new AN version it is stated that we measure the fiducial cross-section.

It would be good to understand some of your systematics, like:

- Fig. 44 (c) - why is the c-jet pt uncertainty high at high pt, and only in the muon channel? It seems that in the last bins there may be large statistical fluctuations, so if there are few events, a change of one parameter can lead to a large change in the distribution. The plots data_mu and data_el in the attachment show the ratio of the varied distribution to the central one and how these fluctuations can appear.
- Fig. 45 (b) - why is pileup high only in the electron channel? To be understood.
- Figs. 46 (b) and (d) - what is happening in the eID, so bad compared to the muon ID? It must be an error: for electron pt ~ 65 and eta ~ -1.5 the efficiency and its error (GetBinError) are equal to 1, so the weight changes by 100%. Will ask the electron POG. Update: for electrons the SFs depend not on the electron eta but on the electron supercluster eta, so that bin should be skipped.

Juan Pablo

+) About the c-tag/mistag efficiency. You explain in eq. 5 how you apply the weight for SF_c as recommended. Let me explain what I do in my code. Let's imagine that the SF_c for your ctagger-T is 0.92 +/- 0.06 +/- 0.01 (overall; then there is the "file" in bins of jet pt, but for simplicity let's use the single number here). What I do is weightMC *= 0.92; this lowers the MC and improves my data/MC agreement. Could you please check that this is equivalent to the procedure you describe and follow? Eq. 5 looks a bit complicated because there are 2 weights which improve the data/MC agreement: one improves it for the B-F samples and the other takes into account the difference between data and MC in G-H. Since there is only one set of MC samples, the weight applied to weightMC is composed of these two weights, proportional to the luminosity of each subset - that is what eq. 5 expresses. In the case of a tag (when the c jet passed the c-tag) there is no such partition, so the MC weight is simply multiplied by the corresponding tag-efficiency SF.
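The luminosity-weighted combination described above can be sketched as follows; the SF values and the exact B-F/G-H luminosity split are illustrative assumptions, not the analysis numbers:

```python
# Hypothetical era-dependent c-tag scale factors for the 2016 B-F and G-H
# run periods, and approximate era luminosities in fb^-1 (illustrative).
sf_BF, sf_GH = 0.92, 0.95
lumi_BF, lumi_GH = 19.7, 16.1

# One MC sample -> one effective SF, weighted by each era's luminosity.
# This single number is then multiplied into the per-event MC weight,
# exactly as in weightMC *= sf_eff.
sf_eff = (lumi_BF * sf_BF + lumi_GH * sf_GH) / (lumi_BF + lumi_GH)
print(round(sf_eff, 4))
```

With a single overall SF per era this reduces to one multiplicative factor, which is why the two procedures agree in the simple case; eq. 5 only looks more complicated because the SFs are binned in jet pt.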

You use "HLT Ele27 WPTight Gsf" for the Z->ee. When I use the W->e for charm-tagging purposes I go up to "HLT Ele32" because I do not have to deal with prescales (maybe you do not have to either at "HLT Ele27" - did you cross-check?). According to the trigger documentation, HLT_Ele27 is unprescaled; as I understand it, that means that every time the trigger fires the event is recorded, so there is no SF to be applied to account for a lowered event-recording rate.

Your offline cut is 28 GeV. The usual practice is to leave 2 GeV to make sure you are far from the trigger turn-on. Beware: you might be asked during the publication process to move your offline cut to the standard "trigger_cut + 2" GeV. The electron part of the analysis will be redone with a 29 GeV cut, to follow the usual way of selecting electrons with this trigger.

L120: "signal muons or muons, which ..." Regarding the purpose and details of the muon-jet cleaning procedure: is this to remove Tight-ISO muons within delta_R(jet, Tight-ISO muon) < 0.4? That's fine, but just to make sure: this is not to remove Tight-NONISO muons within delta_R(jet, Tight-NONISO muon) < 0.4? If you also remove NONISO muons, it is up to you, but you should already know that in the range 15 GeV < pt_NONISO_muon < 25 GeV you have a chunk of signal. Nothing to worry about though. We remove only jets which overlap with isolated muons; in this case we don't remove signal events.

Can both results be combined even when having different pt cuts? Which is your fiducial region (I mean: is your cross section defined for leptons with pt > XX GeV or for a Z with pt > XX GeV)? I must have missed it. We have switched to a new signal definition, which includes pt and eta cuts for the leptons (leading pt > 26 and subleading pt > 10) to match the reco-level selections. Thus we measure the fiducial cross section of the process.


Do you apply any matching between the trigger object (the one that fired the single-muon trigger) and the reconstructed muons? No, there is no matching between the trigger object and the muons; in order to take into account the efficiency for two muons, a combinatorial formula was used.
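The combinatorial formula referred to here is presumably the standard expression for two legs that can each fire the single-muon trigger; a minimal sketch, assuming the two muons are independent:

```python
def dimuon_trigger_eff(eff1, eff2):
    """Probability that at least one of the two muons fires the
    single-muon trigger, assuming the two legs are independent:
    1 - P(neither fires)."""
    return 1.0 - (1.0 - eff1) * (1.0 - eff2)

# With hypothetical per-muon efficiencies of 0.90 and 0.85 the
# event-level efficiency is 1 - 0.10 * 0.15 = 0.985.
print(dimuon_trigger_eff(0.90, 0.85))
```

The data/MC trigger scale factor for the event is then the ratio of this quantity evaluated with the data and MC per-muon efficiencies (each measured by tag-and-probe).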

What is your definition of a c (b) jet at generator level (L73)? We use the hadron flavor for gen jets, determined with the same algorithm used for the reco-level jets. In the .py file:

from PhysicsTools.JetMCAlgos.AK4PFJetsMCFlavourInfos_cfi import ak4JetFlavourInfos
process.genJetFlavourInfos = ak4JetFlavourInfos.clone(
    jets = cms.InputTag("ak4GenJets")
)

from PhysicsTools.JetMCAlgos.GenHFHadronMatcher_cff import matchGenBHadron
process.matchGenBHadron = matchGenBHadron.clone(
    genParticles = cms.InputTag("ak4GenJets"),
    jetFlavourInfos = "genJetFlavourInfos"
)

from PhysicsTools.JetMCAlgos.GenHFHadronMatcher_cff import matchGenCHadron
process.matchGenCHadron = matchGenCHadron.clone(
    genParticles = cms.InputTag("ak4GenJets"),
    jetFlavourInfos = "genJetFlavourInfos"
)

and inside .cc file:


In the case of b(c)-tagging you do not correct data and MC separately but keep the analysis at the (let's say) uncorrected data level and apply the corresponding b(c)-tagging SF. I find this treatment quite asymmetric; I would treat the lepton and b(c)-tagging efficiencies on the same footing. The muon ID, isolation and trigger efficiencies depend on the data samples, thus the data were reweighted with respect to this dependence. The c- and b-tag match/mismatch rates also depend on the data samples; however, it is impossible to calculate these efficiencies separately for data and MC (we can't define the hadron jet flavor for data), so the scale factors for MC were composed of two scale factors corresponding to the two sets of data samples.

Are the b(c)-tagging SFs applied in Figs. 4 to 8? Specify it in the text/caption. Yes, the c-tag/mistag SFs are taken into account for these plots; we will specify it in the text.

Maybe you can test the ttbar MC description with a control sample in the emu channel. We didn't save muons in the tuples, so this can't be done soon.

Can you describe in detail how you treat the systematic uncertainties in the c(b)-tagging (light-mistagging) scale factors? You are following the recommendations of the b-tagging group, but it would be good to have it explained here as well. How do you treat correlations among the different SFs? All necessary formulas and conditions for pt, eta and the discriminator can be found here. The systematics are taken into account by changing all formulas used in the calculation of the tag/mistag SFs to the formulas corresponding to the up/down uncertainties. For example, if one wants to get distributions with the SF uncertainty up, the weight for an event with a c jet is calculated according to the formula with the "comb" measurement type and the tight working point, selected according to the pt of the c jet. We don't take into account correlations between the different SFs. Update: I found an error in my code: the scale factors for c-mistag for b jets were equal to 0 in some cases; a detailed description of how the SFs are calculated has been added to the AN.
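A minimal sketch of the up/down variation described above, using the standard BTV-style event reweighting; the SF and efficiency values are hypothetical placeholders (the real SFs are pt- and eta-dependent):

```python
def tag_weight(is_tagged, sf, eff_mc):
    """Per-jet weight for a tagging scale factor: tagged jets are
    reweighted by the SF; untagged jets get the complementary factor
    so the total number of jets stays consistent."""
    if is_tagged:
        return sf
    return (1.0 - sf * eff_mc) / (1.0 - eff_mc)

# Hypothetical c-tag SF with its up/down variations, and a hypothetical
# MC tagging efficiency for this jet's (pt, eta) bin.
sf_central, sf_up, sf_down = 0.92, 0.98, 0.86
eff_mc = 0.30

# The systematic band is obtained by filling every histogram three times,
# once with each weight variation.
w_central = tag_weight(True, sf_central, eff_mc)
w_up      = tag_weight(True, sf_up, eff_mc)
w_down    = tag_weight(True, sf_down, eff_mc)
```

Note the complementary factor for untagged jets: with sf < 1 it is greater than 1, which is what keeps the tagged+untagged normalization consistent when the SF lowers the tagged yield.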

Fig. 20: is this behavior expected - c-tagging efficiency increasing up to ~100 GeV and then decreasing again? I found some plots of the c/b-tag efficiency from the BTag POG, here (slide 22). It seems this shape is typical for heavy-flavor tags.

Fig. 19 (acceptance): it has a funny shape; can you give more details on which cut(s) are most relevant in the different pT regions? The shape is similar to the shape of the c-tag efficiency, so it seems that the largest contribution is from the c-tagging.

Same for Fig. 18. I am a bit surprised by the first point in the left plot. Does it come from reco dileptons unmatched to gen dileptons, or from reco c jets unmatched to gen c jets? May I assume a ~100% correlation between Fig. 18 left and right? This shape comes from the pt migration of the Z / c jet, so this form is expected for the pt of any object, without reference to correlations.

As already suggested at the SMP-COM meeting, the sensitivity to PDFs should be assessed. Probably a study similar to [1] can be tried. This could also help to define the optimal binning in terms of Yb, Ystar. We can try to do this study; the current Yb and Ystar binning is optimal for the differential cross-section as a function of Yb and Ystar, since the partition was chosen so that the numbers of events are of the same order and the statistical errors are similar across bins.


I am still a bit unhappy that you use Sep2016 and promptreco... Sorry, can you remind me here again of the details of what prevents you from using a more recent reprocessing of the data? There are two main reasons why we use the 23Sep2016 data: the first is the jet energy corrections, whose official version is for 23Sep2016 and which JetMET confirmed is ok for the analysis; the second is that the WPs and SFs for the b- and c-tags were also derived using the 23Sep2016 data.

Why did you choose the herwigpp PS for the ST t-channel and the pythia8 PS for all other samples? I couldn't find a pythia8 sample for STt, only the herwigpp version. The event yield from STt (as can be seen from the event yield tables) is small, so this difference can be neglected.

If the event has a c-flavour genjet with pT > 40 but pt(Z) < 40 GeV, is the event classified as Z+light? Or is it classified as a background event? The cut pt > 40 GeV is applied to both the Z and the jet, so events where either the Z or the jet has pt < 40 GeV are not taken into account.

Is N the total number of MC events in the sample? It should be total N(positive) - total N(negative) to rescale correctly to the luminosity. Yes, the number of events used for the rescaling is calculated as the number of positive-weight events minus the number of negative-weight events.
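A sketch of this luminosity rescaling for a sample with negative (NLO) event weights; the cross-section, luminosity and event counts below are illustrative, not the analysis numbers:

```python
def lumi_scale(xsec_pb, lumi_pb_inv, n_pos, n_neg):
    """Per-event luminosity weight for an MC sample with negative weights.
    The effective number of generated events is N(positive) - N(negative);
    using the raw total n_pos + n_neg would underestimate the weight."""
    return xsec_pb * lumi_pb_inv / (n_pos - n_neg)

# Hypothetical Drell-Yan sample: 1.2M positive- and 0.3M negative-weight
# events, scaled to an illustrative 35.9 fb^-1 with the 5765 pb cross section.
w = lumi_scale(xsec_pb=5765.0, lumi_pb_inv=35900.0,
               n_pos=1_200_000, n_neg=300_000)
```

Each event then enters histograms with weight +w or -w according to the sign of its generator weight, so the sample integral reproduces xsec * lumi.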

Fig. 11: can you comment on the shape differences shown in some of these plots? The difference between data and MC after the cuts on the b/c-tag discriminators is taken into account by applying the SFs. But the SFs are calculated for fixed parameters / WPs, so in this case no SFs are applied, since the discriminator distribution itself doesn't correspond to any WP.

Why DeltaR < 0.5 and not 0.4, as is customary now with Run 2 0.4 jets? This parameter will be changed to 0.4 in the next AN version (a remnant from another analysis).

62-68: explain how you classify the selected events into Z+b, Z+c and Z+light. Here you only specify the c-tagging, but I think you first apply the b-tagging criteria to classify Z+b events. Events are classified according to the hadron flavour of the central jet. There can be only one jet tag at a time; c-tagging isn't applied after b-tagging.

If the event has a c-flavour genjet with pT > 40 but pt(Z) < 40 GeV, is the event classified as Z+light? Or is it classified as a background event? In this case the event is not taken into account at generator level. If there is a Z+c jet at reco level but either pt(gen Z) < 40 or pt(gen jet) < 40, the event goes to the background. This can be seen in the background plots (figure 18 in AN v6): events with a Z/jet pt close to the threshold are above this threshold at reco level but do not exceed it at gen level.

-- AntonStepennov - 2018-10-12

Topic attachments
bottomLineTest.pdf (13.2 K, 2020-01-29, AntonStepennov) - BottomLine test for unfolding
correlations.pdf (61.2 K, 2020-08-03, AntonStepennov) - correlation coefficients between SF_c and SF_b
data_el.pdf (13.7 K, 2019-10-16, AntonStepennov)
data_mu.pdf (13.6 K, 2019-10-16, AntonStepennov)
hYstar.pdf (18.3 K, 2018-10-16, AntonStepennov) - Ystar distribution, c-tag applied, NLO cross sections used
hYstarNNLO.pdf (18.3 K, 2018-10-16, AntonStepennov) - Ystar distribution, c-tag applied, NNLO cross sections used
hYstarNoTag.pdf (18.2 K, 2018-10-16, AntonStepennov) - Ystar distribution, no tag applied, NLO cross sections used
hYstarNoTagNNLO.pdf (18.2 K, 2018-10-16, AntonStepennov) - Ystar distribution, no tag applied, NNLO cross sections used
kFactorsFit.c (16.8 K, 2018-10-16, AntonStepennov) - k-factors obtained with a shape fit for Ystar with c- and b-tags
updates.pdf (751.2 K, 2020-09-08, AntonStepennov)
updates_stat_err.pdf (301.8 K, 2020-10-04, VitalianoCiulli) - MC stat error removed from total stat error
Topic revision: r97 - 2020-10-17 - AntonStepennov