http://cms.cern.ch/iCMS/analysisadmin/cadilines?line=SMP-19-011&tp=an&id=2290&ancode=SMP-19-011

draft V6 comments

Main updates after V5

Main changes were made in Sections 5, 6 and 7: instead of multiplying the bottom and light components by the corresponding k-factors and then subtracting them from the data distribution, the number of charm events is now found in each pt bin directly in the fit of the SVM, along with the k-factors, which are now used only for validation (data/MC agreement after applying the k-factors). This procedure is described in Section 5. The light and bottom k-factor uncertainties were removed from Section 6, since these k-factors are used only in validation and do not enter the calculation of the differential cross section. Section 7 contains updated results and plots.

Emanuela

  • I always find it cumbersome when systematic uncertainties are quoted before being described in the text. It happens here for the SFs in Tables 1-4. I just want to make sure that the systematic uncertainties in those tables correspond to the sources described in Section 6.
    • Yes, these are the same uncertainties.

  • Section 6: can you add references (if journal publications are available) for the systematic uncertainties you use? (for leptons, Jets, ttbar). In particular, where do you take the +-10% uncertainty on ttbar from?
    • The systematic uncertainties were mostly taken from the TWiki pages of the corresponding POGs. The ttbar uncertainty was taken from SMP-16-018 (AN2016-379).

  • There are some features in your current paper version (v6), for which the names of generators do not appear, e.g. on line 221, or lines 233-235. This was not the case with the previous version of the paper (v5). Also please state/quote where the predicted cross-section of 524.9 pb (line 223) is coming from.
    • This is a bug in CADI, which will be fixed: in the git version of the paper the generator names are shown properly. The same holds for the 524.9 pb prediction from MG5_aMC NLO, which is not shown after loading the paper from git to CADI.

Approval questions

  • - double check the implementation of QCD scales uncertainties as we discussed
    • We use weights with IDs 0-8 for the QCD scale variations: excluding the two combinations (2\mu, 0.5\mu) and (0.5\mu, 2\mu) leaves 7 weights. The weight with ID = 0 corresponds to the (\mu, \mu) combination and coincides with the default event weight, which is used as the central value, so after unfolding there are 6 copies of the distributions, corresponding to the variations.
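For concreteness, the weight bookkeeping described in this answer can be sketched as below. The exact id-to-(muR, muF) mapping is an assumption (it follows the common generator-weight ordering) and should be checked against the samples actually used:

```python
# Sketch of the QCD scale-variation selection described above.
# The id -> (muR, muF) mapping is the common generator-weight ordering
# and is an assumption here, not taken from the analysis code.
MU_COMBOS = {
    0: (1.0, 1.0), 1: (1.0, 2.0), 2: (1.0, 0.5),
    3: (2.0, 1.0), 4: (2.0, 2.0), 5: (2.0, 0.5),
    6: (0.5, 1.0), 7: (0.5, 2.0), 8: (0.5, 0.5),
}

def scale_variation_ids():
    """Drop the two anti-correlated (muR, muF) combinations, keeping 7 ids."""
    bad = {(2.0, 0.5), (0.5, 2.0)}
    return [wid for wid, combo in sorted(MU_COMBOS.items()) if combo not in bad]

def scale_envelope(central, varied):
    """Per-bin up/down envelope of the varied distributions around the
    central one (id 0), as used after unfolding."""
    up = [max(v[i] for v in varied) - central[i] for i in range(len(central))]
    down = [central[i] - min(v[i] for v in varied) for i in range(len(central))]
    return up, down
```

With id 0 taken as the central value, the six remaining ids (1, 2, 3, 4, 6, 8) give the six varied copies mentioned in the answer.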

draft V4 comments

Vitaliano

  • line 31: “pseudo rapidity”
    • Fixed

  • line 143: “Estimates”
    • Fixed

  • line 244-245: in the results put the statistical error first, then “exp” followed by “th” as the last one. The whole sentence can be moved to line 238 just before “The predictions from …” and can be also copied just after line 229 at the end of the Results secton.
    • Fixed

Cecile

  • Tables 1-4: Can you format the systematic uncertainty as ^{+…}_{-…}?
    • Fixed

  • l157: SFc, and SFb
    • Fixed (isn't the comma before "and" in an enumeration optional?)

  • l160 and elsewhere: generator-level (when followed by a noun)
    • Fixed

  • Fig 2 and others: Horizontal error bars should be removed for constant-width bins. In the legend and axis title, change DATA to Observed.
    • To be fixed

  • Fig 4: The y axis title is not accurate (the caption indicates it should be a fraction). The CMS simulation and luminosity headers are too small. The labels of both axes are too small.
    • To be fixed

  • Table 5: the table is too wide
    • Fixed

  • l221 and elsewhere: branching ratio -> branching fraction
    • Fixed

  • l244: remove italic font
    • Fixed

  • References: remove no. (eg in [1] and [2]).
    • To be fixed: no. is generated automatically

  • The letter of the journal should not be bold and attached to the number (e.g. in [20] and [22])
    • Fixed

ARC questions

Philip



  • I am having a hard time following exactly what was done with the fits to obtain SFc and SFb and how they are applied. I think some more details on how it was done exactly need to be provided.

  • What selections are applied? Is it the full signal region selection? Or is this some orthogonal dataset?
    • No orthogonal dataset can be defined for Z+c-jet, so the normalization for the Z+b background is obtained from the same sample from which it is subtracted.

  • In what bins are SFc and SFb measured? (is it one per ptZ / ptC bins? e.g. x < ptZ < y && z < ptC < a?)
    • The SFs are measured as functions of either pt(Z) or pt(jet). There are not enough statistics to split in Z and c-jet pt simultaneously.

  • Why is the light component kept at 1? Could you provide some justification for this?
    • There are several reasons for that: for Z+jet (without c tagging) there is good agreement between data and MC, and since most of the events are light, the k-factor is ~1. Retrieving the light k-factor from the SVM fit is problematic because of the small number of Z+light events, so the fit does not converge or has large errors. Other analyses with flavor k-factors make the same assumption.
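To illustrate the setup described in this answer, here is a toy least-squares stand-in for the SVM template fit with the light template fixed at unity and only the charm and bottom normalizations floating. The actual analysis uses a RooFit likelihood fit; the function name and the plain least-squares choice are illustrative assumptions:

```python
def fit_kfactors(data, tmpl_c, tmpl_b, tmpl_light):
    """Least-squares extraction of (k_c, k_b) from per-bin yields, with
    the light-template normalization fixed at 1 (toy stand-in for the
    RooFit secondary-vertex-mass fit described above)."""
    # Subtract the fixed light component, then solve the 2x2 normal equations.
    r = [d - l for d, l in zip(data, tmpl_light)]
    scc = sum(c * c for c in tmpl_c)
    sbb = sum(b * b for b in tmpl_b)
    scb = sum(c * b for c, b in zip(tmpl_c, tmpl_b))
    src = sum(x * c for x, c in zip(r, tmpl_c))
    srb = sum(x * b for x, b in zip(r, tmpl_b))
    det = scc * sbb - scb * scb
    k_c = (src * sbb - srb * scb) / det
    k_b = (srb * scc - src * scb) / det
    return k_c, k_b
```

For example, with templates c = [5, 3, 1], b = [1, 2, 4], light = [2, 2, 2] and data built as 1.2*c + 0.9*b + light, the fit recovers (1.2, 0.9) up to rounding.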

  • For visual aid I think if the histograms are stacked in light -> b -> c order the shape difference would be more easily noticeable.
    • Fixed in draft paper.

  • It's probably my ignorance. What is meant by "In this analysis secondary vertex mass was corrected for the presence of neutral particles"? What is the correction exactly, and is there an associated uncertainty to the correction?

  • I would like to see the individual pre-fit distributions in the AN as well.
    • added

  • Figure 51. the bottom right plot seems to have qualitatively different composition is this understood?
    • According to the pre-fit plots, the data/MC agreement differs between the muon and electron channels for c-tagged jet pt > 90 GeV.

  • The shape of the secondary vertex mass seems to be one of the most important quantities which seems to be taken from MC directly. Is there some orthogonal dataset where the modeling of this is verified?
    • Yes, these studies are done by Juan Pablo, he also made corrections for the shape of different flavor components.

  • If the b template or the c template has a shape mis-modeling by say ~10% what is the impact on the final SFc and SFb value? Is there a justification for using the MC shape?
    • There are many studies on the modeling of the secondary vertex mass, usually reported in SMP V+J by Juan Pablo. Another way to validate the obtained k-factors is to plug them back into the MC and check the data/MC agreement.

Luca



  • Title: I think you are measuring the Z+c-jet differential cross section, not the inclusive one

  • Please write a complete abstract so that it's clear to the reader the purpose of the search and the general key features of the analysis.

  • Section 1.1, line 53-54: To count the b-jets and c-jets at the generator level you must have defined what a b-jet and a c-jet are at the generator level. Which method are you using to define them? Are you using hadronFlavour (5 for b-jet and 4 for c-jet) or other methods? If you are using hadronFlavour or any other CMS "centrally maintained" method, I think it is worth at least citing it. If instead you are defining b-jets and c-jets yourself by looking for the presence of B and D mesons in the gen jet, then I suggest explicitly describing your algorithm in more detail.
    • It is the hadronFlavour method definition.

  • Section 1.1, line 55-56: You defined the "bottom" and "charm" MC component relying on counting the number of b-jets and c-jets with a pT>10GeV at the generator level. Also, you define the "light" MC component if there are no heavy flavor jets. How do you treat the events in which you have b-jets and c-jets with pt<10? Are you still considering them in the light MC component or are you discarding them? I suggest adding a line specifying this.
    • Yes, objects with pt < 10 GeV are not treated as jets at all.
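The flavor classification discussed in this exchange can be sketched as follows, assuming the standard hadronFlavour codes (5 = b, 4 = c, 0 = light/gluon) and b-jets taking precedence over c-jets; the precedence rule and the helper name are assumptions:

```python
def mc_component(jet_flavours, jet_pts, min_pt=10.0):
    """Classify an event as 'bottom', 'charm' or 'light' from the
    hadronFlavour values of generator jets with pt > 10 GeV; softer
    objects are not treated as jets at all (see the answer above)."""
    flav = [f for f, pt in zip(jet_flavours, jet_pts) if pt > min_pt]
    if 5 in flav:          # at least one b jet -> bottom component
        return "bottom"
    if 4 in flav:          # else at least one c jet -> charm component
        return "charm"
    return "light"         # no heavy-flavour jets above threshold
```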

  • Section 3, line 70: I think here the reference to Table 12 is wrong. Table 12 is on page 54 and is comparing the NLO and LO generators. I guess that the Table you want to refer to is Table 1.
    • Fixed

  • Section 3, Table 2: From reading Table 1, I guess that you used MINIAOD format also for the MC sample. I suggest making it clear in Table 2 as well.
    • Fixed

  • Section 4.1, muon selection: Since you ask for two offline muons to reconstruct the Z boson, have you checked the difference in trigger efficiency using a di-muon trigger vs the single-muon trigger you are currently using? Same as for electrons.
    • We didn't use double lepton triggers: lowering the threshold for the leading lepton pt would not add much statistics, according to the control plots.

  • Section 4.4, line 138. The CvsL and CvsB variables you are using represent a key feature of your analysis. I think that you should go a bit more into the details. Your analysis is among the first in CMS to make use of a dedicated charm tagger based on advanced machine learning techniques (I am assuming you are using CvsB and CvsL evaluated from DeepCSV). I suggest a) to describe a bit why we have two discriminators (one is dedicated to separating charm from light and the other one to discriminating charm from bottom). b) Also, given that these two discriminators are ratios of multiclassifier outputs (CvsL = p(c)/[p(c)+p(l)] and CvsB = p(c)/[p(c)+p(b)], where p(c), p(b) and p(l) are the multiclassifier scores evaluated per single jet, interpreted as the probability for the jet to be generated by a b quark, c quark, or light quark or gluon), please specify the algorithm used by the multiclassifier. In particular, state whether it is DeepCSV or DeepJet based, and maybe give a short description of the architecture, or just explain that these are multiclassifiers based on machine learning techniques/DNNs, referring to BTV-16-002 if it is DeepCSV.

  • Section 5.5: line 185-186: here you wrote: "Three different weights were used, depending on the flavor of the c-tagged jet". In my opinion, this sentence is not very clear since, if I understood correctly, you are applying a fixed cut on the charm taggers pair (CvsL >0.59 && CvsB >0.05), so events that pass these cuts, all contain at least a c-tagged jets. I would rephrase as "Three different weights were used, depending on the true flavor (or flavor at generation level) of the jet passing the charm taggers working point (or the charm tagger selection)".
    • There is a separate pair of CvsL and CvsB values for each jet. We require at least one jet to pass the c tagging, then use the leading c-tagged jet.
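Putting together the discriminator formulas quoted in the question and the working point quoted earlier (CvsL > 0.59 and CvsB > 0.05), the per-jet selection amounts to the following sketch (function names are illustrative):

```python
def cvs_discriminators(p_c, p_b, p_l):
    """CvsL and CvsB from the per-jet multiclassifier scores, using the
    formulas quoted above: p(c)/[p(c)+p(l)] and p(c)/[p(c)+p(b)]."""
    return p_c / (p_c + p_l), p_c / (p_c + p_b)

def passes_ctag(p_c, p_b, p_l, wp_cvsl=0.59, wp_cvsb=0.05):
    """Fixed working-point cut on the (CvsL, CvsB) pair of one jet."""
    cvsl, cvsb = cvs_discriminators(p_c, p_b, p_l)
    return cvsl > wp_cvsl and cvsb > wp_cvsb
```

An event is kept if at least one jet passes the cut; the leading such jet is then used, per the answer above.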

  • Section 5.5: I guess that the per-jet scale factors are evaluated inclusively in pt and eta of the reconstructed jet, is that correct? If yes, please specify it; otherwise point out that the scale factors have been evaluated differentially in pt and eta of the reco jet (if that is the case).
    • These scale factors were calculated as functions of pt, this was added to AN.

  • Section 5.5: You describe the data-to-simulation event reweighting to account for the difference in the mistag rate between data and MC, however, you don't provide the corresponding formula for the charm-efficiency. You also write how you have estimated the light and b-jet scale factors (inclusively, T&P, etc), but not the charm ones. Moreover: are you applying the scale factors to all the MC samples, signal and backgrounds? Please specify if it is the case or not.
    • There is no dependence on the data sample; one formula is applied for the c-tag case. These SFs are applied to all MC events.

  • Section 6, Control Plots: What is the "take-home" message from these control plots? What have you learned/noticed plotting these distributions? I found that here more detailed comments on the control plot would be useful and expected.
    • The control plots may not contain information essential for the analysis; however, in my experience it is always useful to keep control plots at each intermediate stage, so that anyone could reproduce them if needed.

  • Section 7, Monte Carlo k-factors: If I understood correctly, the k-factors are correction factors that you apply to the MC to restore the agreement with data: a) The plots in Figures 27 and 28, showing the "disagreement" between data and MC as a function of the Z pt and c-jet pt, are obtained after having applied the c-tagger efficiency/mistag rate scale factors? (I guess so.) If that is the case, please make it explicit in the text and in the captions of Fig. 27/28. b) You show a residual data/MC disagreement vs Z pt and c-jet pt, but the k-factors are estimated through a fit in RooFit to the secondary vertex distributions of Drell-Yan events, leaving the c and b components free to float and fixing the light component at 1. Could you provide some plots in the AN showing the secondary vertex distributions before/after the fit, highlighting the three components, light, b and c? c) Has the fit to the secondary vertex distributions been carried out in bins of Z pt or c-jet pt? d) Do you know the reason why, even after the application of the c-tagger scale factors, you observe a residual data/MC disagreement?

  • Section 8.1: I see that in lines 238-241 you provide a short description of the method. This method represents the other key feature of your analysis, you may want to spend a few more words on the method itself. For example, you could add a description of what it is exactly a "response matrix" and how do you obtain it.

  • Section 9: line 264-267: I don't understand this sentence: it seems like you are applying systematics uncertainties only to the MC DY samples, is that correct? What is the MC component of the remaining 10% of the events? I think that it would be useful to assess at least the main systematics also on the remaining 10% of events.

  • Section 9: Fig. 41 to 49: Would it be possible to reproduce the plots coloring/filling the inner part of the histograms with dashed lines/light colors? It would help the readability, especially for those plots that show very small up/down variations.

  • Section 9.3: Make a proper section for Final results, not just a subsection. This is an important section.

  • Section 9.3: You state that the systematic uncertainties relative to the different sources have been added in quadrature in each bin. Have you checked that these uncertainties are not correlated or weakly correlated? If there is a substantial correlation among some of the uncertainties, then the sum in quadrature could lead to an overestimation of the total uncertainty. Also, when you say that the sum in quadrature is done separately for deviation up and down from central value, what do you mean exactly? The deviation up/down vs central value is referred to as the variation of the systematics itself or the up/down variation of the event yield in the corresponding bin? ( In other words, how do you treat the uncertainties, if any, which an up variation lead to a decrease of the event yield)
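For reference, one common reading of "sum in quadrature separately for up and down deviations" is to sort the per-source signed yield deviations by their effect on the yield, so that a variation that lowers the yield contributes to the down total even if it came from an "up" template. Whether the analysis follows exactly this convention is what the question asks; the sketch below only illustrates that convention:

```python
import math

def combine_asymmetric(deviations):
    """Combine signed per-source yield deviations (varied - central) in
    one bin: positive deviations add in quadrature to the 'up' total,
    negative ones to 'down', regardless of the variation's label."""
    up = math.sqrt(sum(d * d for d in deviations if d > 0))
    down = math.sqrt(sum(d * d for d in deviations if d < 0))
    return up, down
```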

  • Section 9.4: Please be more specific and add details on how the combination is carried out, at least pointing out the main features of the approach chosen (and then it is ok to refer to Convino)

  • Section 10: Please provide a summary that will conclude the analysis note highlighting your very nice results and pointing out what could be done in the future to even improve these results!

  • References: These clearly need to be added. Please remember to put the doi and arXiv or url as well.

  • Appendix: for my understanding: what are the upsilon variables defined on page 58? Are
    • For the next analysis we plan to also split events into bins of the Yb and Ystar variables, which are sensitive to the c-quark PDF. At some point we realized that there are not enough statistics for this partition, so it just shows how the results could be improved in a future analysis.

  • Check that all the acronyms are defined at their first appearance throughout the analysis note;

  • Please make sure that you are using consistent notation throughout the whole analysis note, e.g. line 60 pT>30: it should be \text{p_{T}}, or better start using the CMS variable definitions (you would have to do so for the paper anyway) and use just \pt. Another example is in line 64: you specify that eta is for electron or muon, but you don't when you write pt-leading > 26... so keep the same convention (also here, "pt" is different from line 60). Finally, it would be good to have the labels of all the plots consistent with the notation convention you choose.

  • line 40: run 2 --> Run-
  • line 46: I would use pp instead of p-p. In any case, define what p-p is in "p-p collision", maybe in line 40 at the beginning of the sentence: "During proton-proton (pp) collisions"

  • line 95: I would just remove "off-line" in front of analysis. There is no offline and online analysis; there is just ONE analysis that relies on data collected through a hardware (online) trigger and further analyzed through software algorithms.

  • line 167: Title: "Muon identification and isolation" --> "Muon identification and isolation reweighting"

  • line 254: remove comma before "passing both"

  • line 255: remove comma after "and events"

  • line 288: "modeling of of Z+c process" --> "modeling of the Z+c process"

  • line 292: Add space before "Figure 42". Also change "shows dependance" into "shows the dependence"

  • line 337: Correct "Normalizing of Monte Carlo events..." into "Normalization of Monte Carlo events..." also in line 339, use "normalization"

Cecile



  • Why don't you use double muon and double electron triggers to increase statistics?
    • Lowering the threshold for the leading lepton would not significantly increase the statistics: in the control plots, the maximum of the leading lepton pt distribution is much higher than the threshold. For this threshold, the efficiency of the single lepton trigger is higher.

  • Why do you select electrons with abs(eta) < 2.4? The threshold is usually 2.5 for reconstruction, and 2.1 for some triggers.
    • In other analyses the eta threshold for electrons was set to 2.4; we will check for the next iteration whether there are any updated recommendations.

  • l148-151: I cannot really understand how you extract the scale factors SFc and SFb. Are they extracted using exactly the same data as those you do the subtraction in to get the Z+c component? Or do they correspond to an orthogonal dataset? If the dataset is not orthogonal, how do you treat the fact that you use the same data twice (to determine the SFs and to do the subtraction)? If the dataset is orthogonal, can you please clarify the selection?
    • There is no orthogonal dataset for Z+c-jet, so we use the same one for unfolding and for calculating the SFs. The SFs are an intermediate step, so we do not propagate the uncertainties obtained at this step to the total uncertainties.

  • Why don't you extract an SF for Z+light?
    • There are two reasons for that: for Z+jets without c tagging there is good agreement between data and MC, and since most of those events are Z+light, the SF for this component is close to 1. If one tries to estimate this SF from the fit after applying c tagging, it has large errors because of the low statistics of the Z+light component; most of the events after c tagging are Z+charm and Z+bottom.

  • Tables 1 and 2: there are differences between ee and mumu larger than the uncertainties. Do you understand why some of the SFs depend on the Z boson decay?
    • The k-factors do not depend on the Z boson decay. However, they are calculated as functions of the reco-level object pt and thus depend on the reconstruction. The shapes of the Z and c-jet pt distributions are different for muons and electrons (see attachments Ratio_j.pdf and Ratio_z.pdf), which leads to different fit results.

  • l207: I believe the uncertainty in the ttbar cross section is lower than that
    • Different values are used in different analyses; the most conservative, 10%, was used e.g. in SMP-16-018.

  • l208: I do not understand how the luminosity is an uncertainty if you fit the normalisation of all background components to data instead of estimating them based on the cross section. Is it related to Eq 1, which has not been introduced yet?
    • Yes, the luminosity uncertainty was taken into account through Eq. 1, by varying the luminosity used for normalizing each bin.

  • Figure 3: can you add uncertainty bands for the predictions?
    • Added for DY NLO

  • Figure 2: can you show the predictions if you use SFb and SFc derived from data?
    • They are shown separately for each pt bin in the appendix, in the section "Post fit secondary vertex mass distributions". They will be combined in the next iteration.

Vitaliano



Section 1

  • L7-12: remove the paragraph "For example, ... + LSP.": the measurement of Z+c is interesting per se; don't put too much emphasis on one particular search it is a background for.

Section 3

  • An important point in the associated production of vector bosons and heavy quarks is the number of flavours included in the PDFs. I think they are all 5-flavour (i.e. you can have b's in the initial state) but this must be stated. In particular, check what is done with SHERPA, because I believe there are more complicated options than in MadGraph to deal with heavy flavours in the PDF.

  • Try to state the generator version (including PYTHIA's, if used for the PS) for each process!

  • L53 Rescaling to the NNLO cross section value applies only to LO generators, right? If not, I think it should; otherwise we lose the correct order on jet observables and proper scale uncertainties. Then I suggest to write once and for all after the SHERPA description: "All LO event generator samples are scaled to the cross section calculated to next-to-next-to-leading order with FEWZ [10]."
    • If I understand correctly, it is common practice to normalize both LO and NLO MadGraph samples to NNLO. This is done in other analyses.

  • L53-54 move here the sentence at lines 63-65, describing MG5_aMC ME-PS matching details

  • L55: which version of SHERPA is used? The cited paper is for version 1.1, but this is pretty old. You might want to quote instead https://arxiv.org/pdf/1905.09127.pdf Also, is SHERPA LO for all jet multiplicities, or NLO up to 2 jets and then LO? These are the most usual configurations. It could be something different, but in any case it must be clearly stated.

  • L57-59 Start the paragraph with "The POWHEG [16-18] event generator is used to simulate backgrounds from top quark pairs ...". However, the ttbar sample you have in Table 2 of the AN has been done with MG5_aMC (with tune CUETP8M2T4, a specific tune for top) and not POWHEG. Please clarify what you have used and modify the text accordingly. In any case I don't see why we should quote references 12, 13 and 14. For single top, in addition to reference 15 I think we should quote "Single-top production associated with a W boson, E. Re, Eur. Phys. J. C71 (2011) 1547, arXiv:1009.2450" (for more information please check https://twiki.cern.ch/twiki/bin/view/CMS/CitationsForGenerators#PowHeg). Add that the parton shower used with POWHEG is PYTHIA 8 version xx.

  • L59-60 "The background from vector boson pair production is simulated with PYTHIA 8.xxx"

  • L61 "The CUETP8M1 [20] tune is used for all samples done with PYTHIA 8 as parton shower MC, with the NNPDF 2.3 [21] LO PDF and ... = 0.119." Is this true also for the DY samples? Why do you also mention NNPDF 3.1 in the last sentence?

  • L66 give details on the PDFs for each generator, remove "Samples are generated ..., and", and start the sentence with "GEANT 4 ..."

  • L78 for which fiducial region is the efficiency of muon reconstruction 96%?

  • L80-81 is the resolution for 1 TeV pt muons relevant for this analysis? If not, remove this sentence (but keep the reference to the muon performance paper)
    • It is not relevant; the sentence will be removed in the next version.

  • L86-94 why isn't isolation required for electrons? In figures 17 and 18 of the AN (before requiring the charm tag) there is a clear excess of electrons in data at low rapidity values. Is this effect understood?
    • The electron ID definition includes cuts on isolation, so ID and isolation are not separate, as they are for muons.

  • L111-121 Why are dilepton triggers not used? What is the efficiency of the trigger for the fiducial region defined by the offline selection? If it is relevant you should mention that it is measured using tag&probe, and add some words about the systematics later.

  • L130 Just to be sure, since this question came up during the pre-approval: the c-jet does not have to be the leading jet in the event, right? If so, the text is clear
    • Yes: in the latest version the c-tagged jets are selected first, and then the leading one among them is used in the analysis. So the selected c-tagged jet may not be the leading jet in the event.

  • L135 what is the rapidity cut applied to jets when classifying the event as b-, c- or light-flavour?
    • No rapidity cut is applied to these generator jets; it is applied at the signal definition stage, which is different from the flavor definition.

  • L144 is the "jet secondary vertex mass" always available for a c-tagged jet? Do the events in figure 2 correspond to the whole sample of events selected as signal, or to a subsample of it?

  • Section 5

  • This is in my opinion the most important part of the analysis, but there is no detailed explanation of what you have done in the paper, and there is even less in the AN.

  • If I understand correctly, you are doing a template fit to the secondary vertex mass in bins of pTZ or c-jet PT. Some general questions/comments:

  • except for Z+c and Z+b, are all the other components kept fixed in the fit? Or are they included as nuisances with some constraint?
    • The fit is performed for DY and (data - top/dibosons); the light component of DY is fixed.
    • This is the usual approach, used also in other analyses, for several reasons: there is good agreement between data and MC for Z+jet events without requiring c tagging. In that case most of the events are Z+light, so their normalization is very close to 1. When c tagging is applied, however, only a small fraction of Z+light jets remains, so the fit has large errors.

  • each bin is fitted independently from the others? In other words, are the statistical errors in the Tables 1-4 of the paper uncorrelated between different bins and different channels?
    • Yes, the fit was done independently in each pt bin.
  • if that is so, the overall Z+c and Z+b yields are not forced to be the same, so you can get a different inclusive cross section (in the same fiducial region) from the fit to pTZ and the fit to the c-jet pT. Have you checked this result and compared it between the two?
    • Yes, the difference between the integrals of the cross sections as functions of the Z boson pt and the c-jet pt is <2% for muons and ~4% for electrons.

  • Systematics among the bins are correlated. How is this taken into account?

  • The main problem I see, however, is that, if I understand correctly, you use these values as a per-event weighting factor, and this introduces a statistical correlation among the uncertainties of nearby bins in Figure 3. How do you propagate the uncertainty on the SF? When unfolding, later on, is the statistical error on each bin also considered as if it were a counting? It seems to me that this way you would count the same statistical error twice.

  • Why didn't you unfold the results in Tables 1-4 instead? The parameter you measure in the fit would then simply be the signal strength, i.e. the ratio of the cross section in the data to the cross section in the MC. Doing so you cannot unfold to a larger number of bins than the ones you measured, but I am not sure the treatment of the statistical uncertainty is correct when the fit result is applied as a scaling factor. We need some feedback from the statistics committee on this point. And if you don't count the statistical error twice, since you apply the same scale factor to events in several bins, it looks to me as if you are doing some kind of regularisation (which might explain why you don't need regularisation in the unfolding).
Additional comments:

  • Figures 27 and 28 of the AN: they are supposed to be for muons and electrons, respectively, but they seem to be exactly the same.
    • Fixed. These plots were also outdated, obtained before the change of the leading c-jet requirement. New plots were added.

  • Table 2: the first line is repeated twice.
    • Fixed

  • Tables 1-2: there are large differences between the values measured with muons and electrons, e.g. for SFc in pt bins 30-35 and 110-200 and SFb in pt bins 30-35, 50-110, and 110-220.

  • Statistical errors cannot at all explain the difference. Which are the most relevant systematics, and how much are they correlated between the measurements in the two channels?

  • L161-167: this paragraph is a bit confusing, but I finally understood that you are dealing here with out-of-fiducial Z+c events that are selected as signal because of detector resolution. I am not sure how the text can be improved. I suggest at least to add "This fraction is estimated [on the simulated Z+c sample] from the number of events...".

  • L168: which "simulated DY sample" has been used to calculate the response matrix?
    • The main event generator is MadGraph aMC@NLO, which was used to calculate the response matrix, acceptance and background. The final plots presented are obtained using this generator. However, there are cross-checks using MadGraph MLM.

  • AN L256-7: by the way, the AN does not help much on the same point. The sentence "In order to take into account pt migration effect, data distribution of the variable, which is to be unfolded, is bin-by-bin multiplied by the (1 - background) distribution." is confusing because here by background you mean out-of-fiducial Z+c events, while usually you referred to background as events coming from other processes.
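For what it is worth, the quoted AN sentence seems to describe nothing more than the following bin-by-bin operation, where "background" means the out-of-fiducial Z+c fraction per bin (the function name is illustrative):

```python
def subtract_out_of_fiducial(data, oof_fraction):
    """Multiply the data distribution bin-by-bin by (1 - f), where f is
    the per-bin fraction of out-of-fiducial Z+c events estimated from
    simulation, before unfolding."""
    return [d * (1.0 - f) for d, f in zip(data, oof_fraction)]
```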

  • Figures 4 and 5: do we need these plots in the final paper? They are made on simulation and do not convey much information: the content can be described with a few additional words in the text (especially for the acceptance, which is quite flat).

Section 6

  • L183-184: all variations are considered? usually those where muR and muF change in opposite directions are excluded
    • No, combinations like (0.5μ, 2μ) and (2μ, 0.5μ) were not taken into account.

  • L186: why is the prescription taken from CT14, which is not used? NNPDF usually provides both a Hessian matrix and the replica method. I guess you are using the Hessian matrix, but I would say it explicitly and quote NNPDF

  • L188-192 I did not check through the exact details of c-tag/mistag weights described in section 5.5 of the analysis note. Has this been signed-out by BTAG-POG contact?
    • There was no official request for sign-out, but there were consultations with Caroline Collard and Kirill Skovpen, as well as with Duong Nguyen, who also used b-tagging/mistagging weights.

  • Table 5: Are the values shown the average over all bins or the maximum? Do you have an explanation of why the PDF error for the c-jet pT is larger for muons than for electrons? Also the JER (up variation) for electrons looks a bit strange. I would use the same number of digits after the decimal point for all results, e.g. 4.0 instead of 4 and 0.6 instead of 0.58.

Section 7

  • Figure 6: uncertainty band for the NLO MC (scale and pdf variations) will be added? what is the status?
    • Yes, these are going to be added; this requires downloading new ntuples.

  • I am wondering if we shouldn't also quote an inclusive number for Z+c in a given fiducial region. That could maybe be compared with full theoretical calculations (MCFM?) and not just generator predictions.

Summary

  • As it is now, it is mostly a recollection of what has been done. The conclusion that the results are in better agreement with MG5_aMC MLM is a bit weak. There is no comparison with NLO predictions that takes into account scale and pdf uncertainties on the predictions. We need to work on it, and I also think we should seriously consider adding the inclusive result.

Style/other comments

Abstract

  • "consistent with" -> "identified as"
  • add text between parentheses: "The measured [differential] cross sections [with respect to the transverse momentum of the Z boson and the tagged c jet] are compared"

  • Figure 1: I can not see it on my mac, either using Preview or Acrobat. Can everyone else see it? This figure is seen on mac/Acrobat, but when the pdf is attached to CADI, it is transformed into something like a barcode. It is not clear why this happens.

  1. L19: I suggest to change it to "In order to compare the data with different theory predictions, we unfold the measurement to the level of observables defined on stable particles ({\it generator level} in the following)"

  • L23 remove "estimates of"

  • L45 "Z+jets signal and background processes" (signal first!…)

  • L71 remove "in an event"

  • L106 I suggest "The jet energy resolution (JER) in simulation is degraded to match the resolution in the data: about 15% at 10 GeV …"

Answers (CMS Statistics Questionnaire)



  • you write that you use for PDF systematics some RMS of variations, is this really correct or are the variations being added in quadrature?
    • The PDF uncertainty was calculated as suggested at github.com/UHH2/UHH2/wiki/Recipe-for-PDF-uncertainties-(RunII,-25ns,-MiniAODv2). For each bin of the histogram there are 100 entries for the different pdf options plus one central value. The 100 entries are divided into those above and those below the central value. In each set the RMS (sqrt(sum of squares / n)) was calculated and used as the Up/Down pdf uncertainty. These calculated pdf uncertainties were then summed in quadrature with the other sources of uncertainty.
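The up/down RMS splitting described here can be sketched in a few lines (a minimal numpy illustration; `pdf_rms_uncertainty` is a hypothetical helper name, not from the analysis code):

```python
import numpy as np

def pdf_rms_uncertainty(central, replicas):
    """Split the replica values into those above and below the central
    value, then take the RMS (sqrt of the mean squared deviation) of
    each set as the Up/Down uncertainty, as described above."""
    replicas = np.asarray(replicas, dtype=float)
    above = replicas[replicas > central] - central
    below = central - replicas[replicas < central]
    up = float(np.sqrt(np.mean(above ** 2))) if above.size else 0.0
    down = float(np.sqrt(np.mean(below ** 2))) if below.size else 0.0
    return up, down
```

With ~100 replicas per bin this naturally gives an asymmetric Up/Down uncertainty when the replica distribution is skewed.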

  • Chapter 5: The treatment of the light jets background in the fit to the sv mass shown in Figure 2 is obscure. Is this component (fraction) also fitted or somehow just subtracted using the MC prediction? If it is subtracted then what is the uncertainty on this component? If it is fitted it might be hard to separate it in the fit from the Z+c component since the Msv spectra look not so different in Figure 2. Why is the light background not a systematic uncertainty source for the measurement?
    • The light component was subtracted from the data distribution, but its normalization is kept equal to 1 (k-factor for the light component = 1). It wasn't obtained from the fit due to the low statistics of the light component. A fitted normalization of the light component will be added at the next iteration.
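With the light normalization fixed to 1, the extraction of the charm and bottom scale factors reduces to a linear template fit. A minimal numpy sketch (the actual analysis fits the SV mass with RooFit; the function and template names here are illustrative):

```python
import numpy as np

def fit_heavy_flavour_sfs(data, c_tmpl, b_tmpl, light_tmpl):
    """Linear least-squares fit of data ~ SF_c*c + SF_b*b + 1.0*light,
    with the light normalization fixed to 1 as described above.
    Returns [SF_c, SF_b]."""
    # Subtract the fixed light template, then fit the two free scales.
    residual = np.asarray(data, float) - np.asarray(light_tmpl, float)
    templates = np.column_stack([c_tmpl, b_tmpl])
    sf, *_ = np.linalg.lstsq(templates, residual, rcond=None)
    return sf
```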

  • Table 1/2: what is the level of anti-correlation of the fitted SFc and SFb? Why don't we measure simultaneously Z+c and Z+b production?
    • After applying the c-tagger, the number of Z+b events is ~35% less than Z+c, while after applying b-taggers most of the events are Z+b events; Z+b can be studied with better precision with other taggers.

  • Table 1/2: Can we really unfold well the bins 30-35 GeV and 35-40 GeV? Perhaps better to merge these bins in order to have a bin width that is clearly larger than the jet pt resolution.
    • Will be merged at the next iteration.

  • Table 1/2: It would be good to see for each differential region the fits to Msv like in Figure 2, to see that the fits work fine in each of the different kinematic regions.
    • Post-fit Msv plots were added to the new AN version

  • Fig.3 is there a problem with the fit in the upper left panel near zero?
    • These are the Z and c-tagged jet pt distributions in figure 3; there was no fit of these variables for the different flavor components. The fits were done for the secondary vertex mass distributions. The disagreement between data and MC at small Z pt values is seen for inclusive Z and Z+jet without HF tagging, so it is probably not related to the HF normalization.

  • l.166 TUnfold is usually used with more bins at detector level than at unfolded level, and also with Tikhonov regularisation applied. Don't you use it this way?
    • There are two times more bins at detector level than at gen level. The decision on whether to apply regularisation was based on the condition number - the ratio of the largest and smallest eigenvalues of the response matrix. This number was ~200-300 for the different pt variables and channels. According to the recommendations on the twiki, this number is closer to ~10 (no regularisation needed) than to ~10^5 (regularisation required), so no regularisation was done.
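The condition-number criterion can be checked directly from the response matrix, e.g. with a short numpy sketch (here computed from singular values, which for the well-behaved matrices in question matches the quoted largest/smallest-eigenvalue ratio; the function name is illustrative):

```python
import numpy as np

def condition_number(response):
    """Ratio of the largest to smallest singular value of the response
    matrix, used above as the criterion for whether Tikhonov
    regularisation is needed (~10: no, ~10^5: yes)."""
    s = np.linalg.svd(np.asarray(response, float), compute_uv=False)
    return float(s[0] / s[-1])
```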

Answers (pre-approval)



fill the stat questionnaire Done (currently being processed by the statistics committee)

define a Journal Target JHEP

notify pub comm (Boaz) that Joel is serving as CCLE done

Update & upload new AN and paper drafts documentation with most recent status and fixes

fill the data tier survey in https://drive.google.com/open?id=1GE4UJ41RJ8gSQv1YUEG8r-51O20KkXXIqOXoi3aoj1I done

get in touch with Pietro Vischia to obtain GL for the k-factors fit with combine as detailed here https://hypernews.cern.ch/HyperNews/CMS/get/smp/1146.html we ask you to upload the fit setup to gitlab, while Pietro's review and GL can go in parallel with the ARC review. We don't use combine for the k-factors fit, since this tool doesn't work for our case. K-factors are calculated for different pt bins/MC weights/uncertainties, so there are hundreds of k-factor pairs. The combine tool is too slow to calculate that number of k-factors, so we use RooFit. It was shown here https://indico.cern.ch/event/787288/contributions/3272131/attachments/1778210/2891919/kfactors.pdf that the results from combine and RooFit are the same

What if the c-tagged jet is not the leading jet? It turned out there was a fraction of events (~20%) which contained a c-tagged jet that was not the leading jet. The selections were changed so that only c-tagged jets are considered, and among them the leading-pt jet is chosen. The result didn't change much, but the statistical errors became smaller.

Selection:

Do you exclude jets that overlap with an isolated lepton (deltaR cut of 0.4)? Yes
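The overlap removal can be illustrated with a small ΔR helper (a simplified sketch; jets and leptons are (eta, phi) tuples and all names are hypothetical, not from the analysis code):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Wrap the azimuthal difference into [-pi, pi] before combining.
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def remove_overlapping_jets(jets, leptons, dr_cut=0.4):
    """Keep only jets with dR >= dr_cut to every isolated lepton."""
    return [j for j in jets
            if all(delta_r(j[0], j[1], l[0], l[1]) >= dr_cut
                   for l in leptons)]
```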

Unfolding: why the delta_R(cjet-reco, cjet-gen) cut of slide 14? To make sure we unfold the same object.
Why is there a trend in the slide 15 bottom plot? The last slide, 27, helps to understand: the charm tagging efficiency vs pt has a flat ratio vs jet pt.
Do the acceptance numbers of slide 17 make sense? It is mostly the c-tagging that drives the numbers.
In slide 18, the closure test uses the same events that produce the unfolding matrix [clarified]


Answers (paper draft v1) Juan Pablo



have you applied the c-tagging SFs you mention on L123 in figure 2? I understand that this SF is applied to the MC before the unfolding procedure. Yes, the c-tagging/mistagging SFs are calculated separately for each event and the event weight is multiplied by them before filling any histogram.

L10: decays into neutrinos -> decays invisibly into neutrinos Fixed

L21 : There is no need to say that measuring the differential cross section is the main goal (the abstract is already there to mention it). So, I would change "The goal of this analysis is the measurement of the differential cross section of Z+c jet production as a function of pT of the Z boson and c jet. This is done in several steps by ..." --> "The measurement of the differential cross section of Z+c jet production as a function of pT of the Z boson and c jet is done in several steps." Fixed

L64 : ppinteractions -> pp interactions

L121 : I am fine with the efficiency you quote. In https://indico.cern.ch/event/690703/contributions/2837885/attachments/1579941/2496314/DeepCSV_CTagger_WP.pdf I see eff-c 19.3%, b-mis-id-rate 21.7%, light-mis-id-rate 0.5%, but when I compute the efficiency myself (AN-18-324 table 4, 3rd row from the bottom) I get about 30% more than the 19.3%. Again, I am fine with your numbers.

L127 : are neutrinos excluded in the gen level jets? If that is the case, may I suggest to mention it in the paper draft? Here is an example of the way I would mention it: "Generator level jets are built from all showered particles after fragmentation and hadronization (all stable particles except neutrinos) and clustered with the same algorithm that is used to reconstruct jets in data". I have to check this. In the analysis I don't check the overlap of gen jets with generator neutrinos, so if they are excluded, this is done by the clustering algorithm, not manually in the analysis. This should be checked before mentioning it in the text.

L 137: which corrects normalization of bottom -> which corrects for the normalization of the bottom Fixed

L 140 -> along with normalization of charm -> along with the normalization of the charm Fixed

L156 : last sentence is the same is in L133-134. I suggest not to repeat it. Fixed

L169: Efficiency of selections is taken into account by acceptance -> The efficiency of the selection is taken into account by the acceptance Fixed

L170: Nominator distributions is -> The nominator distributions corresponds to the Fixed: -> The numerator stands for the ...

L171: For denominator stands generator... -> The denominator corresponds to the generator... Fixed

L172: Fig 5 shows acceptance ... as a function of ...-> Fig 5 shows the acceptance ... as a function of the ... Fixed

L173 : Efficiency of c-tagging -> The efficiency of c-tagging Fixed

L177 : and then repeating the unfolding procedure, acceptance... -> and then repeating the unfolding procedure. The acceptance ... Fixed

L 191-196 sound quite general to me and I am not sure this is what we are looking for in this particular paragraph, but I understand this goes along Joel's comments/suggestions on https://twiki.cern.ch/twiki/bin/view/Sandbox/DifferentialZcJet (Joel's comment: "Described as it is done on the corresponding b-tagging twiki page: methods used for measuring SFs for different types of tag/mistag"). Maybe this part, which describes the methods for measuring SFs, should be moved to the object reconstruction and event selection section? There is a paragraph which describes the DeepCSV algorithm; there we can mention that there are scale factors which take into account efficiency etc. And then in the uncertainties section just mention that the efficiency scale factors can be varied within systematic uncertainties.

L 191: Depending of the type of jet ... -> [again I am not sure this is appropriate for the paper draft] Different measurements (each of them enriched in the particular flavor of interest) were performed to estimate the data/MC efficiency difference for each flavour of jet passing the c-tagging requirement: for b quarks a tag-and-probe technique was used on ttbar events, a W+jets sample was used for c quarks, and an inclusive jet measurement for light jets. Depending on the jet flavor, the corresponding tag/mistag scale factor was varied with respect to the nominal value within the recommended range given by each performance measurement.

L 212 missing table number Fixed

L 236 Obtained results -> The obtained results Fixed


Elisabetta



The abstract should be written in proper Latex (fix fb-1, mll). In the first line, would it be better "a Z boson and at least a jet..."? Fixed

swap references [3] and [4] in References (both in time and sqrt(s) it would fit better) Fixed

line 27: I would not mention here details on Convino, just replace the last line "and to compare with predictions from QCD". Fixed, added "...and to compare with predictions from different MC generators"

64: fix space in pp interactions Not sure what is wrong here; Joel added the special character \pp.

155: "overlapping": is there a DeltaR cut, or how is it done? Fixed: Jets overlapping with one of the two signal leptons from the Z boson within a cone $\Delta R < 0.4$ are not taken into account.

170: "The nominator distribution is the generator..." changed to: The numerator stands for the generator level Z boson or c-jet $p_T$ distribution ...

Table 4: too many digits, do you need the 3rd digit after the comma? Fixed

191: I do not see an "uncertainty" here, only a description of the method.

193: fix ttbar Fixed

196: also here I do not see an uncertainty, but just a description of the correction

204: how large are these uncertainties? Added the uncertainty values - 5% for electrons, and 2% and 1% for muons. Maybe we should add another table with an uncertainties summary, showing the max and min deviations up and down (previous version of the table)?

208: add a reference, why 10% Added a reference, just like in SMP-16-018 (https://arxiv.org/pdf/1303.6254.pdf)

212: fix Table number Fixed

219: add reference to Convino here Done

223-224: it still needs more physics and comparison. The PDF used in the MC should be quoted. Is there a problem to add MCFM with different PDFs, at least at parton level, like in SMP-19-004 (Duong's paper)? This can be done for LO MadGraph; the NLO sample contains only NNPDF2.3, while LO has NNPDF2.3, NNPDF3.0, CT10nlo, different flavor schemes, etc. But it has a somewhat unusual reweighting (reweighting strategy 3, http://home.thep.lu.se/~torbjorn/pythia82html/LesHouchesAccord.html). This will take more time to add accurately.

Somewhere in figure 6 there should be the kinematical cuts, but we can rediscuss this at the pre-approval, as everybody has different opinions on this. I still did not understand from the paper and from your explanation in the twiki to which gen jets you correct; for instance, do they have some kinematic cuts in eta or not? And the gen leptons, do they have eta cuts? Your acceptance around 20% makes me think that you have more kinematic cuts on both jets and leptons than what is written at lines 151-157. I think it should be clear, both in the pre-approval presentation and in the paper, what you correct to. Yes, the cut on eta for the gen jet wasn't mentioned in the draft; now fixed. The kinematic cuts for leptons and jets are close to those used for the detector level selection; the small acceptance is caused by the small fraction of c jets passing the tight c-tagging.

Figure 6: caption should be more extensive and explain better lines, uncertainties. kin cuts, etc.

ref. [10] still authors name written in different style. Fixed

fix Sj\"ostrand name in ref. [19] Fixed


Answers (paper draft v0) Elisabetta



Abstract: it has to be longer. I suggest that you start like the first paragraph that you have in the conclusion now, and you end saying that the resulting differential cross sections are compared to predictions from various Monte Carlo models. updated

page 1: something happened to Fig.1, last time I saw it it was ok. It compiled without problems; could be some temporary bug.

line 53: is alphas(mZ) really 0.130? Also what do I learn from the two matching scales of 19 GeV and 30 GeV? It seems that according to NNPDF 2.3 [21] the central value is alphas(mZ) = 0.119. Fixed

Section 5: I would leave lines 132-138 as they are now, but change "6 Background subtraction" into "5.1 Background subtraction" and "7 Unfolding procedure" into "5.2 Unfolding procedure". These 3 sections (5, 6 and 7) were changed a bit: chapter 5 was small and was merged into the introduction; background subtraction and unfolding are now chapters 5 and 6 respectively. In my opinion the background subtraction looks like a separate step, independent of the unfolding procedure, and thus is in a separate chapter. What do you think?

178: I think that there is some confusion between acceptance and efficiency. Acceptance for me would mean correct to the overall kinematic region, while efficiency is the correction inside the kinematic region, but it is a question of taste. I think you mean the correction to your kinematic region at gen level. Anyway:

1) here it is signal at reco/signal at gen

2) In the note at lines 250, 251, it is: signal reco+gen/signal gen

3) and in Figure 34 of the note again another definition which I do not quite understand

So what was done exactly? Independently of how you call it, it should be correct. So the unfolding takes into account resolution effects from one bin to another one and so migrations. But still I would say that rec/gen is still the correct definition, or not?

There are 4 possible pt distributions: 1) signal gen pt, without any reco level requirements; 2) signal gen pt for events matched with corresponding objects at reco level that pass our reco level selection criteria; 3) reco level pt of the Z or c-tagged jet passing the reco level criteria, without any gen level requirements; 4) reco level pt of the Z or c-tagged jet for events that are not matched with signal events at gen level. The fraction 2)/1) is defined as the acceptance. The fraction 4)/3) is defined as the background. Reco level selected events are multiplied by (1 - background), then transformed with the unfolding using the response matrix, then divided by the acceptance.
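The chain described here - background subtraction, unfolding, acceptance correction - can be written schematically as a numpy toy (plain matrix inversion stands in for TUnfold; all names and numbers are illustrative):

```python
import numpy as np

def correct_and_unfold(n_reco, f_bkg, response, acceptance):
    """Sketch of the chain described above: scale the reco-level counts
    by (1 - background fraction), unfold with the response matrix
    (reco = R @ gen, so gen = solve(R, reco); TUnfold is replaced by a
    direct solve here), then divide by the per-bin acceptance."""
    signal_reco = np.asarray(n_reco, float) * (1.0 - np.asarray(f_bkg, float))
    unfolded = np.linalg.solve(np.asarray(response, float), signal_reco)
    return unfolded / np.asarray(acceptance, float)
```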

Figure 5: I think this is all simulation, so it should be marked as CMS simulation Fixed

184-185: if I understood from the note, you not only repeat the unfolding, but before that also the extraction of the scale factors for charm and beauty, and this should be written. Fixed

Table 5: I am also confused by table 5, I thought I understood it but now I am not sure. Could it be that you vary something up or down and the numbers indicate the range of variations in the bins of that variable? Whatever it is, it should be clarified and maybe reformatted, for instance exchanging columns with rows and putting as rows i.e. QCD down variation, QCD up etc.... Changed to the integral difference from the central value.

216: here N_i is the number of corrected events, but where is the acceptance correction here? Why not define an A_i symbol and add it in the formula? Definition of N_i fixed: it is now the number of events in bin i of the unfolded distribution, which already takes the acceptance into account.

219: "The results are extracted separately for the muon and electron channels..." maybe add that they are compatible? "... and combined by a fit using the Convino [33] tool..., taking into account the statistical and correlated/uncorrelated systematic uncertainties..."

Not being familiar with the Convino tool, it would be good to have some details in the note. For instance some uncertainties are correlated (i.e. c-jet energy scale), some not (leptons, ...), and I guess that this has been taken into account.

225: Of course more discussion is needed to complete this part.

227: "production" misspelled Fixed

237: It would be good to have a final sentence that these data will be useful to constrain the charm PDFs or something like this. Added a general sentence that the existing constraints can be improved.

Acknowledgements missing

References: [9] some problems in the names, it is not clear why the first names appear. That's how the authors are listed on arXiv: https://arxiv.org/abs/1106.0522, but it can be changed if that is better for the paper style.


Joel



  • (l.119) I have guessed a bit here: still needs some work e.g. do you actually check the quark originator or just go by hadron flavour? Yes, we use the hadron flavour definition; that's what was recommended at the SMP VJ meetings.
- (l.131) Could this go as a paragraph in the introduction? That's a good idea; this section is very small, so I put it at the end of the introduction section.

- (l.140) It is not clear what you do with $SF_c$. Is it just for display purposes e.g. Fig 3? The charm SF is used to show the agreement between data and MC after applying it, along with SF_b, to the corresponding flavor components. For the unfolding, only SF_b is used to normalize the bottom component.

(l.162) I think you probably need to mention the matching here. Fixed.

- (l.163) What happens if you have multiple c-tagged jets? We take into account only the leading central c-tagged jet at detector level, and only the leading central c jet at generator level.

- (l.170) Is this (background) calculated from data or just from simulation? The background definition uses generator level information, so it can be calculated only from simulation (MC).

(l.171) The next two paragraphs still need work.

(l.176) Is the efficiency incorporated into the response matrix? Not in our case: the response matrix shows how a spectrum is changed by the detector resolution; it takes one distribution and changes it into another, keeping the integral unchanged. The efficiencies are taken into account in the acceptance.

- (l.177) Do we really need to define both acceptance and efficiency? The acceptance is the part which takes into account the different efficiencies (selection, c-tagging, etc.).

(l.189) Should mention how much the scales are varied. Fixed: mu_r and mu_f are varied within 0.5 - 2.

(l.190) Please complete - the important thing is what prescription is used, not the technical detail that this is done via weights. Described the way it is done in other papers: The PDFs are determined using data from multiple experiments. The PDFs therefore have uncertainties from the experimental measurements, modeling, and parameterization assumptions. The resulting uncertainty is calculated according to the prescription of CT14 at the 90% confidence level and then scaled to the 68.3% confidence level.

(l.193) Again, needs a description of how the values were estimated, not the technical detail of weighting. Described as it is done on the corresponding b-tagging twiki page: the methods used for measuring SFs for the different types of tag/mistag.

(l.212) I have to admit I can't work out what is going on with the table - perhaps a different format is needed? A new table was added, as suggested by Elisabetta and Juan Pablo; it shows the integral deviation from the central value in %.

- (l.225) Needs a discussion of the results/comparisons

We're still waiting for the Sherpa sample to be added; maybe we should add this discussion once all 3 signal models are there.

- (l.232) Isn't there a different cut on the lower lepton pt? Yes, that's a mistake; the subleading lepton pt cut is > 10 GeV.


Answers (paper draft) Elisabetta



Title and abstract are missing

Introduction: in my opinion it should be structured in 3 paragraphs: (1) why Z+c is interesting, you have it already; (2) previous measurements; (3) this measurement, what is new. Also I am not sure that you need all the details of all kinematic cuts at this point. Fixed

Fig. 1: there is a strange gray background. I like this diagram better when it is drawn more "rectangular". Fixed

Detector: did you use the standard description, as in the guidelines? I took the detector description from Duong's paper, and it seems that it contains the standard sentences from https://twiki.cern.ch/twiki/bin/viewauth/CMS/Internal/PubDetector

Lines 108-109: when you talk about c-/b-tagging, this part should be expanded and the "tight" working point should be defined; usually this is given in terms of the fake rate. Fixed

113-121 I would move this part on the generator level later, when you talk about unfolding. Please also specify that the leptons are dressed, and whether the jets are at parton or particle level. Fixed. It was specified that the generator lepton pt was corrected to take into account radiated photons in a cone of radius dR = ...

I would make 5.1 its own section and describe how you extract the c component in more detail, i.e. from a fit to the M_SV distribution, which btw also has to be defined precisely. The name k-factor at line 134 is confusing and probably you do not need it; you can avoid it or call it by another name. Fixed. K-factors were replaced by SF_c and SF_b.

Fig. 2: you show the pre-fit distribution I guess. Why not the one after the fit? Also: use fewer bins; use the CMS style for figures (see guidelines): all labels must be bigger, CMS is missing on the plot, same for the lumi and sqrt(s), i.e. follow the guidelines. Fixed

Then after explaining the fit to extract the c-jet contribution, you can go back to the beginning of Section 5 and explain the cross section that you want to extract, lines 123-126, and that you do everything in bins of ptZ and pt-cjet. This is now explained twice: there is a short chapter which gives an overview of the analysis strategy, and the following chapters describe the process of subtracting backgrounds, unfolding and the measurement of the cross-section using the unfolded distribution.

Lines 126-129 I would move them to a new section and there also explain the gen cuts you have now at lines 113-120. Fixed

In summary: - section: first explain the fit - section: explain which cross section you measure in bins, eventually other backgrounds like top etc. - section: then section on unfolding to gen level and explain what gen level is Fixed

Your captions are also all not CMS style, there should be only 1 caption explaining all (a) (b) (c)... Fixed

Figure 3: do you need it? Can these numbers and uncertainties be in a table? Plot removed, k-factors presented in a table

Figure 4: too many bins, please reduce - use CMS style etc. Fixed

Figure 5: do you need it? Why not put the numbers in a table? It is also clear that the background shoots up at low ptZ; it needs some explanation in the text. Figure 6: do you need it, or can the numbers be in a table, i.e. a combined table with k-factors, background and acceptance? The shape of the acceptance needs an explanation in the text. There are too many bins for these plots; showing them in a plot is more compact than a table in this case.

Section 5.3, make it its own section. Do not make a subsubsection for each systematic, just paragraphs. For the c-tagging, efficiency scale factors are mentioned, but they were not mentioned before; this must first be mentioned in the selection. The same for the lepton and b-jet scale factors, and it must also be written how they are determined (you can find it in many other papers). The ttbar background is mentioned here for the first time; it should also appear earlier. Fixed

Result should be its own section. Formula (1) should have N(p_tbin) and not dN/dpt in the numerator, and the whole formula could be written better. What about distributions also in eta, no intention to produce them? Fixed

Physics is missing! Comparison to MC with a couple of PDFs, with details on them, especially on the HF scheme. PDF uncertainties will be added to the two MadGraph models. The Sherpa event generator is to be added. Figure 7: again not in CMS style, it has to be redone. In addition I find it confusing that in the ratio the dots indicate MC. Fixed


Juan Pablo



Fig 6. I think you will be asked to add some uncertainty to the predictions (typical are statistical, PDF and scale variations [but Z+c at 8 TeV has no scale variations in the LO calculations]), so start working on it (whenever you have spare time ... not a priority for now ... the priority is just adding/modifying the text). In progress

Fig. 6 again: Do you guys have an idea why the LO gives a better normalization (maybe not in shape) while we see that NLO always gives better performance in our Z+jets (you do not have to know the answer of course, just an open question)? Is this something we do not understand at GEN level, maybe gluon-splitting related? In progress

L16. Jets with charm quark content are identified using (standard?) charm tagging methods developed in CMS [reference] where the presence of c quarks is inferred from the characteristics of jets (denoted as c jets) that originate from their hadronization products and subsequent decays. fixed

L 50. This generator calculates LO matrix elements for five processes: pp -> Z + Njets with N = 0...4. Fixed

Section 3. Forgot to mention that the predictions use PYTHIA for the hadronization. Fixed

L113 : may be add a reference ( see reference 37 in Dan's paper above) : CMS Collaboration, “Measurement of the Inclusive W and Z Production Cross Sections in pp Collisions at $\sqrt(s)$ = 7 TeV ”, JHEP 10 (2011) 132, doi:10.1007/JHEP10(2011)132, arXiv:1107.4789. Fixed

L116: "algorithm [25], using tight working point, which ... passing this criteria". Working point is jargon. Remove and instead put -> algorithm [25]. The threshold applied to discriminate c jets from b jets and light jets gives a c tagging efficiency of about 30% and a misidentification probability of 1.2% for light jets and 20% for b jets. Fixed

L128. Please mention that the generator level leptons are dressed. Fixed

L 218 :feducial Fixed

L219: comment a bit about the agreement/disagreement seen in shape/normalization with the different predictions. I think NLO is better in shape than LO but LO is better in normalization than NLO, right? In progress. There will also be the Sherpa event generator; once all 3 generators are compared, we'll add a conclusion on which one describes the data better. We're also checking the predictions for the number of jets at gen level for different flavors to find out what could cause the difference. Will be added to the AN soon.

Did you evaluate the LO cross section at next-to-next-to-leading order (NNLO) computed with FEWZ[*]? If so, mention it. The NNLO cross-section value was used for both generators (5765 pb). Fixed.

L17-19: you define your fiducial region here; can Z->ee and Z->mumu be combined when having different pt_lepton cuts? I guess so, because in L120 the fiducial pt cut is 26 GeV. The same cuts were used for leptons at generator level in both channels.

L111, 112: different properties of the jet, such as secondary vertex and tracks -> put here that it accounts for the displacement and long lifetime of the particles w.r.t. light jets, but not as long as for b (I might come with a suggestion if I do not forget about it)

Fig 4. The binning is too fine here. Use fewer bins (in fact I would just use the same number of bins as in fig. 3). Fixed

L161: at detector level -> I would say at reconstruction level (sometimes I use detector level to refer to gen level, but maybe it is just me) Fixed

L195: this is the first time you talk about lepton scale factors (mention in section 4 what they are: lepton identification, isolation, trigger etc.; mention how they are computed: tag and probe with the Z; mention how they are used in your analysis: via weights; and add a reference etc.) Fixed

L208: channels were combined by a fit -> which fit? I guess Convino, as you mention in L129. Can you put Convino as the reference there in L208? Fixed

L209: taking into account statistical and theoretical uncertainties -> you should also consider systematic uncertainties in the combination (as recommended by the stats. committee if I am not wrong). Did you get in contact with the stats. committee already? In other words, did you fill the stats. questionnaire? Ask them in case you have doubts. We talked about this in your last presentation. The stats. committee recommended using Convino, which takes into account both stat and syst uncertainties. The stats questionnaire will be filled soon.

Fig 7.: I do not like the fact that your k-factor binning is not the same as the final binning. The bottom k-factor does not seem to be flat with pt(c-jet) in figure 3 (b, top plot). The k-factors can't have as fine a binning as the pt distributions, because fitting the SVM for the k-factors requires more statistics.


Answers (AN) Elisabetta



At line 55 there is a cut of 20 GeV on the jets, while it is 40 GeV at line 49; any reason for that? 20 GeV is the threshold for muons that are checked for presence inside the jet.

For the cuts on lines 66-67 on the discriminants, is there any study or justification of how they were chosen? These values were taken from the BTag POG page: https://twiki.cern.ch/twiki/bin/viewauth/CMS/BtagRecommendation80XReReco

I am not sure that I understand the data-MC comparison in Fig. 9b and Fig. 10b. In Fig. 9b the data agree well with the overall sum of the MCs; in Fig. 10b they agree less, and the figure caption does not help. Is Fig. 10b after applying the kMC factors? Because the agreement looks worse. In fact 9b and 10b are two different plots: 9b compares all data with MC, while 10b compares (data - top/dibosons) with Drell-Yan. However, the plots in Fig. 9 were produced with the wrong Drell-Yan normalization: for those two figures I used the NNLO Drell-Yan cross-section values (4578 pb, 851 pb and 335 pb for DY + 0, 1 and 2 jets; I was trying to reproduce Duong's results with NNLO and forgot to change the cross sections back to NLO when making those plots), while for the rest of the plots in the AN the standard NLO values were used (4754 pb, 888 pb and 348 pb for DY + 0, 1 and 2 jets). I will replace the plots in Fig. 9 in the new version. Two versions of Ystar, with no tag and with c-tag, using NLO and NNLO cross sections, are in the attachments.

In general, the method to extract the kMC factors is based only on numbers of events. It would be much better to take a distribution that is sensitive to c- and b-tagging and fit it as the sum of the 3 components to extract these kMC factors. I would recommend trying it; it should not be very complicated. We used RooFit to obtain the kMC factors from a shape fit (see kFactorsFit.c in the attachments). It was done as a simultaneous fit of two distributions, Ystar with b- and c-tags, with the kMC factor for the light component fixed to 1. As a result, the kMC factors for the b and c components came out to 0.78 and 1.03 respectively, which is consistent with the results obtained by solving the equations with numbers of events.
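For reference, the equation-solving cross-check mentioned above reduces, with k_light fixed to 1, to a 2x2 linear system for (k_c, k_b) built from the background-subtracted yields in the c-tagged and b-tagged samples. A minimal sketch in plain Python; the yields below are purely illustrative placeholders, not the values from the AN:

```python
# Sketch of the equation-solving method for the kMC factors (illustrative yields).
# With k_light fixed to 1, the background-subtracted data yields in the c-tagged
# and b-tagged samples give two linear equations in (k_c, k_b).

def solve_k_factors(data_minus_bkg, dy_c, dy_b, dy_light):
    """Solve the 2x2 system via Cramer's rule.

    Each argument is a (c_tagged, b_tagged) pair of event yields;
    the light component enters with k_light = 1 and is subtracted.
    """
    b1 = data_minus_bkg[0] - dy_light[0]   # c-tagged sample
    b2 = data_minus_bkg[1] - dy_light[1]   # b-tagged sample
    det = dy_c[0] * dy_b[1] - dy_b[0] * dy_c[1]
    k_c = (b1 * dy_b[1] - dy_b[0] * b2) / det
    k_b = (dy_c[0] * b2 - b1 * dy_c[1]) / det
    return k_c, k_b

# Hypothetical yields, chosen so the solution lands near the fitted values above.
k_c, k_b = solve_k_factors(
    data_minus_bkg=(12360.0, 11205.0),
    dy_c=(10000.0, 1500.0),    # DY+c yields in (c-tag, b-tag) samples
    dy_b=(2000.0, 12000.0),    # DY+b yields
    dy_light=(500.0, 300.0),   # DY+light yields, entering with k_light = 1
)
```

With these invented inputs the system solves to k_c = 1.03 and k_b = 0.78, i.e. the same ballpark as the shape fit.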

Figure 19: it would be good to understand the shape of the acceptance cut by cut. At low pt it is probably due to the lepton pt cut; the drop at higher pt may be due to the c-tagging, maybe you could try to understand it. The shape seems typical of c/b-tag efficiency; a similar shape can be found here, slide 22: https://indico.cern.ch/event/607607/contributions/2449091/attachments/1402391/2140959/kskovpenPPD20170126.pdf

In the closure test, are the same events used, or which events are used? Yes, the same sample was used to calculate the response matrix, background and acceptance, and in the closure test. The result of applying the unfolding procedure to another sample in the closure test will a priori not coincide with the generator-level distribution of the original sample, so it would be impossible to say whether the difference is caused only by statistics or by errors in the unfolding procedure.

I am surprised that the pileup has such a large effect on the last 2 bins for the c-jets, unless it is just statistics. This effect is seen for most of the uncertainties, not only pileup, because of the limited statistics.

What would happen if, in the first formula on page 14, you took N_data-Top/Dibosons,light-tagged = k_light*N_DY,light,light-tagged + k_c*N_DY,c,light-tagged + k_b*N_DY,b,light-tagged? Light tagging requires anti-b and anti-c tagging. However, there are no anti-tag SFs, so the numbers of events in this modified equation could be incorrect, and the result of solving the system would then not be correct either.

It would be good to have comparisons before and after the k-factors, similar to Figs. 14-16 of the note, for all possible cases; at the moment, for instance, I do not understand what Fig. 19 is: are the k-factors applied? These figures show the SVM before applying the k-factors; additional plots and descriptions have been added to the AN.

It is indeed not good that, using the SV mass, the k-factors come out so different with and without SV jets. Concerning the tagger, did you get any feedback from BTV on the best tagger to use for c-jets? There is feedback from Juan Pablo, who pointed out that there are problems with the modeling of SV reconstruction; he also has no objections to method 1, the equation-solving method.

Make the selection similar to Duong's selection and compare to their k-factors. I used almost the same selections as Duong used here https://indico.cern.ch/event/748455/contributions/3125810/attachments/1716338/2769089/zhf_SMP_Vjet_sept_14_2018_v1.pdf (different triggers and muon ID and isolation), with the flavor defined by the hadron flavor of the leading medium CSVv2-tagged jet. The results of the fit from the combine tool (combine -M MaxLikelihoodFit) are: charm k-factor = 0.777 ± 0.023 and bottom k-factor = 0.839 ± 0.007, while Duong's results for the muon channel are charm k-factor = 0.81 ± 0.02 ± 0.06 and bottom k-factor = 0.88 ± 0.01 ± 0.02, so the results are almost consistent within the errors.

At line 209 you write that the light fraction has its normalization fixed to 1; I can't recall what Juan Pablo does. What happens if you leave it free: is it not possible to constrain it, maybe due to the different shape? Do you have plots in the note showing results of the fit? The fit result for the light component is close to 1 if it is left free, so this component was kept fixed. Other analyses do the same; as I understand, Duong also extracts k-factors only for charm and bottom. Figures 27 and 28 show the agreement between data and MC after applying these k-factors, and Figs. 25 and 26 show the measured k-factors as functions of the pt of the Z boson or of the c-tagged jet.

I understand from the answer to Juan Pablo that you do not have a pt cut on the generated leptons. First of all, I guess you are now correcting back to dressed leptons. Then it is always better to correct back to a fiducial region close to the experimental one, in order not to have huge unknown acceptance corrections. It would be good to add the exact fiducial region to the note; in principle the genjets and genleptons should have kinematic pt and eta cuts similar to the reco ones, plus the gen-level Mll invariant-mass cut is needed. We have added lepton pt and eta cuts close to those used at reco level: leading pt > 26 GeV and subleading pt > 10 GeV. The new AN version states that we measure the fiducial cross section.

It would be good to understand some of your systematics, such as:
- Fig. 44 (c): why is the c-jet pt uncertainty high at high pt, and only in the muon channel? It seems that in the last bins there may be large statistical fluctuations, so if there are few events, the change of one parameter can lead to a large change in the distribution. The plots data_mu and data_el in the attachments show how these fluctuations appear in the ratio of the varied distribution to the central one.
- Fig. 45 (b): why is pileup high only in the electron channel? To be understood.
- Figs. 46 (b) and (d): what is happening in the eID that is so bad compared to the muon ID? It must be an error: for electron pt ~ 65 and eta ~ -1.5 the efficiency and its error (GetBinError) are equal to 1, so the weight changes by 100%. Will ask the electron POG. Update: for electrons the SFs depend not on the electron eta but on the electron supercluster eta, so that bin should be skipped.


Juan Pablo



About the c-tag/mistag efficiency: you explain in Eq. 5 how you apply the weight for the SFc as recommended. Let me explain what I do in my code. Let's imagine that the SFc for your c-tagger-T is 0.92 +/- 0.06 +/- 0.01 (overall; there is then the "file" in bins of jet pt, but for simplicity let's take the single number here). What I do is weightMC *= 0.92; this lowers the MC and improves my data/MC agreement. Could you please check that this is equivalent to the procedure you describe and follow? Eq. 5 looks a bit complicated because there are two weights that improve the data/MC agreement: one improves the agreement for the B-F samples and the other takes into account the difference between data and MC in G-H. Since there is only one set of MC samples, the weight applied to weightMC is built from these two weights in proportion to the luminosity of each subset; that is what Eq. 5 expresses. In the case of a tag (when the c-jet passes the c-tag) there is no such partition, so the MC weight is simply multiplied by the corresponding tag-efficiency SF.
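The luminosity-weighted combination described in this answer can be illustrated with a minimal sketch; the SF values are placeholders, and the 2016 era luminosities used here are approximate:

```python
# Combine per-era scale factors in proportion to integrated luminosity,
# as the Eq. 5 weight does (approximate 2016 luminosities, in /fb).
LUMI_BF = 19.7   # Runs B-F
LUMI_GH = 16.1   # Runs G-H

def combined_sf(sf_bf, sf_gh):
    """Luminosity-weighted average of the two per-era scale factors."""
    return (LUMI_BF * sf_bf + LUMI_GH * sf_gh) / (LUMI_BF + LUMI_GH)

# Placeholder per-era SFs; the combined value lies between them.
weight_mc_factor = combined_sf(0.92, 0.95)
```

When the jet actually passes the c-tag there is no era partition, and the event weight is simply multiplied by the single tag-efficiency SF, as stated above.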

You use "HLT Ele27 WPTight Gsf" for Z->ee. When I use W->e for charm-tagging purposes I go up to "HLT Ele32", because I do not have to deal with prescales (maybe you do not have to either at "HLT Ele27"; did you cross-check?). According to https://www.epj-conferences.org/articles/epjconf/pdf/2018/17/epjconf_icnfp2018_02037.pdf HLT Ele27 is unprescaled; as I understand it, that means the event is recorded every time the trigger fires, so no SF needs to be applied to account for a reduced event-recording rate.

Your offline cut is 28 GeV. The usual practice is to leave 2 GeV to make sure you are far from the trigger turn-on. Beware, you might be asked during the publication process to move your offline cut to the standard "trigger_cut + 2" GeV. The electron part of the analysis will be redone with a 29 GeV cut, to follow the usual way of selecting electrons with this trigger.

L120: "signal muons or muons, which ...". Regarding the purpose and details of the muon-jet cleaning procedure: is it to remove jets within delta_R(jet, tight-ISO muon) < 0.4? That is fine, but just to make sure it is not removing jets within delta_R(jet, tight-NONISO muon) < 0.4. If you also remove NONISO, it is up to you, but you should already know that in the range 15 GeV < pt_NONISO_muon < 25 GeV you have a chunk of signal. Nothing to worry about, though.

We remove only jets that overlap with isolated muons; in this case we do not remove signal events. Can both results be combined even when having different pt cuts? Which is your fiducial region (I mean, is your cross section defined for leptons with pt > XX GeV or for a Z with pt > XX GeV)? I must have missed it.

We have switched to a new signal definition, which includes pt and eta cuts for the leptons (leading pt > 26 GeV and subleading pt > 10 GeV) to match the reco-level selections. Thus we measure the fiducial cross section of the process.


Isabel



Do you apply any matching between the trigger object (the one that fired the single-muon trigger) and the reconstructed muons? No, there is no matching between the trigger object and the muons; to take into account the efficiency for two muons, the combinatorial formula was used.
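The combinatorial formula referred to here is the standard one for an event with two trigger-capable muons: the event fires if at least one leg does. A minimal sketch, with placeholder per-leg efficiencies:

```python
# Event-level trigger efficiency for two muons: the event is triggered
# if at least one muon fires the single-muon trigger.
def event_trigger_eff(eff1, eff2):
    return 1.0 - (1.0 - eff1) * (1.0 - eff2)

# Event-level data/MC scale factor built from per-leg efficiencies
# (the four numbers below are placeholders, not measured values).
sf_event = event_trigger_eff(0.93, 0.90) / event_trigger_eff(0.95, 0.92)
```

Built this way, the event weight needs no trigger-object matching: only the two per-muon efficiencies enter.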

What is your definition of a c (b) jet at generator level (L73)? We use the hadron flavor for gen jets, computed with the same algorithm used for the reco-level jets. In the .py configuration:

from PhysicsTools.JetMCAlgos.AK4PFJetsMCFlavourInfos_cfi import ak4JetFlavourInfos
process.genJetFlavourInfos = ak4JetFlavourInfos.clone(
    jets = cms.InputTag("ak4GenJets")
)

from PhysicsTools.JetMCAlgos.GenHFHadronMatcher_cff import matchGenBHadron
process.matchGenBHadron = matchGenBHadron.clone(
    genParticles = cms.InputTag("ak4GenJets"),
    jetFlavourInfos = "genJetFlavourInfos"
)

from PhysicsTools.JetMCAlgos.GenHFHadronMatcher_cff import matchGenCHadron
process.matchGenCHadron = matchGenCHadron.clone(
    genParticles = cms.InputTag("ak4GenJets"),
    jetFlavourInfos = "genJetFlavourInfos"
)

and inside the .cc file:

(*genJetFlavourInfos)[genjetref->refAt(ijet)].getHadronFlavour()

In the case of b(c)-tagging you do not correct data and MC separately but keep the analysis at the (let's say) uncorrected data level and apply the corresponding b(c)-tagging SF. I find this treatment quite asymmetric; I would treat lepton and b(c)-tagging efficiencies on the same footing. The muon ID, isolation and trigger efficiencies depend on the data-taking period, so the data were reweighted with respect to this dependence. The c- and b-tag match/mismatch rates also depend on the data samples; however, it is impossible to calculate these efficiencies separately for data and MC (the hadron jet flavor cannot be defined for data), so the scale factors for MC were composed of the two scale factors corresponding to the two sets of data samples.

Are the b(c)-tagging SFs applied in Figs. 4 to 8? Specify it in the text/caption. Yes, the c-tag/mistag SFs are taken into account in these plots; this will be specified in the text.

Maybe you can test the ttbar MC description with a control sample in the e-mu channel. We did not save muons in the tuples, so this cannot be done soon.

Can you describe in detail how you are treating the systematic uncertainties in the c(b)-tagging (light-mistagging) scale factors? You are following the recommendations of the b-tagging group, but it would be good to have it explained here as well. How do you treat correlations among the different SFs? All necessary formulas and conditions for pt, eta and discriminator can be found here: https://twiki.cern.ch/twiki/bin/viewauth/CMS/BtagRecommendation80XReReco. The systematics are taken into account by changing all formulas used in the calculation of the tag/mistag SFs to the formulas corresponding to the up/down uncertainties. For example, to get the distributions with the SF uncertainty up, the weight for an event with a c-jet is calculated according to the formula from https://twiki.cern.ch/twiki/pub/CMS/BtagRecommendation80XReReco/ctagger_Moriond17_B_H.csv with the "comb" measurement type, tight working point, and the formula selected according to the pt of the c-jet. We do not take into account correlations between the different SFs. Update: I found an error in my code: the scale factors for c-mistag for b-jets were equal to 0 in some cases; a detailed description of how the SFs are calculated has been added to the AN.
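The up/down variation described above amounts to swapping the whole pt-dependent SF formula, not shifting a single number. A hypothetical sketch; the coefficients below are invented for illustration and are not the parametrizations from the POG csv file:

```python
# Swap the central SF formula for its up/down variant, as done for the
# c-tag SF systematics. Coefficients are hypothetical, not the csv values.
def c_tag_sf(pt, variation="central"):
    sf_central = 0.90 + 1.0e-4 * pt   # invented central parametrization
    unc = 0.03 + 5.0e-5 * pt          # invented pt-dependent uncertainty
    if variation == "up":
        return sf_central + unc
    if variation == "down":
        return sf_central - unc
    return sf_central
```

An event weight built with variation="up" everywhere then yields the "SF up" distribution, and likewise for "down".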

Fig. 20: is this behavior expected, the c-tagging efficiency increasing up to ~100 GeV and then decreasing again? I found some plots of the c/b-tag efficiency from the BTag POG here (slide 22): https://indico.cern.ch/event/607607/contributions/2449091/attachments/1402391/2140959/kskovpenPPD20170126.pdf . It seems this shape is typical for heavy-flavor tags.

Fig. 19 (acceptance) has a funny shape; can you give more details of which cut(s) are most relevant in the different pT regions? This shape is similar to the shape of the c-tag efficiency, so it seems that the largest contribution is from the c-tagging.

Same for Fig. 18. I am a bit surprised by the first point in the left plot. Does it come from reco dileptons unmatched to gen dileptons or from reco c-jets unmatched to gen c-jets? May I assume a ~100% correlation between Fig. 18 left and right? This shape comes from the pt migration of the Z / c-jet, so it is expected for the pt of any object, regardless of the correlation.

As already suggested at the SMP-COM meeting, the sensitivity to the PDFs should be assessed. Probably a study similar to [1] can be tried; this could also help to define the optimal binning in Yb and Ystar. We can try this study. The current Yb and Ystar binning is optimal for the differential cross section as a function of Yb and Ystar, since the partition was chosen so that the numbers of events are of the same order and the statistical errors are similar in the different bins.


Paolo



I am still a bit unhappy that you use Sep2016 and prompt reco. Sorry, can you remind me here again of the details of what prevents you from using a more recent reprocessing of the data? There are two main reasons why we use the 23Sep2016 data: first, the official jet energy corrections are for 23Sep2016, which JetMET confirmed is fine for this analysis; second, the WPs and SFs for the b- and c-tagging were also derived using the 23Sep2016 data.

Why did you choose the herwigpp PS for the ST t-channel and pythia8 PS for all other samples? I could not find a pythia8 sample for ST t-channel: https://cmsweb.cern.ch/das/request?view=list&limit=50&instance=prod%2Fglobal&input=%2FST_t-channel_top_4f_leptonDecays_13TeV-powheg-pythia8%2F*%2FAODSIM ; only the herwigpp version exists. The event yield from ST t-channel (as can be seen from the event-yield tables) is small, so this difference can be neglected.

If the event has a c-flavour genjet with pT > 40 GeV but pt(Z) < 40 GeV, is the event classified as Z+light, or as a background event? The pt > 40 GeV cut is applied to both the Z and the jet, so events where either the Z or the jet has pt < 40 GeV are not taken into account.

Is N the total number of MC events in the sample? It should be total N(positive) - total N(negative) to rescale correctly to the luminosity. Yes, the number of events for the rescaling is calculated as the number of positive-weight events minus the number of negative-weight events.
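The effective event count for samples with negative generator weights can be sketched as follows (the cross section, luminosity and weights below are placeholders):

```python
# Rescale MC to luminosity using the effective number of generated events,
# N(positive-weight) - N(negative-weight), as done for NLO samples.
def effective_events(gen_weights):
    return sum(1 if w > 0 else -1 for w in gen_weights)

def lumi_weight(xsec_pb, lumi_pb_inv, gen_weights):
    return xsec_pb * lumi_pb_inv / effective_events(gen_weights)

# Four events, one with a negative weight: effective N = 3 - 1 = 2.
w = lumi_weight(xsec_pb=888.0, lumi_pb_inv=35900.0,
                gen_weights=[1.2, -0.8, 1.0, 0.9])
```

Using the raw event count instead of the positive-minus-negative count would overestimate N and so underestimate the per-event weight.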

Fig. 11: can you comment on the shape differences shown in some of these plots? The difference between data and MC after the cuts on the b/c-tag discriminators is taken into account by applying SFs. But the SFs are calculated for fixed parameters / WPs, so in this case no SFs are applied, since the discriminator distribution itself does not correspond to any WP.

Why DeltaR < 0.5 and not 0.4, as is customary now with the Run 2 0.4 jets? This parameter will be changed to 0.4 in the next AN version (a remnant of another analysis).

62-68: explain how you classify the selected events into Z+b, Z+c and Z+light. Here you only specify the c-tagging, but I think you first apply the b-tagging criteria to classify Z+b events. Events are classified according to the hadron flavour of the central jet. There can be only one jet tag at a time; the c-tagging is not applied after the b-tagging.

If the event has a c-flavour genjet with pT > 40 GeV but pt(Z) < 40 GeV, is the event classified as Z+light, or as a background event? In this case the event is not taken into account at generator level. If there is a Z+c-jet at reco level but either pt(gen Z) < 40 GeV or pt(gen jet) < 40 GeV, the event goes to the background. This can be seen in the background plots (Fig. 18 in AN v6): events with the Z/jet pt close to the threshold are above the threshold at reco level but do not exceed it at gen level.
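The gen-level categorization described in this answer can be summarized in a short sketch (the thresholds come from the text; the function and category names are illustrative):

```python
# Gen-level categorization for the unfolding: a reco-selected Z+c event is
# signal only if both the gen Z and the gen c-jet pass the 40 GeV cut;
# otherwise it is treated as background (migration from below threshold).
PT_CUT = 40.0

def categorize(reco_selected, pt_gen_z, pt_gen_jet):
    if not reco_selected:
        return "not counted"
    if pt_gen_z > PT_CUT and pt_gen_jet > PT_CUT:
        return "signal"
    return "background"
```

Events just below threshold at gen level but above it at reco level thus populate the background, which is the rise near threshold visible in Fig. 18.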


-- AntonStepennov - 2018-10-12

Topic attachments
Attachment            Size    Date        Who             Comment
bottomLineTest.pdf    13.2 K  2020-01-29  AntonStepennov  BottomLine test for unfolding
data_el.pdf           13.7 K  2019-10-16  AntonStepennov
data_mu.pdf           13.6 K  2019-10-16  AntonStepennov
hYstar.pdf            18.3 K  2018-10-16  AntonStepennov  Ystar distribution, c-tag applied, NLO cross sections used
hYstarNNLO.pdf        18.3 K  2018-10-16  AntonStepennov  Ystar distribution, c-tag applied, NNLO cross sections used
hYstarNoTag.pdf       18.2 K  2018-10-16  AntonStepennov  Ystar distribution, no tag applied, NLO cross sections used
hYstarNoTagNNLO.pdf   18.2 K  2018-10-16  AntonStepennov  Ystar distribution, no tag applied, NNLO cross sections used
kFactorsFit.c         16.8 K  2018-10-16  AntonStepennov  K-factors obtained with a shape fit for Ystar with c- and b-tags
Topic revision: r63 - 2020-06-30 - AntonStepennov