Measurement of the top pair production cross section σ(13 TeV)


Goal

We are working on the measurement of the ttbar mass and production cross-section. This measurement uses data from proton-proton collisions at a center-of-mass energy of 13 TeV, recorded by the Compact Muon Solenoid (CMS) experiment during 2015. We make use of events with one isolated lepton (electron or muon) and jets in the final state, focusing on semileptonic ttbar decays, which have a typical signature of two heavy-flavored b-jets, two jets from the W boson decay, missing transverse energy, and one isolated muon or electron. The selection is, however, relaxed in order to control the main backgrounds (W+jets, QCD multijet, and Drell-Yan) from sidebands. Different techniques are explored to measure the ttbar mass and cross-section, employing different categories and distributions. A statistical analysis based on a simultaneous profile likelihood fit to the distributions of interest is used to extract the signal strength. After extrapolating to the full phase space, the mass and cross-section are measured.
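The extrapolation from the fitted visible yield to the full-phase-space cross section described above can be sketched as sigma = N_sig / (A * eps * L). The numbers below are placeholders for illustration only, not the analysis values:

```python
def ttbar_cross_section(n_signal, acceptance, efficiency, luminosity_pb):
    """Extrapolate a fitted visible signal yield to the full phase space:
    sigma = N_sig / (A * eps * L). All inputs here are illustrative."""
    return n_signal / (acceptance * efficiency * luminosity_pb)

# Toy inputs: fitted yield, acceptance, efficiency, integrated luminosity in /pb.
sigma_pb = ttbar_cross_section(100_000, 0.15, 0.80, 2300.0)
print(f"sigma(ttbar) ~ {sigma_pb:.0f} pb")
```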

Questions by Convenors (18-01-2016)


?L72 : Not clear if you normalise the ttbar scale up/down samples to the NNLO cross section too? In my understanding the scale systematic should only enter this analysis as a shape effect affecting the acceptance correction that takes you from the visible cross section to the full cross section. Is that correct?

Yes, the ttbar scale up/down samples are also normalized to the NNLO cross section.


?Figure1: (but it applies to many pre-fit data vs MC plots). The agreement is poor in terms of normalisation and shape. What is the working hypothesis for the source of this disagreement? From your talk on Friday I think maybe it is the fact that the significant QCD component is obviously mis-modelled in simulation. If this is the case you should make it very clear in the documentation that the disagreement is expected and emphasise and investigate more pre-fit plots where the agreement should be good "out of the box" (like 2 tag events where QCD should be minimal).

We use 69 mb as the reference minimum-bias cross section, with a 5% uncertainty assigned.


?L102:Not clear exactly which generator weights you are referring to. The nominal powheg ttbar should have only positive weights. aMC@NLO weights should be +/- 1 always. Could you clarify which weights you mean?

These are the generator weights of the NLO MC. The acceptance is computed as the sum of weights after selection divided by the sum of weights of all generated events, and the ratio has to be computed using the same set of weights in both numerator and denominator.
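The weighted-ratio calculation described in the answer can be sketched as follows, assuming per-event generator weights (which can be negative for NLO MC) and a boolean selection mask; the toy numbers are illustrative only:

```python
def weighted_acceptance(weights, selected):
    """Acceptance as sum of weights (after selection) / sum of weights (generated).
    The same weight definition must be used in numerator and denominator."""
    num = sum(w for w, s in zip(weights, selected) if s)
    den = sum(weights)
    return num / den

weights  = [1.0, 1.0, -1.0, 1.0, 1.0]    # toy NLO-like weights (one negative)
selected = [True, False, True, True, False]
print(weighted_acceptance(weights, selected))  # (1 - 1 + 1) / 3
```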


?L130:I'm somewhat surprised that the deltaBeta correction remains a recommendation in the 25ns data where OOT PU can be large (given that we saw as far back as CSA14 that the correction performs badly in 13 TeV conditions). Have you got the data-MC plot for the corrected and un-corrected relative isolation for your muons? I think it would be worth investigating the rho correction too, but I suspect the Muon POG will change their recommendations.


?L155:I'm not fully clear on which lepton SFs are applied in the end? If not the ones from the POG, are they consistent with the ones from the POG? What is the plan regarding the SFs for the final result?

We use the official scale factors provided by the POG, although we have also calculated the lepton scale factors (for both muons and electrons) ourselves using the official Tag-and-Probe package.


?Figure2: In plot (b) there seem to be four outlying points... is this just some kind of plotting problem or do they mean something? The large outer error bars indicate a significant dependence on the fit model... I'm guessing this comes from only a handful of events at high multiplicities, so the overall contribution of the fit model to the uncertainty is small... is that correct?

The efficiency is low in those bins. This is the binning recommended by the Muon POG; the dips are probably related to the detector geometry.


?Figure5:I find the outliers here really strange. I guess the very small error bars here are purely statistical? Although it is plausible that the outlying runs are small enough to have negligible impact on your result, it is amazing that you can see these enormous fluctuations. What prompted this cross check? Is there something special about detector/trigger/reco conditions for the outlying runs?


?L215:Not fully clear on what is done here. You subtract the non-QCD from the sideband and then vary the subtracted component by 100%, right? Why do you say it is a shape uncertainty?
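The subtraction procedure the question describes could be sketched as below, assuming binned histograms represented as lists of yields; the function name and the toy numbers are hypothetical:

```python
def qcd_template(data_sideband, nonqcd_mc, variation=1.0):
    """Data-driven QCD template: subtract the simulated non-QCD contribution
    from the sideband data, clipping negative bins to zero. Setting
    `variation` to 2.0 or 0.0 varies the subtracted component by +/-100%."""
    return [max(d - variation * m, 0.0) for d, m in zip(data_sideband, nonqcd_mc)]

data = [120.0, 80.0, 40.0]   # toy sideband data
mc   = [20.0, 10.0, 5.0]     # toy non-QCD MC in the sideband
nominal = qcd_template(data, mc)        # nominal subtraction
up      = qcd_template(data, mc, 2.0)   # subtraction doubled (+100%)
down    = qcd_template(data, mc, 0.0)   # no subtraction (-100%)
```

Because the subtracted component has a different shape than the sideband data, the +/-100% variation changes the template shape as well as its normalisation, which is presumably why it is treated as a shape uncertainty.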


?L217:Can the sentence starting "This distribution..." be rewritten/clarified? I'm not sure what you mean to say about the relative fraction of fakes and leptons from b-hadrons, as they are both backgrounds for you, right?


?L247:For the flavour-dependent JES, you need to be careful that the definitions of b, c, and light jets that you adopt are those defined by the JetMET group when they derive the flavour-dependent corrections (the same applies for the b-tag SF).


?L258:The argument against using the aMC@NLO for the template is not convincing. Why not just apply the negative weights and use the resulting template? If it is because the negative weights introduce a larger statistical uncertainty, you do not alleviate this by applying the corrections to MadGraph as the uncertainties will be carried over. Am I missing something here?


?L265:I don't understand the statement here, but it might just be the wording. If you scale the MG to agree with aMC@NLO and use the size of the correction as an uncertainty, I think you are just removing any advantage of using the NLO prediction and will have unnecessarily large uncertainties. I don't see why you don't just take the NLO prediction and apply its own NLO uncertainties (which don't have to overlap with the central LO prediction).


?Figure8:It would be good to have some estimate of the systematic uncertainty as a shaded band on the MC prediction.


?Figure14:The agreement is far off here. That needs to be thoroughly discussed in the text.


?L313:I don't understand what is meant by the sentence beginning "In the case..."

This is a typo; it should read "In this case...".


?Figure18:I would like to see more discussion of the fit. How are the nuisances constrained? Gaussian terms or log-normals? What would be especially interesting is the constraint placed upon the QCD component. It would be nice to see the effect of the associated nuisance parameter being constrained (probably with a gamma function) or left unconstrained. Also, an estimate of the goodness-of-fit with toys is crucial to ascertain that there is nothing bizarre going on in the fitting procedure.

All nuisances are constrained with log-normal terms, except the QCD normalisation, which is distributed log-uniformly.
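As a minimal sketch of the two constraint types mentioned in the answer (not the actual fit code; the function names and numbers are illustrative): a log-normal constraint scales the expected yield by kappa**theta with theta a unit-Gaussian nuisance, while a log-uniform term is flat in log(norm) and so effectively unconstrained in log space:

```python
import math

def lognormal_scaled_yield(nominal, kappa, theta):
    """Expected yield after pulling a log-normally constrained nuisance
    by `theta` standard deviations; `kappa` encodes the 1-sigma effect."""
    return nominal * kappa ** theta

def log_uniform_prior(norm, lo, hi):
    """Prior density flat in log(norm) between `lo` and `hi`,
    as used here for the QCD normalisation."""
    if not (lo <= norm <= hi):
        return 0.0
    return 1.0 / (norm * math.log(hi / lo))

# A +1 sigma pull of a 5% log-normal nuisance scales a yield of 100 to 105.
print(lognormal_scaled_yield(100.0, 1.05, 1.0))
```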

-- QamarUlhassan - 2016-01-18

Topic revision: r9 - 2016-02-02 - QamarUlhassan