Summary of ttH HXSWG meetings (Oct 2014-Jan 2015)

In the following we summarise some aspects that have been pointed out during the ttH HXSWG meetings and that could be pursued within the framework of the HXSWG. This document will serve as a basis for a future working group report. It will be continuously updated by the ttH conveners, including input and feedback from the whole ttH group.

Oct 20 Signal modeling in ttH (Indico)

Impact of signal modeling on ttH searches: At present MC signal simulations are not a serious source of uncertainty in ttH searches. Nevertheless a good understanding of ttH MC systematics will become relevant in the context of Top Yukawa measurements at higher luminosity. In order to prioritize the needs for future theory developments, we urge the experimental collaborations to quantify the impact of signal modelling uncertainties on the ttH signal acceptance: does it exceed the 20% level (relevant for y_t precision)? If yes, in which ttH analysis? What are the most relevant observables?

Shape uncertainties: the tail of the pT(ttH) and pT(tt) distributions as well as the eta(ttH), N(jets), and HT(jets) distributions show significant shape discrepancies (20% and beyond) between NLO+PS predictions based on different matching methods, scale choices, and parton showers. Such dependencies should be significantly alleviated using NLO merging methods (FxFx in aMC@NLO/Madgraph5 and MEPS@NLO in Sherpa+OpenLoops).

Scale choices: Two conventional scale choices (a “fixed” and a “dynamic” one) are used in ttH MC simulations within ATLAS. Dynamic scale choices are preferable at large pT, and in general it might be useful to recommend (a set of?) standard scale choices and appropriate prescriptions for shape uncertainty estimates based on different scale choices.

Uncertainty estimates at NLO: methodological aspects related to theory scale choices and uncertainty estimates, especially in the context of new NLO+PS and NLO merging methods, should be discussed within the HXSWG.

Official input parameters: Standard input parameters and PDFs for the ttH signal have been requested. A list of input parameters recommended by the HXSWG can be found here. At present Mt=172.5+-2.5 GeV (and no MH value) is recommended. These recommendations might change in the future. There is at present no recommendation for the electroweak input scheme to be used in the top-mass/top-Yukawa relation. (For ttH, recommendations on the MC modeling of Higgs decays might also be useful.)

Importance of jet activity: In order to assess the possible need of theory improvements in the modelling of QCD radiation, we urge the experimental collaborations to assess the relative importance of extra jet emissions, i.e. ttH+1,2,3-jet events, in the framework of specific ttH analyses. Such events could play an important role if jets resulting from top/Higgs decays are often out of acceptance.

Minimal prerequisites for reliable ttH modeling

  • NLO+PS precision

  • spin correlated top decays (off-shell top decays through smearing of on-shell tops)

Recent and ongoing theory developments

  • NLO merging for ttH+0,1 jets is available in Madgraph5_aMC@NLO and Sherpa (in combination with OpenLoops or with code by Dawson, Reina, Wackeroth). Stefan Hoeche and collaborators offered Sherpa support to ATLAS and CMS

  • weak corrections are available at parton level in Madgraph5_aMC@NLO; extension to full EW corrections ongoing. Relevant for boosted regime (-8% correction)

Future theory developments (in tentative order of priority)

  • NLO top/Higgs decays

  • ttH signal/background interferences

Nov 3 Backgrounds and uncertainties in experimental ttH, H-->bb searches (Indico)

General considerations: tt+jets MC modeling (especially tt+b-jets) is a dominant source of uncertainty in ttH(bb) analyses at 7+8 TeV. Run 1 analyses are based either on LO ME+PS or on inclusive NLO+PS MC, and in both cases the formal accuracy of tt+jets final states is only LO. On the one hand, tt+jets MC uncertainties should be reduced by means of state-of-the-art NLO simulations. On the other hand, given the significant impact of MC uncertainties even at NLO, their estimate requires a transparent and theoretically motivated methodology. The issue of MC uncertainties is intimately connected to the methodology employed in the experimental analysis (jet-flavour categorisation, top-pT reweighting, other data-driven procedures, ...) and to the subtle interplay between the various levels of MC simulation (matrix elements, shower, ...). In this context, as a starting point, it is highly desirable to identify and understand all essential aspects (theoretical and experimental) that are relevant for MC uncertainty estimates in ATLAS/CMS analyses, and to document them in a precise and transparent language that could facilitate the exchange between theory and experiment.

In the following we propose a first synthesis of the tt+jets MC uncertainty issues that emerged from the meeting. This also includes a detailed description of top-pT reweighting (in ATLAS) and other information collected after the meeting.

tt+jets categorisation for Monte Carlo uncertainty (MCU) estimate: tt+jets MC samples are split into a certain number of independent subsamples (tt+light-flavour, ttb, ttbb, ttc,...) that are defined in terms of the numbers of b- and c-jets (Nb,Nc) and/or the total number of jets (Nj). Top-decay products are typically not considered in this categorisation, and ATLAS/CMS employ different subsamples and different definitions of Nb,Nc,Nj (see the descriptions for what was used in Run 1 here). The various subsamples can be obtained from a single inclusive tt+jets generator or using dedicated generators for certain subsamples (e.g. for tt+b-jets). It is highly desirable that both experiments adopt a common categorisation approach, based on a proposal from the theory community. This requires a precise definition of:

  • Nb, Nc, Nj: which simulation level (MEs, shower, hadronisation, detector)? Which definition of flavour jets? What are the relevant pT-thresholds and cuts?
  • a definition (in terms of Nb, Nc, Nj) of the most appropriate subsamples that require independent MCUs

This standard definition should be as simple as possible and should allow for a consistent assessment of MCUs (ideally with a clean separation of perturbative/non-perturbative effects). It should also facilitate comparisons among the various MC tools on the market.
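As a purely illustrative sketch of such a definition (the event representation, thresholds, and subsample names below are assumptions for illustration, not an official proposal):

```python
# Hypothetical particle-level tt+jets categorisation sketch.
# Assumed event model (an invention for this example): each jet is a dict
# with pT, eta and a list of matched heavy-flavour hadrons, each flagged
# according to whether it descends from a top decay.

def count_flavour_jets(jets, pt_min=20.0, eta_max=2.5):
    """Count b-jets, c-jets and all jets, ignoring heavy flavour from top decays."""
    n_b = n_c = n_j = 0
    for jet in jets:
        if jet["pt"] < pt_min or abs(jet["eta"]) > eta_max:
            continue
        n_j += 1
        flavs = {h["flav"] for h in jet["hadrons"] if not h["from_top"]}
        if "b" in flavs:
            n_b += 1
        elif "c" in flavs:
            n_c += 1
    return n_b, n_c, n_j

def category(n_b, n_c):
    """Map (Nb, Nc) to one of the subsamples mentioned in the text."""
    if n_b >= 2:
        return "ttbb"
    if n_b == 1:
        return "ttb"
    if n_c >= 1:
        return "ttc"
    return "tt+light"
```

Agreeing on precisely the ingredients hard-coded here (the simulation level at which hadrons are matched, the pT/eta thresholds, the treatment of top-decay products) is exactly the open point raised above.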

Treatment of MCUs in experimental fits: Normalisation and shape variations for each tt+jets subsample are represented in terms of independent nuisance parameters that are fitted to data together with the signal strength. Each theory uncertainty enters the fit as a prior distribution for the related nuisance parameter, and various MCUs (like the normalisation of tt+light-jets) are strongly reduced when MC predictions are fitted to data. Typically tt+HF subsamples feature the largest post-fit uncertainties. Moreover, due to the limited shape separation between the small ttH(bb) signal and the large tt+HF background, the fit tends to constrain only their combination, which is dominated by tt+HF, while the signal component remains poorly constrained.
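The fit structure described above can be illustrated with a deliberately minimal toy (one bin, one background nuisance with a 50% log-normal prior; all yields are invented, and this is in no way the actual ATLAS/CMS likelihood):

```python
# Toy profile-likelihood fit: a signal strength mu plus one nuisance theta
# controlling the background rate, sketching the nuisance-parameter
# treatment described above. All numbers are invented.
from scipy.optimize import minimize
from scipy.stats import norm, poisson

def nll(params, n_obs, s_exp, b_exp, kappa=1.5):
    """-log L: Poisson(n | mu*s + b*kappa^theta) with a unit-Gaussian prior on theta."""
    mu, theta = params
    expected = mu * s_exp + b_exp * kappa**theta  # kappa=1.5 <-> 50% rate prior
    return -poisson.logpmf(n_obs, expected) - norm.logpdf(theta)

# Observed 130 events, expecting 10 signal + 100 background
res = minimize(nll, x0=[1.0, 0.0], args=(130, 10.0, 100.0), method="Nelder-Mead")
mu_hat, theta_hat = res.x
```

With these numbers the minimum sits at mu close to 3 and theta close to 0: the prior keeps the background at its nominal rate and the excess is absorbed by the signal strength. A tt+HF nuisance with little shape separation from the signal would instead compete directly with mu, which is the degeneracy noted above.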

MCU estimates in CMS, using inclusive LO ME+PS tt+jets sample (Madgraph):

  • normalisation and uncertainty of total ttbar+X cross section from NNLO
  • ad-hoc 50% rate unc. for ttbb, ttb, ttcc subsamples (uncorrelated)
  • factor-2 ren and fact scale uncertainties for subsamples with different parton multiplicity (uncorrelated): weights of events originating from tt+n-parton matrix elements are varied as alphaS^n(Q) at LO keeping fixed the total rate of tt+X => impacts shape of Nj distribution; scalings simultaneously applied in the shower to adjust for variations in the amount of ISR/FSR
  • DATA/MC reweighting of top-pT (impacts shape of leading-jet and lepton pT): a top-pT dependent correction factor K(pT) is introduced, such that MC(x)=MC*x*K(pT) yields agreement with data at x=1 for the inclusive top-pT distribution. The nuisance parameter x is varied in the range [0,2]. This induces a 20% correction and MCU in the boosted-top regime.
  • No additional merging-scale variations are applied
  • tt+c-jets contributions: for tt+c-jets (20% of the background in the signal region) a dedicated NLO simulation would be desirable (not yet available)

Dominant sources of MCU in CMS: ttbb rate, top-pT reweighting, ttb and ttcc rates, MC statistics
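The top-pT reweighting entering the list above can be sketched as follows; the shape of K(pT) and the linear interpolation in the nuisance parameter x are illustrative assumptions (chosen so that x=0 leaves the MC unweighted and x=1 applies the full data-driven correction, matching the quoted [0,2] variation range):

```python
# Sketch of a nuisance-controlled top-pT reweighting. K(pt) is a made-up
# data/MC correction factor; in the analysis it would be derived from data.
import numpy as np

def k_factor(pt):
    """Illustrative data/MC ratio: the MC overshoots at large top-pT."""
    return np.clip(1.05 - 0.0005 * pt, 0.8, 1.05)

def reweight(weights, top_pt, x=1.0):
    """Interpolate event weights between unweighted (x=0) and fully corrected (x=1)."""
    return weights * (1.0 + x * (k_factor(top_pt) - 1.0))

pt = np.array([50.0, 200.0, 600.0])
w_nominal = reweight(np.ones(3), pt, x=1.0)  # ~20% suppression for boosted tops
```

Varying x over [0, 2] then propagates the reweighting uncertainty coherently through all distributions that depend on the top-pT, such as the leading-jet and lepton pT shapes mentioned above.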

MCU estimate in ATLAS using NLO+PS (Powheg+Pythia), ME+PS (Madgraph) and S-MC@NLO ttbb (Sherpa+OpenLoops) samples:

  • normalisation and uncertainty of total ttbar+X cross section from NNLO
  • ad-hoc uncorrelated 50% uncertainties for inclusive tt+b and tt+c cross sections
  • DATA/MC reweighting of inclusive distributions in ttbar-pT (yields correct Njet distribution) and top-pT (to correct other shapes) is applied to all tt+jets subsamples (including tt+HF). In this context, MC is compared to unfolded data, which involve a significant dependence on Pythia (and even on the employed tool) and on the related uncertainties. See more details below.
  • tt+b-jets MC predictions and uncertainties are obtained by reweighting the inclusive NLO+PS sample with a dedicated S-MC@NLO ttbb sample in the 4F scheme; in this context, various tt+b-jets subsamples (see slides) that allow for a consistent matching of the two samples are used; an independent and differential reweighting is applied to each subsample; MCUs are taken from variations of PDFs, ren/fact scales (factor-2 and kinematical), and shower parameters in the S-MC@NLO ttbb sample.
  • the employed tt+b-jets categorisation is based on the number of reconstructed b-jets at particle level (MC truth, after hadronisation). It involves pT-thresholds for B-hadrons and b-jets. Consistent matching is ensured by removing b-jets from UE and top-decay showering.
  • comparisons of NLO+PS, ME+PS and S-MC@NLO ttbb are used as a sanity check: S-MC@NLO features an excess in subsamples with “merged HF jets” (more b-hadrons in a jet). Here one should keep in mind that, in the inclusive NLO+PS and ME+PS simulations, b-quarks in tt+b-jet subsamples originate mostly from the shower (unless a small merging scale is used).
  • All comparisons in the ATLAS talk involve reweighted Powheg/Madgraph+Pythia predictions, while Sherpa+OpenLoops is not reweighted: it is a first-principles NLO MC prediction. Top/ttbar-pT reweighting significantly improves the agreement with the S-MC@NLO ttbb prediction.
  • tt+c-jets contributions: for tt+c-jets (20% of the background in the signal region) a dedicated NLO simulation would be desirable (not yet available)

Dominant sources of uncertainties in ATLAS: ttbb rate, top- and ttbar-pT reweighting, ttcc rate (MC statistics is also an issue)

Top reweighting and related systematics. To compensate for the mismodeling of the top and ttbar pT distributions, MC simulations are reweighted with a pT-dependent correction factor derived from data. The reweighting is applied at the level of the unfolded top-pT distribution(s), which are derived from “data” using a migration matrix obtained from “pseudo-data”. In the following, as an illustration of top reweighting (and related systematics), we sketch the approach employed by ATLAS. While ATLAS performs a double-differential reweighting of top- and ttbar-pT, here we consider only top-pT. The nominal tt+jets MC sample, generated with Powheg+Pythia, is passed through detector simulation and is used to determine a reconstructed top-pT distribution (pseudo-data). The relation between the top-pT distribution in pseudo-data and the corresponding distribution at MC-truth level is encoded in the migration matrix. More precisely, MC truth corresponds to the top-pT in showered (or non-showered?) parton-level ttbar events within the Powheg+Pythia simulation.

The migration matrix is supposed to describe pT distortions resulting from detector smearing and acceptance cuts, and it is also sensitive to QCD-radiation effects due to the different QCD-radiation dependence of the top-pT at MC-truth and reconstruction level. The reconstructed top-pT is obtained from a kinematic likelihood fit on the events, where the jets/lepton/missing ET are fitted to the ttbar hypothesis and the different permutations of jets are checked. Events with low likelihood are cut away to remove non-ttbar background. For the events passing the cut, the permutation with the highest likelihood is taken and the hadronic and leptonic top-pT are extracted. The reconstructed top-pT is typically more sensitive to QCD radiation (and the related uncertainty) than the MC-truth top-pT.

Finally, using the Powheg+Pythia based migration matrix, the top-pT distribution reconstructed from real data is converted into an unfolded top-pT distribution. The latter is used to reweight the Powheg+Pythia pT distribution at MC-truth parton level by a factor rw(x_i,pT)=f(x_i,pT)/MC(pT), where MC(pT) is the MC prediction, while f(x_i,pT) denotes the unfolded distribution. The variables x_i=(x_1,x_2,...) parametrise the dependence of the migration matrix on the various relevant uncertainties, and x_i=0 corresponds to the nominal prediction.
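As a toy numerical illustration of this unfolding-based reweighting (the 2-bin migration matrix and yields are invented, and plain matrix inversion stands in for the regularised unfolding used in practice):

```python
# Toy unfolding and reweighting-factor computation, rw(pT) = f(pT)/MC(pT).
import numpy as np

# M[i, j]: probability that an event in truth bin j is reconstructed in bin i
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])

data_reco = np.array([900.0, 300.0])   # reconstructed top-pT yields in data
mc_truth  = np.array([1000.0, 250.0])  # nominal MC-truth top-pT yields

f_unfolded = np.linalg.solve(M, data_reco)  # invert the migration (toy only)
rw = f_unfolded / mc_truth                  # per-bin reweighting factor
```

In this picture each systematic x_i enters through its own variation of the migration matrix, and hence of rw, which is the mechanism behind the correlation discussion that follows.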

Each independent experimental uncertainty (b-tagging, jet energy scale, etc.) is described by a corresponding x_i variation and a related variation of rw(x_i,pT). All sources of uncertainty could in principle be propagated to the full simulation (MC + x_i dependent reweighting + detector simulation + x_i dependent top reconstruction) in a correlated way, such that x_i variations tend to cancel in the reconstructed top-pT distribution, and the latter always agrees with data within statistical uncertainties (note that the final ttH(bb) fit is based on reconstruction level!). However, since top reweighting is based on 7 TeV data (and the related detector calibration + uncertainties), x_i variations in top reweighting and top reconstruction (at 8 TeV) cannot be correlated. In practice, the nominal reconstruction (without x_i variations) is always employed. This tends to overestimate x_i uncertainties.

The MC-generator uncertainty is encoded in a modified reweighting rw’(x_i,pT)=f’(x_i,pT)/MC(pT), where the unfolded data f’(x_i,pT) are based on a migration matrix obtained from an alternative generator (MC@NLO). This reweighting is defined for (and applied to) the default MC prediction, MC(pT). The uncertainty associated with the rw-rw’ difference is not correlated, since the alternative generator is never used for the simulation.

The ISR/FSR systematic is evaluated in a completely different way. Pseudo-data generated with two MC simulations with ISR/FSR variations (up/down) are unfolded with the nominal migration matrix, and the relative effect with respect to the central MC prediction is used as a systematic. Such ISR/FSR variations shift the unfolded top-pT (ttbar-pT) distribution by about 5% (15%). These variations are not correlated with corresponding ISR/FSR uncertainties of the tt+jets MC sample (for which only the nominal Pythia settings are used). Thus they do not cancel out when the reweighted sample is passed through detector simulation and the tops are reconstructed.

The reweighting of the inclusive top-pT distribution (and related uncertainties) is applied to all tt+n-jet subsamples in a fully correlated way. In particular, tt+HF final states are also reweighted with the same top-pT correction factor. This procedure is supported by the observation (in ATLAS MC studies) that, in tt+b-jets subsamples, reweighted Powheg+Pythia and Madgraph+Pythia predictions for the top/ttbar-pT distributions are in better agreement with S-MC@NLO ttbb than non-reweighted ones.

Electroweak contributions to ttbb: it was pointed out that pp->ttbb might receive significant tree-level EW contributions of order alpha^2*alphaS^2. This should be checked.

Nov 10 Theory perspectives on tt+jets and tt+HF production (Indico)

(1) Overview

The meeting focused on theoretical tools for the calculation of top+antitop pair production with light- and heavy-flavor jets (here generically denoted by tt+jets and tt+HF), in particular b-quark jets (here denoted by tt+b jets). Both tt+jets and tt+b jets represent the most limiting backgrounds for the detection of a Higgs boson produced in association with a top+antitop pair when the Higgs boson decays into a bottom+antibottom pair (i.e. ttH, H->bb).

We had three talks from the three collaborations that have most recently studied these processes and addressed the issue of defining the nature and size of the theoretical systematic uncertainty intrinsic to the tools used in the calculation and to the observables chosen for the comparison with data.

The talks/speakers and main focus of each talk were:

  • PowHel (speaker: Zoltan Trocsanyi) : tt+2b jets (NLO+PS with mb=0)
  • Sherpa+Openloops (speaker: Frank Siegert): tt+jets (MEPS@NLO merging of 0j,1j,2j) and tt+>=1b jets (NLO+PS with mb>0)
  • Madgraph5_aMC@NLO (speaker: Rikkert Frederix): tt+jets (FxFx NLO merging of 0j,1j,2j)

All three collaborations (PowHel, Sherpa+OpenLoops, and Madgraph5_aMC@NLO) calculate the production of tt+jets or tt+b jets by interfacing the exact NLO QCD matrix elements for the corresponding parton-level subprocesses with a parton-shower Monte Carlo (PSMC). In PowHel, the matching with the PS is based on the POWHEG method, while Madgraph5_aMC@NLO and Sherpa+OpenLoops use (different implementations of) the MC@NLO matching method. In the Sherpa implementation, which is referred to as S-MC@NLO, NLO matrix elements are matched to the Sherpa parton shower, while Powheg and MG5_aMC@NLO simulations are based on Pythia or Herwig.

Various sources of theoretical uncertainties are present at the level of matrix elements (MEs), parton showers (PS), and in the procedures used to match and merge these two ingredients. Uncertainties from missing higher-order QCD corrections in the MEs are typically assessed through variations of the renormalisation and factorisation scales. Besides the usual factor-two variations of such scales, for multi-particle and multi-scale processes also different choices of dynamical scale should be considered. Additional uncertainties due to the PDFs and to possible approximations in the MEs (e.g. setting mb=0) need also to be included.
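The factor-two variations mentioned above are conventionally combined into a 7-point envelope; a minimal sketch follows (the cross-section values are invented placeholders, and only the envelope logic is the point):

```python
# 7-point scale-variation envelope: vary (muR, muF) by factors {1/2, 1, 2}
# and drop the two anticorrelated extremes (2, 1/2) and (1/2, 2).
import itertools

def seven_point_envelope(xsec):
    """xsec: dict mapping (kR, kF) scale factors to a cross section."""
    central = xsec[(1.0, 1.0)]
    kept = [v for (kr, kf), v in xsec.items() if kr / kf not in (4.0, 0.25)]
    return central, min(kept) - central, max(kept) - central

# Invented toy cross sections (pb), mimicking a muR-dominated dependence
xsec = {(kr, kf): 10.0 + 2.0 * (1.0 - kr) + 0.5 * (kf - 1.0)
        for kr, kf in itertools.product([0.5, 1.0, 2.0], repeat=2)}
central, down, up = seven_point_envelope(xsec)  # 10.0, -2.0, +1.0
```

For multi-scale processes this fixed-factor envelope would be evaluated around whichever central dynamical scale is chosen, which is exactly why the choice of dynamical scale discussed above matters as much as the variation prescription itself.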

Moreover, the fact that the shower generates extra light and heavy jets leads to non trivial issues when Monte-Carlo simulations are applied to specific event categories based on the multiplicity of light- and heavy-flavour jets. In particular, depending on the event category, the choice of generation cuts and possible restrictions at ME level can lead to an insufficient level of Monte-Carlo inclusiveness, thereby hampering consistent comparisons among different simulations and against data.

The situation becomes more complicated after the matching with a PSMC and multi-jet merging. On the one hand, new uncertainties related to the choice of the shower starting scale and the merging scale are involved. In the context of matching/merging, one should also keep in mind that, for technical reasons, renormalisation-scale variations cannot at the moment be applied to jets that are emitted by the parton shower. Uncertainties related to jet emissions are thus typically underestimated on the parton-shower side, while the merging approach provides, at the same time, a more reliable description and a more conservative uncertainty estimate for jet emissions.

(2) tt+jets

Both Sherpa+Openloops and Madgraph5_aMC@NLO offer the possibility to calculate pp -> tt+0j, 1j, 2j matching the parton-level matrix elements to a PSMC at NLO accuracy (up to 2j) and consistently merging the different populations of jets at the given accuracy.

Multi-jet merging is essential for an accurate description of tt+jets and for a more reliable assessment of the theoretical systematic uncertainty. It was noted, however, that current measurements are often compared with inclusive NLO+PS samples, where ME information stops at the level of tt+1j LO MEs, while state-of-the-art NLO merging can push the theoretical accuracy up to tt+2j NLO and tt+3j LO MEs.

A careful assessment of the PSMC uncertainty is still missing, and comparisons rely on prescriptions that seem to work (e.g. "Powheg with 'hdamp' describes the data best", or similar) but are motivated neither by more careful studies nor by well defined theoretical arguments.

Multi-jet merging at NLO is expected to lead to a drastic reduction of theoretical uncertainties related to the choice of the resummation scale (shower starting scale) and the merging scale. The latter corresponds to a jet-resolution measure, which is used to separate regions of phase space that are populated by NLO accurate MEs and by the PS, respectively. The lower the merging scale, the more phase space associated with jet emissions is described in terms of NLO accurate MEs. Ideally, one would choose the merging scale such that objects which are resolved as separate jets in the experimental analysis are always simulated with NLO accuracy. However, generic theoretical considerations suggest that too low a merging scale can lead to uncontrolled logarithmic effects, which are formally beyond the PS accuracy but can become numerically important. A precise quantitative analysis of this issue is not yet available for tt+jets. Such studies would be very valuable in order to converge towards a well motivated prescription for the choice of the merging scale and related uncertainty estimates.

This is an area that needs improvement, and it is very important that different tools (Sherpa+Openloops and Madgraph5_aMC@NLO) are now available and can be used for comparison (recently also OpenLoops MEs became available at

(3) tt+b jets, more specifically tt+2b jets

Powhel and Openloops+Sherpa presented their studies of tt+b-jets. Madgraph5_aMC@NLO did not show specific results, but discussed the kinds of issues that need to be addressed in order to be able to consistently implement this kind of processes in a PSMC matched with NLO MEs and achieve a more reliable control of the theoretical systematics.

From a theoretical point of view, the problems encountered in the study of tt+b jets are very important and very challenging. They are being addressed for the case of b jets at the moment, and they could be even more serious for c jets (both very important for the study of ttH, as we learned from the experimental talks presented in this working group on Nov. 3rd). The understanding of the b-jet case will therefore be extremely beneficial to the description of a whole family of hadronic processes (not only tt+HF) that involve b-jets and c-jets.

Tools like Powhel, Openloops+Sherpa, Madgraph5_aMC@NLO, and any other that will be made available in the future, are the state-of-the-art tools that we need for such studies. Thanks to these tools, interesting aspects of processes like tt+b-jets are emerging and the theoretical description of these processes is improving fast. However, it is still difficult at the moment to fully assess the theoretical systematic uncertainty intrinsic to these predictions.

In particular, these talks made clear that the following issues need to be clarified/improved:

  • (3.a) scale uncertainty: As already known from parton-level studies, the choice of a dynamical scale for both the renormalization and factorization scales leads to a better perturbative behavior of the NLO cross section (both the total cross section and distributions). Powhel and Sherpa+OpenLoops adopt different dynamical-scale choices. The Powhel choice is based on HT and aims at keeping the scale always hard and larger than mt, while Sherpa+OpenLoops uses a CKKW-inspired prescription for the renormalisation scale, such that the scale for b-quark emissions is adapted to their respective pT, while the factorisation and resummation scales are kept harder.

    Both collaborations (Powhel and Sherpa+Openloops) estimate the residual error from scale dependence to be roughly in the 25-35% range, which is quite sizable for an NLO QCD calculation. This scale uncertainty is dominated by renormalisation-scale variations, and remains at a similar level when NLO MEs are matched to the parton shower.

  • (3.b) PDF uncertainty: The Powhel collaboration estimates an error of 10% from PDF, based on a study of the CTEQ, MSTW, and NNPDF sets.
  • (3.c) PSMC uncertainty: By matching with Pythia-6 and Pythia-8, the Powhel collaboration estimates a PSMC uncertainty of 10%. Studies of this kind should be expanded, to understand the origin of the systematic, compare with Herwig++, and compare under different conditions (cuts, etc.). Also the systematic uncertainty related to the choice of the so-called “hdamp” parameter, which plays a similar role as the shower starting scale in the MC@NLO method, should be investigated.
  • (3.d) effect of t-quark decays: Both collaborations can include the decay of the final-state top quarks, accounting (approximately) for spin-correlation effects. As confirmed by the Powhel collaboration, spin correlations can have a substantial effect and should always be included.
  • (3.e) massless vs massive b quarks: The Sherpa+Openloops calculation uses the 4FNS (4-flavour number scheme), where b quarks are massive and not present in the proton: they arise from g->bb splittings.

    Since collinear g->bb singularities are regularised by the finite b-mass, this approach makes it possible to cover the entire b-quark phase space at the level of NLO MEs, resulting in a fully inclusive description of final states with tt+>=1 b-jets, i.e. including also tt+1b-jet. At NLO+PS level, for ttbb final states with two hard b-jets, the two b-jets usually arise from the ttbb MEs. However, there are also configurations where one b-jet arises from a (rather) collinear bbbar pair in the MEs and the second one is generated through a (rather) collinear g->bb splitting by the PS. The Sherpa+Openloops studies emphasize that the effect of such “double g->bb splittings” can be quite sizable, especially for m_bb>100 GeV, and needs more systematic attention. In the 4FNS the first g->bb splitting is entirely described by NLO MEs, while the second one can only be simulated at the level of accuracy of the PS. Given its potentially high impact in the signal region, this mechanism and the related uncertainties should be studied in more detail.

    The Powhel ttbb calculation is performed in the 5FNS, where mb=0. The presence of a g->bb collinear singularity at m_bb->0 requires appropriate generation cuts that restrict the NLO MEs to a phase-space region with sufficiently hard and well separated b-quarks. In particular, it requires an explicit or implicit generation cut on m_bb. In principle, as long as this cut is very low, one would expect little impact on physical observables characterised by large m_bb. However, contributions of the type of the above-mentioned double-splitting mechanism should occur also in a 5F NLO+PS ttbb calculation, and such contributions should be strongly sensitive to the choice of the m_bb generation cut (they are formally singular in the limit of vanishing cut). The numerical impact of these issues remains to be investigated. In any case, within the 5FNS, the natural solution to this problem is provided by multi-jet merging, where the singular ME description of collinear bbbar pairs is replaced by the regular PS description below a certain merging scale. This automatically requires the merging of tt+jets MEs with different b-quark and light-jet multiplicity.

    Actually, 5FNS NLO simulations based on multi-jet merging for tt+0,1,2 jets provide a natural alternative (to the 4FNS) for a complete description of tt+b-jet final states with one or more b-jets (and of course also for tt+light-jets). Singularities at ME level are avoided by the presence of a merging cut, and double-counting issues between matrix elements and parton shower are also automatically avoided. There are thus two internally consistent formalisms for the NLO simulation of tt+>=1b-jets, based on the 5FNS (NLO tt+jets merging) and the 4FNS (NLO+PS ttbb). The open questions are: How do they compare, and which is the best description for tt+b-jets: 5FNS or 4FNS? Is there a fully consistent prescription to combine a 4FNS simulation of tt+b-jets with a 5FNS (NLO merged) simulation for the rest of the tt+jets phase space without double counting? Is there a “hybrid approach” that avoids the drawbacks of the 4FNS (no resummation of initial-state g->bb splittings) and of the 5FNS (no mb effects) in a consistent way? How easily can this be implemented in an NLO PSMC?

(4) Short-term recommendation to the experiments

It is difficult at the moment to provide the experiments with a recommendation that satisfactorily matches the complexity of experimental analyses. Theory cannot yet provide solid predictions for the different populations of events used in the analyses, which differentiate on the basis of Nj, Nb, and even Nc (notice that theory has not provided results for tt+c jets yet). In the following section we will outline what we expect to be the important steps to be taken to improve the theoretical predictions and control their systematics.

Still, summarizing the final Q/A session, a preliminary recommendation goes as follows:

  • (4.a) generate tt+jj and tt+bb separately;
  • (4.b) work in a 4FNS (remove parton-level processes with b quarks in the initial state, at the matrix element level)

(5) Outlook

Theory studies will continue to investigate the issues that have been raised by the talks heard at this meeting, with the aim to provide more solid recommendations to the experiments. We would like this to be achieved in the context of this working group, which will offer the natural ground for coordination and discussion. In this respect, we propose the following steps:

  • (5.a) to acquire experience with the various new NLO tools, possibly in strict collaboration with their main authors;
  • (5.b) to be, for a while, agnostic and conservative, and to try to expose all sources of intrinsic theory uncertainty through sufficiently generous variations of the various technical scales (including PS parameters);
  • (5.c) to document MC validation studies in a transparent way, i.e. indicating all relevant parameter choices and the considered variations.

Based on quantitative studies, it is important that we converge towards a satisfactory theoretical understanding of the scale uncertainties related to the NLO matching and merging procedures (resummation and merging scales), and that we arrive at a global and widely accepted prescription for the choice of the related scales and for their variation.

In particular it is crucial to explore the issues explained in point (3) above and to understand how to consistently provide results for Nb=1 and Nb=2. The same will apply to Nc=1 and Nc=2. The problem of jets made of a (bb) pair from a g->bb splitting is very important, and ongoing studies will put it on firmer ground.

Nov 24 Backgrounds and uncertainties in ttH, H-->gamma gamma (Indico)

Dec 1 Backgrounds and uncertainties in ttH, H-->multileptons (Indico)

Dec 15 Signal modeling in tHq (Indico)

Jan 12 Backgrounds and uncertainties in tHq

Feb 2 ttH Combination: Systematics and correlations

-- StefanoPozzorini - 27 Oct 2014

Topic revision: r10 - 2014-12-18 - LauraReina