Process generation


In the 2HDM there are 5 Higgses (in MadGraph notation): h1, h2, h3, h+, h-. h1 corresponds to the SM Higgs in the type II (couplings) alignment mode (mixh = 0 <--> α = β). In this same approximation, the masses of the remaining Higgses are degenerate. We're looking for t t~ t t~ final states. One possible way to get there is through the decay h2 -> t t~. If the h2 is on-shell, we would see a peak around mh2 in the mtt distribution, which is a pretty clear signal. In MadGraph, the generation of an on-shell h2 decaying into t t~ is:

import model 2HDMtypeII
generate g g > t t~ h2, (h2 > t t~)
set mh2 MH2
set mh3 MH3
set mhc MHC
set wh2 auto
set wh3 auto
set whc auto
set mixh 0 # sets the alignment mode
set beta BETA # the angle β is one of the key parameters of the model

The particle widths have to be rescaled to the chosen masses. This can be done either by hand or automatically, but there is the following warning about doing it automatically:

Be carefull automatic computation of the width is
ONLY valid in Narrow-Width Approximation and at Tree-Level.
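As a rough sanity check before trusting the automatic widths, one can at least verify that the computed width is small compared to the mass. This is a hand-rolled sketch; the 10% threshold is an illustrative assumption, not a MadGraph rule:

```python
# Hypothetical sanity check for the narrow-width approximation (NWA):
# the automatic width computation is only meaningful if Gamma/m << 1.
# The 10% threshold below is an illustrative choice, not an official cut.
def nwa_plausible(mass_gev, width_gev, threshold=0.1):
    """Return True if Gamma/m is below the threshold."""
    return width_gev / mass_gev < threshold

print(nwa_plausible(500.0, 10.0))    # Gamma/m = 0.02 -> True
print(nwa_plausible(500.0, 100.0))   # Gamma/m = 0.20 -> False
```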

In which case are we? Is setting "QED = 99 QCD = 99" equivalent to going beyond tree level?

MadSpin Decay

In principle the decays of the tops to w b, and the subsequent hadronic or leptonic decays of the w's, should be performed within MadGraph, not in Pythia. Pythia does not keep spin correlations, and the angular distributions of the final states come out wrong. However, there's a problem with performing the decays inside MadGraph:

However, as far as Monte Carlo event generators are concerned, the previous approach may not be the optimal one. Indeed, in the case of processes with complicated decay patterns, the efficiency of the generation of unweighted events (with the same weight) becomes a serious issue
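The efficiency issue in the quote can be illustrated with a toy accept-reject unweighting; all numbers below are made up for illustration:

```python
import random

# Toy unweighting: an event with weight w is kept with probability
# w / w_max, so the overall efficiency is <w> / w_max. A few events with
# very large weights (typical of complicated decay chains) kill it.
def unweight(weights, seed=12345):
    rng = random.Random(seed)
    w_max = max(weights)
    return [w for w in weights if rng.random() < w / w_max]

weights = [1.0] * 990 + [100.0] * 10   # illustrative weight distribution
kept = unweight(weights)
# Expected efficiency ~ <w>/w_max = 1990 / (1000 * 100) ~ 2%
print(len(kept), "of", len(weights), "events kept")
```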

The way of solving this is using MadSpin to perform all the decays. That means that in the Signal generation we have to change the process to:

generate g g > t t~ h2

and let MadSpin decay all the particles.

The MadSpin syntax is very similar to that of MadGraph, where the decay of every decaying particle is requested as:

decay p1 > p2 p3

It seems that to make t t~ decay down to leptons or jets through the w bosons, first the whole decay chain and then the substeps have to be indicated:

decay t > w+ b, w+ > l+ vl
decay w+ > l+ vl


decay t > w+ b, w+ > l+ vl

works too.

The following syntax is correct to decay h2 > t t~ and then the tops to w b inside MadSpin either leptonically or hadronically:

decay h2 > t t~, t > w+ b, t~ > w- b~
decay t > w+ b, w+ > l+ vl
decay w+ > l+ vl
decay t > w+ b, w+ > j j
decay w+ > j j
decay t~ > w- b~, w- > l- vl~
decay w- > l- vl~
decay t~ > w- b~, w- > j j
decay w- > j j

However, there's a big problem. The time needed to perform the decays in MadSpin is somewhat unpredictable, and the process sometimes seems to get stuck. Olivier Mattelaer says that it isn't unexpected for it to take so long. It's OK if we need ~30-40 h for a single 10000-event run, but if MadSpin stops processing "sometimes", we are screwed. This situation already arises with a decay chain simpler than the one above. It seems we have to choose between:

  1. Using MadGraph for the generation of the tops, and letting Pythia decay them to w b and the w's to jets/leptons, but losing the spin/angular correlations.
  2. Using MadGraph to generate the tops and to decay them as well. This has an extremely low efficiency for the generation of unweighted events (0.1-1%); we would have to generate extremely large amounts of weighted events, or weight the events ourselves.
  3. Using MadGraph for the generation of the tops and MadSpin for the decays, which is (prohibitively) time consuming.

How important are spin-angle correlations? Is it ok to generate processes lacking them if we're not going to use any angle-related observable?

31.3.2015: A patch was released this week that seems to solve, at least partially, this problem. I have installed it and I'm trying to run the decays now. Among other things, it now allows the syntax

decay h2 > t t~, (t > w+ b, w+ > all all), (t~ > w- b~, w- > all all)

to define the decay chain of h2 into tops and then into w's.


The reason for generating the process Signal+Background+Interference (SBInt) is the possibility that both processes interfere destructively. This would lead to a suppression, instead of an enhancement, of the observables (i.e., fewer events instead of more). For every S sample we have generated the corresponding SBInt sample. To check the effect of the interference we can do two things:

  1. Subtract sigma(SBInt) - sigma(S) - sigma(B)
  2. Plot the distributions of observables for SBInt and for S+B

For 1, we find that the cross-section corresponding to the interference is always positive and very small. For 2, we see that the distributions of the observables for the SBInt sample are basically equal to those of the S and B samples summed (equal number of events generated, weighted by the relative cross-sections). Given that, it seems reasonable to generate only S and B samples.
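Check (1) above can be scripted. The cross-section values below are placeholders, not our actual results:

```python
# sigma(SBInt) - sigma(S) - sigma(B) isolates the interference term,
# since the SBInt amplitude squared is |S+B|^2 = |S|^2 + |B|^2 + 2*Re(S B*).
def interference_xsec(sigma_sbint, sigma_s, sigma_b):
    return sigma_sbint - sigma_s - sigma_b

# Illustrative placeholder values in pb:
sigma_sbint, sigma_s, sigma_b = 10.06, 0.50, 9.50
sigma_int = interference_xsec(sigma_sbint, sigma_s, sigma_b)
print(f"interference: {sigma_int:+.3f} pb "
      f"({100 * sigma_int / sigma_sbint:.1f}% of SBInt)")
```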



The effect on the pT/ET/Emiss distributions is non-existent at low masses of h2. For example, at tan β = 1, mh2 = 500 GeV, the distribution for S is shifted slightly to smaller values than that for B, with a slightly smaller mean and tails not reaching values as high. For masses above ~700-900 GeV the distributions are actually different, shifted to larger values, with a great difference for masses above 1 TeV. The differences come from the subset of particles emerging from the decay of h2. In fact, if we plot separately the summed pT of the final (at MG level) particles coming from h2 and that of the rest of the particles, the distributions of the latter are equal for B and S for all masses of h2, whereas for the former there's a big difference for all masses except 500 GeV.


The spatial distributions look quite different, at least at parton level, for all masses of h2. The Δy distributions for each tt pair are very different in S and B.


  • It would be interesting to see this same effect when plotting rapidity distributions for jets and leptons. If any correlation of this kind is lost, there is no way we can use it -> the correlations are maintained, but diluted the further we go "down the line".
  • The values of &theta±, defined as the angle between the direction of flight of l+ (l-) in the t (tbar) rest frame and the positive beam direction (computed using the tops rather than the leptons) are different too. The individual distributions as such are not very different, but the relation between the angles (angle of t vs angle of tbar) is modified.

These differences are much more apparent in the particles coming from the tops produced at the same level as h2. Rapidity distributions, differences, angle plots, etc. are pretty much the same for the particles coming from the decay of h2 as for those coming from the decay of either top pair in the B sample.

Question: are we going to be able to match t t~ pairs after event reconstruction? The observables I'm defining depend on being able to take one pair, compute something, take the other pair, compute the same, and compare. Even if we're able to reconstruct four bundles, four groups of particles that could each be assigned to a top quark, which ones form the pairs? I guess we need to match them to reconstruct the tt invariant mass distribution. The problem I see is that if we sometimes match them correctly (by chance) but sometimes not, we are going to lose separation: we will have instances of the observable that aren't significant/informative simply because they're not comparing the right thing.

Same-sign dilepton cut: TCut L2("Sum$(id==11||id==-11||id==13||id==-13||id==15||id==-15)==2&&(Sum$(id==11)==2||Sum$(id==-11)==2||Sum$(id==13)==2||Sum$(id==-13)==2||Sum$(id==15)==2||Sum$(id==-15)==2)")

We're working on the Same-Sign dilepton channel. When we select l+l+ / l-l- we're implicitly selecting W+W+ / W-W-, and hence tt / t~t~. Each t/W/l is going to belong to a different pair, so one of them will come from the decay h2 -> t t~ (in the signal sample). We can compare distributions (whatever they are) between one lepton and the other, meaning we are comparing something coming from the standard t t~ pair and something coming from the BSM t t~ pair.
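The TCut above can be mirrored in plain Python on a list of PDG ids. Note that, as written, the cut requires not only the same sign but also the same flavour (e.g. e+e+ passes, e+mu+ does not):

```python
# PDG ids: e- = 11, mu- = 13, tau- = 15; antiparticles carry the opposite sign.
LEPTON_IDS = {11, -11, 13, -13, 15, -15}

def passes_same_sign_dilepton(pdg_ids):
    """Exactly two charged leptons in the event, both with the same PDG id
    (same sign AND same flavour, mirroring the TCut above)."""
    leptons = [i for i in pdg_ids if i in LEPTON_IDS]
    return len(leptons) == 2 and leptons[0] == leptons[1]

print(passes_same_sign_dilepton([-11, -11, 5, -5]))  # e+ e+  -> True
print(passes_same_sign_dilepton([11, -11]))          # e- e+  -> False
print(passes_same_sign_dilepton([-11, -13]))         # e+ mu+ -> False
```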

Does it make sense to plot rapidity differences? Is rapidity something I can add and subtract? Even if not, the correlations in the rapidity distributions are different anyway. If instead of the difference we plot y_t:y_tbar (or even y_l1:y_l2 in the derived sample), there's a correlation in the signal and signal+background+interference samples that isn't there in the background.
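For reference, rapidity differences are well defined: y = 1/2 ln((E+pz)/(E-pz)) shifts additively under longitudinal boosts, so Δy is boost-invariant. A quick sketch with made-up momenta:

```python
import math

# Rapidity from energy and longitudinal momentum. Under a boost along z
# with boost rapidity phi, y -> y + phi, so differences y1 - y2 are
# invariant under longitudinal boosts.
def rapidity(energy, pz):
    return 0.5 * math.log((energy + pz) / (energy - pz))

# Illustrative (made-up) four-momentum components in GeV:
y_t = rapidity(450.0, 300.0)
y_tbar = rapidity(450.0, -200.0)
print(f"y_t = {y_t:.3f}, y_tbar = {y_tbar:.3f}, dy = {y_t - y_tbar:.3f}")
```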

-- AlbertoGasconBravo - 2015-03-24

Topic revision: r8 - 2015-04-14 - AlbertoGasconBravo