Difference: BFrancisNotes (8 vs. 9)

Revision 9 - 2016-01-18 - BrianFrancis

Line: 1 to 1
 
META TOPICPARENT name="BrianFrancis"

SUS-15-009: Search for natural GMSB in events with top quark pairs and photons (8 TeV)

Line: 194 to 194
 

Changed:
<
The 'regular' hierarchy problem is the 1:10^34 loop corrections to the bare Higgs mass, and with SUSY this is down to 1:10^2. There is still a 'little' tuning between SUSY sparticle masses to keep EWKSB unchanged, and it is related to the stop mass. In the literature I've seen from CMS this has gone unreferenced, but a suitable reference might be http://arxiv.org/abs/hep-ph/0007265 .
>
The 'regular' hierarchy problem is the required precision, of order 1:10^34, in the loop corrections needed to achieve a 125 GeV Higgs mass; with SUSY this is down to 1:10^2. There is still a 'little' tuning between SUSY sparticle masses to keep EWKSB unchanged, and it is related to the stop mass. In the literature I've seen from CMS this has gone unreferenced, but a suitable reference might be http://arxiv.org/abs/hep-ph/0007265 .
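A back-of-the-envelope sketch of where those two ratios come from (assuming a Planck-scale cutoff and the dominant top loop; the numbers are illustrative, not from the PAS): the one-loop top correction to the bare Higgs mass is

\delta m_H^2 \simeq -(3 y_t^2 / 8 \pi^2) \Lambda^2

so with \Lambda ~ M_Planck ~ 10^19 GeV and m_H = 125 GeV the tuning is \delta m_H^2 / m_H^2 ~ (10^19 / 10^2)^2 = 10^34. With SUSY the \Lambda^2 pieces cancel between the top and stop loops, leaving \delta m_H^2 \propto m_stop^2 \log(\Lambda / m_stop), so for m_stop ~ 1 TeV the residual tuning is ~ (10^3 / 10^2)^2 = 10^2.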
 
  • l 7: and are left largely un-explored at the LHC. -->
and have been left largely unexplored at the CERN LHC.
Line: 517 to 517
 

Changed:
<
Initially for technical reasons, but beyond that I'm not certain that including the control region systematics here is the correct thing to do. The way Figure 4 is presented, the signal regions stand on their own, independent of the CRs, and the "extra" CR systematics are not shown, to continue the suggestion that they are indeed "extra", or conservative user-driven estimates of the error. I feel that if Figure 4 did include them, many readers would want to see the plots without them, and not everyone can be happy. For now these plots will be produced and we can discuss this further.
>
Initially for technical reasons. Now that I've made the plots including these systematics, they are not so different, so I've included them in the PAS.
 
  • l 200/201: observed indicating the presence -->
observed that would indicate the presence
Line: 815 to 815
 

Changed:
<
This is a fair point to be discussed more; it did not come up during pre-approval. The argument to keep it is that it does show the reader that there is fair agreement between data and MC in CR2, even though the sample is too limited to quantify just how good that "fair agreement" is.
>
There still seems to be value in showing CR2, since the agreement is fair and the message is that it is unused only for reasons of statistics, not because of an obvious failure in the method. It might be more questionable to leave it out entirely, since its construction is quite natural compared to the rest of the analysis.
 
Lines 168-172: How is this additional shape systematic determined? Table 3 shows significant excesses in three out of four channels. This strongly suggests that the background is inadequately modeled. SR1-ele constitutes a 3 sigma excess, SR1-muon a 2 sigma excess, and SR2-muon a 2.3 sigma excess.
Line: 830 to 830
 

Changed:
<
There are tt+gg events simulated in the samples used; what is not used is an explicit, specialized tt+gg sample where both photons are high-PT and with large radiation angles. The cross section for such events is very, very small. You often have a single high-PT photon, which is why the specialized tt+gamma sample is needed, but you do not expect both photons to be in the tails of quickly falling PT distributions. What most of the selected SR2 events contain are jets mis-identified as photons, or prompt photons where only one or zero of them is considerably high in PT or radiation angle. Remember that in all samples there are additional jets simulated, so "tt+gamma" is in reality "tt+gamma+jets", where some of those "+jets" do include photons ("+a") in MadGraph parlance.
The real question then is: do the "+jets" from MadGraph have the right number of photons, the right amount of electromagnetic fluctuation in jet hadronization? The resolution the analysis came to, and that was discussed in pre-approval, was that without a precise measurement of the SM tt+gg cross section you could not be sure -- however, if the MET distribution is the same between the "+jets" contributions, then you could do a shape-based analysis independent of the absolute rate. What the analysis should accomplish is an estimate of the MET from the selected photons, and not try to pin down how much is due to actual photons.
>
There are tt+gg events simulated in the samples used; what is not used is an explicit, specialized tt+gg sample where both photons are high-PT and with large radiation angles. The cross section for such events is very, very small. You often have a single high-PT photon, which is why the specialized tt+gamma sample is needed, but you do not expect both photons to be in the tails of quickly falling PT distributions when the total yield itself is very small. What most of the selected SR2 events contain are jets mis-identified as photons, or prompt photons where only one or zero of them is considerably high in PT or radiation angle. Remember that in all samples there are additional jets simulated, so "tt+gamma" is in reality "tt+gamma+jets", where some of those "+jets" do include photons ("+a" in MadGraph parlance).
The real question then is: do the "+jets/a" from MadGraph have the right number of photons, the right amount of electromagnetic fluctuation in jet hadronization? The resolution the analysis came to, and that was discussed in pre-approval, was that without a precise measurement of the SM tt+gg cross section you could not be sure -- however, if the MET distribution is the same between the "+jets/a" contributions, then you could do a shape-based analysis independent of the absolute rate. What the analysis should accomplish is an estimate of the MET from the selected photons, and not try to pin down how much is due to actual photons.
This is the crux of the shape-based analysis and why the background normalizations are allowed to float. If it were possible to measure the SM tt+gg cross section with the 2012 dataset (i.e. real ttbar + prompt + prompt with a complete di-photon purity measurement), it would be possible to do this analysis with absolute background normalizations.
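To illustrate the floating-normalization idea, here is a minimal sketch of a one-parameter shape fit (made-up bin contents and plain numpy/scipy, not the CMS limit-setting tools used for the actual result):

<verbatim>
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical unit-normalized MET template for the background shape;
# in the analysis this would come from the tt+gamma(+jets/a) MC.
bkg_shape = np.array([0.60, 0.25, 0.10, 0.05])

# Hypothetical observed counts per MET bin (made-up numbers).
data = np.array([118, 47, 22, 13])

def nll(norm):
    """Poisson negative log-likelihood, dropping the constant log(n!) term."""
    expected = norm * bkg_shape
    return np.sum(expected - data * np.log(expected))

# The absolute rate 'norm' floats freely in the fit, so only the MET
# *shape* is tested against the data, not the tt+gg cross section.
fit = minimize_scalar(nll, bounds=(1.0, 1e4), method="bounded")
print("fitted background normalization: %.1f events" % fit.x)
</verbatim>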

 