
SUS-15-009: Search for natural GMSB in events with top quark pairs and photons (8 TeV)

Contact

Links

CADI
HN

Timeline

Analysis Summary

The analysis searches for an excess of high MET events in the lepton (e/mu), jets, and photon final state. The search targets the direct production of light stops in a GMSB scenario with a very bino-like neutralino NLSP.

The dominant background is Standard Model top-antitop pair production with associated photons or additional jets that may be mis-identified as photons. Additional backgrounds are those typical of ttbar searches: W/Z + gamma, diboson, W/Z + jets, and single top production. All backgrounds are simulated with Monte Carlo, and several scale factors are determined to best fit the data in several control regions. An additional control region using poorly isolated photons ("fakes") is examined to characterize the MET shape of the MC.

The results of the search are interpreted as a shape comparison between data and expected backgrounds. To eliminate dependence on the SM ttbar+gamma+gamma cross section and the jet-->photon fake rate, the total background normalization is allowed to float freely, making the comparison entirely shape-based. Upper limits are calculated against a private MC sample of very bino-like NLSP GMSB models with the stop being much lighter than all other squarks or gluinos.
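As a rough illustration of what "entirely shape-based" means here, the sketch below (not the analysis code; all yields are hypothetical) rescales the total background to the observed yield before comparing MET bin by bin, so that only shape differences remain. In the real analysis this normalization is instead a freely floating parameter in the limit-setting fit.

import numpy as np

# Hypothetical MET-binned yields (illustrative numbers only)
data_met = np.array([500., 210., 80., 25., 6., 2.])
bkg_met  = np.array([450., 200., 75., 20., 5., 1.])

# Let the background normalization float: here, simply rescale to the data yield
bkg_norm = bkg_met * data_met.sum() / bkg_met.sum()

# Any remaining disagreement is purely in the MET shape
shape_pull = (data_met - bkg_norm) / np.sqrt(data_met)
print(np.round(shape_pull, 2))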

Signal MC

The search uses a privately generated FastSim set of samples. For information about these, see:

ARC review:

Manfred Paulini on AN v3:

  • Sec. 2.2: What checks were performed to gain some confidence that the privately produced GMSB signal samples can be trusted as if they were centrally produced?
    These samples were created similarly to those for the di-photon inclusive searches (SUS-12-001 and SUS-12-018), in which the production was not stops but first/second generation squarks and gluinos (scalar udsg). Thus the checks performed for the "stop-bino" samples were mainly against these well-known older samples, which were also private FastSim.
    We found that for stops and binos in our sample, the kinematics agreed favorably with those of squarks and binos with the same masses produced in these older samples. For an executive summary see Dave Mason's xPAG presentation and the twiki for the scan.
    If he'd like, perhaps Dave Mason could comment on this since he oversaw their creation firsthand.

  • Sec. 2.5: why was the ttbar sample re-weighted by the weights squared and not by a variation of no re-weight and 2x the weight (instead of weight squared)?
    Weighting by the weight squared is the TOP PAG recommendation for estimating the upwards systematic fluctuation of this effect: see their twiki on the matter.
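    A minimal sketch of how such an up-variation via the squared weight can be implemented is below. The exponential form and the parameter values are illustrative assumptions in the spirit of the TOP PAG top-pT reweighting, not numbers taken from the AN.

import math

def top_pt_sf(pt, a=0.156, b=-0.00137):
    """Per-top scale factor; a and b are placeholder 8 TeV-like values."""
    return math.exp(a + b * pt)

def ttbar_weights(pt_top, pt_antitop):
    w = math.sqrt(top_pt_sf(pt_top) * top_pt_sf(pt_antitop))  # nominal re-weight
    return {
        "nominal": w,
        "up":      w * w,   # weight squared: the upward fluctuation discussed above
        "down":    1.0,     # no re-weighting
    }

print(ttbar_weights(120.0, 95.0))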

  • Sec. 3.3 & 3.4: what bothers me a bit is the fact that the eta regions for e (2.5) and mu (2.1) are different for tight, but for loose you use 2.5 for mu, too. Can this difference in choice cause any effect on the CR estimates?
    The requirement |eta|<2.1 for tight muons is due to the SingleMuon trigger requiring it, and is not necessary for other/additional muons in the event. The loose lepton veto is kept constant between signal and control regions, so this should not affect control regions. Where it could affect the analysis is if the object kinematics or MET differed greatly between ttbar-->(e+e, e+mu, mu+mu), in which case the different efficiencies for each combination would be important; however this is not the case.
    The |eta|<2.1 cut in the trigger does make the tight muon requirement tighter than for the tight electron, which is one cause of the difference between electron and muon event counts. Beyond all this, these vetoes are what is recommended by the TOP PAG for semi-leptonic selections.

  • Tab. 8: why is there a lower cut of 0.001 on sigma_IetaIeta? Is this standard photon ID? I don't recall ...
    This cut, and the one on sigma_IphiIphi, are for ECAL spike removal and general anomaly protection. They are not required by EGamma but are fairly common; for example the 8 TeV inclusive search used these as well. Concerning its effect, zero otherwise selected photons in the TTGamma MC sample fail these cuts.
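    For concreteness, a minimal sketch of applying these anomaly cuts on top of an otherwise-selected photon is below; the photon object and attribute names are hypothetical, and only the two lower cuts at 0.001 come from the text.

def passes_anomaly_cuts(photon):
    """Reject ECAL spikes and anomalous deposits, which have near-zero shower widths."""
    return (photon.sigma_ieta_ieta > 0.001 and
            photon.sigma_iphi_iphi > 0.001)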

  • Fig. 9: the fits seem okay around the Z region but are less than optimal away from the Z. Is this anything to worry about? Was it treated with a systematic uncertainty?
    While not considered important for the signal regions, what you are seeing is the lack of Drell Yan for 20 GeV < M(lep lep) < 50 GeV, which in Figure 9 is exaggerated compared to signal regions due to the di-lepton selection here. You can see in Figure 9 that when requiring b-jets (the top two plots) this is not an issue.
    What can be done to study the effect of this is to re-do the template fit excluding this low-mass region, and see that the scale factor doesn't change much (it should be dominated by on-mass Z-->dilepton). Furthermore, since these events are more accurately Z/gamma* Drell Yan, the fit range can be extended to higher masses to observe how much the scale factors change. Keep in mind here that the non-btagged muon channel (bottom right of Fig. 9) is not used in the analysis: the non-btagged electron sample is only useful as an input to the electron mis-id rate measurement. When varying the fit range of Figure 9, the scale factors for this are:
Z(gamma) SF in channel | Normal (0 - 180) | 50 - 180 | 50 - 600
ele_jjj                | 1.24             | 1.26     | 1.25
ele_bjj                | 1.38             | 1.39     | 1.39
muon_bjj               | 1.60             | 1.62     | 1.62
These are within the fit uncertainties summarized in Table 14 of the AN. So in short this is not seen as a cause for concern as the Z peak dominates the fit result, and was not given its own systematic. When other systematics are fluctuated, these fits are re-performed and so there is a reflection of these fits in the final results beyond just the fit/stat uncertainties. Plots of the results for this are shown below for the ele_jjj channel:
Normal (0 - 180): z_mass_ele_jjj_0_180.png | 50 - 180: z_mass_ele_jjj_50_180.png | 50 - 600: z_mass_ele_jjj_50_600.png
A minimal sketch of this fit-range variation follows below.
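The sketch below is an illustrative, toy-input version of re-deriving the Z(gamma) scale factor over different dilepton-mass fit ranges, as in the table above. The histogram contents and bin edges are hypothetical stand-ins for the actual ele_jjj inputs, and a simple least-squares fit replaces the full template fit.

import numpy as np

def fit_z_scale_factor(data, z_mc, other_mc, edges, lo, hi):
    """Least-squares scale factor for the Z template within [lo, hi] GeV,
    with the non-Z backgrounds held fixed."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = (centers >= lo) & (centers <= hi)
    resid = data[mask] - other_mc[mask]
    var = np.clip(data[mask], 1.0, None)  # Poisson-like variance per bin
    return np.sum(z_mc[mask] * resid / var) / np.sum(z_mc[mask] ** 2 / var)

# Toy inputs: a Gaussian Z peak on a flat non-Z background
edges = np.linspace(0., 180., 37)
centers = 0.5 * (edges[:-1] + edges[1:])
z_mc = 100. * np.exp(-0.5 * ((centers - 91.) / 5.) ** 2)
other_mc = np.full(36, 10.)
rng = np.random.default_rng(0)
data = rng.poisson(1.3 * z_mc + other_mc).astype(float)

for lo, hi in [(0., 180.), (50., 180.)]:
    sf = fit_z_scale_factor(data, z_mc, other_mc, edges, lo, hi)
    print("fit range %3.0f-%3.0f GeV: SF = %.2f" % (lo, hi, sf))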

  • Sec. 4.4.1, bottom of p. 21: how is the overall scale adjustment taken into account in the analysis? From Fig. 15 it seems to be a good 10% effect.
    As lines 357-360 and 376-378 explain, this scale adjustment is not actually applied to the final result. The goal of this section is to ask: if we were to adjust the photon purity with this scale factor, would the distribution of MET change noticeably? In isolating only the shape of MET in the final evaluation, the extra 100% systematic on background normalization would wash away this overall 10% effect, but would not wash away a change in the shape. You can also see this is a 10% effect from the scale factors in Table 16.

  • Tab. 15: the discrepancy between Fit and MC seems to be bigger in sigma_IetaIeta? Why not just use chHadIso? Or at least have a systematic that uses only one or the other?
    Back to the previous answer: neither is actually used in the final results, so a systematic reflecting the difference isn't warranted. As for the discrepancy in sigma_IetaIeta here, the tt+gamma cross section measurement also encountered this and treated it in the same way: as you say, both analyses handled this by just using chHadIso. The low sigma_IetaIeta is understood to come from an error in the shower evolution of photons in GEANT4.
    Lines 147-151 of the PAS indirectly touch on this question, because the 5% variation is from a very maximal case where you completely replace the MET shape from ttjets with tt+gamma's, or vice versa -- i.e., if you were to perform a template fit like chHadIso or sigma_IetaIeta and find a maximal disagreement, the effect on MET would just be 5% bin-by-bin variations.
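    As an illustration of this kind of "maximal" shape check, the sketch below normalizes two MET templates to unit area and compares them bin by bin; the template contents are invented for the example and are not the analysis histograms.

import numpy as np

# Illustrative MET templates (not the analysis histograms)
ttjets_met  = np.array([900., 400., 150., 45., 12., 3.])
ttgamma_met = np.array([850., 410., 160., 50., 14., 4.])

# Normalize each to unit area so only the shapes are compared
shape_a = ttjets_met / ttjets_met.sum()
shape_b = ttgamma_met / ttgamma_met.sum()

# Effect of completely swapping one shape for the other, bin by bin
rel_diff = (shape_b - shape_a) / shape_a
print(np.round(100. * rel_diff, 1), "% per bin")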

  • Tab. 17 & 18: There is a significant excess in the data compared to the total background prediction - in CR1 and if I take the background errors at face value, also in CR2. I assume this came up in the pre-approval. What was decided then?
    In the HN I noted these tables did not include the correct uncertainties, so in short this did not come up in pre-approval and nothing was decided. To further temper this issue, compare to Figure 29 to see that the event counts are well within uncertainties for most channels. Related to a previous question of yours, you can also look at the photon purity measurement in Section 4.4, which in simplified terms can be considered a normalization of the tt+gamma/jet rate to data: it is roughly a 10% effect, which is about the order of the differences you speak of in Tables 17-20. You also might consider the public CMS measurement of the tt+gamma cross section (Public Twiki, CDS), which was higher than predictions by about 30% for a similar (but not exactly the same) selection as this. Also, the uncertainty on the theoretical cross section of tt+gamma used here is 50%, and when all combined the theory systematics for ttbar-related rates alone are ~25%, well past the differences of which we're speaking. Lastly, the differences in CR1 are close to the systematic uncertainties therein (see Figure 16), and are used conservatively as an additional systematic in the signal regions -- ignoring the unfortunate presentation of uncertainties in the tables, the variations in all channels are fairly consistent with a tt+gamma rate that is slightly higher than predicted, an effect that in Section 4.4 we found to have minimal effect on the shape of the MET distribution.

  • Tab. 19 & 20: Same comment for SR1 and certainly for muon SR2. What conclusion did the discussion about this data excess come to during the pre-approval?
    See the previous answer for SR1. The table uncertainties seem to have been overlooked in pre-approval and it simply did not come up. As for the muon channel in SR2, this was briefly touched upon in pre-approval only as an interesting observation. As a shape-only comparison, however, this did not drive the limits since it was compatible neither with the high-MET signal nor with the other channels. The conclusion in pre-approval was that with higher statistics this might be good to explore, and with that CMS should be able to precisely measure the tt+gamma+gamma cross section and not rely on the shape-only provision. A significantly different (mu+jets):(ele+jets) ratio in tt+gg events would be exciting to see, but this dataset is not powerful enough to approach that, and with the overall method isolating the MET shape we feel it's best not to address this in the PAS.

  • Sec. 7, p. 44/45: why do you use all MET bins in your definition of your signal region? I thought the low MET bins were used for background normalization? Wouldn't it make sense to start the signal region at moderate MET, say > 50 GeV or so? From Fig. 29, the data-bg discrepancy seems to be at low MET. I think restricting the signal region to not include the low MET bins will also help in getting a better agreement between the data and bg predictions in Tab. 19&20. Was this discussed?
    The reason for including these background-dominated bins, especially in SR1, is to allow the limit-setting machinery to constrain these backgrounds (with the 100% log-uniform "float" parameter, this makes it basically a normalization) in the high MET bins. For SR2, removing the low MET bins could be very dangerous for this analysis because if you only have 1-3 bins, you lose most of the "shape" information and you just have a log-uniform free-floating +/- 100% estimate, giving you no sensitivity.
    As for "double-using" the low MET (< 50) SR1 region, recall from a previous question that the photon purity scale factor method is not applied to the final estimate. You can consider that method to be simply a check that if you were to change the composition of tt+jets and tt+gamma, would it indeed just be a normalization and not a big change in the MET shape? With that independant check giving a fairly flat 10% effect, you can just allow the limit-setting tool to fit the normalization for you using the log-uniform 100% float parameter and find that the post-fit value is very similar. Once again for Tables 19 and 20, if you include the correct uncertainties there is reasonable agreement and the discrepancy is of order 10% like all these effects. This was discussed in our group also in the context of avoiding "double-using" this low-MET region, and is why the photon purity scale factor is only a check.

Manfred Paulini on PAS v0:

  • use CMS convention of GeV for mass and momentum and remove all GeV/c^2
    Done.

  • pp collisions: use pp in roman and not italic
    Done.

  • do not use 'fake ...' or fakes and replace all with misidentified or similar
    All references replaced with "misidentified photon"

  • look up PubComm recommendations for use of hyphens in b quark, b jet but b-quark jet ... and correct all
    I went through the whole text and made many corrections governed by the PubComm hyphen rules.

  • I know we talked about this ... this is just a reminder about the plot beautification and CMS figure standards ...
    All plots have been recreated as closely as possible to the recommended style macros.
  • title: it is not good to have an abbreviation such as GMSB in the title. My suggestion: Search for natural supersymmetry in events with top quark pairs and photons in 8 TeV pp collision data (or: ... in pp collisions at sqrt(s) = 8 TeV)
    I agree, I believe the original title was a place-holder of sorts until the ARC began. It has been changed to your suggestion.

  • abstract: We need to add that we don't find an excess and set some limits. My suggestion for the abstract wording:
    We present a search for a natural gauge-mediated supersymmetry breaking scenario with the stop squark as the lightest squark and the gravitino as the lightest supersymmetric particle. The strong production of stop quark pairs and their decays would produce events with pairs of top quarks and neutralinos, with each decaying to photon and gravitino. This search is performed with the CMS experiment using pp collision data at sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 19.7 fb-1, in the electron + jets and muon + jets channel, requiring one or two photons in the final state. We compare the missing transverse energy of these events against the expected spectrum of standard model processes. No excess of events is observed beyond background predictions and the result of the search is interpreted in the context of a general model of gauge-mediated supersymmetry breaking deriving limits on the mass of stop quarks up to 750 GeV.
    I agree with the comment and have made the abstract very similar to your suggestion.

  • Fig. 1: Since this is not a real Feynman diagram where time arrows play a role and need to be correct, I would remove all arrows and just show lines
    Okay.

  • Fig. 1 caption: What is GGM? My suggestion for a less redundant caption:
    Feynman diagram of the GMSB scenario of interest. With stop quarks as the lightest squark, their pair-production would be the dominant production mechanism for SUSY in pp collisions at the LHC. Assuming a bino-like neutralino NLSP, each stop would decay to a top quark and a neutralino, with the neutralino decaying to a gravitino and a photon. Shown above is the electron+jets or muon+jets final state of the top pair decay.
    For the GGM comment I agree; however, on small points in your suggestion I would disagree. I feel it's best to keep the language of "lightest squark or gluino" versus just "lightest squark". The stop being much lighter than the gluino is important to the analysis, otherwise any allowed gluino production would be very close to that in the inclusive photon searches (i.e., no third-generation decays) we've published previously. "Squark or gluino" is a bit confusing, I accept, so if there are any recommendations how to clean this up while retaining the gluino caveat I'd be happy to change it.
    I also prefer the language of "top squark" over "stop quark" for clarity that it is not a quark. Somewhere else a comment was made that "stop squark" is redundant so I have edited those instances to be "top squark". The updated Figure 1 caption now reads:
    "Feynman diagram of the GMSB scenario of interest. With top squarks as the lightest squark or gluino, their pair production would be the dominant production mechanism for SUSY in pp collisions at the LHC. Assuming a bino-like neutralino NLSP, each stop would decay to a top quark and a neutralino, with the neutralino decaying primarily to a photon and gravitino. Shown above the the electron~+~jets or muon~+~jets final state of the top pair decay."

  • l 6: what is "a new little Hierarchy problem"? How does it differ from the known 'regular' hierarchy problem? Can you explain or give a reference?

Anthony Barker on AN v3:

Anthony Barker on PAS v0:

-- BrianFrancis - 2015-12-10

Topic attachments
Attachment                 | Size    | Date       | Who
preappHWresponses.pdf      | 423.6 K | 2015-12-11 | BrianFrancis
z_mass_ele_jjj_0_180.png   | 100.8 K | 2015-12-10 | BrianFrancis
z_mass_ele_jjj_50_180.png  |  96.4 K | 2015-12-11 | BrianFrancis
z_mass_ele_jjj_50_600.png  | 113.7 K | 2015-12-11 | BrianFrancis