Search for chargino and stop (2LOS) with Run2 data


Post Full Status Report Comments:

Follow-ups on impact plots - Email from Halil Saka (22/11/2022):

  • Regarding the explanation of the asymmetry on the WW shape uncertainties [...]. Looking at figure 9, I don’t really see sufficient motivation to enter these as one sided. All deviations are within statistical uncertainties in the tail, but I agree there is a hint of a systematic effect. If the trend is to be taken seriously, I think it might be better to correct for it first. If not, it might be better to conservatively assign a symmetric uncertainty to allow for the fact that data could have just as easily fluctuated the other way. I suggest to symmetrize these WW shape uncertainties and quantify the loss, if any, on the sensitivity (i.e. limits) using the most affected signal mass points/regions.
    • We are not sure we follow your argument. Data could have fluctuated the other way if the observed trend is indeed a fluctuation, but we shouldn't put a systematic uncertainty in place to cover statistical fluctuations in a CR. We are just worried about possible systematic effects. All deviations are within statistical uncertainties in the tail (so we do not correct for them), but the opposite deviations (the ones covered by symmetrised uncertainties) would be two sigma away from the observed data/MC ratio, so it seems less likely that there is a “down” systematic effect rather than an “up” one.
    • We symmetrized the WW shape uncertainties and quantified the loss by computing the exclusion regions for the TChipmSlepSnu and T2tt models. Results with symmetrised (not symmetrised) uncertainties are shown by the black (red) curve. As one would expect, the effect is larger for signal phase space with a large mass splitting between the prompt SUSY particle and the LSP. For the TChipmSlepSnu model, we observe a worsening of about 20 GeV of the blinded exclusion boundary at high mass. For the T2tt model, it’s around 2-3 GeV.
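    • A minimal sketch of the symmetrization we applied (not the analysis code; bin contents are hypothetical): the mirrored “down” template is built from the one-sided “up” variation so that the two are symmetric around the nominal in log space.

```python
# Symmetrize a one-sided shape variation by mirroring it around the
# nominal template: down_i = nominal_i^2 / up_i (symmetric in log space).
# The bin contents below are illustrative, not taken from the analysis.

def symmetrize(nominal, up):
    """Build the mirrored 'down' template from a one-sided 'up' variation."""
    return [n * n / u for n, u in zip(nominal, up)]

nominal = [120.0, 45.0, 12.0, 3.0]   # hypothetical mT2 bin yields
up      = [120.0, 54.0, 16.8, 4.5]   # +20%/+40%/+50% in the tail bins
down    = symmetrize(nominal, up)

for n, u, d in zip(nominal, up, down):
    # up/nominal and nominal/down agree by construction
    assert abs(u / n - n / d) < 1e-9
```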

  • Regarding "jet energy correction/resolution/unclustered" uncertainties being asymmetric, I think we should see plots of “up/nominal” and "down/nominal” ratio distributions with the statistical uncertainty on the nominal shown as a band around “1”. This will reveal a) how systematic are the shifts wrt the nominal distribution, and b) how significant they are wrt the statistical uncertainties. Statistical spikes in such up/down variations may indeed cause asymmetric pulls later on, and you seem to hint at such effects here as well. These typically need to be taken care of manually, case by case, by smoothing/averaging or putting a cap on the maximum value etc.
    • We remade the plots adding a ratio canvas where the systematic “up” (“down”) variation versus the nominal shape is shown as a red (blue) band around 1, and the statistical uncertainty is also shown as a gray band (link).
    • For the unclustered energy, we observe an asymmetry in up and down variations above the statistical uncertainty and consistent across the search regions and data-taking years. An example is given by events with 160<pTmiss<220 GeV and tagged jets in 2016HIPM, 2016noHIPM, 2017 and 2018 data.
    • We studied the pTmiss distributions in inclusive two-lepton events to check whether the asymmetry in the unclustered energy variations is an artifact of our selection. We do see an asymmetry in different flavor events that is not present for JES and JER variations. Also, in same flavor events, the unclustered energy variations become more symmetric at low pTmiss, where the Drell-Yan process dominates. It seems therefore that the observed asymmetry is a real property of unclustered energy variations in events with large hadronic activity.
    • As for the JES and JER variations, they are generally symmetric where the statistical uncertainties are small (example), while fluctuations appear where the variations become smaller than the statistical uncertainties (example). While one could argue that the impact of such fluctuations is shadowed by the impact of statistical uncertainties, we are of course open to discussing with you the appropriate way to treat such cases.
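    • The diagnostic suggested above, and the cap on spiky variations, can be sketched as follows (all numbers hypothetical; the real inputs are the histograms linked above):

```python
# Sketch of the up/nominal, down/nominal diagnostic and of a cap on spiky
# variations.  The yields, the toy stat. uncertainty (sqrt(N)), and the
# max_shift value are illustrative, not the analysis settings.
import math

nominal  = [200.0, 80.0, 20.0, 4.0]
stat_err = [math.sqrt(n) for n in nominal]   # toy statistical uncertainty
up       = [202.0, 83.0, 21.0, 9.0]          # last bin: a statistical spike
down     = [198.0, 77.0, 19.5, 3.8]

ratio_up   = [u / n for u, n in zip(up, nominal)]
ratio_down = [d / n for d, n in zip(down, nominal)]
stat_band  = [e / n for e, n in zip(stat_err, nominal)]  # band around 1

def cap(ratio, max_shift=0.5):
    """Clip a ratio to 1 +/- max_shift to tame statistical spikes."""
    return [min(max(r, 1.0 - max_shift), 1.0 + max_shift) for r in ratio]

capped = cap(ratio_up)   # the 9/4 = 2.25 spike is clipped to 1.5
```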

  • Thanks for compiling them, but I found the systematics plots quite hard to review. Maybe using a style like above (or something along these lines) would facilitate this (where we can actually see up/down shifts wrt nominal, and in comparison to stat uncertainties, for the "WW shape" and "jet energy correction/resolution/unclustered" uncertainties).
    • We remade all the plots using the suggested style (link).

Comments on impact plots/datacards - Email from Nadja Strobbe (22/11/2022):

  • Thank you for these extra plots. There are several nuisances that have asymmetric impacts, some that seem fully one-sided, and some where one side is larger than the other side (e.g. 3rd nuisance in the stop plots). Do you understand why this is the case?
    • We have three types of systematic uncertainties that can have an asymmetric impact.
      • Systematic uncertainties that are defined as one-sided. This is the case for the top-pT reweighting and for the mT2 shape systematics derived from the CRs. This is the case, for instance, for the 1st nuisance in the chargino impact plot, which corresponds to the uncertainty on the mT2 tail modeling derived from ANv8 Figure 9 (“For the backgrounds with a kinematic endpoint at the W boson mass, a one-sided uncertainty is set corresponding to 20% in the 100-160 GeV range, 40% in the 160-240 GeV range, and 50% above 240 GeV”). The variation of one of the shapes entering the data cards is exemplified here for 2018 e-mu events with pTmiss>380 GeV and no b-tagged jets.
      • MC statistical uncertainties. These are treated in the ML fit through the automatic statistical uncertainties method (autoMCstats) of the combine tool. This is the case, for instance, for the 3rd nuisance in the stop impact plot you mentioned, which corresponds to mT2 bin 5 in 2016 e-mu events with pTmiss>380 GeV and no b-tagged jets. In this bin, we have an expected SM yield of 1.2 events from 14 “effective” events, which gives an asymmetric Poissonian error.
      • Uncertainties from jet energy correction/resolution/unclustered which modify the values of pTmiss, mT2, and (b)jet multiplicity (via the jet pT>20 GeV cut). The shape variations induced by these uncertainties can have an “intrinsic” asymmetry (due to the exponentially falling shape of the pTmiss and jet pT distributions) and a (possibly small) “statistical” one (in bins with very high pTmiss and mT2 values). One should also note that the effect of varying these nuisances on the pTmiss and (even more) on the mT2 is not trivially dependent on the event topology.
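      • The asymmetry of the MC-statistics error can be sketched numerically: a yield of 1.2 events built from 14 effective entries behaves like a Poisson count of 14 scaled by a per-event weight of 1.2/14, and the 68% likelihood interval for a Poisson count is asymmetric. The scan below is only an illustration of the effect, not the combine implementation:

```python
# Likelihood-based (Garwood-style) 68% interval for a Poisson count n,
# found by scanning where -2 ln L rises by 1 from its minimum at lambda = n.
import math

def poisson_interval(n, delta_nll=1.0):
    """Return (lo, hi) where -2 ln L(lambda) rises by delta_nll from 0."""
    f = lambda lam: 2.0 * (lam - n + n * math.log(n / lam))
    def bisect(a, b):
        for _ in range(100):
            m = 0.5 * (a + b)
            if (f(m) - delta_nll) * (f(a) - delta_nll) < 0:
                b = m
            else:
                a = m
        return 0.5 * (a + b)
    return bisect(1e-6, n), bisect(n, 10.0 * n)

n_eff = 14                    # effective MC entries in the bin
w = 1.2 / n_eff               # per-event weight implied by the 1.2 yield
lo, hi = poisson_interval(n_eff)
err_down, err_up = (n_eff - lo) * w, (hi - n_eff) * w
# The upward error comes out larger than the downward one.
```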

  • It is also not entirely clear to us exactly how the systematic uncertainties are implemented (e.g. whether some are one-sided). To address this, could you include in the AN plots of all your systematic shapes that are put into the data cards, along with a description of how the correlation is treated?
    • Plots of all systematic shapes entering the data cards for stop and chargino number about 3000, which we feel is a bit too much for an AN :). We uploaded them all to this web page, where it’s also easier to browse them. On that page there is a directory for the chargino search regions and a directory for the stop search regions. Each of them contains one directory per data-taking year. These, in turn, contain a directory for each systematic, with a plot for each search region. Each plot shows the expected background yields with two systematic band variations: the red one corresponds to the shape variation induced by the up variation of the nuisance parameter, the blue one to the down variation. Both linear and log y-axis plots are given.
    • We propose to add in the AN a sentence along these lines: “In the ML fit, correlations of each systematic uncertainty source across search regions and data-taking years are taken into account as follows: uncertainties of a theoretical nature (pp inelastic cross section, renormalization and factorization scales, PDFs) are treated as correlated across all data-taking years and search regions; uncertainties related to the detector performance (trigger efficiencies, object reconstruction efficiencies, jet energy scale and resolutions, etc.) are treated as uncorrelated across data-taking years (except for specific POG recommendations, such as luminosity and b-tagging efficiencies, where the uncertainties are broken down into correlated and uncorrelated contributions); uncertainties on the modeling of the mT2 observable at high pTmiss (from the CR studies of Section 5) are treated as uncorrelated across data-taking years and pTmiss bins”.
    • It is possible to understand how the correlation of a particular nuisance is treated from the impact plots and the data cards: if the name of the nuisance includes the year (search region name), then that nuisance is uncorrelated across data-taking years (search regions); otherwise it is correlated.

  • It would also be helpful to send over your data cards so we can have a look.
    • We uploaded the data cards here, split in two folders:
      • A combined folder where a combined data card for the full Run2 is available for the chargino and stop search regions.
      • A per year folder where, for both chargino and stop, we share the individual data cards, split per data-taking year and search region.

Comments posed during Full Status Report ( Link to the slides in Indico):

  • Please update the impact plots while letting the signal strength take negative values:
    • The new impact plots are shown for the chargino and top squark. We also repeated the procedure using a signal injection of 15 for the chargino and top squark, in order to ensure that the fit is also well behaved under this scenario.

Comments to AN-19-256 v7:

The content in the AN-19-256 referred to in the following answers has been implemented in v8.

Email from Halil Saka (24/10/2022):

  • Please update your lumi values with the latest calibrations from Lumi POG, see https://twiki.cern.ch/twiki/bin/view/CMS/LumiRecommendationsRun2 (-> 138/fb).
    • We updated the lumi values quoted in the abstract, introduction and summary. The values used in the analysis are up-to-date, except for the results presented in section 8 (exclusion plots), which will be updated once we have the UL signal samples.

  • Figures: Throughout the AN, please clarify in figure captions whether the plots are prefit or postfit (and if postfit, what other plots are being simultaneously fit).
    • All plots in the AN show data from CRs and are pre-fit (apart from the studies in Appendix F, where we explicitly say that the plots are post-fit). We added a statement at the end of the first paragraph in Section 5 to clarify that all plots there are pre-fit.

  • SR MT2 bins: How are these bin boundaries decided? Are they driven by a specific criterion? Please clarify.
    • The optimization of the pTmiss and mT2 boundaries in the SRs is described in Section 4.1.
    • We further merge the last mT2 bins, depending on the pTmiss bin, for the chargino SRs in order to avoid mT2 bins with no meaningful content: an mT2 bin is kept for a given pTmiss bin only if at least one subregion has more than 1 expected SM event or a significantly large signal contribution.
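    • The pruning criterion can be sketched as follows (toy yields; the signal threshold is illustrative, not the value used in the analysis):

```python
# Sketch of the mT2 bin-pruning criterion: an mT2 bin is kept for a given
# pTmiss bin only if at least one subregion has more than 1 expected SM
# event or a sizeable signal contribution.  The sig_min threshold below is
# a placeholder, not the analysis value.

def keep_mt2_bin(sm_yields, sig_yields, sm_min=1.0, sig_min=0.5):
    """sm_yields / sig_yields: expected yields per subregion in this mT2 bin."""
    return any(sm > sm_min or sig > sig_min
               for sm, sig in zip(sm_yields, sig_yields))

# Toy example: a populated bin is kept, an empty one is merged away.
assert keep_mt2_bin([4.2, 0.8], [0.1, 0.0]) is True
assert keep_mt2_bin([0.3, 0.2], [0.05, 0.1]) is False
```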

  • L160-175: How is the selection used for such studies different from the signal phase space targeted here?
    • The selections used for such studies are inclusive (Z->ll events, or ttbar->emu+jets events), as they are designed to improve the modeling of ISR in generated events for EWK (Z events) and strong (ttbar events) production processes. What is crucial here is that the weights are applied to the signal samples prior to any selection, and a normalization factor is computed such that the inclusive cross section is not affected.

  • L282, Table1: is the third lepton veto pt cut 10 GeV or 15 GeV ?
    • It is indeed 10 GeV. We corrected the value quoted in Table 1 in the new version of the AN (v8).

  • L224-228: Could you elaborate on this ad-hoc method mitigating the FASTSIM MET resolution issue? This appears as one of the larger uncertainties on the signal models (tables 9, 10). Are these driven by small signal MC statistics?
    • This is a special treatment recommended by the SUS PAG for EOY FastSim samples, based on FastSim resolution studies in 80X. We plan to review it with UL samples, when available.
    • For the T2tt model (Table 10), there might certainly be an overall contribution from MC statistics, which is not very rich in pre-UL signal T2tt samples in the phase space we are investigating. UL T2tt samples will have 10 times more statistics in this mass range.
    • For the chargino model (Table 9), the uncertainties get a bit large at low pTmiss. This is because the mass point shown in the table has a large mass splitting (600 GeV) so that signal events populate more the large pTmiss SRs.

  • FIG 6-7-8: Please clarify the uncertainties shown in the plots. Are they statistical only?
    • No, they include both statistical and systematic uncertainties, as clarified in the caption of Figure 6. Of course, the systematics derived in this section (mT2(ll) tails) and the ones described by rate parameters are not included.

  • L381: How does the cross-section applied to your WW MC compare to the theoretical cross-section?
    • We are using the theoretical cross-section to normalize the WW MC throughout all the AN. The sentence “have an enhanced contribution from WW production” just refers to the fact that by vetoing events with b-tagged jets we increase the relative contribution of WW events in the CRs shown in Figures 7 and 8. We rephrased the sentence to “which have a larger contribution from WW production than the CR in Fig. 6”.

  • L394: Which lepton? Do you pick one of the Z leptons randomly? Please clarify.
    • Out of the two leptons coming from the Z boson, we pick the one that has the same charge as the lepton coming from the W boson.

  • L413: Why do you require 1 bjet for studying non-prompt leptons? Isn't the same-sign requirement enough to create a CR enriched in such leptons?
    • We chose to require events with 1 bjet because we observed that ttbar events were the main source of nonprompt leptons. We updated our estimate of nonprompt leptons in Section 5.1.2 by removing the bjet requirement, and setting a systematic uncertainty from the observed dependence on various observables used to define the SRs.

  • Table 6: Indeed the nonprompt background contributions are subdominant, but the uncertainties on nonprompt MC background are a bit on the low side (<10%). Analyses that study fake rates in detail typically achieve a 20-30% precision at best. It would be better to study the MC modeling of nonprompt backgrounds as a function of MET, Njet, NBjet, and the leading jet pt (for the ISR jet) variables and assign an appropriate non-statistical uncertainty.
    • We updated our estimate of nonprompt leptons in Section 5.1.2 by removing the b-tagged jet requirement, and setting a systematic uncertainty from the observed dependence on various observables used to define the SRs.

  • L413: What is the motivation behind requiring a bjet in the SC CR (since there can be an interplay between modeling of bjets and fake leptons)? You have SRs with no bjets as well, so it would be good to check nonprompts in more comparable regions.
    • We chose to require events with 1 bjet because we observed that ttbar events were the main source of nonprompt leptons (indeed, because of the interplay between bjets and nonprompt leptons). We updated our estimate of nonprompt leptons in Section 5.1.2 by removing the bjet requirement, and setting a systematic uncertainty from the observed dependence on various observables used to define the SRs.

  • L441: There are some systematic issues in the Njet modeling it seems. How do you account for this? Is WZ allowed to float freely in each SR "Njet/Nbjet/MET" region?
    • The WZ normalization in the ML fit is described by rate parameters that are constrained in CRs constructed as in Section 5.2.1 and binned in MET and Njet.

  • L506: You mention the discrepancy due to DY MC in EOY reco as an issue that got fixed in the UL reco (which is great). What is the origin of the large error bars at low MT2 values on the left hand side plot in Fig.21? Are these entirely driven by the EE-issue on MET?
    • The large uncertainties at low MT2 values in Fig. 21 left are due to the fact that when analyzing the EOY samples we were taking the whole difference in MC yields when applying/not-applying the jet energy smearing as a JER uncertainty. This was motivated by the fact that jet energy smearing factors for EOY production were considered preliminary (plus of course the effect of EE noise in 2017). For the UL production, we are now just taking the effect of scaling the jet energy smearing factors by their own uncertainty as a JER systematic.

  • L547: How is this 10% motivated?
    • We are using a common rate parameter to describe the normalization of ttbar and tW events in the ML fit. If that were all, the relative normalization of ttbar and tW would be overconstrained to MC expectations. We therefore introduce a normalization uncertainty for tW production (encompassing theoretical production uncertainties for ttbar and tW production) to allow a degree of freedom in the relative normalization of these two processes.

  • L559: Do you assign any systematic uncertainties on the fake rates? Please see comment above on Table:6.
    • We were assigning a systematic based on the observed difference between lepton pairs with both positive or both negative charges. We updated our estimate of nonprompt leptons in Section 5.1.2 by removing the bjet requirement, and setting a systematic uncertainty from the observed dependence on various observables used to define the SRs.

  • L562: Top pt weights: How is the region where these weights are derived orthogonal to the SRs? Please clarify.
    • These weights are derived in ttbar events in the single-lepton and dilepton channels, with an inclusive selection including at least a b-tagged jet. Our SRs with no bjets are orthogonal to these regions, while the ones with bjets are subregions with large pTmiss. We follow the TOP PAG recommendations and use these weights just to set an uncertainty.

  • L590: What is the ETA on the UL signal samples?
    • We are monitoring the UL signal production on this web page. Most of the samples are complete and we are post-processing them, but the FastSim correction factors are still missing. We will discuss this in more detail at the Full Status Report.

  • Fig 22-26: Please remake these plots lowering the ymin value such that expected backgrounds in all bins is visible. At the moment you seem to have bins with no backgrounds in full Run2. What are the expected background yields in these bins?
    • We updated the plots, lowering the ymin from 1 to 0.1 when needed. The expected background yields in the bins not visible in AN v7 range between 0.4 and 0.9 events.

  • Appendix F: Thanks for these studies. Can you also provide Asimov pulls and impacts for the SRs to show that their background fit is also well behaved? This can come at the full status report.
    • We added the required studies in Appendix H. We uploaded here the plots for chargino and top squark SRs.

Comments to AN-19-256 v6:

The content in the AN-19-256 referred to in the following answers has been implemented in v7.

Email from Valentina Dutta (31/08/2022):

  • L251-2: Do you have any plots of the relevant observables for the analysis, e.g. MET, mT2, before and after this selection in a suitable CR?

  • Fig. 4: What selection is used here?
    • We studied events with at least one jet with pseudorapidity in the noise region (2.650<|eta|<3.139), separating them in events with or without b-tagged jets. No difference in the noise behavior was observed in the two categories. In figure 4, we are showing the distributions for events with at least a b-tagged jet. Equivalent plots for events without b-tagged jets are given in the following links for jet raw pT, DeltaPhi(jet,pTmiss) and HT forward.

  • L297: Just curious about the choice of 2.5, since 2.4 is the tracker limit.
    • The Phase-1 upgrade of the tracker extended its coverage to |eta| = 2.5.

  • L375-6: Did you check explicitly how low the signal contamination is for a representative selection of signals?
    • The signal contamination in the region 100<mT2(ll)<140 GeV is typically below 1% for the mass points close to our exclusion region boundary. For lower prompt masses and large mass splitting, where a high sensitivity is given by large cross sections and the mT2(ll) shape details are less relevant, the contamination in the tails can reach 5-10%.

  • L382-3: Are you in contact with the team that reported the excess to compare notes?
    • We discussed this topic extensively with people from our institutions that are amongst the H->WW authors. They do confirm they see an analogous behavior, but a quantitative comparison is tricky because of different selection and techniques used by the two analyses.

  • L401-2: Is any correction or uncertainty applied for any observed deviations?
    • We hesitated to introduce a systematic based on the statistical significance of a test. In any case, since we observe a 1 sigma excess in two consecutive mT2(ll) bins between 160 and 370 GeV, we propose to add a shape uncertainty based on the observed data/MC difference and the statistical uncertainties in the mT2(ll) bins. We tentatively set the shape uncertainty to 20% in the 100-160 GeV bin, 40% in the 160-240 GeV bin, and 50% in the last two bins. We found that the degradation of the blinded exclusion limits for chargino and stop production is small. We did not yet propagate these uncertainties to the results presented in Section 7 and Appendix G. We prefer first to iterate with you on the details of this uncertainty.

  • L416-9: It’s not clear to me what this means. First, since it is a single number that you are deriving, what is being fit/what is the purpose of it? Secondly, what is the selection you use for opposite charge events? Wouldn’t this overlap with your SR?
    • Indeed, this line is obsolete: what is actually done is simply computing the nonprompt scale factors from the yields of same-sign events with pTmiss>160 GeV. We corrected the text in the AN.
    • We do not understand the last question about opposite charge events. We are using same-sign events only in this paragraph.

  • Section 5.2: If I understand correctly, the normalization of the sub-leading backgrounds are constrained in the fit by including the CRs in pTmiss and jet multiplicity bins, but MC is used for the modeling of mT2(ll). Is that correct? Is the full SR binning in terms of pTmiss and jet multiplicity used? Can you comment a bit more about how exactly this is set up in the fit wrt correlations between bins etc.?
    • This is correct, the normalization of the sub-leading backgrounds is constrained by including in the fit CRs with the same ptmiss and jet multiplicity binning as the SRs, where relevant (ttZ CR is defined selecting events with a b-tagged jet, so there is not a no-jet CR for this process).
    • The constraints are introduced in the fit through a rate parameter for each CR, normalizing the corresponding background process in that CR and in the SR with the same pTmiss and jet multiplicity. Correlations between the background estimates across the bins are taken into account through the same systematic uncertainties considered in the SRs (Section 6).
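    • As an illustration only, the rate-parameter setup can be sketched as a combine datacard fragment (the bin and parameter names below are hypothetical, not the ones used in our cards); attaching the same rateParam to the CR and to the matching SR ties the WZ normalization in the two regions together, so the CR data constrain it in the fit:

```
WZ_norm_met160_1j  rateParam  CR_WZ_met160_1j  WZ  1.0  [0.0,5.0]
WZ_norm_met160_1j  rateParam  SR_met160_1j     WZ  1.0  [0.0,5.0]
```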

  • Fig. 14: There appears to be considerable disagreement in the last bin of the middle plot. Is this taken into account in the fit somehow? That wasn’t clear to me.
    • The disagreement affects the WZ prediction in the last mT2(ll) bins, which becomes relevant only in the last ptmiss bin for the chargino search, where the WZ background accounts for about 25% of the expected SM yields. Similarly as for the comments at line 401-402, we propose to add a systematic in this bin of the size of the discrepancy. The effect on the blinded exclusion region for the chargino pair production is shown here.

  • Is the large discrepancy for ttZ at low MT2 a reason for concern, and if so how is it addressed?
    • It’s not a concern: at low mT2(ll), the ttZ contribution is less than 1% with respect to the one from ttbar. It’s only at high mT2(ll), where the ttbar background is suppressed by the W-mass endpoint, that the ttZ production can overcome the cross section penalty.

  • Fig. 21: Wrong caption?
    • Yes, the caption in Fig. 21 is wrong, thanks for pointing this out. What is shown in Fig. 21 is not the MT2 distributions for 2016, 2017 and 2018 DY CRs, but a comparison of the MT2 distributions for the 2017 DY CR in pre-UL and UL samples. This is meant to highlight the improvement of the data/MC agreement given by the fixing of the EE noise issue. We fixed the caption in AN v7.

  • L549: Is there any justification for the choice of 50%?
    • Not a deep one. We use this value for minor backgrounds, as they are not a concern for the final result.

  • Table 11-94(!!!): Please find a way of condensing this information, both in a (much!) smaller set of tables, as well as in plots. There appear to be many cases where the expected signal yields are 0 for the signal models shown in the last 1 or in some cases 2 mT2(ll) bins. Have you checked if you can simply merge some of these bins? Also, in some cases the total background prediction is also 0. How well are you able to constrain this? Put another way, are you not simply running out of MC stats for the background prediction in the tails?
    • It’s not easy to condense the tables without losing details of the actual inputs entering the final ML fit. We moved the tables to a new appendix G, and made plots to condense the results into a few pages in Section 7. We merged signal regions from different years in the plots to further condense the information, leaving the reader who wants more details the possibility to find them in appendix G.
    • We were keeping the mT2(ll) binning constant across the MET bins for simplicity. In AN2019_256_v7, we merged the last three mT2(ll) bins in the first MET bin, and the last two mT2(ll) bins in the second and third MET bins (no merging in the fourth MET bin). This also avoids having bins with total background prediction equal to zero. The effect on the blind exclusion region for the chargino pair production is shown here.

Comments to AN-19-256 v5:

The content in the AN-19-256 referred to in the following answers has been implemented in v6.

Email from Tova Holmes (12/07/2022):

  • L211: Is the same removal applied to b-jets near leptons, or is this handled differently?
    • The same removal is applied to all jets, regardless of their flavor. This is a standard requirement used to avoid counting isolated leptons as jets, as the PF algorithm would use their track/calorimetric deposit in the clustering algorithm.
  • L226: Can you provide more detail on this? Why is averaging between reco and gen a reasonable choice? Does the choice of uncertainty applied based on this cover the effect?
    • This is a special treatment recommended by the SUS PAG for EOY FastSim samples, based on FastSim resolution studies in 80X. We plan to review it with UL samples, when available.
  • Fig. 8: It looks like the worst agreement here right around the kinematic endpoint, but perhaps that is just where WW is most dominant. Would it be possible to show this agreement with ttbar subtracted to see if the trend looks ~flat for WW assuming ttbar is correctly modeled?
    • We prepared some studies along the line you suggested. In this document, we have three slides for each data taking year. In the first slide, the left plot compares the mT2(ll) distribution of the excess of events (Data-SM expectation) with the one expected for the WW background. The right plot shows the original distributions (all data and SM backgrounds), but we added in red squares the SM expectation that we obtain by scaling the WW predictions to match the data. This allows for an immediate comparison of the observed excess with the one we would have if the “it’s all WW” hypothesis were true. The second slide shows the original plot (left) and the one we obtain by applying a scale factor to the WW background to match the excess (right). The third slide shows the same as the second, but for the azimuthal distance of the two leptons. We chose this variable since its distribution for WW events is flatter than those of the other backgrounds, so that the WW relative contribution is higher at low azimuthal distance than at large distance (the caveat being that there is of course some correlation with mT2(ll)).
  • L390: In general here (and in later sections) you are selecting well-measured leptons (correctly reconstructing the Z-peak) and taking this as a proxy for high-MET regions. This is fine for real MET, but doesn’t capture additional MET due to mismeasurement from jets. Do you have any verification that this is sub-dominant and/or well modeled in MET tails?
    • We are not sure we understand your concern. These CRs were designed mainly to check that jet measurements were not spoiling the background modeling at the high MET values we select for the SR. Selecting well-measured leptons should not prevent us from capturing the additional MET due to mismeasurement from jets that might enter our CRs.
    • Let’s consider Section 5.1.1. Here, your concern seems to be that the selected events in our CR only contain real MET. We do not think it is the case: an event with two W bosons (or a W boson plus a Z boson with a lepton treated as neutrino) would hardly reach MET>160 GeV just because of the real MET due to neutrinos, unless the diboson system recoils against a hadronic system. The only assumption we make is that the jet (mis)measurement of this hadronic system is independent from the details of lepton selection in WW and WZ events.
    • As for the CRs described in Sections 5.2.1 (WZ background), 5.2.2 (ZZ) and 5.2.3 (ttZ), similar considerations hold. So for instance, the only assumption for the ZZ CR is that the jet mismeasurement in ZZ->4L events (however small or large it might be) be a good proxy for the jet mismeasurement in ZZ->2L2Nu events (once the neutrinos’ contribution to the MET has been taken into account before the MET>160 GeV selection).
  • L418: What makes comparing the positive/negative charge results representative of systematic uncertainty? Are there reconstruction effects that the different signs are capturing, or is this really just picking up statistical uncertainty? Is there a more significant dependence on e.g. lepton pT that could be studied and used here instead?
    • Honestly, we do not have a proper motivation for that. The positive/negative difference was suggested as an uncertainty during the review of the 2016 analysis. Initially, we were using the whole size of the correction as a systematic.
    • We did not make very detailed studies of this systematic, as the overall effect of this scale factor is small, as we showed in the second answer to a previous set of comments.
  • Table 6: Where is this SF applied? Ttbar only? Something else?
    • It is applied to all MC samples, but only to events with a nonprompt lepton. We added the following sentence to clarify this in Section 5.1.2: “These scale factors are applied as an additional weight for events with a nonprompt lepton in all simulated samples.”
  • Fig 11, 17, 19: Could you show before plots here as well? Especially where the stats are too poor to get a good sense of the overall agreement, it would be helpful to see if this process is improving agreement.
    • A comparison between those plots before and after the corrections has been attached here.
  • L486: Why was the 10 GeV Z window chosen here as opposed to the 15 used elsewhere?
    • We messed up a little bit here. At an earlier stage of the analysis, we were using a mass window of 10 GeV to define the CRs with a Z boson. Then we decided to enlarge the window to 15 GeV to gain statistics, as we saw that the purity in the selected CRs remained very high. We checked what we are doing in Section 5.2.3 and indeed we are using a window of 15 GeV in Fig. 18 but a window of 10 GeV in Fig. 19. We replaced Fig. 19 in the AN v6 by using a 15 GeV window. The change makes hardly any difference within the available statistics, as shown here.
  • L499: Is the region defined here included in the fit?
    • No, we are not including this region in the fit: the Drell-Yan contribution is only around 20-30% so we would not get a good constraint.
  • L560: Do you have documentation of what the ttbar pT spectrum looks like with your selection and the size of this uncertainty? In general in this section, could you give uncertainty size for all entries?
    • The spectrum of top pT in ttbar events with MET>160 GeV is shown here. The size of the uncertainty in the yields of ttbar events in the SRs ranges between 3 and 6%.
    • We added a few tables in AN v6, showing the size of the systematic uncertainties in the predicted yields of our SRs. For your convenience, we uploaded them here.
  • Section 7: In general, are the normalizations from section 5.2 all applied here? And will you ultimately make plots in addition to all these tables?
    • No, the normalizations from the CRs in Section 5.2 are not applied when computing the yields shown in the tables of Section 7: we include those CRs in the fit instead of applying scale factors to our predictions for the WZ, ZZ, and ttZ backgrounds, so that the scale factors are effectively applied dynamically by the fit itself. We might instead show the yields with scale factors applied, but the idea here is to show the pre-fit yields of the backgrounds in the SRs.
    • We definitely plan to make plots in addition to the tables once the SRs are unblinded, as plots are very effective at giving an overall idea of the agreement between observed and expected events. We think that tables are more helpful for a quantitative comparison of the expected yields for background and signal processes.
  • Table 7 (etc): For entries where the SM process is 0 +/- (>0), what is going on here?
    • In general, it is possible to have cases like that because some of the systematics (JES, JER, unclustered energy) modify the values of the variables (essentially the MET) on which the SR selection is applied, so an event that does not make it into a SR can enter it when, for instance, the JES is scaled up.
    • We agree that extreme cases like 0 +/- 2 as observed for the total SM processes in EOY production look suspicious. Looking at AN v3, we see that the culprit (here and in other Tables) was the EOY DY background, so we suspect that this was due to an event in the lower HT bin dataset (with very high cross section) that accidentally made it to the SR when varying the JES.
    • The UL samples seem to behave better in this respect. However, we did find a couple of events that migrate from MET~160 GeV to ~20 TeV due to JES (see Table 29). This is due to jets with |eta|>5.4 (apparently the last eta value covered by the JEC uncertainty bins). Since these events do not enter our (central) selection, we added a cut MET(jes)-MET(central)<10000 to avoid them.
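A minimal sketch of the sanity cut described above, dropping events whose JES-shifted MET migrates absurdly far from the nominal MET (the MET~160 GeV to ~20 TeV cases). The event format and function names are illustrative, not the actual analysis code:

```python
# Hypothetical sketch: reject events where the JES-varied MET differs from the
# central MET by more than 10000 (GeV), as in the cut MET(jes)-MET(central)<10000.
MAX_MET_SHIFT = 10000.0  # GeV

def passes_jes_sanity(met_central, met_jes):
    """True unless the JES variation moves the MET pathologically far."""
    return (met_jes - met_central) < MAX_MET_SHIFT

# Toy events: the second one mimics a MET ~160 GeV -> ~20 TeV migration.
events = [{"met": 165.0, "met_jesUp": 180.0},
          {"met": 160.0, "met_jesUp": 20000.0}]
clean = [e for e in events if passes_jes_sanity(e["met"], e["met_jesUp"])]
```

Since the pathological events never pass the central selection, such a cut only removes spurious entries from the JES-varied templates.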

Comments to AN-19-256 v2:

The content in the AN-19-256 referred to in the following answers has been implemented in v3.

Email from Jae Hyeok Yoo (25/01/2021):

3 Physics objects event reconstruction

  • L 201: pt>20 GeV. Have you checked we have no PileUp jets with this. Has PileUp JetID been applied? This is most relevant for low ptmiss bins.
    • We are applying loose PU jet ID (we added the information in the AN v3). We basically only use events with ptmiss>100 GeV.
  • L252-253: Are these numbers only for 2018 or all years combined?
    • Those numbers are only for 2018. We clarified this in the AN v3.

4 Search strategy

  • L270-272: Why is the veto selection so high in pt. Any 3l bkg leaking in will cause problems. So maybe it is better to use a lower pt cut on the veto lepton?
    • Indeed, we select veto leptons by requiring pT>10 GeV, not pT>15 GeV; we corrected this in the AN v3. The pT>15 GeV requirement is the one we apply to the analysis (“tight”) leptons when we pre-select them while producing our trees, hence the typo in the AN.
  • Table 1: Here you are using MET>140 GeV while MET>160 GeV is used for CR definitions. 140 GeV is typo?
    • Table 1 shows our baseline selection before the optimisation of the search regions (which is described in Section 4.1). This baseline selection is essentially what was used in the 2016 paper. In Section 4.1 we describe the further studies done to optimise the search regions for the legacy analysis, which led to the choice of MET>160 GeV. We tried to make this point clearer in the AN v3.
  • L316: how is average calculated, weighted by cross section or assuming the same cross section?
    • We assume the same cross section, in order not to bias the optimisation towards lower mass regions.
  • L 322: I assume you split this non-interesting region in so many bins to help your fit. It would be useful to show this with a fit plot.
    • Yes, as the top and WW backgrounds are normalised in the low MT2 region in the fit itself, we split this region into more bins to take advantage of the different MT2 shapes among the backgrounds. This might also help to constrain the nuisances for shape systematics.
    • We are not sure we can provide a fit plot at this stage, as the analysis is still blinded. We tested the effect of merging the low MT2 bins on the expected chargino exclusion limits for the SUS-17-010 paper, and found some improvement at low mass splitting.

5 Background estimation

  • You check the modeling of MT2 in regions inclusive in Njets. Have you checked the modeling for Njets=0 and >=1 separately?
    • We added this check in AN v3. Instead of Figure 7, showing the MT2 distributions in events with no b-tagged jets, we put two figures where those events are split into Njets=0 and Njets>=1.
  • L358: You are saying that you test ttbar, tW, and WW backgrounds, but the contribution from tW is still small in these CRs. So I am not sure how its modeling can be tested.
    • In this section, we study the modeling of MT2 for backgrounds with two W bosons. The concern here is that for these processes the MT2 distribution has a kinematic endpoint at the W boson mass, so events enter the high MT2 signal region mainly because of detector resolution effects.
    • We study therefore dedicated control regions to check how well the simulation models the tails of the MT2 distributions against jet mismeasurement and lepton misidentification.
    • It is true that the contribution of tW events is always rather small with respect to that of ttbar production, but we assume here that these two processes are affected in a similar way by the detector resolution effects of concern, as their final states are rather similar.
    • Finally, as the proportion of ttbar and tW remains similar going from the CR to the SR, a validation of the overall top (tW+ttbar) MT2 shape seems to be enough.
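Since the argument above hinges on the MT2 kinematic endpoint at the W boson mass, here is a toy standalone sketch (hypothetical, not the analysis code) of MT2 computed by a coarse grid scan over the invisible-momentum split; for the true neutrino split in a two-W event, each transverse mass satisfies MT(lep, nu) <= mW, so the minimum over all splits cannot exceed mW:

```python
# Toy MT2 via grid scan. Visible and invisible objects are taken massless.
import math

def mt(px, py, qx, qy):
    """Transverse mass of a massless visible + massless invisible pair."""
    p = math.hypot(px, py)
    q = math.hypot(qx, qy)
    mt_sq = 2.0 * (p * q - (px * qx + py * qy))
    return math.sqrt(max(mt_sq, 0.0))

def mt2(lep1, lep2, met, span=100, step=1):
    """MT2 = min over q1+q2=met of max(MT(lep1,q1), MT(lep2,q2)),
    approximated by scanning q1 on a grid (GeV)."""
    best = float("inf")
    for qx in range(-span, span + 1, step):
        for qy in range(-span, span + 1, step):
            m = max(mt(lep1[0], lep1[1], qx, qy),
                    mt(lep2[0], lep2[1], met[0] - qx, met[1] - qy))
            best = min(best, m)
    return best

# Toy two-W event: leptons (40,0) and (0,30) GeV, MET = sum of neutrinos.
value = mt2((40.0, 0.0), (0.0, 30.0), (-40.0, -30.0))
```

For such a configuration the result stays below mW ~ 80.4 GeV, illustrating why WW/top events populate the high-MT2 tail only through resolution effects.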
  • L 375-379: can we show MC closure of this method. Also, this cannot work perfectly since there are some differences between W and Z bosons, so is there an uncertainty associated with this?
    • We guess that by closure you mean a comparison between the MT2 distributions in WW and WZ (with WW emulation) events. It is certainly true that the two distributions cannot match (we made this study for the 2016 paper, see for instance the third bullet here).
    • The point is that we are not assuming them to be identical, as we are not using this method for estimating the shapes of the background from processes with two W bosons in the signal regions. These shapes are taken from the simulations.
    • We use this method only for validation purposes, in particular to check the MT2 modeling in high MET regions, which can be populated by a larger fraction of events with jet mismeasurement. The idea is to complement the check we do in the validation region with 100<MET<140 GeV, which is good for checking the general modeling of MT2, but cannot probe the highest MET regions.
    • So in short, the only assumption of this check is that the possible mis-modeling of the detector resolution effects on the MT2 shape does not depend on the difference between W and Z bosons.
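The emulation idea discussed above (a WZ lepton treated as a neutrino) can be sketched as follows; the function name and event format are illustrative assumptions, not the actual analysis code:

```python
# Hypothetical sketch of WW emulation from a 3-lepton WZ event: remove one
# lepton from the Z decay and add its transverse momentum vectorially to the
# MET, so the event mimics a WW -> 2l2nu topology.
import math

def emulate_ww(met_x, met_y, lep_px, lep_py):
    """Return (emulated MET magnitude, x component, y component) after
    treating the given lepton as invisible."""
    new_x = met_x + lep_px
    new_y = met_y + lep_py
    return math.hypot(new_x, new_y), new_x, new_y
```

The remaining two leptons plus the emulated MET are then used to recompute MT2 in the same way as for genuine WW events.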
  • L420: How is MT2 calculated with 3 leptons?
    • We use just two leptons to compute the MT2. The plot in Figure 8 in AN2019-256-v2 is done by reconstructing a candidate Z boson from the pair of same-flavour, oppositely charged leptons with invariant mass closest to the Z boson mass, and computing the MT2 by using the third lepton and the lepton in the Z boson pair whose charge is opposite to it. For completeness, we added in AN2019-256-v3 the plot obtained by using the Z boson lepton pair to compute the MT2.
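The lepton pairing described above can be sketched as follows; the lepton dictionary format and the externally supplied pair_mass function are illustrative assumptions:

```python
# Hypothetical sketch: pick the Z candidate as the same-flavour, opposite-
# charge (SFOS) pair closest to mZ, then pair the third lepton with the
# Z-side lepton of opposite charge for the MT2 computation.
from itertools import combinations

M_Z = 91.19  # GeV

def pick_mt2_leptons(leptons, pair_mass):
    """leptons: list of dicts with 'flavour' ('e'/'mu') and 'charge' (+1/-1).
    pair_mass(i, j): invariant mass of leptons i and j (supplied externally).
    Returns the indices (third lepton, Z-side partner) used for MT2."""
    sfos = [(i, j) for i, j in combinations(range(len(leptons)), 2)
            if leptons[i]["flavour"] == leptons[j]["flavour"]
            and leptons[i]["charge"] != leptons[j]["charge"]]
    zi, zj = min(sfos, key=lambda ij: abs(pair_mass(*ij) - M_Z))
    (third,) = [k for k in range(len(leptons)) if k not in (zi, zj)]
    partner = zi if leptons[zi]["charge"] != leptons[third]["charge"] else zj
    return third, partner
```

For example, in an e+ e- mu+ event, the Z candidate is the e+ e- pair and the MT2 is built from the mu+ and the e-.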
  • L434-437: If this is for testing WZ, please move it to 5.2.1.
    • This is for testing the ZZ background. We changed “As for the WZ production,” to “Just as we do for the WZ production,”
  • L471: Discrepancy is still 50% level even without the smearing, so I wouldn’t say that “mismodeling is largely covered … ”.
    • What we mean by “covered” is that the mismodelling is smaller than the variation observed when removing the JER smearing, as the data/MC ratio moves from 0.3 to 1.5 (so it is 50%, but “on the other side”). We tried to make this clearer in the AN v3.

7 Observed results

  • WZ production: so you are focusing on the 3l final state and showing the shape works there. But if I look at your results tables, then there is a strong difference between DF and SF for WZ background. This seems to indicate that we usually do not see the W boson. Can you check this and also see why we are not seeing this. Are we sure we do not have any W decaying hadronically and then mismeasurements here as well. Or tau decays...
    • We studied the composition of the WZ background in the search region (plot here) in terms of the origin of the two reconstructed leptons. Events with a lepton from the W boson are the majority and are symmetric between DF and SF channels. The difference between the channels in the tables appears at high MT2 because of the contribution of events where both the reconstructed leptons come from the Z boson decay. Indeed, the MT2 for these events does not have a kinematic endpoint (as is the case for events where one lepton comes from the W), and can acquire large values when the dilepton pair is back-to-back with the neutrino from the W boson decay.
  • For the non-prompt leptons I wanted to understand why there are no W+jets in your regions.
    • We observed in the 2016 analysis that the contribution from W+jets was negligible. We added this process in the same-sign regions for AN v3. Its contribution is found to range between 2 and 5% of the expected SM background through the data taking years.
  • In general, I am wondering what you do with your cross-checks in the CRs. It seems that you just verify that the MC looks fine, but do not assign any systematic uncertainty to this. Is this correct? I am wondering whether we want to consider using statistical uncertainty on the cross-check since this is the level we have checked the MC.
  • Just out of curiosity. Did you ever look at some other variables (e.g. the angle between the leptons) to see whether there is anything that could give some extra discriminating power between WW and the signal, since even at very high MT2 we have a considerable amount of WW left?
    • Yes, we did look at other variables, especially in the first iteration of the analysis (razor variables, angular variables, sum of object transverse momenta).

Comments from 27 November 2020 presentation ( Link to the slides in Indico):

The content in the AN-19-256 referred to in the following answers has been implemented in v2.

  • Which triggers are used in the analysis?
    • Tables summarizing the trigger paths used in the analysis have been included in the Appendix B of AN-19-256.

  • Same-sign control region (slide 12): would it be possible to include this control region in the fit and let it decide the value of the scale factors for the rate of nonprompt leptons?

    • This is not easy to implement, as the nonprompt lepton scale factors affect a component of each background rather than a specific background as a whole. As a first approach, we evaluate the impact of the nonprompt scale factors by repeating the fit using a nonprompt SF = 1 +/- the deviation measured in the same-sign control region. The test is done for the T2tt and the TChipmSlepSnu models. In each plot, the black line shows the (blinded) exclusion region obtained from the new fit, while the red one shows the original result obtained by setting the nonprompt SF to the value measured in the same-sign control region. Given the very small impact of the nonprompt lepton SFs on the fit, we think that integrating the same-sign control regions in the fit would be overkill.
  • Drell-Yan mismodeling in 2017 (slide 17): what is the contribution of this background to the search regions? How much does its mismodeling affect the fit?

    • The yields for Drell-Yan production and other background processes in the search regions are compared in Section 7 of AN-19-256. For 2017 (page 32), Drell-Yan production is one of the main contributors in the lower ptmiss regions and mt2 bins, becoming increasingly less relevant at higher ptmiss and mt2 values.
    • To evaluate the impact of the 2017 Drell-Yan mismodeling, we repeat the fit by using Drell-Yan estimates before JER smearing. No significant change is found in the (blinded) exclusion regions for either the T2tt or the TChipmSlepSnu model (as before, black lines refer to the new fit with no JER smearing for the Drell-Yan background, while red lines refer to the original fit).

  • For the normalization of WZ, ZZ, and ttZ production, you measure global scale factors in suitable control regions with ptmiss>160 GeV and use them in all the search regions (slides 14 to 16). Is there any dependence of these scale factors on the ptmiss and jet multiplicity bins used to define the search regions?

    • We compare observed and expected yields as a function of ptmiss and jet multiplicity in the control regions used to study the WZ, ZZ, and ttZ backgrounds in Sections 5.2.1, 5.2.2, and 5.2.3 of AN-19-256, respectively. No significant trend is observed.
    • We modify the fit used to extract the signal in our analysis by adding the WZ, ZZ, and ttZ control regions to it, and letting the fit itself determine the normalization of these processes in each ptmiss and jet multiplicity bin (for ttZ production, only ptmiss bins are considered, as the corresponding control region is defined by requiring at least one b-tagged jet; this process is anyway relevant only in search regions with b-tagged jets). The new approach yields very similar (blinded) exclusion regions for the T2tt and the TChipmSlepSnu models (as before, black lines refer to the new fit with WZ, ZZ, and ttZ control regions, while red lines refer to the original fit).

-- PabloMatorrasCuevas - 2020-12-14

Topic attachments
  • CharginoSignalRegionsGroupFitCRVetoesUL_WZBin_vs_CharginoSignalRegionsGroupFitCRVetoesUL_TChipmSlepSnu_mC-100to1500_mX-1to750_Blind_Contours_2016-2017-2018.png (2022-09-30, PabloMatorrasCuevas)
  • CharginoSignalRegionsMergeGroupFitCRVetoesULFast_WWSimm_vs_CharginoSignalRegionsMergeGroupFitCRVetoesULFast_TChipmSlepSnu_Blind_Contours_2016-2017-2018.png (2022-12-15, PabloMatorrasCuevas)
  • CharginoSignalRegionsMergeGroupFitCRVetoesUL_TChipmSlepSnu_mC-1050_mX-50.pdf (2022-11-09, PabloMatorrasCuevas)
  • CharginoSignalRegionsMergeGroupFitCRVetoesUL_TChipmSlepSnu_mC-1050_mX-50_S15.pdf (2022-11-09, PabloMatorrasCuevas)
  • CharginoSignalRegionsMergeGroupFitCRVetoesUL_WWshape_vs_CharginoSignalRegionsMergeGroupFitCRVetoesUL_TChipmSlepSnu_mC-100to1500_mX-1to750_Blind_Contours_2016-2017-2018.png (2022-09-30, PabloMatorrasCuevas)
  • CharginoSignalRegionsMergeGroupFitCRVetoesUL_vs_CharginoSignalRegionsGroupFitCRVetoesUL_TChipmSlepSnu_mC-100to1500_mX-1to750_Blind_Contours_2016-2017-2018.png (2022-09-30, PabloMatorrasCuevas)
  • CharginoSignalRegionsSmearNoDYOptimPtmMT2HighExtraEENoiseDPhiHEM_vs_CharginoSignalRegionsSmearOptimPtmMT2HighExtraEENoiseDPhiHEM_TChipmSlepSnu_mC-100to1400_Blind_Contours_2016-2017-2018.png (2021-01-05, LucaScodellaro)
  • CharginoSignalRegionsSmearOptimPtmMT2HighExtrabinFitCREENoiseDPhiHEM_vs_CharginoSignalRegionsSmearOptimPtmMT2HighExtrabinEENoiseDPhiHEM_TChipmSlepSnu_mC-100to1500_Blind_Contours_2016-2017-2018.png (2021-01-05, LucaScodellaro)
  • CharginoSignalRegionsSmearnonpromptSFOptimPtmMT2HighExtraEENoiseDPhiHEM_vs_CharginoSignalRegionsSmearOptimPtmMT2HighExtraEENoiseDPhiHEM_TChipmSlepSnu_mC-100to1500_Blind_Contours_2016-2017-2018.png (2021-01-05, LucaScodellaro)
  • StopSignalRegionsGroupFitCRVetoesULFast_WWSimm_vs_StopSignalRegionsGroupFitCRVetoesULFast_T2tt_Blind_Contours_2016-2017-2018.png (2022-12-15, PabloMatorrasCuevas)
  • StopSignalRegionsGroupFitCRVetoesUL_T2tt_mS-500_mX-375.pdf (2022-11-09, PabloMatorrasCuevas)
  • StopSignalRegionsGroupFitCRVetoesUL_T2tt_mS-500_mX-375_S15.pdf (2022-11-09, PabloMatorrasCuevas)
  • StopSignalRegionsGroupFitCRVetoesUL_WWshape_vs_StopSignalRegionsGroupFitCRVetoesUL_T2tt_mS-150to800_dm-80to175_Blind_Contours_2016-2017-2018.png (2022-09-30, PabloMatorrasCuevas)
  • StopSignalRegionsOptimisedPtmAndMT2ISRSmearEENoiseDPhiHEMFitCR_vs_StopSignalRegionsOptimisedPtmAndMT2ISRSmearEENoiseDPhiHEM_T2tt_mS-200to800_Blind_Contours_2016-2017-2018.png (2021-01-05, LucaScodellaro)
  • StopSignalRegionsOptimisedPtmAndMT2ISRSmearEENoiseDPhiHEM_mismodel_vs_StopSignalRegionsOptimisedPtmAndMT2ISRSmearEENoiseDPhiHEM_T2tt_mS-200to800_Blind_Contours_2016-2017-2018.png (2021-01-05, LucaScodellaro)
  • StopSignalRegionsOptimisedPtmAndMT2ISRSmearEENoiseDPhiHEM_nonpromptSF_vs_StopSignalRegionsOptimisedPtmAndMT2ISRSmearEENoiseDPhiHEM_T2tt_mS-200to800_Blind_Contours_2016-2017-2018.png (2021-01-05, LucaScodellaro)
  • TChiSlepSnu_7BinsVs4Bins_contours.png (2021-07-29, LucaScodellaro)
  • VR1_NoJet_em.pdf (2022-07-28, PabloMatorrasCuevas)
  • WZComposition_mt2ll.png (2021-04-16, LucaScodellaro)
  • cratio_Veto0_Veto_HTForwardSoft.png (2022-09-30, PabloMatorrasCuevas)
  • cratio_Veto0_Veto_dPhiEENoisePtMissHard.png (2022-09-30, PabloMatorrasCuevas)
  • log_STtW,WW,ttbar_WW_SR4_Veto_em_mt2llSR4.png (2022-11-25, PabloMatorrasCuevas)
  • log_c_SR4_Veto_em_mt2llSR4.png (2022-11-25, PabloMatorrasCuevas)
  • log_cratio_SR1_NoTag_em_mt2llSR1_jer_2018.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_SR1_Tag_sf_mt2ll_unclustEn_2016HIPM.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_SR1_Tag_sf_mt2ll_unclustEn_2016noHIPM.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_SR1_Tag_sf_mt2ll_unclustEn_2017.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_SR1_Tag_sf_mt2ll_unclustEn_2018.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_SR4_Veto_sf_mt2llSR4_jes_2018.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_TwoLep_em_ptmissSR_jer_2018.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_TwoLep_em_ptmissSR_jesTotal_2018.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_TwoLep_em_ptmissSR_unclustEn_2018.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_TwoLep_sf_ptmissSR_unclustEn_2018.png (2022-12-15, PabloMatorrasCuevas)
  • log_cratio_VR1_Tag_em_mt2llOptim.png (2022-09-30, PabloMatorrasCuevas)
  • log_cratio_VR1_Tag_em_mt2llOptim_NoEENoiseVeto.png (2022-09-30, PabloMatorrasCuevas)
  • log_cratio_VR1_Veto_em_mt2llOptim.png (2022-09-30, PabloMatorrasCuevas)
  • log_cratio_VR1_Veto_em_mt2llOptim_NoEENoiseVeto.png (2022-09-30, PabloMatorrasCuevas)
  • log_cratio_Veto0_Veto_jetRawPtEENoise.png (2022-09-30, PabloMatorrasCuevas)
  • log_cratio_WZ_3Lep_ZLeps_ptmiss-160_mt2llOptimHighExtra.png (2021-04-16, LucaScodellaro)
  • mt2llshapesNoCorr.pdf (2022-07-28, PabloMatorrasCuevas)
  • systematicsTablesANv6.pdf (2022-07-28, PabloMatorrasCuevas)
  • toppt.png (2022-07-28, PabloMatorrasCuevas)
  • ttZ_MassWindow.pdf (2022-07-28, PabloMatorrasCuevas)
Topic revision: r21 - 2022-12-15 - PabloMatorrasCuevas