Welcome to the private TWiki of SUS-18-007

This is the working twiki for the search for SUSY in final states with a Higgs boson decaying to two photons (H->gg).

Make sure that all internal comments have been removed before copying the answers to the analysis review twiki.

Color code:

  • Answered
  • Open discussion
  • Not answered

Comments from CWR

Comments from Andrea Carlo Marini : on paper draft CMS-SUS-18-007-001, dated 27 May 2019:

Type B (Stat):

-ln 177-191. Why two different strategies? Can you harmonize them?

  • The two strategies are used to address two different types of signals: one has been optimized for strong SUSY production, while the other has been optimized for electroweak SUSY production. The object and event selections are harmonized, and the different optimizations are obtained with different event classification and background estimation. The two strategies strengthen the robustness of the result.

-Fig. 2
-* Use Garwood CI for data.
-* put the uncertainties from the fit.
-* is there a reason why you cannot put together the two figures? You can show both the signal+bkg and the bkg-only fit in the same canvas.

  • The signal+bkg fit and the bkg-only fit are different but quite close. If we plotted them on the same canvas, the curves would overlap significantly and obscure each other, making it difficult and confusing for the reader to understand what is going on.
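For reference, the Garwood interval the reviewer asks for is the exact (chi-squared based) Poisson confidence interval per data bin; a minimal sketch using scipy (the confidence level and bin counts below are illustrative, not taken from the analysis):

```python
from scipy.stats import chi2

def garwood_interval(n, cl=0.6827):
    """Exact (Garwood) Poisson confidence interval for an observed count n."""
    alpha = 1.0 - cl
    lo = 0.0 if n == 0 else 0.5 * chi2.ppf(alpha / 2.0, 2 * n)
    hi = 0.5 * chi2.ppf(1.0 - alpha / 2.0, 2 * (n + 1))
    return lo, hi

# Asymmetric error bars for a few illustrative bin counts
for n in (0, 3, 10):
    lo, hi = garwood_interval(n)
    print(f"n={n:2d}: -{n - lo:.2f} / +{hi - n:.2f}")
```

These replace the symmetric sqrt(n) bars, which are misleading for the low-count bins of the m_gammagamma distribution.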

-* why only one bin? Can you tell us -- at least -- which is? Wouldn't it be better to show the sum of all categories (or the weighted sum with S/(S+B))?

  • This figure is shown primarily as a pedagogical tool to help the reader visualize what exactly is done in the extraction of the signal. We do not intend for this plot to summarize the result.

-* The message of this plot should be that, if there were an excess, we would see a peak.

  • Yes, correct - that is what would happen. But the primary message of this plot is to help the reader understand visually what is being done to extract the signal.

-*ln 255. procedure -> criterion.

  • Done

-* Fig 3-6: why are we publishing the same exclusion (observed and expected) twice? From which plot did you take the description you put in the text, and with which criterion? In general, selecting the best observation, or letting the reader do it, may result in under-coverage of the UL.

  • We are publishing the results of both strategies. In the text, we quote the limits of the STP analysis for strong SUSY production, and the limits of the EWP analysis for electroweak SUSY production.

Type B (Physics):

-* (ln 87) why do you use only barrel photons? What is the sensitivity including the endcap ones (that are anyhow used in measurements like the H->gg)?

  • The sensitivity from endcap photons represents a negligible contribution to the total sensitivity, as the signals are primarily central, and the signal-to-background ratio for events involving endcap photons is significantly worse due to a larger fake rate and worse energy resolution.

-* Are the energy smearings propagated to the simulation? Is the primary vertex association accounted for as well? This is unclear, especially from Eq. 4 where you use only the photon energy, and you did not specify whether extra corrections are accounted for and what happens if you miss the right PV association.

  • The energy smearings are propagated to the simulation, as is the primary vertex association. Equation 4 defines the mass resolution estimator used to define event categories, and does not represent the actual measured mass resolution.

Comments from Sijin Qian : on paper draft CMS-SUS-18-007-001, dated 03 June 2019:

(1) Throughout the paper (including in the Abstract and the captions and header rows of Tables, etc.), at many places, "Higgs" should be followed by "boson", per the guidelines from the CMS PubComm, which read:

The term "Higgs boson" should be used throughout the text of CMS papers making reference to this particle. "Higgs mass" is what the Scottish theorist would read off his bathroom scale ...

For example, in the Abstract, the 3rd line should be changed from (also to follow the good example on L6, etc.)

"of charged leptons and additional Higgs candidates ..." --> "of charged leptons and additional Higgs boson candidates ..."

Other places where the similar changes should be made are

L50-51 (two places), L54-55 (two places), L66, L97, L137, L153, L200, Table 3's caption (the 1st line), L211, L274, Tables 4-6 (the header row and the right-most column of each Table), etc.


(2) Throughout the paper,

(a) the Razor variables MR(non-italic) and R(italic)2 have different fonts as shown in Eqs.(1)-(3) and the line above

Eq.(1), etc.;

* Fixed. We have made R use italic font consistently.

(b) the "j"s for jets also have different fonts (i.e. the "italic" in Eqs.(1) and (3) (in the superscripts) and the text below Eq.(2), and the "non-italic" in the header columns of Tables 6 and 8, etc.)

I'm not sure whether they should be consistent, i.e.

either all italic, or all non-italic, instead of a mixture.

* Fixed. We have changed the italic j's to non-italic j's.

(3) Tables 2, 4 and 7. In the header column of Table 2 and the 2nd columns of Tables 4 and 7, to be consistent in this paper, the font of "b" quarks should be changed from

(a) Table 2: (in the header column and the 1st two rows below the header row; also, the font of "H" in the 2nd row should be changed too)

"Z -> bb(italic) H(italic) -> bb(italic)" -->

"Z -> bb(non-italic) H(non-italic) -> bb(non-italic)"

(b) Tables 4 and 7: in the 2nd column of each Table, the rows corresponding to Search region bins 7-16 are similar.


Page 0, in the Abstract

(4) The first two lines, the 5th and 10th lines,

(a) the "SUSY" can be introduced on the 1st line, i.e.

"A search for supersymmetry is presented where at ..." --> "A search for supersymmetry (SUSY) is presented where at ..."

  • DONE.

(b) Afterward, two places can be shortened from

(i) the 2nd line: "the decay chains of the pair-produced supersymmetric" --> "the decay chains of the pair-produced SUSY" * DONE.

(ii) the 5th line: "sensitive to different supersymmetry scenarios." --> "sensitive to different SUSY scenarios." * DONE.

(c) The 10th line, the "LSP" should be explained at its 1st appearance in the Abstract here; however, since it has not been used again in the Abstract, so can be simply spelled out, i.e.

"530 GeV and an LSP mass of 1 GeV;" --> "530 GeV and a lightest SUSY particle mass of 1 GeV;"

  • DONE.

Pages 1-6

(5) L4, L7-9, L11, L14-15, L20-21, L89-90, L104, L111, L120 and L140. These lines may be shortened from

(a) L4: (as the "BSM" has not been used afterward in whole paper, and as the "SM" has been introduced on L3)

"physics beyond the standard model (BSM) postulate ..." --> "physics beyond the SM postulate ..." * DONE.

(b) L7-9, L11 and L14-15: (as the "SUSY" has been introduced on L6)

(i) L7-9 and L14-15: (three places) "supersymmetric part..." --> "SUSY part..." * DONE.

(ii) L11: "are gauge-mediated supersymmetry breaking" --> "are gauge-mediated SUSY breaking" * DONE.

(c) L20-21: "corresponding to 35.9 fb-1 and 41.5 fb-1 respectively." --> "corresponding to 35.9 and 41.5 fb-1 respectively." * DONE.

(d) L89-90: (three places, the 2nd one is due to the introduction of "PF" on L84)

"if the sum of the pT of the particle flow candidates from charged hadrons, neutral hadrons and photons each ..." --> "if the pT sum of the PF candidates from charged and neutral hadrons and photons each ..." * DONE.

(e) L104: "are selected from the particle-flow candidates with" --> "are selected from the PF candidates with" * DONE.

(f) L111: (as the "pmissT" has been explained on L109)

"can give rise to large missing transverse energy pmissT" --> "can give rise to large pmissT" * DONE.

(g) L120: (as the "EWP" has been introduced on L115)

"to electroweak production of charginos and neutralinos." --> "to EWP of charginos and neutralinos." * EWP is only defined as a label of the analysis optimized for electroweak SUSY production. We do not intend the abbreviation to also refer to the electroweak production mechanism itself. So we will leave the full word here.

(h) L140: "0–0.6, 0.6–1.0 and 1.0–infinite." --> "0–0.6, 0.6–1.0 and > 1.0."

  • DONE.

(6) L16, to be consistent with References Section,

the "collaborations" should start with a capital letter, i.e.

"the ATLAS and CMS collaborations using ..." --> "the ATLAS and CMS Collaborations using ..."

  • DONE.

(7) L24, L50, L101, L106 and L166. The "QCD", "MC", "CSV" and "phi" should be explained at their 1st appearances in text on L24, L50, L101 and L106, respectively; i.e.

(a) L24: "are QCD diphoton and photon+ ..." --> "are quantum chromodynamics (QCD) diphoton and photon+ ..."

  • DONE.

(b) L50: (together with the item (1) above for Higgs boson)

"Simulated Monte Carlo event samples are used to model the SM Higgs backgrounds and the" --> "Simulated Monte Carlo (MC) event samples are used to model the SM Higgs boson backgrounds and the"

  • DONE.

(c) L166: (then can be shortened correspondingly)

"SUSY signal is modeled from the Monte Carlo simulation ..." --> "SUSY signal is modeled from the MC simulation and ..."

  • DONE.

(d) L101: "the CSVv2 tagger algorithm [27] ..." --> "the combined secondary vertices (CSVv2) tagger algorithm [27] ..."

  • DONE.

(e) L106: (also, as the numerical values of the angle phi have been implicitly shown on these lines, etc., and an angle can be measured in either radians or degrees, the unit of phi should perhaps be specified)

"= sqrt(... phi ...) > 0.4." --> "= sqrt(... phi ...) > 0.4, where phi is the azimuthal angle in radians."

  • DONE.

(8) Fig.1, in the caption, the first three lines,

(a) the 1st line, a "Feynman" may need to be added at the beginning;

  • DONE.

(b) the 2nd-3rd lines, to conform with the position indicator "upper" in the middle of the 2nd line, and to avoid any possible confusion with the "bottom quark", the position indicator "bottom" between the 2nd-3rd lines should be changed from

"Figure 1: Diagrams displaying the simplified ...

considered. Upper left: bottom squark pair production; upper right: wino-like chargino-neutralino production; bottom:" -->

"Figure 1: Feynman diagrams displaying the simplified ... considered. Upper left: bottom squark pair production; upper right: wino-like chargino-neutralino production; lower:"


(9) L114-115, two pairs of brackets may need to be moved forward, ahead of the three words in front of them, i.e. (also, an "another" may be better before the 2nd "one")

"one focused on electroweak production of charginos and neutralinos (EWP analysis), and one focused on strong production of bottom squarks (STP analysis)." -->

"one focused on electroweak production (EWP analysis) of charginos and neutralinos, and another one focused on strong production (STP analysis) of bottom squarks."

  • DONE.

(10) Table 1, in the header row and the 3rd column, an extra space in the unit brackets should be removed, i.e.

"( GeV)" --> "(GeV)"


Pages 8-10

(11) Table 3, in the header row and header column, the non-leading words in each cell should be in the lower case, i.e.

"Uncertainty Source | Uncertainty Size

... PDFs and QCD Scale Variations ... Lepton Efficiency ... Photon Energy Scale . . . Signal ISR Modeling " -->

"Uncertainty source | Uncertainty size

... PDFs and QCD scale variations ... Lepton efficiency ... Photon energy scale . . . Signal ISR modeling "


(12) L223-224, to be consistent with elsewhere in this paper, it should be changed from

"times branching ratio for ..." --> "times branching fraction for ..."


(13) L233, L235-236, L241, L246, L271 and L285. These lines may be shortened from

(a) L233: (as the "chi02" has been introduced on L226)

"and the next-to-lightest neutralino chi02 are mass-" --> "and the chi02 are mass-" * DONE.

(b) L235-236: (two places, as the "chi01" and "NLL" have been already introduced at the beginning of L235 and L228)

"decaying to a Higgs boson and the LSP (chi01). The production cross sections are computed at NLO plus next-to-leading-log (NLL) precision ..." --> "decaying to a Higgs boson and the chi01. The production cross sections are computed at NLO+NLL precision ..." * DONE.

(c) L241: (also, a space should be added after a comma at the end of line)

"both the chi02 and the chi+-1 will decay to chi01 and other low-pT (soft) particles,leading to a" -->

"both the chi02 and chi+-1 will decay to chi01 and other low-pT (soft) particles, leading to a" * DONE.

(d) L246: (similar as the item (b) above)

"are computed at NLO plus NLL precision in ..." --> "are computed at NLO+NLL precision in ..." * DONE.

(e) L271 and L285: (two places)

"290 GeV and 230 GeV ..." --> "290 and 230 GeV ..."


(14) Fig.2

(a) In the horizontal axis label of each plot, to be consistent with all other Figs. in this paper, the unit should be put into the square brackets instead of the round ones, i.e.

"mgammagamma (GeV)" --> "mgammagamma [GeV]"


(b) In the vertical axis label of each plot, two extra spaces inside the brackets should be removed, i.e.

"( 1 GeV )" --> "(1 GeV)"


(15) L239, to be consistent with elsewhere in this paper, the 1st letter on this line should perhaps be in the lower case, i.e.

"Higgsino-like charginos and ..." --> "higgsino-like charginos and ..."


(16) L240, Fig.3's caption and L278. Three spaces should be added on these lines, i.e.

(a) L240: "combinations: chi01chi02,chi01chi+-1," --> "combinations: chi01chi02, chi01chi+-1,"

  • DONE.

(b) Fig.3's caption: (the 3rd line)

"standard deviations (1sigma)of their experi-" --> "standard deviations (1sigma) of their experi-"

  • DONE.

(c) L278: "such as the Razor variables MRand R2," --> "such as the Razor variables MR and R2,"

  • DONE.

(17) L255, the font of "CLs(non-italic)" here is different from the one in the article title of [46], where it is "italic". I'm not sure whether they should be consistent.

  • We checked other publications and our usage is consistent with past publications from CMS. Only the reference title uses the italic CLs.

(18) L274 and L283, per the PubComm guidelines, some acronyms or variables (e.g. "H" and "LSP", etc.) in the Summary Section should perhaps be explained (or spelled out if not used again in this Section) at their 1st appearances in the Section, since some readers may only read the Summary Section instead of the whole paper, i.e.

(a) L274: (together with the item (1) above for the Higgs boson)

"in the final state with a Higgs decaying to a" --> "in the final state with a Higgs boson (H) decaying to a" * DONE.

(b) L283: "below 530 GeV for an LSP mass of 1 GeV;" --> "below 530 GeV for a lightest supersymmetric particle mass of 1 GeV;"


Pages 11-13

(19) Figs.5 and 6.

(a) In each caption, the 6th line, the colors of "green" and "yellow" are mentioned, but cannot be distinguished in a black/white printout; the problem can be solved by referring to the darkness, e.g.

"the green and yellow bands represent the ..." --> "the green (dark) and yellow (light) bands represent the ..." * DONE.

(b) The two captions are almost identical, differing only in the 3rd-4th lines. Thus, the two Figs. may be combined, with an extended 3rd-4th line of Fig.5's caption, i.e.

"Figure 5: The observed 95% CL upper limits on the production cross ... ... charginos and neutralinos undergo several cascade decays producing either Higgs bosons. We present limits in the scenario where the branching fraction of the chi01 -> HG decay is 100%. . . ..." -->

"Figure 5: The observed 95% CL upper limits on the production cross ... ... charginos and neutralinos undergo several cascade decays producing (upper plots) either Higgs bosons, and (lower plots) a Higgs boson and a Z boson. We present limits in the scenario where the branching fraction (upper plots) of the chi01 -> HG decay is 100% and (lower plots) of the chi01 -> HG and chi01 -> ZG decays are each 50%. . . ..."


(20) L289-357, the Acknowledgments Section: this paper is not too long (only 287 lines without counting the Acknowledgments and References Sections), thus a short version of the Acknowledgments Section may be sufficient. Many other CMS papers with similar or longer lengths still use the short version of the Acknowledgments. Please consult some published CMS papers on this.

One important CMS paper you may consult is our Higgs boson discovery paper HIG-12-028, which has 689 lines (i.e. more than twice the length of this paper), but whose Acknowledgments Section has only 25 lines, less than half the length of the Acknowledgments in your v7.

  • DONE. We have switched to the standard letter version of the acknowledgements.

Pages 14-17, in the References Section

(21) L374, in [6], to be consistent in this Section, an extra index after the year number should be removed, i.e.

"Eur. Phys. J. C 75 (2015), no. 5, 208," --> "Eur. Phys. J. C 75 (2015) 208,"

Other ones which also need to be changed by the similar way are [7], [20] and [34].

  • DONE

(22) L406-407, in [18], to be consistent with the PAS Refs. in all other CMS papers, the document name should be changed and the names of city and institute should be removed, i.e.

"Technical Report CMS-PAS-GEN-17-001, CERN, Geneva, 2018." --> "CMS Physics Analysis Summary CMS-PAS-GEN-17-001, 2018."

  • DONE

(23) The "year" number should be given for Ref.[35]. If there would be problems to display the year number with the default bib file, it may be fixed by changing from "article" to "unpublished" in the bib file.

  • DONE

(24) L484-485, in [48], the duplicated document names may be removed, i.e.

"CMS NOTE/ATL-PHYS-PUB ATL-PHYS-PUB-2011-011, CMS-NOTE-2011-005, 2011." --> "ATL-PHYS-PUB-2011-011, CMS-NOTE-2011-005, 2011."

  • DONE

Page 18, Table 4

(25) In the caption, the 3rd line, to be consistent in this paper, the last letter in an analysis name should perhaps be changed from

"region bin of the EWK analysis." --> "region bin of the EWP analysis."


Pages 19-22, Tables 5-8

(26) Tables 5-6, in the header row and the 3rd column of each Table, the 2nd word should be in the lower case, i.e. (together with the item (1) above for the Higgs boson in the right-most column)

"Fitted Nonresonant bkg | SM Higgs bkg" --> "Fitted nonresonant bkg | SM Higgs boson bkg"


(27) Tables 5 and 8, in the header column and the row below the header row of each Table, to be consistent in the paper, the font of lepton "l" in the subscript of Zllbar should be changed from

"Zllbar(roman)" --> "Zllbar(special font as the 3rd line below L123)"


(28) Table 7, in the caption, the 3rd line, the word "label" should perhaps be plural, i.e.

"The label HH and ZH refer to the signal models for" --> "The labels HH and ZH refer to the signal models for"


Comments from Albert De Roeck : on paper draft CMS-SUS-18-007-001, dated 04 June 2019:

General question:

can these results also be recast as searches for Dark Matter in e.g. the Higgs+DM channel? Some diagrams in Fig1 seem to be re-castable. This would add extra spice to the paper (if the results would be competitive with other dedicated studies)

  • They could certainly be recast, but this would require too much additional delay due to the processing of signal samples. We will leave the recast to interested readers in the community.


- line 58 & 61 I assume we are talking about the FXFX and MLM scheme? I recommend to say that explicitly.

  • DONE

- line 67: Add some motivation/discussion on why the fast simulation is adequate for signal study.

  • We have added this sentence: "To cover the large SUSY signal parameter space in reasonable computation time, the signal model samples are simulated with the CMS fast simulation package~\cite{fastsim}, which has been validated to produce accurate predictions of object identification efficiencies and momentum resolution."

- line 103: "loose working point": add what is the efficiency to tag a b-quark jet at this point

  • DONE

- line 191: why are two different methods used? Is one preferred over the other? Are these two individually tailored to the specific problems of the two channels? Or did the analysis groups for the two channels just choose a different method from the start? The reader is kinda curious at this point and needs some information.

  • They are simply two different methods that were adopted by the two different analysis groups. We say in the paper that: "The use of an alternative method is intended to increase the robustness of the background modeling."

- line 192-199: This is some kind of repetition of what was said already before...

  • We shortened the paragraph to the following sentence: "The dominant systematic uncertainty in this search is the shape and normalization of the nonresonant background, propagated by profiling the associated unconstrained parameters."

- line 207: I do not want to check ref 35 necessarily: what is the method proposed for calculating the missing higher orders? I recommend we add 1/2 sentence on that in the paper

  • We have added this phrase to the sentence: "where the factorization ($\mu_F$) and renormalization ($\mu_R$) scales are varied independently by factors of 0.5 and 2.0."
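For the record, the envelope procedure this refers to is conventionally the 7-point scheme: take the largest up/down shift among the (muF, muR) variations, dropping the two anti-correlated combinations. A toy sketch (all yields below are made up for illustration, not taken from the analysis):

```python
# Toy 7-point scale-variation envelope. muF and muR are varied independently
# by factors of 0.5 and 2.0; the anti-correlated pairs (0.5, 2.0) and
# (2.0, 0.5) are conventionally dropped. All yields are hypothetical.
nominal = 100.0
yields = {  # (muF factor, muR factor) -> signal yield
    (0.5, 0.5): 108.0, (0.5, 1.0): 104.0, (0.5, 2.0): 112.0,
    (1.0, 0.5): 105.0, (1.0, 2.0): 96.0,
    (2.0, 0.5): 90.0,  (2.0, 1.0): 97.0, (2.0, 2.0): 93.0,
}
kept = [y for (f, r), y in yields.items()
        if (f, r) not in ((0.5, 2.0), (2.0, 0.5))]
up = max(kept) - nominal
down = nominal - min(kept)
print(f"scale uncertainty: +{up:.1f} / -{down:.1f}")  # envelope of the kept points
```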

- How do you include PDF uncertainties? Just the NNPDF band of uncertainties? Or was a different estimator used? This information can be added.

  • DONE

- line 210: "10-24%" of uncertainty seems a rather large number. Any comment why this is so large? I think the reader would be interested to understand this a bit better.

  • The cause is explained in the preceding sentence. There is a mismodeling of the mass resolution estimator due to pileup and transparency loss effects. We changed the sentences a bit to make the logical connection more clear.

- Figure 3: our observed limits are generally much worse than the expected. We have some excesses of data events in a number of bins as we see in Table 4/5. Are we missing a discovery? smile

  • The observed limits are on or within the one sigma band of the expected limits. There are a couple of excesses, but they are not very significant and do not occur in a pattern consistent with any specific signal model considered. The results are fully consistent with statistical fluctuations of the standard model only - unfortunately.

- For the final results, we always choose the most sensitive of the two analyses for a given channel. Was no attempt made (or deemed worthwhile) to try to combine the analyses?

  • We did not combine the analyses, as they were optimized for different signal production modes. In the end, we quote the STP results for strong SUSY production, and the EWP results for electroweak SUSY production.

- Ref 18 CMS-PAS-GEN-17-001 is published as arXiv:1903.12179


Comments from Franco Ligabue (INFN Pisa) : on paper draft CMS-SUS-18-007-001, dated 05 June 2019:

Type A comments:

- ABSTRACT: the data-taking year is not relevant information for the abstract

  • Removed

- lines 3,4 too many “models”, I would start the second sentence with “Many scenarios …”

  • DONE.

- line 33 to avoid repetitions of words, remove “search”

  • DONE.

- Figure 3: dotted black lines are mentioned in the caption, but are not in the plot. Same for solid red lines. Also (as a minor cosmetic comment) the black line lying on the kinematic limit is visible in the left plot, but not in the right.

  • Changed to "bold and light solid black lines", and "analogous dotted red contours". The exclusion limit now follows the kinematic limit also in the right plot.

- line 278: insert a space after MR.

  • DONE.

Type B comments:

- Figure 1, is double chargino production not relevant ?

  • Since the final state has to have two photons from a Higgs boson decay, and a chargino cannot decay to a Higgs boson and the LSP (it decays to a W boson and the LSP), the model with double chargino production is not considered in this analysis.

- line 72, line 77: “we apply a correction” and then “the full size of the correction is taken as a systematic uncertainty”. In the first sentence it sounds like “correction” is used to mean “weight” or something like that, so saying that the full size of the correction is taken as systematic error sounds a bit strange to me. Maybe it should be rephrased, as in “the full effect of the correction”?

  • DONE

- line 74 why is top pt reweighing relevant for the SUSY signal ?

  • The effect is viewed as a QCD ISR modeling issue, and a correction is derived with a ttbar/Z+jets sample, which has an initial state similar to that of the signal models.

- line 89: “sum of the pt of the PF candidates” inside a cone, presumably? Not stated (see following comment)

  • We have added a phrase regarding the isolation cone

- lines 89-92 the isolation cone size is not given

  • We have added a phrase regarding the isolation cone

- line 109: shouldn’t it be the negative of the sum of the PF transverse momenta?

  • Fixed

- lines 150-153: The dominant background (QCD diphoton/photon+jet) is said to be always exponentially decaying and to be always modeled with an exponentially falling functional form, but the following section (section 6) states that the non-resonant QCD diphoton/photon+jet is modeled using a complicated weighting procedure choosing from a pool of functions including polynomials and power-law functions, while section 7 (lines 194-95) says that the non-resonant bkg is fitted with a sum of exponential functions. This is very confusing to me, since it’s not clear if the three conflicting statements concern the same background source or not.

  • To address your confusion, we have moved the definition of the "nonresonant" background to lines 150-153, and refer to that term in all cases. The three statements now say the same thing: the nonresonant background is exponentially decaying. We model it with a set of varying exponentially decaying functions in order to capture potentially more complicated behavior, but all functions considered are still exponentially falling. Lines 194-195 state that the nonresonant background is fitted with a set of exponential functions, not a sum.
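As an aside on the mechanics, picking among candidate background shapes with the Akaike information criterion mentioned at LL177-178 amounts to penalizing the fit likelihood by the parameter count. A self-contained toy sketch (the toy data and the two candidate functions are hypothetical, not the analysis pool):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
x = rng.exponential(scale=20.0, size=500)  # toy falling spectrum

def aic(nll, k):
    """Akaike information criterion: 2k + 2*NLL (smaller is preferred)."""
    return 2.0 * k + 2.0 * nll

# Candidate 1: single exponential; the MLE decay constant is 1/mean(x)
lam = 1.0 / x.mean()
nll1 = -np.sum(np.log(lam) - lam * x)

# Candidate 2: mixture of two exponentials (3 free parameters)
def nll2(p):
    f, l1, l2 = p
    pdf = f * l1 * np.exp(-l1 * x) + (1.0 - f) * l2 * np.exp(-l2 * x)
    return -np.sum(np.log(pdf))

res = minimize(nll2, x0=[0.5, 0.02, 0.1], method="Nelder-Mead",
               bounds=[(0.0, 1.0), (1e-4, 1.0), (1e-4, 1.0)])
print("AIC single exponential:", aic(nll1, 1))
print("AIC double exponential:", aic(res.fun, 3))
# The extra parameters are kept only if they lower the AIC.
```

Since the mixture nests the single exponential, its likelihood can only match or improve on it, and the AIC penalty decides whether the extra freedom is warranted.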

- lines 208-212. Not very clear why this systematic uncertainty is labeled “sigma/M categorization” in the table, although from the explanation it appears that both resolution mismodeling and categorization are involved.

  • We have changed the text a bit to make the relationship more clear: "Due to the effects of pileup and transparency loss in the ECAL crystals, we observe some simulation mismodeling of the estimated mass resolution, which can migrate events between the High-Res and Low-Res event categories of the EWP analysis. As a result, a systematic uncertainty of $10$--$24\%$, measured using a $Z\to\Pep\Pem$ control sample, is propagated to the prediction of the SM Higgs boson background and SUSY signal yields in the High-Res and Low-Res event categories."

- Figure 2, the razor variables are typical of this analysis (M_R, M_T^R, M_T2) it would be much more interesting to see data for these observables instead of (or in addition to) the diphoton invariant mass.

  • This figure is intended to show the reader how the mass fits are done in order to extract the signal in each bin.

- For the results corresponding to Fig. 3 and 4, why is a value of LSP mass of exactly 1 GeV chosen, which is a value not even visible in the figures? Why not a massless LSP ?

  • The Monte Carlo simulation samples go down to 1 GeV for the mass of the LSP, since the simulation cannot go to 0 GeV for technical reasons. 1 GeV is used as a proxy for an LSP that is negligibly light.

- Figure 6, The (left) at the fifth line seems wrong (for both panels the 50% branching fraction applies)

  • Figs. 6 and 7 have now been combined based on the suggestion by Sijin Qian, and we have also fixed this mistake.

Comments from Greg Landsberg : on paper draft CMS-SUS-18-007-001, dated 05 June 2019:


- LL5-6: new physics in events containing Higgs boson candidates. [Higgs boson is not a final-state particle!]

  • that part of the sentence has been removed.

- LL12-13: in most of our GMSB papers we refer to the LSP as the gravitino, not the goldstino. I suggest that we keep consistency with our earlier papers, particularly since you use "gravitino" in the abstract!

  • DONE

- L31: ... number of jets and the number of jets identified as originating from the fragmentation of b quarks (``b tagged").

  • DONE

- L55: the current best measurement of the Higgs boson mass is not the combined ATLAS + CMS Run 1 paper, but our own H(ZZ) Run 2 paper. Please, either switch the reference to that paper, or add it as the second reference.

  • Added the reference

- L98: give the jet distance parameter, 0.4, here.

  • DONE

- LL98-100: add a standard sentence about jet energy corrections here.

  • DONE

- L109: The transverse component of the vectorial sum ...

  • DONE

- L116: it would be logical to use the acronym SP for the strong production. Suggest changing STP to SP throughout the paper, including the legends in Figs. 3-6 (right).

  • DONE

- L140: ... 0--0.6, 0.6--1.0, and >1.0

  • Done

- L151: single photon and diphoton production are as much QCD induced as they are electromagnetically induced; to avoid the complicated nomenclature, just call it "SM production of diphoton and photon+jets events".

  • DONE

- LL161-163: similarly: "... from the SM production of a diphoton or photon+jet events, and a resonant background from the SM Higgs boson production."

  • DONE

- L191: give a reference to the discrete profiling method.

  • We have given the reference at the start of the paragraph

- LL203-204: what about the JER uncertainty?

  • The JER uncertainty yields a negligible effect on the analysis.

- Tables 4-9: please move them in the body of the paper, not after the references.

  • DONE



don't capitalize "supersymmetry";

  • DONE


L3: additional Higgs boson candidates;

  • DONE.

L6: at the LHC at a center-of-mass energy;

  • DONE.

L7: corresponding to an integrated luminosity of 77.5 fb−1

  • DONE.
L8: to the standard model;

  • DONE.


L4: beyond the SM postulate [the acronym BSM is never used];

  • DONE.

LL16-17: ATLAS and CMS Collaborations using proton-proton (pp) collisions at the CERN LHC at center-of-mass energies of 8 [6,7] and 13 [8,9] TeV.

  • DONE.

L19: using pp collisions at the LHC at a center-of-mass energy;

  • DONE.

LL19-20: corresponding to integrated luminosities of 35.9 and 41.5 fb−1, respectively.

  • DONE.

L24: add a comma before "which";

  • DONE.

L28: charged-lepton candidates.

  • DONE.

Event simulation:

Fig. 1 caption, LL2-3: lower: the two;

  • DONE.

L50: Monte Carlo (MC) [used on L155];

  • DONE.

L54: The Higgs boson mass is assumed to be 125 GeV;

  • DONE.

L56: The Higgs boson production;

  • DONE.

L57: up to two extra partons in the matrix element calculations to model;

  • DONE.

L66: The SM Higgs boson background samples;

  • DONE.

LL68-69: additional pp interactions;

  • DONE.

L75: for the ISR jet multiplicity;

  • DONE.

Event selection:

LL84-85: information from the tracker, calorimeter, and muon;

  • DONE.

LL89-90: if the sum of transverse momenta of the PF candidates;

  • DONE.

L90: add a comma before "and";

  • DONE.

L93: The cutoff values;

  • DONE.

L97: single Higgs boson candidate.

  • DONE.

L98: The PF candidates are;

  • DONE.

LL104-105: from the PF candidates using a loose working point.

  • DONE.

L105: add a comma before "and";

  • DONE.

L111: to large pmissT

  • DONE.

Analysis strategy:

L120: sensitivity to EWP of charginos and neutralinos.

  • DONE.

L120+2: to the ``razor" megajets;

  • DONE.

L120+5: The razor variables;

  • DONE.

L121: The razor variables;

  • DONE.

L122: add a comma before "while";

  • DONE.

L123+5-123+6: to be consistent with the other notations, please use: "High-pT" and "Low-pT" subcategories;


L123+12: High-pT and Low-pT subcategories;

  • DONE.

L123+13: into the High-pT category;

  • DONE.

L123+14: similarly, please use "High-Res" and "Low-Res" here;

  • DONE.

L127: add a comma before "which";

  • DONE.

L128: the High-pT

  • DONE.


L129: into the High-Res and Low-Res categories;

  • DONE.

L131: from the SM background;

  • DONE.

L133: suppress the SM backgrounds.

  • DONE.

L133+3: add commas after "employed" and "[31]";

  • DONE.

L139: and the SM background.

  • DONE.

L153: The SM Higgs boson background;



LL161,163,168,175,183: nonresonant [CMS Style];

  • DONE.

Table 1 caption, L2: add a comma before "and";

  • DONE.

Table 1 body, second column: use High-pT, Low-pT, High-Res, Low-Res throughout the table.

  • DONE.

L166: from MC simulation;

  • DONE.

L169: delete ", respectively".

  • DONE.

Table 2 body, first column: Z→bb¯, H→bb¯

  • DONE.

[note Roman and a bar above one of the b's!];

LL177-178: Akaike information criterion (AIC) [33].

  • DONE.
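For readers unfamiliar with the criterion, a minimal sketch of AIC-based model selection (the function names and toy likelihood values below are invented for illustration, not taken from the analysis):

```python
def aic(nll, n_params):
    """Akaike information criterion, AIC = 2k + 2*NLL, where NLL is the
    minimized negative log-likelihood and k the number of fit parameters."""
    return 2 * n_params + 2 * nll

# Toy (invented) fit results: name -> (minimized NLL, number of parameters).
fits = {"exp1": (1520.3, 2), "bern3": (1518.9, 4), "pow1": (1523.1, 2)}

# The background model with the lowest AIC is preferred: a better fit
# (lower NLL) is traded off against the penalty for extra parameters.
best = min(fits, key=lambda name: aic(*fits[name]))
```
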

Systematic uncertainties:

LL194,198: nonresonant [CMS Style];

  • DONE.

L200: uncertainties in the;

  • DONE.

Table 3 body, first column: Integrated luminosity; PDF and scale variations; Lepton efficiency; Photon energy scale; b tagging efficiency; σM/M [M in italics]; Signal ISR modeling;

  • DONE.

L204: b tagging efficiency, lepton identification efficiencies, fast;

  • DONE.

L205: add a comma before "and";

  • DONE.

L210: on the razor variables.

  • DONE.

L212: the High-Res and Low-Res;

  • DONE.

Results and interpretation:

LL223-224: branching fraction for; pair production;

  • DONE.

Fig. 2 caption, L5: best fit signal [superlative compound modifiers are not hyphenated];

  • DONE.

LL234-235: and the χ̃_1^0 LSP, and the ... and the LSP.

  • DONE.

L236: at NLO+NLL precision [NLL has been already introduced on L228];

  • DONE.

L237: fix the subscripts for charginos and neutralinos - they should be aligned with the superscripts;

  • DONE.

L240: add a space after the first comma;

  • DONE.

L241: add a space after the comma;

  • DONE.

LL242-243: and the G̃ LSP, or to a Z boson and the LSP.

  • DONE.

L246: at NLO+NLL precision;

  • DONE.

L252: the different simplified SUSY models;

  • DONE.

L258: add a comma before "as";

  • DONE.

LL257-258: For the simplified models of higgsino-like chargino-neutralino production, the limits;

  • DONE.

L271: below 290 and 230 GeV in the;

  • DONE.

Fig. 3 caption, L3: add a space after "(1σ)";

  • DONE.


L274: Higgs boson decaying to;

  • DONE.

L278: the razor variables MR and R2

  • DONE.

L280: nonresonant;

  • DONE.

L285: up to 290 and 230 GeV;

  • DONE.


Replace "centres" with "centers" and "programme" with "program";

  • DONE.

L335: start the sentence "Individuals ..." as a new paragraph;

  • DONE.


Refs. [6,20,34]: remove the issue number, e.g. ", no. 5,".

  • DONE.

Refs. [46,47]: reverse the order to follow the chronological order.

  • DONE.

Ref. [49]: add the Erratum.

  • DONE

Table 4 body, header row: nonresonant bkg.; SM Higgs boson bkg.; second column: typeset all bb¯ in Roman; use "High-pT", "Low-pT" throughout the table.

  • DONE.

Tables 5-6 bodies, header row: nonresonant bkg.; SM Higgs boson bkg.;

  • DONE.

Table 7 body, second column: typeset all bb¯ in Roman; use "High-pT", "Low-pT" throughout the table.

  • DONE.

Comments from Maria Cepeda (CIEMAT) : on paper draft CMS-SUS-18-007-001, dated 05 June 2019:



In general, we miss some more motivation of the strategy and goals through the paper to connect the different sections and guide the reader. We also find some general editorial improvement would help to understand the analyses (we imagine this will come already with the collection of all the different CWR reviews).

One of our main concerns comes from L114-116, which read "Two complementary analysis strategies are employed: one focused on electroweak production of charginos and neutralinos (EWP analysis), and one focused on strong production of bottom squarks (STP analysis)." But, in fact, the two analyses (EWP and STP) are applied to both searches (charginos and neutralinos, and strong production of bottom squarks), and results from the two of them are presented on an equal footing. And the results are not so different. So, in the end, each of the analysis strategies is not really focused on a given search. Probably L114-116 should be rephrased.

Furthermore, we understand it is probably too late to change the logical flow of the paper, but it is evident that two different analyses on the same channels are presented (likely done by different groups). The two analyses differ with respect to the main analysis variables/classification and even the background treatment. On the other hand, the difference in sensitivity of the two analyses is really subtle, reduced to a couple of ad-hoc choices (more b-tagging multiplicity splitting in categories or more background control sample when using low M_R events). It is not clear that the option of having the two analyses merged in the same paper is better than having two separate papers, and it might have been clearer to present them more distinctly even in the same paper. In particular, the systematics description is confusing, because the reader cannot know when a given statement refers to the EWP analysis, the STP analysis, or both.

* There are two analyses presented with different focus, and optimized for different production modes. We present results of both analyses on each of the models presented, simply because we have them. The STP analysis is clearly more sensitive (not subtle) for strong production, while the EWP analysis is more sensitive for electroweak production - as they were designed for. The systematics procedure is the same for both analyses as the two analyses have been mostly harmonized in object and event selection. To improve the clarity on that aspect we have added the sentence: "These systematic uncertainties are approximately the same for the SP and EWP analyses as the object and event selections are identical."

Type B comments (physics)


Abstract: A careful rewrite of the full abstract to make it flow better would be useful. For instance, the first sentence is a run-on - not incorrect, but it takes two reads to understand well the process in question. A search for supersymmetry ... supersymmetry is very general; do we search for supersymmetry or rather for evidence supporting a supersymmetric model? Also, the last sentence takes five lines. Does the ", for the case…" clause affect only the last ";", or all?

  • To clarify the "for the case" clause, we have changed the last part of the last sentence to: "and higgsino-like chargino-neutralino production in the case where the neutralino decays to a Higgs boson and a gravitino $100\%$ of the time for neutralino masses below $290~\mathrm{GeV}$".

L2: it is not proven that the Higgs is a "known fundamental particle". It is so in the SM, but there is no experimental proof that it is fundamental. Whether or not nature will point to a fundamental or composite Higgs is actually one of the main hot topics for future accelerators. So either embed this statement in the context of something "allowed" by the SM (because the SM as an effective theory at low energies could still host a composite Higgs) or just drop the comment ", the only known fundamental scalar particle,".

  • Removed the phrase

L8-9: "a bottom squark, ..., which is produced through the strong interaction and decays to a Higgs boson, quarks, and the lightest supersymmetric particle (LSP)". The bottom squark decays to a Neutralino Chi_2^0 and a b-quark, with the Neutralino Chi_2^0 going to H plus the lightest supersymmetric particle (LSP) In general, the first paragraph presents an enumeration of possible processes in the context of SUSY, without any further explanation of why these diagrams and not other different ones, or what is the motivation for such processes, other than, there are BSM models proposing them .

  • The analysis searches for SUSY production with cascade decays to Higgs bosons. The diagrams presented are the scenarios where this can happen.

L78: It would be good to mention explicitly either in the event simulation or the event selection sections that SM diphoton backgrounds are taken from data, and explain which ones they are, to anticipate the background modelling discussion

  • We discuss the SM diphoton backgrounds and the data driven prediction in the introduction in Section 1.

L55: You are citing only the ATLAS-CMS combined Higgs paper at Run1 for the best Higgs mass, but this is not the best one. The most precise value - unless a new published combination appears before publication of this analysis - is the one from CMS for ZZ*->4 leptons at 13 TeV (HIG-16-041, http://arxiv.org/abs/1706.09936).

  • We have added the reference to the CMS H->ZZ measurement.

L51: Are SUSY signal samples finally produced for 2017? In the AN it is mentioned only samples corresponding to 2016 were produced and used?

  • Yes, the 2017 samples have been produced

L61: Ref. 15 is a comparison of the matching methods available in the market, i.e. you have to really specify in the text which matching method is actually used.

  • DONE

L64: Just out of curiosity... Did you observe differences (in kinematic variables, cross sections, etc) between samples generated with NNPDF3.0 and NNPDF3.1 ?

  • No significant differences were observed between the samples when it comes to shape and yield for the variables relevant for this analysis.

L75-76: How are these corrections for ISR jet multiplicity and pT applied? In a multiplicative way per evt, depending on the number of jets and their pT? Are they only applied to the analysis of 2016 data (since you only have signals for that year and configuration) or applied to both 2016 and 2017 analyses?

  • The weights are applied on an event-by-event basis, depending on nJets and pT for the strong and electroweak production modes, respectively. As stated above, we are now including the 2017 samples.
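The event-by-event reweighting described above can be sketched as follows; the weight values, the binning, and the function names are placeholders for illustration only, not the corrections used in the analysis:

```python
# Hypothetical ISR reweighting table, binned in ISR jet multiplicity.
# The weight values and the overflow treatment are invented for this sketch.
ISR_NJET_WEIGHTS = {0: 1.00, 1: 0.92, 2: 0.82, 3: 0.70}

def isr_weight_strong(n_isr_jets):
    """Multiplicative per-event weight for the strongly produced signal,
    looked up from the jet-multiplicity bin (last bin is inclusive)."""
    return ISR_NJET_WEIGHTS[min(n_isr_jets, 3)]

def apply_isr_weight(event_weight, n_isr_jets):
    """Apply the correction multiplicatively, event by event."""
    return event_weight * isr_weight_strong(n_isr_jets)
```
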

L77: "The full size of this correction is taken as a systematic uncertainty." The AN reads this is so in the weakly produced signal, while the strongly produced one considers half this correction as systematic uncertainty. What is correct?

  • We have clarified this issue with the following sentence: "For the bottom squark pair production signal model the full effect of the correction is propagated as a systematic uncertainty, while for the chargino-neutralino production one half of the effect of the correction is propagated as a systematic uncertainty."

L79: This section reads as Event reconstruction, not event selection. In fact, some general description of the selection flow would be good here to prepare the next section (which mixes strategy and selection)

  • We have changed the section title to "Event reconstruction and selection". The next section discusses only the strategy and event categorization.

L83: "in order to reduce the rate of background" - which background? since you have not introduced diphoton backgrounds yet, this sentence is odd: you cannot supress Higgs production which is the only background you have described in detail (and would be the signal for other analysis)

  • We have changed it to "reduce the trigger rate"

L86: "two photons reconstructed in the barrel region". A sentence justifying the selection only in the barrel would be good (is the signal mostly central? how much acceptance is lost with this requirement?).

  • We have added a phrase indicating that the signal is mostly central. Less than 10% of the signal acceptance is lost by not including the endcaps, which moreover have more fake photon background as well as much worse energy resolution.

L91, L94: which isolation threshold? if the selection is loose, how much background rate does it let through?

  • We use the loose isolation thresholds defined by the EGM POG. We are already using a loose selection working point.

L93-95: you need a minimum fixed energy cut for the photons, in order to ensure that you are above the trigger thresholds. Or at least a minimum diphoton mass such that pT(leading)>0.33*mass>30 GeV and pT(subleading)>0.25*mass>22 GeV. We see later in the analysis that you actually plot the diphoton spectra starting at 100 GeV, which will ensure that the trigger conditions are satisfied. If so, please write at this level that you only look for diphoton invariant masses above 100 GeV.

  • DONE

L101: This tagger is used for 2016 data analysis? Was it also present in 2017? If a different tagger was used, either you mention both or remove both specific names, and explain the working point, not only in terms of mistag probability but also tag efficiency. Please give as well the bjet efficiency and not only the mistag rate.

  • It is used for 2016 and 2017. We have added the efficiency.

L103: "We identify any jets with pT > 20GeV and satisfying the loose working point as a b-tagged jet." You have just mentioned Jets are identified when having more than pT = 30 GeV.

  • We have moved the sentence concerning jets with pT > 30 GeV after the above sentence as follows: "Other jets with $\pt>30$\GeV and $\abs{\eta}<2.4$ are considered in this analysis for the purpose of jet counting."

L104-105: what is a "particle-flow candidate with a loose working point" ?

  • We have added a clarifying phrase to this sentence as follows: "Electrons and muons in the region $\abs{\eta}<2.4$ with $\pt>20$\GeV are selected from the PF candidates, and a loose identification working point is used."

L104: is the value |eta| < 2.4 applied to the selection of electrons, or |eta| < 2.5?

  • We are using 2.4

L107-108: How are these deltaR cuts performing in data and MC? Do they need corrections or Scale Factors?

  • These vetoes have an efficiency larger than 99.9%, so they do not need corrections.

L109: The definition of pT^miss needs a change of sign.

  • DONE
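The sign fix refers to the standard definition of pTmiss as the magnitude of the negative vector sum of the transverse momenta of all PF candidates. A minimal sketch, with candidates represented as hypothetical (pT, phi) pairs:

```python
import math

def ptmiss(pf_candidates):
    """pT^miss: magnitude of the NEGATIVE vector sum of the transverse
    momenta of all PF candidates (note the minus signs)."""
    px = -sum(pt * math.cos(phi) for pt, phi in pf_candidates)
    py = -sum(pt * math.sin(phi) for pt, phi in pf_candidates)
    return math.hypot(px, py)

# Two back-to-back candidates balance each other, so pTmiss is ~0.
balanced = ptmiss([(50.0, 0.0), (50.0, math.pi)])
```
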

L113 - General comment to the analysis strategy section: it is very difficult to follow what is done for each analysis and why: consider expanding the first paragraphs to motivate the decisions and explain the reasoning to have two selections

  • We have changed the first paragraph to clarify the two complementary analyses: "Two complementary analysis strategies are pursued employing two alternative event categorization schemes: one focused on electroweak production (EWP analysis) of charginos and neutralinos; and another one focused on strong production (SP analysis) of bottom squarks. For both strategies, we define event categories that are sensitive to the \pt of the diphoton Higgs boson candidate, and the presence of additional $\PZ$, $\PW$, or \hbb candidates. Within each event category, we define search region bins based on the number of jets and \cPqb-tagged jets, and the values of kinematic variables that discriminate between SUSY signal and SM background events."

L114-116: we find these confusing. You should say clearly that both EWP and STP analyses are used for ALL the channels considered. The way it is written gives the impression that the sbottom analyses are addressed by the STP and the electroweak production by EWP, which is not the case, and leads the reader to find out what is going on only at the end of the paper.

  • We believe the new draft after incorporating all CWR comments clarifies the situation.

L117: You need to briefly explain the analysis strategy and not only its improvement. At least sketch what was done in the previous paper in broad lines.

  • DONE

L120: it would help to remind the reader around here that the first selection requirement is a diphoton compatible with a Higgs as discussed in the previous section - and that looking for leptons or bjets is done afterwards to complete the event description.

  • We have revised the first paragraph of this section which clarifies the diphoton Higgs boson candidate requirement.

L120-L133: Can you connect the description to the Feynman diagrams given in Figure 1? Explicitly say that there is always a Hgg decay, that the other Higgs (if present) is only considered in its bb decay, that the two-lepton category corresponds to the diagram with the Z and the one-lepton category to the one with the W, etc., and explain what happens in the decay chain of the particles to describe the full event… This would help to follow the selection. For example, it is not clear in line 120 where the leptons are coming from in the decay chain and how line 126 connects to the two-lepton/one-lepton/no-lepton discussion.

  • We have added this sentence: "These enhancements improve the signal sensitivity to electroweak production of charginos and neutralinos. By isolating events with a $\PZ$, $\PW$, or \hbb candidate in addition to the \hgg candidate, we improve sensitivity to the simplified signal models shown in Fig.~\ref{fig:SMSDiagrams}."

L122 and others: this SM background has not been described (you only do so in line 151). The only SM background you have talked about so far is the Higgs one. Can you distinguish which of the cuts suppress SM Higgs and which try to deal with diphoton and photon+jets?

  • The new draft discusses the dominant diphoton and photon+jets background in the introduction section

Paragraph after line 123: how do you classify events with one muon and one electron? It is not clear whether they are "Electron", "Muon" or even something else.. If they are excluded from the analysis it should be stated explicitly. What is the motivation for the pT = 110 GeV border line in the Higgs boson pT? Where is this number coming from? Similarly, was the definition of the H (95-140 GeV) and Z (60-95 GeV) mass ranges optimized ?

  • Events with one muon and one electron are included in the Two-Lepton category. We have removed the phrase "same-flavor" from the sentence to make the description more accurate. The 110 GeV border was employed in the previous iteration of the analysis; that cut value was obtained from an optimization done for the Run 1 version of the analysis. The H and Z mass ranges are also optimized for dijet mass resolution.

L126: How important is each of the categories? Can you give an idea of their relative purity and significance here? In the appendix there are tables, but a sentence summarizing the information would be useful to the reader.

  • We have added this sentence: "For signal models in the noncompressed region of parameter space the High-$\pt$ category provides the best sensitivity, while for signal models in the compressed region of parameter space the categories with additional leptons or Higgs boson candidates provide the best sensitivity. "

L 131: "... each event category is further divided into bins in the M_R and R^2 ..." and Table 1. The reader may wonder why the Muon Low pT category is not split into R^2 bins while the Electron Low pT category is. Background suppression?

  • We added this phrase to clarify the issue: "provided there are a sufficient number of data events in the diphoton mass sideband to be able to estimate the background"

L133: Why is an alternative method used in the STP approach to generate the two hemispheres? What is the gain/optimization relative to the Razor megajet algorithm presented before (EWP approach)?

  • This is a result of alternative optimizations by the past teams who worked on the razor and MT2 analyses, respectively. It is primarily historical.

L149: this paragraph would be useful earlier since it explains more generally the processes and the idea behind the selection.

  • We have moved this paragraph to the first paragraph of the section as suggested.

L 150: "...we perform a combined simultaneous fit using all of the search bins...". Is the fit to the number of events in each bin? Later on we learn it is done in terms of the diphoton invariant mass distribution. Maybe it would be worthwhile to make this explicit.

  • DONE

Table 1: reformat so that it is clearer (use multirow and shuffle columns/rows as necessary to give a better idea of the hierarchy of the categories). Add explicitly the meaning of High Res, High Pt, etc for clarity.

  • They have been reformatted, and a schematic has been added to the paper to better explain the hierarchy.

Table 2: explain the “or” in the caption

  • We have harmonized tables with the categories between the two analysis.

L161: A justification of why each analysis uses a different background modelling is missing. What are the advantages/disadvantages of each approach? Furthermore, Figure 2 shows the background modelling performance for EWP, but not for STP. A figure comparing both background modellings, or at least one showing the performance in STP would be good.

  • We have added this sentence: "The use of an alternative method is intended to increase the robustness of the background modeling. Similar accuracy is expected of the two alternative background fit methods."

L 187-189: "A set of possible functions is chosen from sums of exponential functions, sums of Bernstein polynomials, Laurent series, and sums of power-law functions." These are the "exponentially falling functional form" advanced on L153; can the two be connected?

  • DONE
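The four function families named in the comment can be sketched as simple one-dimensional parameterizations (illustrative only; the actual normalization and parameter conventions in the analysis fit may differ):

```python
import math

def sum_exp(x, params):
    """Sum of exponentials: sum_i N_i * exp(-a_i * x), params = [N1, a1, N2, a2, ...]."""
    return sum(n * math.exp(-a * x) for n, a in zip(params[::2], params[1::2]))

def bernstein(x, coeffs):
    """Bernstein polynomial of degree len(coeffs)-1 on x in [0, 1]."""
    n = len(coeffs) - 1
    return sum(c * math.comb(n, i) * x**i * (1 - x)**(n - i)
               for i, c in enumerate(coeffs))

def laurent(x, coeffs):
    """Truncated Laurent series: sum_i c_i * x^(-i)."""
    return sum(c * x**(-i) for i, c in enumerate(coeffs))

def power_law(x, params):
    """Sum of power laws: sum_i N_i * x^(-a_i), params = [N1, a1, N2, a2, ...]."""
    return sum(n * x**(-a) for n, a in zip(params[::2], params[1::2]))
```

All four are smoothly falling for suitable parameters, which is why they are natural candidates for the "exponentially falling" diphoton mass sideband shape.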

L200-207: how is it justified that we have a common set of systematics for the EWP and the STP analyses? Particularly with respect to theory uncertainties it is far from obvious, as well as in the case of b-tagging, which obviously affects the sbottom analysis more.

  • We have clarified this confusion with the extra sentence: "These systematic uncertainties affect the event yield predictions of the SM Higgs boson background and SUSY signal in the different search region bins, and are propagated as shape uncertainties." Table 3 summarizes the typical sizes of the systematic uncertainties, which turn out to have similar ranges for the two analyses.

L 208: About the loss of transparency: was it not corrected? Or is the syst. uncert. (10-24%) evaluated after the transparency correction?

  • It is evaluated after the correction.

L212: this LowRes and HighRes only applies to the EWP analysis; what is it done for the STP analysis?

  • The SP analysis does not have any event categorization based on the expected diphoton mass resolution.

L 218-227: refer somewhere to Fig. 1?

  • DONE

L 230: what happens if the mass splitting is larger than 130 GeV?

  • We did not explicitly consider those scenarios. Larger mass splitting would progressively change the shape of the kinematic variables: Higgs pT and MET.

L 230: split the paragraph after [42-44]? Start a new paragraph with "In the second scenario ..."

  • DONE

Fig 2: Would you consider making 2 GeV bins? Is this the category with most statistics?

  • We have fixed our binning for the fit. This is not the category with the most statistics.

L230-231: how does the sensitivity change if the mass difference between chi20 and chi10 is much larger? Is the MET change significantly affecting the results?

  • We have not studied those signal models. The Higgs and chi1_0 will get a larger momentum in the rest frame of the chi2_0, and after boosting to the lab frame they will likely be more asymmetric, as one of the decay products can get an extra boost while the other one may get a boost in the opposite direction. Likely there is larger MET but not for all events, so it is a more complicated situation that would need to be studied with additional signal MC.

L247-251: this is probably too technical, leading to unusual statements like some neutralinos having negative masses. We guess that those changes of sign are just tricks to implement the mixing matrices. Do we really need these explanations or is it just enough to cite Ref. 45? If not, can it be done in a simpler way? (For instance, it would be useful to explain the relevance or implication of setting some elements to +-0.5 or 1 in the mixing matrices.)

  • This is the statement that has been carefully prepared by the SUSY group and used for all CMS SUSY papers involving electroweak SUSY signals. We prefer to stick to this convention and not change it.

L 252: Specify in the text the assumed masses used for the values in Tables 7, 8 and 9 (they are written in the caption of Table 7).

  • To avoid repetition of text, in the spirit of the comments from Sijin Qian, we have instead added the sentence: "The details of the particular signal model are described in the caption of Table 7".

General comment on the results: would it make sense to highlight in the body of the paper only the best of the two analyses for each set of results (moving the other figure to the appendix); or alternatively to prepare a figure combining both analyses for the main body and keep the independent results in the appendix?

  • We can discuss this with ARC and Pub Comm.

Tables 4-6: are the uncertainties just statistical? Nothing is written in the captions. In addition, the reader cannot conclude anything about the consistency (or excessive consistency) of the data with the background hypothesis if no systematics are given.

  • We have added this sentence to the captions: "The uncertainties quoted are the fit uncertainties which include the impact of all systematic uncertainties."

Summary: After such exhaustive categorization to improve sensitivity we miss some sentence about the improvement with respect to the previous results. Maybe at the very end something like “These results extend previous limits by xx%”. Or if new scenarios have been tested, something like “New decay channels or new search channels involving H and Z ... have been explored.” Additionally, maybe a sentence summarizing the improvements in the analysis, something like “The higher statistics analyzed allowed us to increase the event categorization thus improving previous limits…” or similar.

  • DONE

Type A comments (LE)


L2: If you define Higgs boson as H then you should use it (see line 9).

  • DONE

L2: "provide an intriguing window..." can windows be intriguing? or is it more the view/outcome you obtain after looking? What about "an interesting/attractive/appealing..."?

  • Changed window to opportunity

L3: “beyond the standard model (SM)” and L4: “beyond the standard model (BSM)” - I understand the problem, but maybe rewrite to be able to introduce both acronyms

  • Dropped BSM acronym following another CWR comment.

L7-9: the sentence is grammatically confusing. The word "produced" appears twice, the term production is applied to both the initial state in the chain and the decay products, ... Suggestion: split the sentence. For instance: "In minimal supersymmetry (SUSY) [3] a Higgs boson may appear in processes involving the bottom squark, the supersymmetric partner of the bottom quark. Bottom squarks are produced via strong interactions and then decay to a Higgs boson, quarks, and the lightest supersymmetric particle (LSP)"

  • DONE.

L13-14: "The decay signature in this case depends on whether... " looks a bit strange. Suggestion: "The decay signature in this case changes according to whether ..."

  • DONE.

Paragraph after L 123: pT (in High-pT and Low-pT regions) would look better if written as the symbol pT

  • DONE.

L 237: shift the subscripts 2 and 1 closer to letter chi.

  • DONE.

L 241: add space before leading

  • DONE.

Figure 3 (caption) Add space after (1sigma) (fourth line).

  • DONE.

L 277: M_Rand --> M_R and (add space)

  • DONE.

Fig. 3: we do not see any dotted or dashed curves, contrary to what the legend says.

  • FIXED.

Tables 4-9: It would be good if the tables could be fit before the Acknowledgements section.

  • DONE

Table 2: Quark names (in H->bbbar and Z->bbbar) should be roman.

  • DONE.

Table 4: Quark names (in H->bbbar and Z->bbbar) should be roman. pT (in High-pT and Low-pT regions) would look better if written as the symbol pT

  • DONE.

Table 7: Same comments.

  • DONE.

Comments on the references


[6] remove no. 5, pp in roman.

  • DONE.

[7] remove no. 7.

  • DONE.

[9] add a comma before arXiv.


[12] ATLAS AND CMS collaborationS. pp in roman.

  • DONE.

[18] GEN-17-001 already published, update the reference.

  • DONE

[34] remove no. 04.

  • DONE.

[45] arXiv: Hep-Ph --> arXiv: hep-ph

  • DONE.

Comments from Grace Haza (UC Davis) : on paper draft CMS-SUS-18-007-001, dated 06 June 2019:

Type B (physics)

Line 56: no mention of the order of the production cross section that the events are scaled to, just what the generated order was. There are better than NLO Higgs production cross section results.

  • DONE

Line 61-65: Some statement about why the Pythia version + tune are different for 2016 and 2017? Is there a significant difference? Likewise for the PDFs. In both cases is this because of changes in the LHC+CMS data taking in 2016 and 2017 or just because Pythia etc. have been further developed between 2016 and 2017?

  • I believe this is just due to some minor additional development between 2016 and 2017, and I don't think there's anything substantial. We prefer not to comment on this since it's primarily a technical reason.

Lines 71-78: I am confused about the ISR modeling. Do the corrections affect the central values? Or is the systematic error found from the shift observed when applying the corrections or not? "The full size of this correction..." Is this referring to the 1% on the signal yield? 1% seems small; was that really the case? Or the shape corrections? Table 3 lists the systematic uncertainty for "Signal ISR Modeling" as 25%; should this be written as 18-49%? Perhaps in the first line, you could explicitly mention it is a shape correction only: "..., we apply a shape correction as a function of..."

  • added "shape" correction as suggested. The corrections are applied and affect the central values but only by a small amount. The size of that correction is taken as a systematic uncertainty.

Line 86: Why barrel only? With so many categories in this analysis, was there no consideration of having an endcap category as well?

  • The endcap yields negligible sensitivity due to increased fake photon background and significantly worse resolution.

Line 94: Why are the photons required to satisfy pT/m>0.33,0.25?

  • These are just requirements on the photon pT. The cuts are scaled with mass to help with background rejection at lower mass values. This is done primarily to follow what is done by the SM Higgs analysis, and they have seen in the past that it helps signal to background discrimination a bit.

Line 112: Should pTmiss be defined (i.e. use the three-line equal sign) as the magnitude of the vector?

  • It is now defined in the text.

Lines 149-159: Should this entire paragraph, discussing the background estimation strategy, be moved to the background section?

  • We moved this paragraph to the beginning of this section following another CWR comment which suggested that it would help with the understanding of the strategy.

Line 178 and 185: This paper commonly refers to previous work or references to describe important parts of the analysis. It would be better to spend some time describing these procedures here (and then referring for more details) than to just have a single line that summarizes without explaining the procedure.

  • We have rearranged these sentences to explicitly clarify the important features, as suggested.

Line 203: Describe the systematics in more detail, specifically the nonstandard ones such as "missing higher-order corrections" and "fast simulation pTmiss modelling". What are the "higher-order corrections" to?

  • Added "QCD" to the higher order corrections.

Line 208: transparency loss in the ECAL crystals -> should this be ascribed to radiation exposure?

  • DONE

Line 211: "prediction of the SM Higgs background...". Is this uncertainty not propagated to the QCD background? I would imagine pileup/ECAL would affect QCD too...

  • No, because the other backgrounds are determined by the functional form fit to the data, and are therefore not constrained by any simulation prediction.

Line 236: This would be better served in Section 3 where you discuss the modelling rather than here where we want to see the results.

  • We prefer to keep all the information of the signal models used for interpretations together in this section.

How do you account for the interference in the SM Higgs production and the SUSY production? In Figure 2 I see no difference between the background modelling prediction in the signal+background fit where I would expect some change in the “SM” Higgs production when considering a SUSY Higgs.

  • We are neglecting any interference between SM Higgs production modes and the SUSY Higgs production modes. They are different processes.

Lines 247-251: Is this stuff overly technical?

  • Yes, but these sentences have been crafted by SUSY POG experts as a compromise between having the necessary details and not being overly technical. We prefer to stick with the agreed-upon convention.

Type A

Global: different notation to represent Higgs or Z decays to b quarks. examples: L44) Hbb; Table 2) H->bb; Table 5) H_{bb}

  • DONE.


a) I do not think supersymmetry needs to be capitalized.

  • DONE.

b) title: "with the CMS detector" should be removed: https://twiki.cern.ch/CMS/Internal/PubGuidelines#Title

  • DONE.


a) "decay chains of the pair-produced supersymmetric"... The word "the" seems strange to me, almost like there are only two to choose from.


b) "The presence of charged..."; this sentence can use some commas (I think) to clear it up: "The presence of charged leptons, additional Higgs candidates, and various kinematic variables are used..."

  • DONE.

c) Does "LSP" need to be spelled out, it is an acronym not yet defined.

  • DONE.

d) "gravitino masses" -> "gravitino mass"

  • DONE.

corresponding to 77.5 fb1. -> Is it a new CMS standard to drop 'an integrated luminosity of or 'a total luminosity of'?

  • Added integrated luminosity of .

LSP is an acronym without definition (defined on line 9 of Introduction - should it be defined in abstract?)

  • DONE.

2-2: provides an intriguing window to explore physics -> seems a bit of a weird metaphor to me -> can be used to explore physics

  • changed to "opportunity"

Line 3: Introducing SM initialism after “beyond the standard model (SM)” and then introducing BSM immediately after with the same term is strange. Rephrase so that the “beyond” is introduced after SM.

  • DONE.

Line 5-6 I feel ", motivating a search for evidence..." can be removed. The introduction itself is serving as the motivation.

  • DONE.t

Line 18: Missing “data” collected. Change to “We search for evidence … using proton-proton collision data collected by the CMS experiment at the LHC … “

  • DONE.

Lines 20-21: Should this read "35.9/fb and 41.5/fb *of data*"? Or is just the sample size ok?

  • DONE.

Line 21: Comma before respectively.

  • DONE.

Line 22: different exclusive categories -> various exclusive categories OR several exclusive categories (I think 'exclusive' must be 'different' by definition)

  • DONE.

Line 26: "...CMS Collaboration [8]; we enhance the sensitivity…”

  • DONE.

Line 29: "focused" -> "focuses" to remove the past-tense

  • DONE.

Line 32: no comma, possibly move "by exploring alternative phase space regions" to the end of the sentence if appropriate

  • DONE.

Line 51: I don't think "in the search regions" is necessary.

  • DONE.

Line 72: no comma

  • DONE.

Line 73: Is p_{T}^{ISR} a typo? I am not sure what it refers to, is the pT of the chargino-neutralino system the same as that of ISR, or a proxy?

  • It's simply defined as a proxy.

Line 83: "rate of background" -> "background rate"

  • Changed to trigger rate.

Lines 89-91: A photon is considered isolated if the sum of the pT of the particle flow candidates from charged hadrons, neutral hadrons and photons each are below a set threshold. -> could be better phrased? e.g. doesn't say anything about these candidates being 'near' the photon

  • Added phrase about DeltaR to the photon

Line 91: The isolation sums-> undefined - presumably one is to assume an 'isolation sum' is the sum of the pT of particle flow candidates 'near' a photon?

  • Added phrase about DeltaR to the photon

Line 92: the pileup energy density -> undefined How is it obtained?

  • Added a citation to the paper by Cacciari and Salam.

Lines 92-93: an electron object -> undefined. Does it mean there is a projecting track and E/p is appropriate?

  • Changed to "If the photon is matched to a reconstructed electron that is inconsistent with a conversion candidate, it is discarded."

Line 93: The cut values have been chosen to correspond to a loose working point with an efficiency of approximately 90%.-> efficiency of what? loose working point?

  • We changed the sentence to say: " A loose working point is used for the photon identification, which has an efficiency of approximately 90%"

Lines 96-97: are combined into a single Higgs candidate -> what does this mean? Does it just mean form the invariant mass of the photon pair? Should it be 'are considered to arise from a Higgs decay' or something like that?

  • changed to : "are considered as the decay products of the Higgs boson candidate."

Line 98: (and others in this paragraph) you have introduced PF in previous paragraph, use it.

  • DONE.

Line 98: “anti-kT algorithm with distance parameter R = 0.4”.

  • DONE.

Line 106: I feel like this should say "= 0.4" if I am understanding it correctly. If I'm not, I think that sentence and the following sentence are confusing.

  • DONE.

Line 119: for the larger dataset -> for the enlarged dataset ?

  • DONE.

Line 123: exhibits an exponentially falling spectrum in both variables -> exhibits an exponentially falling spectrum in each variable

  • DONE.

After line 123: This paragraph is very dense and hard to understand. A schematic would go a long way to explain the categories. In addition, consistently use capitalization (“high-pT” -> “HighPt” as used in Table 1) The naming here could be better, this appears like an internal distinction that was added to the paper as is.

  • Changed.

Lines 123-124: a) the word "identified" can be removed from whenever it is placed before lepton/electron/muon. You don't write "identified" Higgs boson, for example.

  • DONE.

b) remove "if any such pairs are found"

  • DONE.

Lines 126-129: 'isolate' seems like a confusing word here when it doesn't relate to 'isolation' as used in event selection. It is also hypothetical, rather than factual, and the statements are made as if they are factual. And finally, it is not clear what is meant by 'isolate' in this context.

  • Updated the text for clarity, removed 'isolate'.

Lines 127: that contain additional Higgs (Z) boson which decays to -> that contain an additional Higgs (Z) boson which decays to

  • DONE.

Paragraph after 133: to produce two hemispheres referring to as pseudojets -> something broken in this sentence. Was the intent to write to produce two hemispheres referred to as pseudo-jets ? But that doesn't seem to make sense.

  • changed it to "referred to as pseudojets". Yes, we are just defining the two hemispheres as the pseudojets.

After line 120, 123, and 133: For future, don’t use

/equations within a paragraph, manually return so that line numbers are maintained.

  • DONE.

Line 134: comma after M_T^(i)

  • DONE.

Line 140: are used-> drop (the first occurrence in the sentence - line 139 - still works for this part of the sentence), 140 avoid using infinity symbol. Replace with >= 1.0.

  • DONE.

Line 141: A schematic would be helpful.

  • DONE.

Lines 151-153: exponentially falling-> exponentially-falling in two places


Lines 152: It is my understanding an exponential function is not used to model the background - as discussed in Section 6.

  • While it is not a simple Exp function, we are using functional forms that are exponentially falling. They may have additional features but are still from the class of exponentially falling functions.

Line 162: "..., and resonant background" -> and a resonant background

  • DONE.

Line 164: using -> within

  • DONE.

Line 166: is modeled from the Monte Carlo simulation -> presumably this means 'taken from the MC' (the MC is already modeling the assumed physics)

  • DONE.

Lines 172-173 : It is also important to penalize unnecessary freedom in the background model as we do not want to arbitrarily increase -> At the same time we do not want to arbitrarily increase

  • DONE.

Line 173: free fit parameters -> fit parameters (not sure what 'free' adds

  • DONE.

Lines 175-176: Therefore, both methods employed attempt to choose the best functional form model for the non-resonant background by balancing these two competing interests. -> Does this really add any information? Just drop the sentence?

  • Dropped the sentence.

Line 178: remove the word "candidate", it had me thinking of particle candidates...

  • DONE.

Lines 179-180: penalizes forms with additional degrees of freedom -> favors forms with fewer degrees of freedom

  • DONE.

Line 186: and is not intended to yield more aggressive or more optimal results.-> just drop?

  • DONE.

Line 187: The choice of the function is treated as a discrete nuisance parameter-> Not sure the 'choice' is a nuisance parameter. Isn't it just 'The background function is treated as...'?

  • DONE.

Line 191: "taking into account the penalties"->"taking the penalties into account"

  • DONE.

Line 193: remove "uncertianty in the"

  • DONE.

Line 195: remove "exponentially", it is my understanding an exponential function (ae^bx) was not used, but rather sums of exponentials, polynomials, etc...

  • DONE.

Line 205: comma after "modeling"

  • DONE.

Lines 221-222: I think you can drop "consistent with any of the SUSY signal considered". Since their is no SM deviation, the lack of excess is consistent with any new physics models.

  • DONE.

Lines 223-224: on the production cross section times branching ratio -> on the product of the production cross section and branching ratio

  • DONE.

Lines 224-225: a) Should the b tilde symbol be introduced after "bottom squark"? It is done for all the Chi particles and G particles. b) pair-production and pair production are written both ways, make those consistent.

  • DONE.

Lines 227,234,235: Don't need to say LSP (/Chi^0_1) every time. Just use /Chi

  • DONE.

Line 236: once you define acronyms, use them: remove next-to-leading-log and just use NLL

  • DONE.

Line 237: the /Chi subscripts and superscripts are not in line with each other

  • DONE.

Line 240: add space after first set of /Chi products

  • DONE.

Line 241: add space before "leading"

  • DONE.

Line 241: particles, leading -> missing space after comma

  • DONE.

Line 257: "production cross sections." -> "production cross sections times branching fractions" (?)

  • We set limits on the simplified model production cross section. The branching fraction is divided out of the quoted limit based on assumptions of the model parameter space point.

Lines 276-277: Photon pairs in the central part of the detector are considered to reconstruct the Higgs boson. -> Reads a bit strangely. I suggest something simpler, e.g. Photon pairs in the central part of the detector are assumed to originate from decays of the Higgs boson.

  • We have changed it to: "Photon pairs in the central region of the detector are used to reconstruct Higgs boson candidates."

Line 274: Higgs -> Higgs boson

  • DONE.

Line 276: change "considered" to "used”, "... to 77.5 /fb"; should it read 77.5/fb of data?


Line 282: The colon is misplaced because it breaks the flow of the sentence. If you consider it important, move it after 'exclusion limits' and then add 'on' at the start of each of the three sub-sentences: [...]limits: on the production cross section....; on wino-like...; and on higgsino-like....(The colon could also be dropped completely, and the sentence structure would still work:[...] limits on ... ; wino-like. ... ; and higgsino-like ... )

* DONE, dropped the colon.

Table 1 caption: remove "is presented along with the..."


Table 1 is very bulky. It might be good to trim that down somehow.

  • It's difficult to condense it more given that we need to convey the necessary information in it.

Table 1: replace "None" with "No req." (for no requirement)

  • DONE

Table 1: I would expand the caption to describe the categories more. In particular, what are 17-28 since it's not immediately obvious how 17-22 differ from some of the preceding bins or why some bin numbers have two categories.

  • We have added a longer caption description

Table 1 and 2: These should be of the same form for easy comparison. I find Table 2 to be slightly better (though a schematic as mentioned above would be better still). Also, since this structure is used in the supplementary tables at the end, use the same format as used there (if you decide to keep the table layout rather than a schematic).

  • The tables have been harmonized and a schematic has been added.

You should write explicity how many bins are used in each analysis. It is clear from Table 1 that the EWP analysis has 28 bins (or is it 34?) but I have no idea from Table 2 how many bins there are, maybe mention there are ~75 bins total in the caption?

  • We now mention the number of bins for each approach explicitly in the text.

Table 2 caption: that the region-> that region


Table 2: this layout is not clear. Consider adding column headings to improve

  • Tables have been harmonized.

Table 3: a) Order these from the largest to smallest sizes. b) The caption mentions uncertainties on the "SM Higgs background", does this not effect QCD? If so there is no table for QCD. c) The caption mentions "the size of their effect on the signal yield", does this not effect the background yields?

  • We changed the order as suggested. The QCD background is predicted by data driven predictions for which none of these systematic uncertainties apply. The systematic uncertainties affect background yields and also fitted signal and we simply choose to express them in terms of signal yields.

Table 4: lists 34 bins for the EWK analysis. Table 1 lists 28 bins for the EWK analysis. Where did the extra bins come from. (I am assuming EWK should be replaced with EWP?)


Some places have pT and others have p_{T}, make those consistent. Table 4 for example uses pT but Table 5 does not.


Table 7 caption: The label HH and ZH -> The labels HH and ZH


Table 8: The symbol \ell should be used for Z_{ell, ell}: https://twiki.cern.ch/CMS/Internal/PubGuidelines#Miscellaneous


Figure 1: For the bottom squark diagram, are the subscripts _{1} on the b squark supposed to be there/?

  • Yes

Figure 2: y-axis to “Events / GeV ”, Lower left “0” and “100” are touching, offset axes a bit more, Make the text larger, consider adding bin information into the plots. These plots would be shown in a seminar and it's not clear without those details what is going on here.

  • DONE.

In the caption, it mentions the plot on the left is the background only but I see a bump at 125; is that actually background only? The bump looks striking to me, so perhaps a quick note should be made, in the caption, to remind us this is statistically compatible with zero (although there is a real Higgs there?...) Add a legend entry here too to make it consistent with the plot on the right.

  • The bump is the SM Higgs, which we consider as background. We were asked to remove the legend at some point because there was only one curve shown.

Figs 3,4,5,6: it's not clear in the text or in the captions why you are comparing the EWP and STP since the EWP performs better for electroweak production and STP for strong production (as expected).

  • It was decided with the conveners to keep both interpretations in the paper.

Figures 3-4: Use consistent “[]” or “()” for all figures (Figure 2 is “()” rest are “[]”) Lower-left “0” and “250” are touching, offset axes. Note: the font size on this is much better for the “CMS” and lumi, but try to increase for legend.

Figure 3 caption: (1\sigma)of -> missing space before 'of'

  • DONE.

represent the observed exclusion region and its +/-1 standard deviations (1s)of their experimental and theoretical uncertainties, -> Doesn't seem to match what the legend on the figure says?

  • Changed.

Figs 3 and 4: I think you should comment on the discrepancy between expected and observed and why it changes direction between Figs. 3 and 4.

  • We prefer not to highlight half sigma to one sigma fluctuations. We have isolated to a couple of specific bins which show corresponding 1 sigma fluctuations.

Figure 4 caption: confusing caption which doesn't match the legend

  • Changed.

Figure 5 caption: undergo several cascade decays producing either Higgs bosons -> something missing here

  • Changed, comment Fig 5-6.

Figures 5-6: Put y axis label at top like the rest of the plots. Increase font for legend. Note: size of CMS relative to luminosity is different here than in Figures 3-4. I prefer the 3-4 sizing. In legend, add the theory uncertainty lines to the plot (as you do in, for example, Figures 3-4 “Expected” legend).

  • We prefer to keep the current style as they are matching previous CMS 1D limit plot results.

Figure 6: "... decays are each 50% (left)"; I think the (left) is a typo

  • DONE.

It looks like Tables 4-9 all got placed in the references section. For instance on Page 9, the header says "References". Do these not belong before the references section?


Ref 18: Add cds link Ref 48: Add cds link

  • %GREN%Ref 18 has been fixed. Ref 48 follows the convention that we were told by Pub Comm to follow for this one.

Comments from Luigi Moroni (Milano Bicocca) : on paper draft CMS-SUS-18-007-001, dated 06 June 2019:

Comments of type A (style)

Abstract: LSP (lightest supersymmetric particle) is not defined.

  • DONE.

Line 26: What about “This paper extends the previous work by the CMS Collaboration [8] enhancing the sensitivity to SUSY signatures involving W and Z bosons through the introduction of additional ...”?

  • DONE.

Line 100-102: What about “Jets originating from a heavy-flavor hadron are identified by the CSVv2 tagger algorithm [27] using a loose working point. The resulting mistag rate for light-quark and gluon jets is approximately 10%.”?

  • DONE.

Line 102-103: “We identify any jets with pT>20 GeV and satisfying the loose working point as a b-tagged jet” Aren’t you considering pT>30 GeV only?

  • Sequence of sentences have been changed to clarify that we do btagging down to 20 GeV but count jets down to 30 GeV.

Line 111-112: the formula for pTmiss is splitted in two lines.

  • Fixed

Line 169: Would remove “respectively”.

  • DONE.

Line 170: Change “functional form used” into “used functional form”

  • DONE.

Line 233: “mass-degenerate” is splitted in two lines, and in two pages.


Line 241: “particles,leading” → “particles, leading”

  • DONE.

Table 1: Uniform the “bin number” notation with Tables 4 and 7.

  • DONE

Figure 2:

The plots are not PDF The lines for sig. and bkg. are dashed in the legend but are solid in the plots

  • Fixed.

Caption of Figure 3: “(1\sigma)of their” →“(1\sigma) of their”

  • DONE.

Tables 5,6,8,9: The notation “p_T^{75}” ( {0} and {125} as well) for the “search region bin” column is misleading because you are actually using the pT/Mgg ratio. For instance, the mapping between p_T^{75} and 0.6<pT/Mgg<1 is not immediate. I would suggest to change the notation or explicitly explain it in the captions.

  • Added an explanation in the caption

Tables 6,7,8,9: the \pm symbols are not aligned as in the previous tables. I would be nice to align them.

  • It's rather difficult to align them as there are different number of digits in each line

Comments of type B (contents and physics)

Line 212: Explain why the systematic uncertainty is propagated only in HighRes and LowRes categories.

  • The estimated diphoton mass resolution is only used to classify events between the HighRes and LowRes categories. Thus, a simulation mismodeling that affects the estimated diphoton mass resolution only affects event migration between those 2 categories.

Line 237: We think that a much clearer definition of the considered SUSY particles would greatly help to understand the discussion and the differences among the models here considered.

  • We prefer to stick to the current sentences as it has been crafted by SUSY group experts and used consistently in past publications.

Line 243-245: “We consider the case where the branching fraction of the ̃χ01→H ̃G decay is 100%, and the case where the branching fraction of the ̃χ01→H ̃G and ̃χ01→Z ̃G decays are each 50%” Why, among all possible B.R. do you choose those cases? Are they representative / already used in previous articles?

  • They represent a scan between 0 and 100% which is the full range.

Lines 249-251: The sentences “ The product of the third and fourth elements of the corresponding rows of the neutralino mixing matrix N is +0.5 (−0.5). The elements U12 and V12 of the chargino mixing matrices are set to 1.” are not very clear. If you think that it is relevant, you should rephrase it in order to make it understandable to a larger audience.

  • We prefer to stick to the current sentences as it has been crafted by SUSY group experts and used consistently in past publications.

Comments from Franco Ligabue (PISA) : on paper draft CMS-SUS-18-007-001, dated 06 June 2019:

Here are a few Type B comments:

1) in the analysis strategy section, in order to understand the ability to discriminate the SUSY signal from the SM background and understand the subdivision into bins of data it would be desirable to show some distributions for some important variables at least for some categories: a) lines ~ 131: for the EWP approach it would be desirable to show the distributions of the variables MR and R2 for Susy signal and SM background, to understand also the subdivision of data into the chosen bins. b) lines ~ 138: for the STP approach it would be desirable to show the distribution of the MT2 variable for Susy signal and SM background, to understand also the subdivision of data into the chosen bins.

  • We prefer not to show those as we do not have a sufficiently clean method to give an accurate prediction of the background shapes for these variables. If we add a MC prediciton, which could serve the purpose you are thinking of, we believe it could be confusing the reader more than it helps. We could add some plot with MC predictions as supplementary material.

2) lines 268-272: please specify that for Figs.5 and 6 the EWP analysis yelds a slightly better sensitivity.

  • This has been explicitly stated in the sentence prior to those sentences: The inclusion of bins with smaller \MR \,and larger \Rtwo in the EWP analysis results in slightly better sensitivity. Events in such bins typically have lower values of \ptmiss and are not in the regions of high signal sensitivity for the SP analysis, while the \Rtwo variable is able to suppress ackgrounds more effectively in these regions of phase space.

3) line 268-271: remove Figs. 5 and 6 and replace in line 270 "decay is 100%" with "decay is 100% in Fig. 5" and replace in line 271 "decays are both 50%" with "decays are both 50% in Fig. 6."

  • The two figures have been combined into one figure following Sijin Qian's comments.

4) line 281: the conclusion will be clearer replacing " ..530 GeV for...." with " ..530 GeV (Fig. 3 STP analysis) for...."

  • We prefer the current presentation, as the brackets and references makes it read awkwardly.

5) line 284: the conclusion will be clearer replacing " ..235 GeV for...." with " ..235 GeV (Fig. 4 EWP analysis) for...."

  • We prefer the current presentation, as the brackets and references makes it read awkwardly.

6) line 285: the conclusion will be clearer replacing " ..290 GeV and 230 GeV for...." with " ..290 GeV (Fig. 5 EWP analysis) and 230 GeV (Fig. 6 EWP analysis) for...."

  • The two figures have been combined into one figure following Sijin Qian's comments.

Comments from Maciej Gorski : on paper draft CMS-SUS-18-007-001, dated 06 June 2019:

Part A:

In line 84/85 you say "algorithm uses information in the tracker". I would rather say "algorithm uses information from the tracker"

  • DONE.

Part B:

Lines around 60: You say you use two diferent Pythia versions. Why is it so?

  • These samples were produced in different years and that's what CMS GEN and production teams decided.

Lines around 230: Why mass splitting is 130 GeV? What would happen if it were different?

  • This is chosen as a point just above the Higgs mass of 125 GeV. We do not study the impact of different values of this parameter.

Fig. 3: On left panel you add black line on the upper part of coloured area, on the right one - not. It's a small thing, but someone might wonder why.

  • DONE

Comments from Christophe Delaere (Louvain) : on paper draft CMS-SUS-18-007-001, dated 06 June 2019:

A. English/Style/Formatting (including figures)


Fig. 3, caption: “(1sigma)of” → “(1sigma) of”

  • DONE.


L18: “excess of Higgs bosons” → “excess of Higgs boson events”

  • DONE.

L107: … in a cone of size DR=0.5 AROUND the selected ...

  • DONE.

L123: before and after the definition of sigma_M, “where” is repeated twice. Maybe rephrase as: “where sigma_M is computed as: ” → “with sigma_M defined as: ”

  • DONE.

L127: “that contain additional Higgs (Z) boson” → “that contain an additional Higgs (Z) boson”

  • DONE.

L133: referring to -> referred to

  • DONE.

L161: “There are two types of backgrounds for this search” → “Two types of backgrounds can be identified for this search”

  • DONE.


[22] For the Run-2 papers, PubComm recommends to also cite a more recent proceeding, see the bibtex sniplets in https://twiki.cern.ch/twiki/bin/viewauth/CMS/Internal/PubGuidelines#References ; the reason is that a reader of this paper who is interested in the details of our fast simulation approach would be mislead, by being pointed only to “Abdoullin et al.”, into believing that we are neglecting out-of-time pileup and are treating the calorimeter response as a Gaussian smearing; that was true in Run 1, at the time of “Abdoullin et al.” (when those approximations were good enough), but we now simulate out-of-time pileup and we treat the calorimeter response in a much more detailed way. .

  • Added the 2nd reference.

B. Everything else (e.g. strategy, paper structure, emphasis, additions/subtractions, etc).


It would probably help to refer more often to figure 1, which might require to number diagrams as 1a -> 1d.

  • We refer to it more often now


The numbering of tables 1, 4 and 7 is inconsistent. Please use the same convention.

  • DONE


L71. It is not clear why only signal samples have to be corrected. What is different there?

  • Only signal samples used the version of MADGRAPH which exhibits the problem that needs to be corrected.

L82. Could you explain briefly how conditions differed in 2017? Instantaneous luminosity (and pileup) increased by ~50%... it may be worth mentioning that given that the integrated luminosity for the two datasets is not that different.

  • We added the phrase: "to cope with the increased instantaneous luminosity"

L145-146: It’s not very straightforward how the search regions are defined. Maybe rephrase as: “Finally, for the remaining events, the exclusive search regions are defined by ...”

  • DONE

Comments from Piotr Zalewski (NCBJ Warsaw) : on paper draft CMS-SUS-18-007-001, dated 07 June 2019:

In the line 100 it is written "Jets with pT>30 GeV [...] are considered in the analysis", whereas below in the line 103 "We identify any jets with pT>20Gev [...] as a b-tagged jet." Please try to remove the confusion.

  • DONE

In the line 105 it is written "Jets are required not to overlap with the selected electrons, muons and photons" What is done if they do overlap?

  • changed to sentence to say that overlapping jets are discarded.

In the equations (1) and (3) different elevation of the superscripts j_i is used.

  • DONE

In the paragraph between lines 123 and 124. What if there is an electron and a muon in the event?

  • We fixed this section to clarify what happens with electron muon events.

In the line 153 we would like to suggest to add "as described in the next section" after "region bin" or do something equivalent.

  • DONE.

Table 1 versus Table 4 (and Table 7) Bin numbers in the former do not agree with these in the latter. Categories are not distinct. What is the difference between "R^2 None" and "R^2 \ge 0.0"? What is the reasoning behind R^2 binning?


Table 2 versus Tables 5&6&8&9 Search region description used in the latter (group of tables) should be, in our opinion, introduced in the former.

  • The tables have been harmonized.

Figure 2 In the legend green and red lines are not continuous, whereas in the Figure they are. Please make them dotted and dash-dotted everywhere.

  • DONE

Additional ARC comments on paper draft for CWR readiness

Comments from Manfred Paulini : on paper SUS-16-007, V4, dated 2019/03/13:

- Abstract: . The diphoton final state is chosen for its clean signature --> can be left out

  • OK

. ... while the possible extra jets and leptons give additional sensitivity to the production process of the supersymmetric particles and the products of the decay chain of the other supersymmetric particle. --> this comes out of the blue and is missing context . in addition, the abstract should say a bit more. I'm suggesting for the abstract something along the lines of ... A search for supersymmetry is presented where in the decay chains of the pair-produced supersymmetric particles at least one Higgs is produced and decays to two photons. Different kinematic variables are used to categorize events into different search regions. The analysis is based on data collected by the CMS detector in 2016 and 2017 corresponding to 35.9 fb-1 and 41.5 fb-1, respectively. No excess of events is observed beyond expectations from standard model processes and the search result is interpreted in the context of strong and electroweak production of supersymmetric particles. We exclude ... give a few examples such as ... bottom squark pair-production with masses below ... electroweak wino-like chargino-neutralino production below ...

  • DONE. We've changed the abstract based on the suggestion you gave.

- l 16: say somewhere that these 8/13 TeV collision data are from pp collisions at the CERN LHC.

  • DONE

- l 25: put text from the caption from Fig. 1 into the main text to explain the diagrams in the main text. I know this is done in more detail in Sec. 8 but something referring to EWK, strong, .. would be useful

  • Added some additional text on the simplified models

- l 25: should we say something about main backgrounds in the intro to give the reader an idea about this?

  • We have added the sentence: "The dominant backgrounds are QCD diphoton and photon+jets production and are modeled by functional form
fits to the diphoton mass distribution."

- Sec. 2: Is this the standard text as recommended by PubComm? It seems a bit unusual.

  • It is the minimal text recommended by the PubComm. We have moved the first sentence to the end of the section, because it was reading a bit unusual.

- l 57: ... include the effects of pileup --> explain pileup ... something like ... include effects from additional pp interactions in the same or adjacent beam crossings (pileup).

  • DONE

- l 67: ... are obtained from the recommendations of the LHC Yellow Report 4 of the LHC Higgs Cross Section Working Group [22]. --> I would shorten to ... are from Ref. [22].

  • DONE

- l 71: These NLO+NLL cross sections and their ... I would move to Sec. 7

  • DONE. In fact they were already in Section 7, so we have just removed this part of the text.

- Sec. 4: We need to say what datasets we used, 2016 & 2017, what lumi this is and most importantly what trigger data (guessing diphoton) we used for the analysis.

  • We have added the description of the 2016 and 2017 datasets to the introduction section. We have added text discussing the trigger selection.

- l 77: ... particle flow candidates ... already talking about particle flow here when it is only introduced in l 85 "The CMS particle flow (PF) algorithm ..." <-- also give a ref.

  • We have moved this sentence before the discussion about photon selection. Also added the reference for CMS PF.

- l 94/95: why are the dR cones different for e and mu --> give a motivation

  • We have added the sentence: "A larger veto cone is used for electrons to suppress photon conversions."

- l 110: I think it would be helpful to include a flow chart to help illustrate the event categorization procedure like we have in the AN and also for the MT2 analysis. This can be connected to the vast labels of search region bins used in Tables 8/9, explaining the abbreviations used there.

  • Not answered

- l 120: before Eq. (5) there needs to be a brief explanation of what is happening and not just a pointer to Refs [36/37]. Something like "Two pseudo-jets are clustered together with MET used to form MTw ..." maybe a bit more than what I have here ... otherwise nobody understands what is happening with pseudo-jets on l 122

  • We have added this phrase: "An alternative clustering algorithm is employed following Ref.~\cite{MT2at13TeV} to produce two hemispheres, referred to as pseudojets,..."

- Sec 6: it occurred to me here that somewhere earlier we should say that we use the gg inv mass and the Higgs signal to discriminate signal from background

  • We have added this description to the introduction section

- p 6 bottom: here we talk about likelihood fit and nuisances etc. This would fit better into Sec. 9 but I realize that we also talk about fits and nuisances in Sec. 7. So, this needs a bit thought and maybe a bit more explanation of what we do with the likelihood fit etc

  • I have moved this paragraph to Section 5 under Analysis Strategy because it seems to fall more naturally there. It was also missing because we only discuss the categorization and binning but did not say explicitly how we actually extract the signal. We also added some additional sentences to make that more clear. Now the paragraph says: "Finally, to test specific SUSY simplified model hypotheses, we perform a combined simultaneous fit using all of the search bins defined for each analysis. The dominant background results from QCD induced
production of diphoton or photon+jets, and exhibits an exponentially falling diphoton mass distribution. This background is modeled with a fit to an exponentially falling functional form independently in each search region bin. The SM Higgs background and the SUSY signal model under test exhibit a resonant shape in the diphoton mass and are constrained to the MC simulation predictions within uncertainties. These uncertainties are modeled by use of nuisance parameters that account for various theoretical and instrumental uncertainties that can affect the SM Higgs boson background and SUSY signal normalization, and are profiled in the fit. A more detailed discussion of systematic uncertainties can be found in Section~\ref{sec:systematics}."

- l 179-182: can we expand a bit what we do as this is the main systematics

  • We added some more detailed description of the fit and non-resonant background parameters. The new text reads: "The dominant systematic uncertainty for this search is the uncertainty in the shape and normalization of the non-resonant background. The non-resonant background is modeled by an unconstrained fit to an exponentially falling functional form. The specific choice of functional form varies with each search region bin, and the normalization and functional form parameters are assumed to be uncorrelated and are fitted independently in each bin. Therefore the systematic uncertainty in the shape and normalization of the non-resonant background is propagated by profiling these unconstrained parameters."

- Sec 7: do we need a systematic for MET modeling in the signal MC ?

  • Not answered

- l 214-233: the MC description would benefit from closer reference to Fig. 1. Toward the end of this paragraph there seem to be too many details, such as "The product of the third and fourth elements ..."; this can be shortened/removed.

  • That whole paragraph was given to us by the SUSY group conveners as a centralized way of describing these higgsino-like simplified models. There are some non-trivial assumptions going into this, and I think they wanted to be as specific as possible because we had been called out by theorists in the past.

- p 9: the limits given in GeV do not seem to agree with what I would read out of Figs. 3-6. Please check - will also need to be updated with new signal MC.

  • TODO: need to update the exact numbers once the final plots are made.

- Acknowledgments: is this the right version appropriate for the length of the paper? It seems very long to me ...

  • There are only 3 options: short letter, standard letter, long paper. This paper is not a letter, so we picked the long-paper version.

Approval homework

- Finalize the combine checks and run those through our combine contacts

  • Not answered
- Update final plots with FastSim/FullSim corrections for 2017 as these become available
  • Running

- Add appropriate ATLAS references in the introduction. An appropriate reference seems to be https://arxiv.org/abs/1812.09432

  • Not answered
- Try to opt for a more descriptive naming than "Analysis A" and "Analysis B"
  • Done
- Need to add some distinction on interpretation plots too (it's only mentioned in captions right now)
  • Done
- Figure 2 : usual information like "CMS", lumi etc. missing
  • Not answered
- use appropriate luminosity uncertainties for 2016 and 2017
  • Not answered
- Discussion on systematic uncertainties is very short - need to expand to explain the sources more clearly
  • Not answered

- Please run the spell checker; there are a few typos, e.g. L73 "identification", L103 "sensitivity", and an extra "is" in the tenth line of the paragraph without line numbers on page 4.
- L76-77: It's not clear what you want to say here about PU subtraction.
- L79: leading (subleading), since you give the numbers as 0.25 (0.33).

  • Not answered

Questions by the ARC

Questions by Susan

Comments based on v4 of AN-2018/247:

Abstract: 2017 integrated lumi: you quote 41.5 fb-1 here but a different value elsewhere.

  • This was fixed for AN v5


L58-59 How was trigger efficiency measured, and is the quoted efficiency number absolute, or normalized to the number of events passing the offline selection?

  • Not answered

L70 ISR modeling in signal samples: 'we apply a correction...' a correction to what, to the predicted xsec?

  • These corrections are applied on an event-by-event basis, while preserving the overall normalization.

3.Object selection 3.2 Muons

L102 'mini-isolation variable is defined...'-->'mini-isolation variable R is defined...'

  • Done

3.4 Photons

L122 Why is the analysis limited to the barrel region only?

  • The two photons from the decay of a Higgs boson are generally boosted, even more so if the Higgs originates from the decay of a SUSY particle, so signal photons tend to be central. On the other hand, QCD and GJets backgrounds tend to have at least one photon in the endcap region. In the end, BE and EE events have a much worse S/sqrt(B) compared to BB events.

L138-139 Veto of photons within a cone of 0.5 (1.0) to a selected muon (electron) as footprint: How are these cone sizes determined? They seem a bit large/severe to me.

  • These values are the same as in AN2017-036 (SM Higgs measurement), section 12.2.3.

4. Event Selection and Analysis Strategy 4.1.1 MT2 Reconstruction

L226-227 Pertinent to Eq. (3), the definition of MT2: "The minimization is performed on trial momenta of the undetected particles fulfilling the pt_miss constraint." The meaning of this sentence is not clear to me.

  • @ MT2 Not answered
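For reference while this question is open: the minimization in the MT2 definition runs over all trial transverse momenta of the two undetected particles whose vector sum equals pt_miss; MT2 is the smallest achievable value of the larger of the two transverse masses. A hedged numerical sketch of this (brute-force grid scan, massless visible and invisible particles, hypothetical inputs; not the analysis implementation):

```python
import math

def mt(p_vis, p_inv):
    # Transverse mass for massless visible and invisible particles.
    et_vis = math.hypot(*p_vis)
    et_inv = math.hypot(*p_inv)
    return math.sqrt(max(0.0, 2.0 * (et_vis * et_inv
                                     - p_vis[0] * p_inv[0]
                                     - p_vis[1] * p_inv[1])))

def mt2_scan(vis1, vis2, met, rng=500.0, steps=100):
    """Brute-force MT2: scan trial invisible momenta q for leg 1, with the
    remainder met - q assigned to leg 2 (the pt_miss constraint), and
    minimize the larger of the two transverse masses."""
    best = float("inf")
    for i in range(steps + 1):
        for j in range(steps + 1):
            qx = -rng + 2.0 * rng * i / steps
            qy = -rng + 2.0 * rng * j / steps
            m = max(mt(vis1, (qx, qy)),
                    mt(vis2, (met[0] - qx, met[1] - qy)))
            best = min(best, m)
    return best
```

With back-to-back visible legs and zero MET, the scan can assign zero momentum to both invisibles and MT2 vanishes, as expected for a pure-background-like configuration.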

L235-237 "Here the two objects are chosen with (which?) give the highest invariant mass...mi=mj=0" This sentence also is not very clear.

  • @ RAZOR Not answered

Comments based on v5 of AN-2018/247:

4.2.1 MT2 categorization

L301-302 Splitting of regions into low and high zones of pt_Hgg/Mgg with boundaries 0.6 and 1.0: How were these boundaries chosen? Are they justified by the plots in Fig. 1?

  • They were chosen such that they provide sensitivity to systems that are not/intermediately/very boosted, while retaining a sufficient number of events in each of the resulting regions for the fit to the Mgg spectrum in data.

L312: What are 'compressed' mass points?

  • This is SUSY slang for unboosted: the decaying particle and the LSP have almost no mass difference, and thus the Higgs and the LSP have almost no boost.

4.2.2 Razor categorization

L325-327 Similar to comment on L301-302, the subcategories according to pt_Hgg> or <110 GeV, how is this value chosen?

  • @ RAZOR: Ref. AN2014-080 Appendix D shows that S/B increases as one selects higher pT photon pairs.

L340-342 Same for the resolution categorization with threshold sigmaM/M< or >0.85%, how is this chosen?

  • @ RAZOR Not answered

L344-346 Binning in MR and R**2 based on optimization done for sbottom pair production simplified model analysis: What is the justification that leads you to assume that this optimization is correct or at least adequate for this analysis?

  • @ RAZOR Not answered

5 Backgrounds

L361-363 Mass window between 103 and 160 GeV: This is different from the window usually used for the SM Hgg analysis: {100,180} GeV, why? In particular, did you observe turn-on effects between 100 and 103 GeV? Were any specific studies presented on this point?

  • This has since been changed to the usual window of (100,180). For the previous version of this analysis there was a worry about the turn-on, which has since been mitigated by also adopting the photon thresholds used by the standard model analysis (pT/Mgg).

L365-367 Use of 'crystal ball' function to model SM Higgs background and the signal: Was this really the (single-sided) crystal ball function (with an exponential tail on only 1 side) or the double-sided one (with an exponential tail on both sides)?

  • Not answered

5.1 Non-resonant background

Overall comment on the two different background estimation methods for the 2 different analysis schemes (MT2 and Razor): You have 2 different analysis strategies (MT2 and Razor), each of which uses a different background estimation method (envelope and AIC). But there is no reason why one or the other of these background methods is more well-suited for or linked to the use of one or the other of your analysis strategies, is there? At least no mention of this is made in the documentation (for example that one or the other background method leads to lower systematic uncertainties for one of the particular strategies). Therefore it could be questioned why 2 different background estimation methods are used, and not the same one for both analysis strategies. Indeed, when you have a large number of bins as in your 2 strategies, my personal opinion is that the envelope method is easier to use since no explicit bias studies are required, which can be tedious to perform.

  • Not answered

L387-393 For a given candidate family for the envelope, all orders up to and including the highest order as determined by the F-test are included, except those removed by the goodness-of-fit test. Perhaps the text does not make this point explicit enough.

  • Not answered; we will add this info to the text.

L394-395 AIC, don't you need a reference for this?

  • @ RAZOR Not answered

L395-398 The text talks of the determination of the 'relative value-added' of one functional form versus another, and also penalties for functional forms having 'an unnecessary amount of freedom'. Would it be possible to somehow quantify these concepts just a bit without getting into too much technical detail? Otherwise it sounds vague, particularly compared to the detail evoked for the envelope method.

  • @ RAZOR Not answered

L405-406 Bias tests for AIC allowing bias up to 30% of the stat uncertainty: Although perhaps other analyses have allowed biases this high when using this kind of bias calculation method, it seems on the high side to me. In particular, when this technique was used for the standard model Hgg analysis, the limit was 25%. Also, the standard model Hgg analysis quickly abandoned this percent-of-stat-uncertainty bias criterion in favor of the average pull of the fitted signal strength modifier over the set of relevant generated pseudo-data sets (i.e. toys). The upper limit for this average pull was set at 0.14 (14%), corresponding to an amount of bias necessitating an increase in the uncertainty in the frequentist coverage of the signal strength of less than 1%, which was deemed acceptable.

  • @ RAZOR Not answered
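The average-pull criterion described in the comment above can be made concrete with a small sketch (hedged: hypothetical Gaussian pseudo-fits standing in for the actual toy machinery):

```python
import random
import statistics

def average_pull(mu_true, fit_results):
    """Mean pull (mu_hat - mu_true) / sigma_mu over a set of toy fits.
    fit_results: list of (mu_hat, sigma_mu) pairs from pseudo-experiments."""
    return statistics.mean((mu_hat - mu_true) / sigma
                           for mu_hat, sigma in fit_results)

# Illustration with unbiased Gaussian pseudo-fits: the average pull should be
# close to 0, i.e. well below the 0.14 threshold quoted above.
random.seed(1)
toys = [(random.gauss(1.0, 0.2), 0.2) for _ in range(5000)]
print(abs(average_pull(1.0, toys)) < 0.14)  # -> True
```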

L410 "AIC weight", do you mean really "AIC measure" which was used before on L396?

  • @ RAZOR Not answered

L412 High bias (31.2%) for bin 9: In addition to the uncertainty of your bias test, you could check also directly the coverage, it could still be acceptable.

  • @ RAZOR Not answered

5.2 Resonant background

L421-426 Nuisance parameters for normalization of SM Higgs background: Where were the individual uncertainties listed taken from?

  • Not answered

6 Combination of 2016 and 2017 data periods

L444: Unless I am mistaken, there is a mistake here: "For the fit to the non-resonant background the data is added ...." should be "For the fit to the resonant background the data is added..."

  • We have to rephrase this section as it doesn't currently reflect the strategy followed by Razor for the resonant bkg

7 Systematic uncertainties

L458-461 Object selection efficiencies: I am not sure you have taken into consideration the Loose Photon ID efficiency and its related uncertainty, available from EGM; is that the case? (It is not said explicitly, unlike for the other objects.)

  • Not answered, to add

L463-471 Is what you are describing here the standard 'scale and smearing' corrections (with their associated uncertainties) for photons and electrons from EGM? If so, there should be a reference, if not, why are these not being used?

  • Not answered; they are applied. We should also fix the reference at the end of the section, which points to a figure that is not included.

Questions by Manfred

-Regarding the paper draft, reading the comment from the SUSY conveners to our concern about the paper structure, it sounds like we are encouraged to put effort into making the 2 different approaches look unified in the paper, e.g. using the same interpretations to discuss the complementarity and strong points of each approach and highlighting potential gains by pursuing two strategies. I suggest for the ARC to wait until the authors have produced a new paper draft with this approach in mind.

  • We have rewritten the paper draft. The two strategies are presented as complementary, one focusing on electroweak produced SUSY signals and another focusing on strong SUSY production. We now highlight the strengths of the two approaches for each interpretation considered.

- regarding the question that was raised by the ARC about the QCD scale and PDF variation systematics: from Myriam's posting on Jan 30, it sounds like the correct SUSY recommendations have now been identified but were not followed originally and will now be implemented. Is this correct? Please indicate how the systematics changed with the SUSY recommended approaches.

  • The only difference is in the scale variations for signal, which should generally be smaller than the ISR uncertainties, so we expect at most a small effect on the final results.

- p. 3: good to see that you use ISR corrections to improve the MadGraph5 @nlo modeling. The variations between 0.92 and 0.51 for jet multiplicity between 1 and 6 seem larger than I remember. What do other analyses find for those ISR corrections?

- p. 7: as indicated, the POG working point recommendations for loose electron and photon ID differ for 2016 and 2017 data. I assume the 16 recommendations were used for 2016 data and the 17 recommendations for 2017 data. How was the MC treated, especially the signal MC? Was it split according to 16/17 lumi and the 16/17 IDs applied accordingly? Or were the same e/gamma IDs applied on MC? Which ones?

  • Recommendations for 2016 are used for 2016 data, MC and signal MC. Recommendations for 2017 are used for 2017 data, MC and signal MC. Given the lack of the 2017 signal MC so far, 2016 signal MC with all the corresponding 2016 recommendations was used and just scaled up for 2017 as a placeholder.

- p. 8: you state that electrons that lie within a DR cone of 0.4 to any selected muon are interpreted as the footprint of a muon and are vetoed. What is the effect of this cut? How much signal do you lose?

  • The strategy is to start with the cleanest objects, which are isolated muons, and clean all other objects that overlap with them. Any other object that overlaps with an isolated muon is very likely to be a muon misidentified as that object. The effect of this cut on the signal is basically zero, because in almost all cases the selected isolated muon is really a muon and no mistake is made. If we instead did not do the overlap removal, we would likely lose signal because the interpretation of the event would be wrong.

- p. 9: photons are accepted up to |eta| < 1.48. It is more customary to only go up to eta < 1.44 for barrel photons. Why was 1.48 chosen?

  • This is a typo, sorry about that. Indeed we use 1.44 as you noted. This will be fixed in the next AN version to be uploaded.

- p. 10: you state that any photon that lies within a cone of size 0.5 (1.0) to any selected muon (electron) is interpreted as the footprint of that muon (electron) and is vetoed. This seems really odd to me given that you use a di-photon trigger. Wouldn't you want to do it the other way around and veto the ele or mu candidates if they are too close to photon candidates? As above, what is the effect of this cut on signal or the selection?

  • As discussed in the question above, the strategy is to start with the cleanest objects, which are, in order, muons, electrons, photons, and jets, and at each stage clean all other objects that overlap with them. If a photon object overlaps with an isolated muon or isolated electron object, in almost all cases the object was actually a muon or electron. The vetoes are done in order of the cleanness of the object: muons are the cleanest, followed by electrons, then photons, then jets. This strategy ensures that, globally, we minimize object identification mistakes. There are no negative effects of this strategy on the signal or the selection. The opposite strategy - favoring photons over electrons - would instead result in many electrons being misidentified as photons. It's much more likely to lose a track than for a neutral object to produce a fake track.

- p. 20: why is an asymmetric mass window between 103 and 160 GeV used for the diphoton background fit? Is this customary in the Higgs group?

  • An asymmetric region is customary in the Higgs group. As the background shape is falling, there are more events per GeV to the left of the resonance than to the right.

- p. 21: how often did it happen that the number of events was smaller than 5 in any given bin and toys were used instead?

  • All regions ended up having more than 5 events in the mass range 100-180 of M_gg as can be seen in the figures in the appendix D, so this was never the case.

- Tab. 18: the max bias of 31.2% is above the target of 30% - how can this happen?

  • To be precise, a 30% bias term only leads to about a 4.4% underprediction of the statistical uncertainty, so 31.2% is within the 5% underprediction target (which allows a maximum bias of 32%). We can add a sentence about it.
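The arithmetic behind this answer: if a fractional bias b (relative to the statistical uncertainty) is neglected, the true total uncertainty is larger by a factor sqrt(1 + b^2), so the quoted uncertainty is underpredicted by sqrt(1 + b^2) - 1. A quick check of the quoted numbers:

```python
import math

def underprediction(bias_fraction):
    """Fractional underprediction of the total uncertainty when a bias of
    bias_fraction (in units of the statistical uncertainty) is neglected."""
    return math.sqrt(1.0 + bias_fraction ** 2) - 1.0

print(round(underprediction(0.30), 3))   # 0.044 -> the ~4.4% quoted above
print(round(underprediction(0.312), 3))  # 0.048 -> still within the 5% target
print(round(underprediction(0.32), 3))   # 0.05  -> the maximum allowed bias
```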

- Fig. 2: the 2016 to 2017 data differ quite a bit for MT2 in the top right plot. Why is this? Is this something we need to treat in the systematics?

  • Given the differences between the years (new Pixel detector, different LHC conditions, etc.) some differences are expected but will in the same way affect the SM Higgs background and the signal MC and are thus accounted for.

- Fig. 3-6 and 7/8: I'm confused about the limit plots. I thought we needed 2017 MC to get the final limits. Plots say all 16+17 data were used. Can you clarify what is still needed to do to get the final limit plots?

  • We use 2016 signal MC and use it scaled up for 2017 signal MC. All that is needed is the 2017 signal MC for the final version. This will be updated once the 2017 samples are produced - this should happen in the next few days.

- Fig. 3+4: interesting that the observed limits are below the expected ones for both the MT2 and razor approaches, resulting in more or less the same limits. I guess there is a high correlation between the selections ... This might be worth discussing in the paper ...

  • For certain models, a lot of sensitivity comes from the Hbb category. And because the overlap between the two analyses is larger in that Hbb category, a fluctuation in the Hbb category affects the two analyses in similar ways.

- Fig. 9: applying the prefiring recipe to 2017 MC still shows some discrepancies between data and MC especially in H pT/Mgg and MT2 with H. Do we need to assign some systematics for that?

  • The difference is small enough that it would not give rise to a dominant uncertainty for the final results.

- p. 50 ff: this is a dumb question, where can I find the unblinded data plots of the distributions shown here?

  • The blinded figures have been replaced by the unblinded ones.

Questions by Pablo

* General comments to the analysis (based on the AN)

+ Objects

- Photon rejection cone when a muon or electron are close: 0.5 (1.0) are being used for muons (electrons). Are these standard numbers? If not, how were they chosen?

  • They are standard numbers as described in AN2017-036 (SM Higgs measurement) in section 12.2.3.

- The analysis is using the CSVv2 b-tagging algorithm, while it seems that DeepCSV is giving a better performance nowadays. Is there any reason for not using that algorithm?

  • DeepCSV was not directly available in the MiniAODv2 samples the analyses were optimized for. In the interest of getting the result out as soon as we can, and because the improvement from DeepCSV is a second-order effect, we chose to use CSVv2, which was more readily available.

+ Event selection

- "The best Higgs candidate is chosen to be the photon pair that maximizes the scalar sum of the pt of the pair of photons" -> Is this a standard/optimal choice?

  • This is a standard and optimal choice to suppress background.
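As a hedged illustration of this choice (hypothetical photon representation as a flat list of pT values; not the analysis code), the candidate selection amounts to:

```python
from itertools import combinations

def best_higgs_candidate(photon_pts):
    """Return the indices of the photon pair maximizing the scalar pT sum.
    photon_pts: list of photon transverse momenta (hypothetical inputs)."""
    pairs = combinations(range(len(photon_pts)), 2)
    return max(pairs, key=lambda ij: photon_pts[ij[0]] + photon_pts[ij[1]])

print(best_higgs_candidate([30.0, 80.0, 50.0]))  # -> (1, 2), the 80+50 pair
```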

- I find the event selection quite complicated and I would encourage the analyzers to try to be as clear as possible when describing them. In particular I have the following doubts:

  • Clearer descriptions have now been added to the paper draft.

MT2 Selection

- How are events with >= 3 leptons treated? (are they rejected?, do they go into the Z -> ll region with some criteria to select the best pair?)

  • If they pass the di-lepton selection (see answer below) they are categorized accordingly; otherwise they are categorized into the single-muon or single-electron bin (depending on the ID of the leading lepton).

- Are you asking that the 2 leptons have the same flavor?

  • Yes, the dilepton bin requires two opposite sign same flavor leptons close to the nominal mass of the Z.

- Do the Zbb and Hbb regions have any kind of lepton veto, or can events with 2 leptons outside the Z mass window go into these regions if they match the m(bb) criteria?

  • The priority goes: 2-lepton bin, 1-lepton bin, Hbb bin, Zbb bin, and then the kinematic bins. If an event has been categorized into a region that has priority, it will not be counted again in another region. So if an event has 2 leptons that lie outside of the mass window, it will not be counted in the 2-lepton bin. If the event satisfies the m(bb) criteria it will be assigned to the corresponding Hbb/Zbb bin.

- What happens with events with 3 b jets when one of the m(bb) matches the Z window and the other the H window? Which one is preferred?

  • Hbb has priority.

- What are the thresholds used for the ptHgamma/Mgammagamma? The text suggests 0.8 while table 13 in the AN suggests 0.6 or 1.0. Can you clarify? How was this threshold selected? It seems the distribution of this variable (Fig 1, AN) is very different for the different signal models. Is having the same cut for all of them optimal?

  • The bin edges are at 0.6 and 1.0. The old value of 0.8 has been removed. Because the different models have such different distributions (EW production being more on the soft side compared to strong production), the bins corresponding to low values of pT/M have also been retained.

Razor Selection

- As pointed out by the analyzers, no Z mass window cut is imposed on this sub-analysis. Why not? The model contains a resonant Z, and going outside the Z mass window would bring in ttbar and other non-resonant backgrounds. What is the rationale for not having this cut?

  • The choice was made based on the expectation of low statistics in the diphoton mass sideband region. If the mass window cut was made, the expected background was on average too small and resulted in biases for the background extraction procedure.

- What happens with events with 3 b jets when one of the m(bb) matches the Z window and the other the H window? Which one is preferred?

  • Hbb has priority, which is consistent with MT2 method.

- I think it would be very good to see some distributions of the key variables used in this selection, i.e. sigma_M/M, MR, R2.

  • We have added plots of these variables to the AN in Section 6.

+ Backgrounds

- "The lower range is chosen to avoid any effects due to trigger efficiency turn-on". Have you (or some other analysis) quantified this? You say that the trigger plateau is reached at 40GeV, but you are using 20 GeV photons. It's true that there are many pt/M cuts and I agree starting the fit at 103 GeV seems quite safe, but have you quantified that no trigger turn-on effect is present?

  • The values have been obtained by the standard model Higgs search and have been shown there to be fully efficient. We are following what the SM Higgs analysis is doing.

Non-resonant background MT2

- The procedure to choose the functions that go into the envelope is not really clear to me. I guess what I'm asking can be extracted from the fit plots in the appendix, but still: how many functions did you consider finally? How often were you rejecting functions because of the goodness-of-fit test?

  • At least one of each type (Laurent, expo, poly, Bernstein) is chosen. For each type of function a set of possible degrees is tested and, if it passes the F-test, is added to the list of functions that comprise the envelope. As you noted correctly, the functions shown in the appendix figures are all the possible ones for that region.
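The order-selection step described in this answer can be sketched as follows (a simplified illustration with hypothetical NLL values: 2*DeltaNLL for one extra parameter is treated as chi2 with one degree of freedom; the real procedure additionally applies the goodness-of-fit filter):

```python
import math

def ftest_pvalue(two_delta_nll):
    """p-value for adding one parameter to a nested fit: asymptotically
    2*DeltaNLL follows a chi2 with 1 dof, whose survival function is
    erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(max(0.0, two_delta_nll) / 2.0))

def select_orders(nll_by_order, threshold=0.05):
    """Keep adding orders of one function family while each extra order
    significantly improves the fit (F-test p-value below threshold).
    nll_by_order: {order: minimized NLL} (hypothetical fit results)."""
    orders = sorted(nll_by_order)
    kept = [orders[0]]
    for lo, hi in zip(orders, orders[1:]):
        if ftest_pvalue(2.0 * (nll_by_order[lo] - nll_by_order[hi])) >= threshold:
            break
        kept.append(hi)
    return kept

# Order 2 improves strongly on order 1; order 3 does not improve on order 2.
print(select_orders({1: 100.0, 2: 90.0, 3: 89.9}))  # -> [1, 2]
```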

Non-resonant background Razor

- Which families of functions did you try? In table 18 I can see SingleExp, poly2, doubleExp, poly3 and singlePow.

  • singleExp, singlePow, doubleExp, doublePow, poly2, poly3, poly4
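For context on the AIC questions above (including the "penalty for unnecessary freedom"), a hedged sketch of generic AIC-based comparison of such function families, not the exact Razor implementation:

```python
import math

def aic(nll, n_params):
    """Akaike information criterion: the 2k term penalizes extra parameters."""
    return 2.0 * n_params + 2.0 * nll

def aic_weights(candidates):
    """Relative AIC weights for a list of (minimized NLL, n_params) fits;
    a higher weight means a more favoured functional form."""
    scores = [aic(nll, k) for nll, k in candidates]
    best = min(scores)
    raw = [math.exp(-(s - best) / 2.0) for s in scores]
    total = sum(raw)
    return [r / total for r in raw]

# Two hypothetical fits with identical NLL: the simpler one gets more weight.
w = aic_weights([(100.0, 2), (100.0, 3)])
print(w[0] > w[1])  # -> True
```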

Resonant backgrounds

- Are the two analyses considering the same processes here?

  • We are considering the same processes. They are: ggH, vbfH, vH, ttH, bbH.

- Is double Higgs production a problem for this analysis, or is its cross section so small that it has no effect? Have you checked this?

  • The xsec for HH production (31 fb) is roughly a factor of 1800 smaller than the xsec for single-H production (56 pb). We have checked the contribution of SM HH production in the Hbb category, which is the one most affected by that background, and its contribution is only about 1% of the SM H contribution. Therefore it's negligible.

- It would be good to see a table like 19 for the MT2 analysis to check the consistency of the relative contributions to the resonant background.

  • The table has been added

+ Combination of 2016 and 2017 data periods

- If I understood correctly, for the non-resonant backgrounds the data are simply being added up. However, I see clear differences for example in the MT2 variable, which is actually used in the analysis to define the signal regions. If this effect was due for example to different pileup, I would tend to think that the cut on MT2 could be selecting regions with different S/sqrt(B), and then maybe the combination is not the best thing to do. I know it is a tough question but can you comment on it?

  • The dominant uncertainty for this search is the statistical uncertainty of the non-resonant background fit. Gaining a factor of roughly 2 in statistics by combining the years outweighs any sensitivity loss due to the mixing of the years.

- It would be good to see some comparison plots also for MR and the R2 variables.

  • Comparison plots of 2016 and 2017 of Razor variables have been added to AN

- Considering the resonant-background. How different are the shapes used per year? It would be good to see the normalizations and uncertainties used for every shape and year.

  • Plots comparing the shape and normalization can now be found in Appendix E. We observe all of the expected effects due to larger amount of pileup: slightly worse mass resolution in 2017. Normalizations do not seem to change too much. The MET resolution degrades with more pileup, and therefore more ggH events go into bins with larger MET in 2017.

+ Systematic uncertainties

- Are the 2 analyses using consistent systematic uncertainties? For example, looking at tables 20 and 21 and focusing on the ISR, it seems that one analysis is assessing 25% while the other is assessing 5%. In the paper, however, only 25% is mentioned. Could you please clarify whether the same uncertainties are being considered and whether they are consistent given the differences between the analyses?

  • Apologies for the inconsistencies in the AN. We have cleaned up the AN and a clearer description is in the new paper draft. Yes, we are using consistent systematics.

- I don't see any systematic uncertainty due to FASTSIM simulation. Is it not important for this analysis?

  • The FASTSIM uncertainties relevant for this analysis are applied (JEC, b-tag & lepton scale factors) but not explicitly mentioned in the table, as they are applied on top of the FULLSIM uncertainties.

+ Interpretation

- The limit plots have substantial differences between the analyses; on the other hand, it is not straightforward to understand why. In the PAPER you mention "the finer binning in the Razor variables gives sensitivity to boosted topologies while the b-tagged bins in the MT2 approach are sensitive to models where heavy quarks are present", but I think this is still quite vague and only applies to one model. If possible I would like to identify for each of the models why one analysis is better than the other, what the signal regions driving the sensitivity are, and finally to compare these signal regions between the two analyses to see whether they are consistent. I think most of this information is contained in the tables you provide, but still it would be good if you could make some clear statements about it (at least in the AN).

  • We can find the following signal regions with complementarity between the two approaches. MT2 has a clear advantage for strong production due to more bins in the number of jets and b-jets (which was the intention of this strategy). The Razor approach has some advantage for the EWK SUSY models due to the bins in the high-pt category which have larger values of Rsq. This is a result of a re-optimization run for the larger expected dataset. It is worth noting that the EWK signals have a substantial fraction of events which do not have large MET but do have large R^2, while backgrounds do not. Therefore the R^2 variable is able to use this region of phase space (not so large MET, but large R^2) more effectively for distinguishing signal from background. These bins contribute the extra boost of sensitivity for the EWK SUSY models that we consider.

  • Some more details about the specific bins driving the sensitivity are given below:
For Razor:

TChiWH: a large-Rsq bin in the hadronic high-pt (Higgs pT > 110 GeV) category is very sensitive to this model. For NLSP masses below 200 GeV the muon and electron categories, again with high-pt (Higgs pT > 110 GeV), contribute significantly to the sensitivity. For masses above 200 GeV the hadronic high-pt (Higgs pT > 110 GeV) category drives the sensitivity.

TChiHH: the HggHbb categories with high-pt drive the sensitivity for low NLSP masses of 127-175 GeV. For higher NLSP masses the HggHbb categories with high-pt continue to contribute to the sensitivity, but significant additional sensitivity is added by a high-Rsq bin in the hadronic high-pt category.

TChiHZHH: the HggHbb categories with high-pt drive the sensitivity for low NLSP masses of 127-175 GeV. For higher NLSP masses a high-Rsq bin in the hadronic high-pt category dominates the sensitivity.

T2bH: most of the sensitivity throughout the mass plane comes from the HggHbb category.

For MT2 :

TChiWH: a 4-jet (0 b-jet), intermediate Higgs-pT (75-125 GeV), low-MT2 bin is very sensitive to this model. The muon and electron categories, with high-pt (Higgs pT > 125 GeV) and high MT2, contribute significantly to the sensitivity.

TChiHH: the HggHbb category with high-pt (> 125 GeV) and low MT2 drives the sensitivity for low NLSP masses of 127-175 GeV. The second most sensitive bin is usually a generic hadronic bin with a relatively high number of jets and intermediate Higgs-pT (75-125 GeV). For higher NLSP masses the HggHbb category with high-pt (> 125 GeV) and MT2 > 30 drives the sensitivity.

TChiHZHH: the HggHbb category with high-pt (> 125 GeV) and low MT2, alongside the Z->ll category, drives the sensitivity for low NLSP masses of 127-175 GeV. For higher NLSP masses the HggHbb category with high-pt (> 125 GeV) and MT2 > 30, alongside a generic hadronic bin with a relatively high number of jets and intermediate Higgs-pT (75-125 GeV), drives the sensitivity.

T2bH: most of the sensitivity throughout the mass plane comes from a bin with 1-3 jets (at least 2 b-jets), large Higgs-pT (> 125 GeV), and high MT2, alongside most bins of the HggHbb category.
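
For reference, MT2 (the "stransverse mass") used in these categories is the minimum, over all splittings of the MET into two invisible-particle momenta, of the larger of the two transverse masses. A coarse grid-scan sketch for the massless case is shown below (an illustration only, not the analysis implementation):

```python
import math

def mt_squared(vx, vy, qx, qy):
    """Transverse mass squared for massless visible (vx, vy) and
    invisible (qx, qy) transverse momenta."""
    vt = math.hypot(vx, vy)
    qt = math.hypot(qx, qy)
    return max(2.0 * (vt * qt - (vx * qx + vy * qy)), 0.0)

def mt2(v1, v2, met, scan=150.0, step=5.0):
    """MT2 via a brute-force grid scan over the splitting
    met = q1 + q2; minimize the larger of the two transverse masses.
    Coarse illustration only."""
    best = float("inf")
    n = int(2 * scan / step) + 1
    for i in range(n):
        for j in range(n):
            q1x = -scan + i * step
            q1y = -scan + j * step
            q2x, q2y = met[0] - q1x, met[1] - q1y
            m = max(mt_squared(v1[0], v1[1], q1x, q1y),
                    mt_squared(v2[0], v2[1], q2x, q2y))
            best = min(best, m)
    return math.sqrt(best)
```

Real implementations use dedicated minimizers rather than a grid, but the quantity being minimized is the same.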

- Connected to the previous question: in some cases the limits are quite similar; for instance, consider figs. 7 and 8: I would say that at low mass the expected limits are quite similar. However, when I go to tables 28 and 36 I see that the expected limits on the signal strength are quite different between the two analyses. Can you comment on this?

  • There was a typo in the previous version of the AN; the new version corrects it. The tables for HH that should be compared are Tab. 29 vs. Tab. 38 and Tab. 30 vs. Tab. 39. As you noted, for HH (127 GeV) the expected limits are very close: ~0.28 vs. ~0.24 (for the most sensitive bin only). The tables for HH/HZ that should be compared are Tab. 31 vs. Tab. 40 and Tab. 32 vs. Tab. 41. As you noted, for HH/HZ (127 GeV) the expected limits are close: ~1.1 vs. ~0.9 (for the most sensitive bin only).

+ Appendix B

- It would be good if you could add the data observed in these tables.

  • The data yield is included now in the tables.

- A lot of local significance have negative numbers. Is this simply a low-stats problem?

  • The local significance quoted here uses the following convention: negative means a deficit, positive means an excess. The number of deficits, roughly half of the bins, looks consistent with the expectation from statistical fluctuations.
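
The sign convention described above can be sketched as follows (a hypothetical helper, with the significance magnitude taken from a simple Gaussian approximation rather than the full fit):

```python
import math

def signed_local_significance(n_obs, n_exp, sigma):
    """Signed significance: magnitude from the Gaussian approximation
    |n_obs - n_exp| / sigma, sign negative for a deficit and positive
    for an excess. Sketch of the convention, not the analysis code."""
    z = abs(n_obs - n_exp) / sigma
    return math.copysign(z, n_obs - n_exp)
```

With unbiased background predictions, roughly half of the bins are expected to come out with a negative sign.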

Questions by Petar

-As discussed before, my substantial concern is how to describe both analyses in the paper so that the description is (a) interesting and (b) meaningful to the reader. After reading the note, I'm quite skeptical of this, since there are many differences between the two approaches, and it will be very hard to attribute the benefits to any particular thing (least of all to the choice of MT2 vs Razor variables).

  • We have rewritten the paper draft. The two strategies are presented as complementary, one focusing on electroweak produced SUSY signals and another focusing on strong SUSY production. We now highlight the strengths of the two approaches for each interpretation considered.

-My main physics comment has to do with the uncertainty on the SM Higgs pt spectrum (which I don't know if it is included, and if is, how large is it).

* We are using the aMC@NLO MC predictions for the Higgs pT. From the differential measurements, there are no substantial shape differences between the data and the aMC@NLO prediction beyond statistical uncertainties. We propagate systematics from all of the scale variations that impact the Higgs pT spectrum. As far as we can tell from the differential measurements, these variations cover the difference between data and aMC@NLO prediction.

I also second Manfred's request for unblinded m_gg fits.

* They are added in the new AN version now.

-Sec 2 Are the photons isolated? What happens if SUSY is heavy and the Higgs in the SUSY decay chain is energetic, and the photons stop being isolated? Is this corrected for? Or maybe we don't care about such signal?

  • Yes, photons are isolated. For the masses we consider, this effect is small and not particularly relevant.

-L91 You use CHS. What about PUPPI?

  • Using CHS is the standard for most SUSY analyses. As this analysis does not particularly depend on jet substructure or boosted objects (where PUPPI would give an advantage), we prefer to use the current standard in the group.

-L99-100 How efficient (for the signal) are these two cuts?

-L187 Couldn't you cut on cos\theta* instead?

  • If this refers to the angle between the diphoton system and a dijet system as described in 12.4 Hadronic Tag Event in AN2017-036 (HIG-16-040), then the razor and MT2 variables provide a similar discrimination, via the hemispheres/pseudojets and their angular relation, between events where the diphoton system recoils against another boson and events where it does not.

-L196 What is the bkg from HH->(\gamma\gamma)(bb) ? Is that taken into account? (If we saw it, is that a SUSY signal or HH observation?)

  • The xsec for HH production (31 fb) is roughly a factor of 1800 smaller than the xsec for H production (56 pb). We have checked the contribution of SM HH production in the Hbb category, which is the one most affected by that background, and its contribution is only about 1% of the SM H contribution. Therefore it is negligible. SM HH is also a factor of 6 smaller than sbottom production at 600 GeV, so even compared to the signal it is very small.

-L199 So this assumes that we know the Higgs spectrum as a function of all the binning variables quite well (especially those that do not depend on MET, such as Nj, Nb, and MR). Do we?

  • The differential cross section measurements by CMS show good agreement within statistical uncertainties between the data and the simulation. Our bins in the kinematic variables are relatively large, and significant migration of events among these large bins would require larger shape discrepancies than what has been observed in the Higgs measurements.

-(The same comment is also related to L413-419. My understanding is that the uncertainty on the SM Higgs pt spectrum is substantial, or at least non-trivial. If this is not taken into account, then it probably should be, since it may dwarf the other SM Higgs systematics.)

  • "We are using the aMC@NLO MC predictions for the Higgs pT. From the differential measurements, there are no substantial shape differences between the data and the aMC@NLO prediction beyond statistical uncertainties. We propagate systematics from all of the scale variations that impact the Higgs pT spectrum. As far as we can tell from the differential measurements, these variations cover the difference between data and aMC@NLO prediction."

-L280-281 Do these include H->WW* and H->ZZ* decays, or they only refer to W and Z appearing in the SUSY cascade decays (e.g. one decay chain contains a H, and the other W/Z)?

  • The main goal was to tag a second boson in the event coming from the second SUSY decay chain, i.e. the Z in TChiHZ and the W in TChiWH. The most sensitive bins for TChiHH and T2bH (i.e. where we can have H->WW/ZZ) are the Hbb and Zbb bins, so the contribution to the sensitivity from H->WW and H->ZZ in the leptonic bins is not very important for these models and was not the main aim of the leptonic bins.

-L332 So if this is useful, why MT2 binning does not use it?

  • The MT2 categorization has finer binning in pT/M of the diphoton system, which is an input variable for the resolution sigmaM/M, so indirectly MT2 is also binning in this variable.

-*** IMPORTANT *** In general, while we can argue whether MT2 is better than the razor variables or not, I thought there would be a general agreement on how to subdivide the data.

-The problem here is that if the paper is being spun as a comparison between the two methods, then everything else needs to be the same. At this point, if analysis A does better than B, maybe that is not because the razor plane is better than the MT2 variable, but instead because one divides the events according to the mass resolution and the other does not, etc., etc.

  • We do not present the paper as a comparison. The new paper draft plays down the significance of these kinematic variables. Instead, the more important difference is the categorization by number of jets and b-tags, which enhances the sensitivity to strong SUSY production.

-L377-383 This looks like an F-test. Is it?

  • Yes
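
The AN procedure is an F-test on nested background functions. As a dependency-free sketch of the same nested-model logic, the likelihood-ratio (Wilks) form for one extra parameter is shown below; this is an illustration, not the exact AN implementation:

```python
import math

def nested_model_pvalue(nll_small, nll_big, extra_params):
    """Nested-model test in likelihood-ratio form (sketch): for nested
    background functions, q = 2 * (NLL_small - NLL_big) is
    asymptotically chi^2 distributed with `extra_params` degrees of
    freedom (Wilks). Implemented here only for extra_params == 1,
    where the chi^2 survival function reduces to erfc(sqrt(q/2))."""
    assert extra_params == 1
    q = 2.0 * (nll_small - nll_big)
    return math.erfc(math.sqrt(max(q, 0.0) / 2.0))
```

A small p-value favours the function with the extra parameter; iterating this until the p-value is no longer small selects the background function order, analogously to the RSS-based F-test.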

-L400 Where does 30% come from?

  • To be precise, a 30% bias term only leads to about a 4.4% underprediction of the statistical uncertainty, so 31.2% is within the target of 5% underprediction (for which the maximum allowed bias is 32%). We can add a sentence about it.
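
The arithmetic behind these numbers is just the quadrature sum of the bias with the statistical uncertainty; a minimal sketch:

```python
import math

def uncertainty_underprediction(bias):
    # If the background-function bias is a fraction `bias` of the
    # statistical uncertainty, adding it in quadrature inflates the
    # true uncertainty by sqrt(1 + bias^2); quoting only the
    # statistical part underpredicts it by that factor minus one.
    return math.sqrt(1.0 + bias ** 2) - 1.0

def max_bias_for_target(target):
    # Largest tolerable bias fraction for a given underprediction
    # target, inverting the relation above.
    return math.sqrt((1.0 + target) ** 2 - 1.0)
```

Here uncertainty_underprediction(0.30) gives about 0.044 (the 4.4% quoted above) and max_bias_for_target(0.05) gives about 0.32 (the quoted maximum bias of 32%).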

-L421 "Table ???" -- are we missing a table of Higgs contributions? This is very important because of the comment from L199 above.

  • Tables 19, 20, and 21 summarize the SM Higgs contributions for the two analyses; there was a mistake in the LaTeX.

-Sec. 5.3 Where's the equivalent text for MT2?

  • We have modified the text in Section 5.3 to apply for both the razor and MT2 versions of the analysis. The same likelihood is used, just different search region bins.

-L453 I'm confused why the content of the tables 20 and 21 is different. Shouldn't the list be the same and the numbers be largely identical?

  • We've synchronized the uncertainties and have now just one table showing the uncertainties for both analyses.

Questions from pre-approval report

Questions from SUSY conveners

- analyses should provide the list of most sensitive regions in each model, and use this to pinpoint where discrepancies between the observed and expected limits arise, especially in Razor case;

Tables with the most sensitive bins for the different models are provided now in the AN.
For TChiWH, a large-Rsq bin in the hadronic high-pt (Higgs pT > 110 GeV) category is very sensitive to this model. For NLSP masses below 200 GeV the muon and electron categories, also with high-pt (Higgs pT > 110 GeV), contribute significantly to the sensitivity. For masses above 200 GeV the hadronic high-pt (Higgs pT > 110 GeV) category drives the sensitivity.
For TChiHH, the HggHbb categories with high-pt drive the sensitivity for low NLSP masses of 127-175 GeV. For higher NLSP masses the HggHbb categories with high-pt continue to be very sensitive, now accompanied by a high-Rsq bin in the hadronic high-pt category.
For TChiHZ, the HggZbb categories with high-pt drive the sensitivity for low NLSP masses of 127-175 GeV. For higher NLSP masses a high-Rsq bin in the hadronic high-pt category dominates the sensitivity.
For T2bH, the most sensitive bin throughout the mass plane comes from the Hgg-Hbb category.
The discrepancies between the expected and observed limits for the electroweak models indicate that the expected limit is more aggressive than the observed one in all models. We identified that bin 17 (high-pt, high-Rsq) has a large deficit (a downward fluctuation in the observed data) and large sensitivity for all models; when removing that bin from the combination, the tension between the expected and observed limits is greatly ameliorated.

- in EWK models, which phase-space is driving the sensitivity in Razor analysis: are these events with at least one ISR jet, or events with hadronic decays of a second boson?

For the EWK models the sensitivity goes as follows:
TChiWH -> high-Rsq bin in the hadronic high-pt (Higgs pT > 110 GeV) category, plus the muon and electron categories, also with high-pt (Higgs pT > 110 GeV).
TChiHH -> HggHbb categories with high-pt, plus a high-Rsq bin in the hadronic high-pt (Higgs pT > 110 GeV) category.
TChiHZ -> HggZbb categories with high-pt, plus a high-Rsq bin in the hadronic high-pt (Higgs pT > 110 GeV) category.
This has been answered in the previous question; here is the same answer: "the discrepancies between the expected and observed limits for the electroweak models indicate that the expected limit is more aggressive than the observed one in all models. We identified that bin 17 (high-pt, high-Rsq) has a large deficit (a downward fluctuation in the observed data) and large sensitivity for all models; when removing that bin from the combination, the tension between the expected and observed limits is greatly ameliorated."

- please provide a more detailed split up in the systematics uncertainties, e.g. why lepton unc. goes as high as 4%, photon efficiency unc. is missing.

  • Added the MT2 table of systematics to the AN, but we need to expand this section of the AN and add a section about systematics to the paper

- suggest to put both interpretations from MT2 and Razor in the paper for all models, as at this point there are not so many interpretations to allow for an analysis choice per each. And add a discussion for each model in which phase space which analysis is more sensitive and why.

  • We agree; we would like to keep both interpretations in the paper for all models. Generally, the finer binning in the razor variables gives sensitivity to boosted topologies, while the b-tagged bins in the MT2 approach are sensitive to models where heavy quarks are present. The preliminary results also indicate that razor has an edge in the EWK model sensitivity while MT2 has an edge in the strong production model (T2bH), making the two approaches complementary for a concrete set of models.

- Figure 5 vs. Figure 6: for these two plots we would expect the results to be much closer, since the WH selection is still pretty similar in the 1L categories. It looks like there is a bigger deficit in the observed data for Figure 6 (razor result) compared to Figure 5 (MT2 result). In Figure 6 the observed line is above the 1 sigma band, while MT2 is always consistently within the 1 sigma band. Can you cross-check the non-resonant background uncertainties and observed yields for the 1-lepton categories between the two plots?

This has been answered in the previous question; here is the same answer: "the discrepancies between the expected and observed limits for the electroweak models indicate that the expected limit is more aggressive than the observed one in all models. We identified that bin 17 (high-pt, high-Rsq) has a large deficit (a downward fluctuation in the observed data) and large sensitivity for all models; when removing that bin from the combination, the tension between the expected and observed limits is greatly ameliorated."

- question brought up by the ARC: need to think on the justification of using different bkg estimation fit procedures, and explain it in the paper;

  • In the same way that we use different approaches for the categorization and signal fits, a different approach is also chosen for the non-resonant background. Moreover, both approaches have been published in the past, in the context of the Higgs analyses and of this analysis itself. We could argue that the razor approach is just a continuation of the previous publication, while the new MT2 approach has its own non-resonant background estimation method, citing the Higgs and high-mass diphoton analyses for the method.

Questions from Marc

* Page 3, first paragraph: The line "To converge..." is a little confusing. Here the "identified jet" is an additional hadronic jet in the event, rather than one of the megajets, correct? And is it necessary so you have another object (in addition to the Higgs candidate) to define the second hemisphere of the event?

  • We have rephrased this to say: "To successfully identify two hemispheres, the algorithm requires at least one identified lepton or jet in the event in addition to the Higgs boson candidate." Yes, we require one additional hadronic jet in the event.

* The categorization process is so involved for this analysis that it might be helpful to have some version of Tables 13--17, at least in supporting materials.

  • To be added

* 107: The AIC should certainly have a citation.

  • We have added the citation
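
For completeness, the AIC-based choice among candidate background functions amounts to minimizing AIC = 2k + 2*NLL over the function families; a minimal sketch (function and category names here are hypothetical):

```python
def aic(nll, k):
    """Akaike information criterion for a fitted model:
    AIC = 2k + 2*NLL, with NLL the minimized negative log-likelihood
    and k the number of free parameters. Smaller is better."""
    return 2.0 * k + 2.0 * nll

def best_by_aic(candidates):
    """candidates: iterable of (name, nll, n_params) per fitted
    function family; returns the name with the smallest AIC."""
    return min(candidates, key=lambda c: aic(c[1], c[2]))[0]
```

The penalty term 2k is what keeps a higher-order function from winning purely by fitting fluctuations.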

* It doesn't look like there's an organized discussion of the sources of systematic uncertainty in the paper. In section 5 the AIC/bias approach is mentioned after the razor method and the envelope approach is mentioned after the MT2 method, but I think the paper would benefit from a dedicated section to describe all the systematics.

  • Not answered

* 111: I think this echoes a question from the ARC, but is there any justification for the 30% boundary for the bias tests for the razor bins? Is it possible to evaluate the impact of this choice by adjusting it up and down, taking the new best function (if different), and using this to estimate the nonresonant background?

  • The 30% boundary criterion is based on the desire to have a negligible additional systematic error due to biases from the background function. A 30% bias leads to about a 4% additional systematic error, which is small compared to other uncertainty sources.

* This was also mentioned in the meeting, but it does seem as though the estimation method for the main systematics should be independent of the event categorization. Is it possible to try the envelope method on a few of the razor bins to see how the nonresonant estimate changes? Or try the crystal ball function on some of the MT2 bins?

  • Two things are mixed in this question: one concerns the non-resonant background (e.g. applying the envelope method to the razor bins), and the other concerns how we obtain the resonant shapes.

* Figures 2 and 3: It would be good to put these side by side for the purpose of comparison. (Also Figure 3 should indicate it's the limit for the razor method.) Same for Figures 4 and 5.

  • Done

* Why is the mass binning so much coarser for the razor method than the MT2 method? Presumably they're using the same signal MC?

  • This was an artifact of an old plotting macro used to plot the razor results; the scan is the same for both approaches. We have now updated the razor plots with the same binning in the updated documentation.

* It might be worth mentioning in the results that MT2, which bins in b-jet multiplicity, seems to perform better in simplified models with sbottom production, while razor performs better in ewk models, emphasizing the complementarity of the two methods.

  • Added a sentence in the results section

Questions from Rishi

H(gg) Paper/AN Comments: LN 5 "providing motivations to search" -> "motivating a search"

  • Done

LN 19-20 cite papers for MT2 and Razor

  • DONE

I’d take a look here: https://twiki.cern.ch/twiki/bin/viewauth/CMS/Internal/PubDetector and check to see if there is a recommendation for describing the new vs. old pixel

  • The current recommendation above makes no statement about the new pixel detector. As we wish to keep this section short and the analyses don't heavily rely on the tracker information we would prefer to not go into too much detail about the pixel detectors as used during the 2016 and 2017 data taking.

Suggestion: LN 68 Maybe you can write the categories as list with bullets (same way as in the AN?)

  • Not answered

Suggestion: Results Figures 2-4 Would it be useful to include the MT2 & razor Exp/Obs exclusion contours in the same plot? This allows for direct comparison, but this can also just be a supplementary public plot

  • Since we've been asked to keep both interpretations in the paper anyway, we can provide such a figure as supplementary material

AN LN 74, it will be worthwhile here to say that you correct for data/MC differences using POG SFs for leptons, photons, and b-tags

  • Done

Based on the latest inputs: https://twiki.cern.ch/twiki/bin/viewauth/CMS/SUSRecommendationsRun2Legacy you might expect that there will be a few updates that accompany the latest 2017 MC. I think in the AN you should add a list of updates you expect to apply to the MC:
- latest JECs
- ISR weights for 2017 signal samples
- the latest EWK production xsecs for signal, if they are available in time: https://indico.cern.ch/event/775518/contributions/3236728/attachments/1766189/2867678/BasilSchneider_20181205_Cross-Sections.pdf (updates will be linked to the 13 TeV SUSY xsec page)
Am I missing any others?

  • Not answered

Figure 4: Adjust the razor plot so that it matches the z-axis range in Figure 3; also, you need to run the points up to a sbottom mass of 650 GeV

  • The scan only goes up to 600 GeV in sbottom mass; we're currently checking if the extension will arrive in time


I know there are SFs for combCSV for 2016 and 2017 in 94X; would you consider switching to DeepCSV? I would do this if it were a simple change, but it might involve considerable optimization of the Nj-Nb categories, which you can't do after unblinding. But this might be a useful strategy for a future publication with the full Run 2 data

  • DeepCSV seems to be the recommendation for the legacy results and as such will be employed for the full Run2 dataset. At this stage of the approval process, we prefer to not change the binning.

Section 4.3: I think here you also have a difference between the resolution variables: the pT/M(gammagamma) cuts and the sigmaM/M categories. These are correlated, since pT/M is an input variable for sigmaM/M

  • Not answered

If Myriam drops the Laurent function in the MT2 categories, does she get the same exponential or power-law functions as razor? Based on the AN plots in the appendix, it seems like the Laurent, power-law, and exponential functions have similar penalties in each category

  • Not answered

Figure 5 vs. Figure 6: I would expect these two plots to be much closer in sensitivity, since the WH selection is still pretty similar in the 1L categories. These plots will be easier to compare when they have the same x-axis range and are interpolated. It looks like there is a bigger deficit in the observed data for Figure 6 compared to Figure 5. In Figure 6 the observed line is above the 1 sigma band

In comparison, seems like Figure 7 and Figure 8 are in pretty close agreement despite different treatment of the 2-lepton category and the hadronic search regions. So for the 1-lepton categories I would compare bkg fits to be sure they are reasonably comparable between Razor and MT2 and also you can compare the bias terms for the background.

We concentrate on the differences in the expected limits in this answer, since the discrepancies between observed and expected for the razor case have been answered in previous questions; for completeness we include that answer at the bottom.

Regarding the differences between the expected limits for MT2 and razor: we have checked that for the WH case a large-Rsq bin in the high-pt category is driving the sensitivity; this bin is not present in the MT2 version of the analysis, or at least it would be hard to get a one-to-one correspondence. Furthermore, we have checked that when removing that bin from the razor combination, the MT2 and razor limits agree very well, as you expected. In the HH and HH-HZ cases the predominance of that same bin (the large-Rsq bin in the high-pt category) is not as clear as in the WH case, so the differences between MT2 and razor are not as large in these models. As you mentioned, the other categories (two leptons, Hbb, Zbb) are very similar between the two analyses, so this is expected.

Regarding the observed vs. expected difference in the razor case, this is the answer from the previous questions: "the discrepancies between the expected and observed limits for the electroweak models indicate that the expected limit is more aggressive than the observed one in all models. We identified that bin 17 (high-pt, high-Rsq) has a large deficit (a downward fluctuation in the observed data) and large sensitivity for all models; when removing that bin from the combination, the tension between the expected and observed limits is greatly ameliorated."

Object selection

Text marked in orange indicates differences still to be discussed and agreed upon.

Caltech 2016 version 2017 data ETH
Photons:
Loose cut based ID Spring16 Loose cut based ID Spring16
Medium cut based ID spring 2016
Leading photon pT > 40 pT1 / Mgg > 1/3 pT1 / Mgg > 1/3
Subleading photon pT > 25 pT2 / Mgg > 1/4 pT2 / Mgg > 1/4
| eta | < 1.4
| eta | < 1.4 | eta | < 1.4
pT > 20
---- R9 > 0.5 R9 > 0.5 (from trigger)
DR(mu,gamma) > 0.4, DR(e,gamma) > 0.4 DR(mu,gamma) > 0.5, DR(e,gamma) > 1.0 DR(mu,gamma) > 0.5, DR(e,gamma) > 1.0
Electrons:
Veto MVA ID (had box) Cut based loose 2016
Cut based loose 2017 when available
Loose cut based ID
Tight MVA ID (lep box) Cut based loose 2016 Cut based loose 2017
Loose cut based ID
pT > 20 pT > 20 pT > 20
Muons:
Veto MVA ID (had box) Medium cut based ID
Tight MVA ID (lep box) Medium cut based ID
pT > 20 pT > 20 pT > 20
Jets:
pT > 30 pT > 30 pT > 30
| eta | < 3.0 | eta | < 2.4 | eta | < 2.4
loose ID
DR(e/mu, j) > 0.4, DR(gamma,j) > 0.5 DR(e/mu/gamma, j) > 0.4
b - Jets
pT > 30 pT > 20 pT > 20 (also have 30 collection)
| eta | < 2.4 | eta | < 2.4 | eta | < 2.4
loose&medium CSV DeepCSV DeepCSV medium CSV
(*) Lepton selection veto

Suggest (for simplicity and because there is no need to veto leptons for this analysis) to just bin in the number of leptons instead.

(**) B-tagging:

only DeepCSV SF for 2017 data, see slide 27 here: https://indico.cern.ch/event/681877/contributions/2794766/attachments/1569560/2475185/20171204_BTVtalk.pdf

It would probably be easiest to also adopt DeepCSV for 2016 data, so that we don't have to describe 2 working points in the paper.

-- MyriamAngelikaSchoenenberger - 2017-12-01

Topic revision: r85 - 2019-07-07 - MyriamAngelikaSchoenenberger