Revision 7, 2016-01-04 - BrianFrancis

SUS-15-009: Search for natural GMSB in events with top quark pairs and photons (8 TeV)

Manfred Paulini on AN v3:

  • Sec. 2.2: What checks were performed to gain some confidence that the privately produced GMSB signal samples can be trusted as if they were centrally produced?

  These samples were created similarly to those for the di-photon inclusive searches (SUS-12-001 and SUS-12-018), in which the production was not of stops but of first/second-generation squarks and gluinos (scalar udsg). Thus the checks performed for the "stop-bino" samples were mainly against these well-understood older samples, which were also private FastSim.
We found that the kinematics of stops and binos in our sample agreed well with those of squarks and binos of the same masses in these older samples. For an executive summary see Dave Mason's xPAG presentation and the twiki for the scan.
If he'd like, perhaps Dave Mason could comment on this since he oversaw their creation firsthand.
  • Sec. 2.5: why was the ttbar sample re-weighted by the weights squared and not by a variation of no re-weight and 2x the weight (instead of weight squared)?

  Weighting by the weight squared is the TOP PAG recommendation for estimating the upwards systematic fluctuation of this effect: see their twiki on the matter.
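
As a toy illustration of this prescription (assuming the usual exponential form of the top-pT correction; the coefficients and event contents below are placeholders, not the analysis values):

```python
import numpy as np

# Toy sketch of the variation scheme: the nominal correction applies a
# per-event weight w; the upward systematic applies w^2 and the downward
# systematic applies no weight, so the nominal sits between the two.
def top_pt_weight(pt_top, pt_antitop, a=0.156, b=-0.00137):
    # SF(pt) = exp(a + b*pt) for top and antitop, combined geometrically
    return np.sqrt(np.exp(a + b * pt_top) * np.exp(a + b * pt_antitop))

pt_t = np.array([80.0, 150.0, 300.0])     # toy generated top pT [GeV]
pt_tbar = np.array([95.0, 140.0, 280.0])  # toy generated antitop pT [GeV]

w_nominal = top_pt_weight(pt_t, pt_tbar)
w_up = w_nominal ** 2                     # upward variation: weight squared
w_down = np.ones_like(w_nominal)          # downward variation: no reweight

for label, w in [("down", w_down), ("nominal", w_nominal), ("up", w_up)]:
    print(label, w)
```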
  • Sec. 3.3 & 3.4: what bothers me a bit is the fact that the eta regions for e (2.5) and mu (2.1) are different for tight but for loose you use 2.5 for mu, too. Can this difference in choice cause any effect on the CR estimates?

 
      The requirement |eta| < 2.1 for tight muons is due to the SingleMuon trigger requiring it, and is not necessary for other/additional muons in the event. The loose lepton veto is kept constant between signal and control regions, so this should not affect the control regions. Where it could affect the analysis is if the object kinematics or MET differed greatly between ttbar --> (e+e, e+mu, mu+mu), in which case the different efficiencies for each combination would be important; however this is not the case.
The |eta| < 2.1 cut in the trigger does make the tight muon requirement tighter than the tight electron requirement, which is one cause of the difference between electron and muon event counts. Beyond all this, these vetoes are what the TOP PAG recommends for semi-leptonic selections.
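
A minimal sketch of these requirements (the pT thresholds are placeholders; only the eta ranges reflect the discussion above):

```python
# Tight muons inherit |eta| < 2.1 from the SingleMu trigger, while the
# loose veto uses |eta| < 2.5 for both flavors, identical in signal and
# control regions. pT values are illustrative, not the AN's cuts.
def is_tight_muon(lep):
    return lep["pt"] > 26.0 and abs(lep["eta"]) < 2.1  # trigger-driven eta range

def is_tight_electron(lep):
    return lep["pt"] > 30.0 and abs(lep["eta"]) < 2.5

def is_loose_lepton(lep):
    # same veto range for e and mu, kept constant across regions
    return lep["pt"] > 10.0 and abs(lep["eta"]) < 2.5

mu = {"pt": 40.0, "eta": 2.3}
print(is_tight_muon(mu), is_loose_lepton(mu))  # False True: vetoed but not tight
```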
  • Tab. 8: why is there a lower cut of 0.001 on sigma_IetaIeta? Is this standard photon ID? I don't recall ...

  This cut, and the one on sigma_IphiIphi, are for ECAL spike removal and general anomaly protection. They are not required by EGamma but are fairly common; for example the 8 TeV inclusive search used these as well. Concerning its effect, zero otherwise selected photons in the TTGamma MC sample fail these cuts.
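
A minimal sketch of these cleaning cuts (the field names are hypothetical, and the example values are typical of a normal barrel photon, not taken from the AN):

```python
# Lower bounds on the shower-shape variables reject anomalously narrow
# deposits: a real electromagnetic shower spreads over several crystals,
# while an ECAL spike is essentially a single crystal.
def passes_spike_cleaning(photon):
    return (photon["sigma_ietaieta"] > 0.001      # reject single-crystal ECAL spikes
            and photon["sigma_iphiiphi"] > 0.001)  # reject anomalous narrow deposits

photon = {"sigma_ietaieta": 0.0093, "sigma_iphiiphi": 0.0101}
print(passes_spike_cleaning(photon))  # True for a normal shower shape
```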
  • Fig. 9: the fits seem okay around the Z region but are less than optimal away from the Z. Is this anything to worry about? Was it treated in a systematic uncertainty?

 
      While not considered important for the signal regions, what you are seeing is the lack of Drell-Yan for 20 GeV < M(lep lep) < 50 GeV, which in Figure 9 is exaggerated compared to the signal regions due to the di-lepton selection here. You can see in Figure 9 that when requiring b-jets (the top two plots) this is not an issue.
What can be done to study the effect of this is to re-do the template fit excluding this low-mass region, and see that the scale factor doesn't change much (it should be dominated by on-mass Z --> dilepton). Furthermore, since these events are more accurately Z/gamma* Drell-Yan, the fit range can be extended to higher masses to observe how much the scale factors change. Keep in mind here that the non-btagged muon channel (bottom right of Fig. 9) is not used in the analysis: the non-btagged electron sample is only useful as an input to the electron mis-id rate measurement. When varying the fit range of Figure 9, the scale factors for this are:
Z(gamma) SF in channel, by fit range: Normal (0 - 180) | 50 - 180 | 50 - 600
Fit plots (electron channel): z_mass_ele_jjj_0_180.png | z_mass_ele_jjj_50_180.png | z_mass_ele_jjj_50_600.png
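
As a toy version of this stability check (all histograms invented; a simple chi-square fit stands in for the actual template fit):

```python
import numpy as np
from scipy.optimize import minimize

# Fit one scale factor for the Z template against pseudo-data in several
# dilepton-mass windows and compare the results. The toy histogram only
# extends to 180 GeV, so the 50-600 window reduces to 50-180 here.
edges = np.arange(0.0, 181.0, 10.0)
centers = 0.5 * (edges[:-1] + edges[1:])
z_template = 1000.0 * np.exp(-0.5 * ((centers - 91.0) / 5.0) ** 2)  # toy Z peak
bkg_template = 50.0 * np.ones_like(centers)                          # toy flat background
data = 1.1 * z_template + bkg_template                               # toy data with SF = 1.1

def fitted_sf(lo, hi):
    sel = (centers > lo) & (centers < hi)
    chi2 = lambda p: np.sum((data[sel] - p[0] * z_template[sel] - bkg_template[sel]) ** 2
                            / np.maximum(data[sel], 1.0))
    return minimize(chi2, x0=[1.0]).x[0]

for lo, hi in [(0, 180), (50, 180), (50, 600)]:
    print(f"fit range {lo}-{hi}: SF = {fitted_sf(lo, hi):.3f}")
```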
  • Sec. 4.4.1, bottom of p. 21: how is the overall scale adjustment taken into account in the analysis? From Fig. 15 it seems to be a good 10% effect.

  As lines 357-360 and 376-378 explain, this scale adjustment is not actually applied to the final result. The goal of this section is to ask: if we were to adjust the photon purity with this scale factor, would the distribution of MET change noticeably? Since only the shape of MET is used in the final evaluation, the extra 100% systematic on the background normalization would wash away this overall 10% effect, but would not wash away a change in the shape. You can also see this is a 10% effect from the scale factors in Table 16.
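
A toy illustration of why a flat scale washes out in a shape-only comparison while a shape change does not (all numbers invented):

```python
import numpy as np

met_spectrum = np.array([100.0, 60.0, 25.0, 8.0, 2.0])  # toy background MET bins

flat_scaled = 1.10 * met_spectrum                               # overall 10% effect
reshaped = met_spectrum * np.array([1.0, 1.0, 1.1, 1.3, 1.6])   # tail-enhancing change

def shape(h):
    # normalize to unit area, as a shape-only comparison effectively does
    return h / h.sum()

print(np.allclose(shape(flat_scaled), shape(met_spectrum)))  # True: washed away
print(np.allclose(shape(reshaped), shape(met_spectrum)))     # False: survives
```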
  • Tab. 15: the discrepancy between Fit and MC seems to be bigger in sigma_IetaIeta? Why not just use chHadIso? Or at least have a systematic that uses only one or the other?

  Back to the previous answer: neither is actually used in the final results, so a systematic reflecting the difference isn't warranted. As for the discrepancy in sigma_IetaIeta here, the tt+gamma cross section measurement also encountered this and treated it in the same way: as you say, both analyses handled this by just using chHadIso. The low sigma_IetaIeta is understood to come from some error in the shower evolution of photons in GEANT4.
Lines 147-151 in the PAS indirectly touch on this question, because the 5% variation is from a very maximal case where you completely replace the MET from ttjets with tt+gamma's shape, or vice versa -- i.e., if you were to perform a template fit like chHadIso or sigma_IetaIeta and find a maximal disagreement, the effect on MET would just be 5% bin-by-bin variations.
  • Tab. 17 & 18: There is a significant excess in the data compared to the total background prediction - in CR1 and if I take the background errors at face value, also in CR2. I assume this came up in the pre-approval. What was decided then?

  In the HN I noted these tables did not include the correct uncertainties, so in short this did not come up in pre-approval and nothing was decided. To further temper this issue, compare to Figure 29 to see that the event counts are well within uncertainties for most channels. Related to a previous question of yours, you can also look at the photon purity measurement in Section 4.4, which in simplified terms can be considered a normalization of the tt+gamma/jet rate to data: it is roughly a 10% effect, which is about the order of the differences you speak of in Tables 17-20. You also might consider the public CMS measurement of the tt+gamma cross section (Public Twiki, CDS), which was higher than predictions by about 30% for a similar (but not exactly the same) selection as this. Also, the uncertainty on the theoretical cross section of tt+gamma used here is 50%, and when all combined the theory systematics for ttbar-related rates alone are ~25%, well past the differences of which we're speaking. Lastly, the differences in CR1 are close to the systematic uncertainties therein (see Figure 16), and are used conservatively as an additional systematic in the signal regions -- ignoring the unfortunate presentation of uncertainties in the tables, the variations in all channels are fairly consistent with a tt+gamma rate that is slightly higher than predicted, an effect that in Section 4.4 we found to have minimal effect on the shape of the MET distribution.
  • Tab. 19 & 20: Same comment for SR1 and certainly for muon SR2. What conclusion did the discussion about this data excess come to during the pre-approval?

  See the previous answer for SR1: the table uncertainties seem to have been overlooked in pre-approval and it simply did not come up. As for the muon channel in SR2, this was briefly touched upon in pre-approval as an interesting observation only. As a shape-only comparison, however, this did not drive the limits, as it was compatible neither with the high-MET signal nor with the other channels. The conclusion in pre-approval was that with higher statistics this might be good to explore, and with that CMS should be able to precisely measure the tt+gamma+gamma cross section and not rely on the shape-only provision. A significantly different (mu+jets):(ele+jets) ratio in tt+gg events would be exciting to see, but this dataset is not powerful enough to approach that, and with the overall method isolating the MET shape we feel it's best not to address this in the PAS.
  • Sec. 7, p. 44/45: why do you use all MET bins in your definition of your signal region? I thought the low MET bins were used for background normalization? Wouldn't it make sense to start the signal region at moderate MET, say > 50 GeV or so? From Fig. 29, the data-bg discrepancy seems to be at low MET. I think restricting the signal region to not include the low MET bins will also help in getting a better agreement between the data and bg predictions in Tab. 19&20. Was this discussed?

  The reason for including these background-dominated bins, especially in SR1, is to allow the limit-setting machinery to constrain these backgrounds (with the 100% log-uniform "float" parameter, this makes it basically a normalization) in the high MET bins. For SR2, removing the low MET bins could be very dangerous for this analysis: if you only have 1-3 bins, you lose most of the "shape" information and are left with just a log-uniform free-floating +/- 100% estimate, giving you no sensitivity.
As for "double-using" the low MET (< 50 GeV) SR1 region, recall from a previous question that the photon purity scale factor method is not applied to the final estimate. You can consider that method to be simply a check: if you were to change the composition of tt+jets and tt+gamma, would it indeed be just a normalization and not a big change in the MET shape? With that independent check giving a fairly flat 10% effect, you can just allow the limit-setting tool to fit the normalization for you using the log-uniform 100% float parameter, and find that the post-fit value is very similar. Once again for Tables 19 and 20, if you include the correct uncertainties there is reasonable agreement and the discrepancy is of order 10%, like all these effects. This was discussed in our group, also in the context of avoiding "double-using" this low-MET region, and is why the photon purity scale factor is only a check.

Manfred Paulini on PAS v0:

  • use CMS convention of GeV for mass and momentum and remove all GeV/c^2

  Done.
  • pp collisions: use pp in roman and not italic

  Done.
  • do not use 'fake ...' or fakes and replace all with misidentified or similar

  All references replaced with "misidentified photon"
  • look up PubComm recommendations for use of hyphens in b quark, b jet but b-quark jet ... and correct all

  I went through the whole text and made many corrections governed by the PubComm hyphen rules.
  • I know we talked about this ... this is just a reminder about the plot beautification and CMS figure standards ...

  All plots have been recreated as closely as possible to the recommended style macros.
  • title: it is not good to have an abbreviation such as GMSB in the title. My suggestion: Search for natural supersymmetry in events with top quark pairs and photons in 8 TeV pp collision data (or: ... in pp collisions at sqrt(s) = 8 TeV)

  I agree, I believe the original title was a place-holder of sorts until the ARC began. It has been changed to your suggestion.
  • abstract: We need to add that we don't find an excess and set some limits. My suggestion for the abstract wording:
    We present a search for a natural gauge-mediated supersymmetry breaking scenario with the stop squark as the lightest squark and the gravitino as the lightest supersymmetric particle. The strong production of stop quark pairs and their decays would produce events with pairs of top quarks and neutralinos, with each decaying to photon and gravitino. This search is performed with the CMS experiment using pp collision data at sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 19.7 fb-1, in the electron + jets and muon + jets channel, requiring one or two photons in the final state. We compare the missing transverse energy of these events against the expected spectrum of standard model processes. No excess of events is observed beyond background predictions and the result of the search is interpreted in the context of a general model of gauge-mediated supersymmetry breaking deriving limits on the mass of stop quarks up to 750 GeV.

  I agree with the comment and have made the abstract very similar to your suggestion.
  • Fig. 1: Since this is not a real Feynman diagram where time arrows play a role and need to be correct, I would remove all arrows and just show lines

  Okay.
  • Fig. 1 caption: What is GGM? My suggestion for a less redundant caption:
    Feynman diagram of the GMSB scenario of interest. With stop quarks as the lightest squark, their pair-production would be the dominant production mechanism for SUSY in pp collisions at the LHC. Assuming a bino-like neutralino NLSP, each stop would decay to a top quark and a neutralino, with the neutralino decaying to a gravitino and a photon. Shown above is the electron+jets or muon+jets final state of the top pair decay.

  For the GGM comment I agree; however, on small points of your suggestion I would disagree. I feel it's best to keep the language of "lightest squark or gluino" versus just "lightest squark". The stop being much lighter than the gluino is important to the analysis; otherwise any allowed gluino production would be very close to those in the inclusive photon searches (i.e. no third-generation decays) we've published previously. "Squark or gluino" is a bit confusing, I accept, so if there are any recommendations how to clean this up while retaining the gluino caveat I'd be happy to change it.
I also prefer the language of "top squark" over "stop quark" for clarity that it is not a quark. Somewhere else a comment was made that "stop squark" is redundant, so I have edited those instances to be "top squark". The updated Figure 1 caption now reads:
"Feynman diagram of the GMSB scenario of interest. With top squarks as the lightest squark or gluino, their pair production would be the dominant production mechanism for SUSY in pp collisions at the LHC. Assuming a bino-like neutralino NLSP, each stop would decay to a top quark and a neutralino, with the neutralino decaying primarily to a photon and gravitino. Shown above is the electron~+~jets or muon~+~jets final state of the top pair decay."
  • l 6: what is "a new little Hierarchy problem"? How does it differ from the known 'regular' hierarchy problem? Can you explain or give a reference?

The 'regular' hierarchy problem is the 1:10^34 loop corrections to the bare Higgs mass; with SUSY this is brought down to 1:10^2. There remains a 'little' tuning between SUSY sparticle masses to keep EWSB unchanged, and it is related to the stop mass. In literature I've seen from CMS this has gone unreferenced, but a suitable reference might be http://arxiv.org/abs/hep-ph/0007265 .
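
Schematically, the standard one-loop expressions behind these numbers (textbook forms, not quoted from the AN):

```latex
% Without SUSY, the top loop drives the Higgs mass parameter up to the cutoff:
\[ \delta m_H^2 \big|_{\text{top}} \sim -\frac{3 y_t^2}{8\pi^2}\,\Lambda^2 \]
% With stops, the quadratic divergence cancels, leaving only a logarithmic
% sensitivity to the stop mass -- the "little" tuning tied to m_stop:
\[ \delta m_H^2 \big|_{\tilde{t}} \sim \frac{3 y_t^2}{8\pi^2}\,
   m_{\tilde{t}}^2 \,\ln\frac{\Lambda^2}{m_{\tilde{t}}^2} \]
```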

  • l 7: and are left largely un-explored at the LHC. --> and have been left largely unexplored at the CERN LHC.

Okay.

  • l 11: CMS now frowns upon the use of the expression "missing transverse energy" since energy is a scalar and a transverse component does not make sense. You can still use the symbol ET^miss but not the wording transverse energy. I would write here: ... contributes to large missing transverse momentum (\vec{p}_T^miss) in the detector, where the magnitude of \vec{p}_T^miss is referred to as ET^miss. [and then you are good to use ET^miss for the rest of the paper]

Done. This also implies a change in the title, where 'missing transverse energy' was used; it is now 'missing transverse momentum'.
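
Written out, the convention is:

```latex
% Missing transverse momentum is the negative vector sum of the transverse
% momenta of all reconstructed (PF) particles; ET^miss is its magnitude.
\[ \vec{p}_{\mathrm{T}}^{\,\mathrm{miss}} = -\sum_i \vec{p}_{\mathrm{T},i} ,
   \qquad
   E_{\mathrm{T}}^{\mathrm{miss}} = \bigl|\vec{p}_{\mathrm{T}}^{\,\mathrm{miss}}\bigr| \]
```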

  • l 14: in pp collisions --> in pp collisions at the LHC.

Done.

  • l 24: in each signal region ... the reader might wonder what the signal region might be?

Adjusted this to: "in the one- and two-photon signal regions."

  • l 26/27: don't take away the thunder of the paper and already reveal that no excess was found. I would remove "No significant excess ... of Standard Model processes, and" and just write: "... shape-based comparison. The results are compared to a range of stop and bino masses ..."

The first clause as you recommend is removed.

  • l 30-32: Since this is a short paper, we do not need to state the "organization". I would remove l 30-32.

Agreed.

  • l 42: I would remove "arising from the H -> gg decay." as it doesn't seem to be relevant here.

Done.

  • l 44-46: Since you are using only barrel photons, there is no need to talk about the endcaps and then say we use barrel only. Remove lines 44-46.

Done; however, the sentence saying that only barrel photons are used, along with its reference, is kept as it should be informative.

  • l 58: you start by saying that all objects are reconstructed using PF and then say that the PF algorithm clusters all particles into jets. I'm not sure whether this is correct. I would write: "... (PF) algorithm [33-35]. Jets are constructed by clustering particles using the anti-kT (note antikT typo) ..."

Done.

  • l 63: 1.4442 -> 1.44 (that's good enough as precision)

Okay. I also see other papers using 1.44.

  • l 64: remove the repetition of 'are required'

Done.

  • l 66: after 'A photon-like shower shape is required.' reference the 8 TeV ECAL performance paper JINST 10 (2015) 08 (arXiv:1502.02702)

Okay.

  • l 67: remove = sqrt(d-phi^2...) since already defined in l 60

Done.

  • l 68: I would omit 'sub-detector dependently.'

Removed.

  • l 71: to be --> be; same in l 75
  • l 72: and an isolation energy --> and have an isolation energy; same in l 76

Done. Better parallel construction after changes.

  • l 77: not sure but I think criteria is plural while we need singular here. Maybe say 'requirement' ?

I agree, and also this sentence is largely reproduced in lines 90-91 in more detail. For now this is changed to: "An additional, looser requirement for each lepton provides a veto against dileptonic backgrounds."

  • l 86 - 100: a lot is repeated here from Sec. 3. Please remove repetitions such as "Photons are required to be tightly isolated ..." and just give final cuts if not yet done so and state the SR's and CR's

Made several changes to streamline this section.

  • l 111: and the control region selection is designed to highlight this. How? Why? Can you explain and motivate this a bit?

If you recall the diphoton inclusive MET search (gamma gamma + X), the MET resolution is very different for events with 'fake' photons (really jets). What this sentence should convey is that if you also have a semileptonic ttbar decay in the event, the effect of a poorly-reconstructed photon is pretty small compared to all the other activity in the event. I've re-written this section to read:

"The control region definition is chosen to be orthogonal to the signal regions, to have very low signal acceptance, and to greatly enhance the population of the photon-like jets contributing the most to the background estimate in the signal regions. The control regions allow for the study of the performance of \MET simulation with the most poorly reconstructed photon-like objects expected in the signal region; the presence of a semileptonic \ttbar decay is expected to be a much larger effect on the \MET resolution than the photon energy resolution."

I think this way, the part about the ttbar system is just a statement of what's expected of these control regions.

  • l 117: of simulated --> of the simulated

Okay.

  • l 117/118: is generator level info used to reject the 0.6% of tt + jets events or how are these 0.6% identified and rejected?

Yes, only generator-level info is used here. If a tt+jets event has a generated photon within the tt+gamma sample definition for photons, it's rejected. I've added a word to clarify: "... the simulated \ttjets events contain generator-level photon radiation falling into..."
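
A hedged sketch of this overlap removal (the field names and thresholds are placeholders, not the actual tt+gamma sample definition):

```python
# A tt+jets event is rejected if it contains a generated photon falling
# inside the phase space covered by the dedicated tt+gamma sample, so the
# two samples do not double-count the same final states.
def in_ttgamma_phase_space(gen_photon, pt_min=13.0, dr_min=0.3):
    return (gen_photon["pt"] > pt_min
            and gen_photon["prompt"]                   # from hard process / top decay
            and gen_photon["min_dr_partons"] > dr_min)  # isolated from quarks/gluons

def keep_ttjets_event(gen_photons):
    return not any(in_ttgamma_phase_space(g) for g in gen_photons)

event_photons = [{"pt": 25.0, "prompt": True, "min_dr_partons": 0.5}]
print(keep_ttjets_event(event_photons))  # False: this event is covered by tt+gamma
```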

  • l 119: maybe I missed it but V-gamma should be defined as W/Z + gamma

Changed in several places.

  • l 123: calculated at at least NLO. --> calculated at least at NLO.

Fixed.

  • l 128-138: it was hard for me to follow what was done in order to get the 2 scale factors. Can you try to rephrase and make a bit more clear?

Extensively re-written. This was broken into two paragraphs to hopefully make it easier to follow. I've also changed the "k factor" language to just "scale factor" as this is not at all a correction for loop order diagrams.

  • l 139: the removal of the b-tag requirement applies to the MC data under discussion here, right?

For both MC and the data that's being fit.

  • Tab. 1 caption: ... only the one is applied --> which one? Explain. Errors shown are fit+statistical only. --> What is fit+statistical? The error returned from the fit? This is usually considered a statistical error.

Yes, the "fit" error is just the error rerturned on the post-fit parameters. I've changed this to simply say statistical.

  • l 141: of photon purity --> of the photon purity

Done.

  • l 147: no difference is found ... where is this found? In MC? In which MC?

Changed this sentence to read: "... no difference in the overall distribution of simulated \MET is found when altering the purity of selected photons."

  • l 147/148: The maximimal difference bin by-bin --> The maximal bin-by-bin difference

Fixed.

  • l 148: found to be 5%, and when their --> found to be 5%. When their

Done.

  • Fig. 2 caption: ... channels, and the template fit --> channels. The template fit ... ... b-tag requirement removed. Errors ... --> ... b-tag requirement removed is shown on the right. Errors ...

Done.

  • l 152: insensitive to source --> insensitive to the source

Fixed.

  • l 153: To eliminate dependance --> To eliminate the dependance

"To eliminate any dependance" might be better as it is less definite, the dependance on tt+gamma+gamma rate has not been quantified anywhere.

  • l 155: to effect a completely --> to result in a completely

Done.

  • l 158: I don't think A x eps is defined

Changed to be spelled out: "(acceptance times efficiency less than 1%)"

  • l 160: enhances --> enhance

Fixed.

  • l 170: 1 - -8% shape systematic ??? 1 - 8% ?

Fixed. Purely typographical, should be "1 - 8%"

  • l 175: so that they are completely --> and are treated as completely

Done.

  • Tab. 2: explain what the check marks for 'shape' refer to

Done.

  • Tab. 2: Control Region Delta --> Control Region Discrepancy

Done.

  • l 183: remove 'corresponding to an integrated luminosity of 19.7 fb-1'

Done.

  • l 185: in each signal region, across the entire range of MET ... See my comment above on Sec. 7 of the AN. Same for the data - prediction issues with Table 3.

As noted in my comments on the AN, Table 3 is formatted incorrectly and includes only the uncertainty from limited MC statistics, which is more correctly labeled a systematic. With the correct uncertainties included, this table is much less confusing. Also, as the photon purity method is used only as a check and not included as a normalization in the final background estimate, it is not a 'double-use' of the low MET bins; the reason for including them is to allow the limit-setting tool to constrain the total background using these bins.

  • l 187: we need a bit more description of how the GMSB sample was produced.

I've broken this paragraph up for clarity, and to include a mention of SuSpect/SDECAY spectrum generation and PROSPINO NLO cross section calculations.

  • Fig. 4 caption: The control region-derived uncertainties are not included in the systematic uncertainties shown above. --> Why?

Initially for technical reasons, but beyond that I'm not certain that including the control region systematics here is the correct thing to do. The way Figure 4 is presented, the signal regions stand as they are, independent of the CRs, and the "extra" CR systematics are not shown, to continue the suggestion that they are indeed "extra": conservative, user-driven estimates of the error. I feel that if Figure 4 did include them, many readers would want to see the plots without them, and not everyone can be happy. For now these plots will be produced and we can discuss this further.

  • l 200/201: observed indicating the presence --> observed that would indicate the presence

Fixed.

  • References:
  • check all references to have the proper way to put only the volume number in bold: Phys. Lett. B 70 (only 70 is in bold). See for example [17]-[19]
  • there are problems with ref's [10], [11], [34], [35], [38], [39], [46]
  • check [40]
  • some refs have no doi: e.g. [2], [3], ...
  • check arXiv only pubs are published by now
 
Should be much improved. I wasn't able to find DOI links for some of them, as the Inspire BibTeX doesn't have them. For these few I've tried to be as complete as I can.
 

Anthony Barker on AN v3:

 