-- JingyuLuo - 2018-01-25

Internal Review for Displaced Dijet Search


Comments from Steven (01.24.2018)


Analysis Note Draft: http://cms.cern.ch/iCMS/jsp/openfile.jsp?tp=draft&files=AN2016_259_v2.pdf

  • L16-17: but where does the second jet come from then; ISR, FSR? Or, do you mean one can have two displaced jets (like in the GMSB model that is considered) that are considered as 1 displaced dijet? I found it a bit confusing to follow the topologies you discuss; maybe some cartoon can help.
    • Here we mean that two separate displaced jets coming from different displaced vertices can be considered as one displaced dijet (though an FSR jet can also help gain sensitivity in this case).

  • L65: it is undefined what is meant by "track"
    • Do you mean which tracks are selected here? They are selected from the first three iterations of HLT online tracking and are associated to the calo jets using jet-track association at the primary vertex (with cone size = 0.4); see the sketch below. Added to the text.
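
    For illustration, a minimal Python sketch of the delta-R-based jet-track association described above (not the HLT code); the .eta and .phi attribute names are hypothetical, only the 0.4 cone comes from the text:

        import math

        def delta_r(eta1, phi1, eta2, phi2):
            # wrap the phi difference into [-pi, pi]
            dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)
            return math.hypot(eta1 - eta2, dphi)

        def associate_tracks(jet, tracks, cone=0.4):
            # keep the tracks within the cone around the calo-jet axis
            return [t for t in tracks if delta_r(jet.eta, jet.phi, t.eta, t.phi) < cone]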

  • L68: it is undefined what is meant by "track"; same as L65?
    • An additional iteration of HLT online tracking is applied, and the track candidates are merged with those from the previous iterations. Jet-track association at the primary vertex (with cone size = 0.4) is also applied. Added to the text.

  • Figure 1 and further: data/MC ratios in most cases would be more instructive if put on a linear scale. Also some of the plots themselves (jet eta, phi) would be more instructive in linear scale, or at least zoomed in on the Y axis.
    • Fixed. Changed to a linear scale.

  • L121, eqn (2): can you add plots for these quantities?
    • Fixed. Plots are added.

  • Fig 9 and onwards, caption: "recorded by the displaced jet trigger" -> this is confusing. These are offline quantities right?
    • Here we mean that the events used to plot these distributions are events passing the displaced jet trigger. Changed "recorded" to "collected".

  • L215-216: I find the naming "PU energy fraction" confusing. Are you really rejecting against pileup jets with this cut? Also, can you confirm (and add in the text) that the PV1 and PV2 need not be the same? Are they usually?
    • Yes, they are designed to reject pileup jets here. PV1 and PV2 need not be the same. Usually PV1 and PV2 are just the leading PV, but indeed they can also pick up pileup vertices.

  • L222: you say "auxiliary", but this algorithm actually uses the secondary vertex, so it is built rather on top of the sec vtx fitting described earlier. As a general remark to this extra algorithm: I wonder if the complexity of this extra algorithm is worth it. Reading this, I get the feeling this will make it impossible for pheno people to reinterpret this analysis (and ATLAS has a DV+MET search relatively easy to reinterpret). Can't you get the extra gain in the selection from the displacements of the tracks with the secondary vertex directly (eg. largest displacement, spread, etc)?
    • The cluster algorithm already existed in the Run 1 analysis, and pheno people actually did recast this variable (e.g. https://arxiv.org/pdf/1503.05923.pdf, https://arxiv.org/pdf/1409.6729.pdf). Still, I think it's a very good suggestion to replace this variable with one built directly on the distances between the tracks and the secondary vertex. That might be a lot of work, though; we can investigate it in future projects (e.g. the 2017 analysis).

  • L223: how is the displaced track here selected?
    • Same as the selection used for secondary vertex reconstruction, i.e. IP2D > 500 µm and IP2D significance > 5.0 (see the sketch below). Added to the text.
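
    A minimal sketch of that selection; the attribute names for the 2D impact parameter and its error are hypothetical, only the 500 µm and 5.0 thresholds come from the text:

        def is_displaced_track(track, ip2d_min_cm=0.05, sig_min=5.0):
            # IP2D > 500 um (0.05 cm) and IP2D significance > 5.0,
            # the same requirements used in the secondary-vertex reconstruction
            return track.ip2d > ip2d_min_cm and track.ip2d / track.ip2d_error > sig_min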

  • L236: the vertex here is the one from the adaptive vertex fitter, right?
    • Right.

  • L253: given Fig 20, with a very steep fall off towards high values and large stat jitter, I'm not sure how meaningful this correlation factor is. But you only use it to pick vertex over cluster multiplicity, if I understood correctly. So it doesn't matter much?
    • Yes, it's only used to choose the vertex track multiplicity over the cluster track multiplicity, so it doesn't matter much here.

  • L296-297: Why would this be a proper estimate of the systematic uncertainty? Wouldn't an MC closure test be more adequate?
    • The main issue with an MC closure test is the lack of statistics. Given that we have a low HT threshold (400 GeV), the largest contribution from QCD events comes from the HT bin [300, 500]. However, the cross section in the low-HT bins of the QCD process is large, so the statistics of the official QCD multijet sample are not sufficient. In order to improve the QCD statistics, we removed the displaced jet trigger requirement for the QCD MC sample; otherwise there would be no events in regions G and H. Given that, we think it's more appropriate to validate the method with QCD MC events as well as in the data control region, and to show that the systematic uncertainty estimation indeed covers the deviations of the prediction from the observation (the closure logic is sketched below).
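
    For reference, the closure logic in its simplest form. This is a generic 2x2 ABCD sketch with made-up yields; the AN uses a larger set of regions (including G and H), but the idea is the same:

        def abcd_prediction(n_a, n_b, n_c):
            # if the two tagging variables are uncorrelated: N_D = N_B * N_C / N_A
            return n_b * n_c / n_a

        # closure: compare the prediction with the yield observed in region D;
        # the relative deviation is one handle on the method's systematic uncertainty
        predicted = abcd_prediction(n_a=900.0, n_b=90.0, n_c=100.0)  # -> 10.0
        observed = 11.0
        non_closure = (observed - predicted) / predicted             # -> +10%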

  • L306-307: three things make me worried about these correlation factors: the apparent low statistics for the pass category, the heavily peaked shape of the likelihood discriminant, and the eventual extremely high cut on that discriminant. I wonder if you make this a 2x2 histogram (fail/pass on both cuts) whether you find the same lack of correlation. I also wonder if you can quantify the statistical precision.
    • Indeed, the statistics are very low in these bins (also, these are rather old studies based on the 2015 sample, which may need some updates). I think the MC closure test alone would be convincing enough, since any problematic correlation would impact the results of the closure test there. The 2x2 version of the correlation check is sketched below.
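
    For the 2x2 (fail/pass on both cuts) check suggested here, the correlation factor reduces to the phi coefficient of the contingency table. A sketch, not code from the AN:

        import math

        def phi_coefficient(n_pp, n_pf, n_fp, n_ff):
            # Pearson correlation of two pass/fail decisions from the 2x2 table
            # (first index: cut 1, second index: cut 2; p = pass, f = fail)
            row_p, row_f = n_pp + n_pf, n_fp + n_ff
            col_p, col_f = n_pp + n_fp, n_pf + n_ff
            return (n_pp * n_ff - n_pf * n_fp) / math.sqrt(row_p * row_f * col_p * col_f)

        # under independence phi fluctuates with a spread of roughly 1/sqrt(N_total),
        # which gives a handle on the statistical precision of the correlation factor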

  • L318: which systematic? Is this as described on L296-297?
    • Right.

  • L320: please add a table like Table 7, which compares quantitatively. Is this test purely limited by MC statistics? (this relates to the previous question of which systematics you consider)
    • Table added for the QCD MC closure test. We also updated the significance calculation (based on a Poisson distribution with a Gaussian prior; see the sketch below).
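
    A sketch of the kind of calculation meant here, marginalizing the Poisson count over a Gaussian prior on the background with toys; the numbers and the toy-based approach are illustrative, and the AN's exact implementation may differ:

        import numpy as np
        from scipy.stats import norm

        def p_value(n_obs, b, sigma_b, n_toys=1_000_000, seed=12345):
            # Poisson counts with a Gaussian (truncated at zero) prior on the mean
            rng = np.random.default_rng(seed)
            b_toys = rng.normal(b, sigma_b, n_toys)
            b_toys = b_toys[b_toys > 0.0]
            return np.mean(rng.poisson(b_toys) >= n_obs)

        # convert the p-value into a one-sided Gaussian significance
        z = norm.isf(p_value(n_obs=5, b=1.2, sigma_b=0.4))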

  • Fig 25 and 26 are very hard to read. Please zoom in a few orders of magnitude on the Y axis.
    • Improved. Added zoomed-in plots of the region of large discriminant cuts.

  • Fig 27: isn't it expected that the best limit is reached at B~0 if you assume 100% signal efficiency?
    • Yes, the limit would be best at B = 0 if the signal efficiency were constant. However, since the signal efficiency also depends on the cut values, we added it to the denominator inside the logarithm of Equation 14.

  • L346: gamma>0.9992 is an extreme cut; the behavior of the discriminant in that regime can only be guessed from Fig. 21. Could you try transforming the likelihood discriminant such that the region close to 1 gets blown up wrt the rest of the [0,1[ interval, and the cut value is easier to visualize?
    • Fixed. (One possible transformation of this kind is sketched below.)
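
    One common way to expand the region near 1, shown only as a sketch (the AN may use a different mapping):

        import numpy as np

        def stretch(gamma, eps=1e-12):
            # -log10(1 - gamma): gamma = 0.999 -> 3, gamma = 0.9999 -> 4, ...
            # the cut gamma > 0.9992 becomes a cut at about 3.1 on the new axis
            return -np.log10(np.clip(1.0 - gamma, eps, None))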

  • Fig 28, right: I'm surprised you take the inefficiency at 1TeV as the value for any bigger HT. Indeed, the MX=1TeV sample will lead to HT~2TeV, and extrapolating the inefficiency linearly would lead to double the inefficiency at 1TeV, twice your systematic. So: how does the inefficiency look at higher HT? When rebinning you should still be able to get a trend beyond 1TeV. Is a linear extrapolation adequate, or does it indeed flatten out?
    • The efficiency curve indeed flattens out beyond 1 TeV; we expanded the range of the x-axis to show that.

  • L409: |eta|>2 -> |eta|<2 ; I guess
    • Yes, fixed.

  • Table 9, 2nd row: "MX = 100 GeV " -> "MX = 1000 GeV "
    • Fixed

  • Figs 32, 33: please use linear scale on the ratios
    • Fixed

  • Table 10: is the 10% at 50GeV and 1mm a typo?
    • It's not a typo; I think it's due to the lack of statistics at that signal point (which is related to later comments).

  • Table 11: JES on two jets really goes down to 0.2%? Can you confirm that you coherently scale all jets in the whole sample up, calculate +1sigma, and then the same for JES down, and not randomly mix moving up and down jet by jet?
    • Yes, I can confirm that I scaled all jets up coherently by +1 sigma and measured the efficiency, then scaled them all down coherently by -1 sigma and re-measured it (see the sketch below). For MX = 1000 GeV, the pT of the leading jets is large compared to the offline threshold (50 GeV), so the JES uncertainty can be small.
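
    Schematically, the procedure looks like the sketch below; jet.scaled and the event/selection interfaces are hypothetical placeholders:

        def efficiency_with_jes_shift(events, passes_selection, shift):
            # scale every jet in every event coherently by the same factor,
            # then re-evaluate the selection efficiency
            n_pass = 0
            for event in events:
                shifted = [jet.scaled(1.0 + shift) for jet in event.jets]  # hypothetical helper
                n_pass += passes_selection(shifted)
            return n_pass / len(events)

        # eff_up = efficiency_with_jes_shift(events, passes_selection, +jes_unc)
        # eff_dn = efficiency_with_jes_shift(events, passes_selection, -jes_unc)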

  • Table 12: I don't see a trend here; is this because of a lack of signal statistics? Which leads me to my next question...
    • We mainly used 2D parameters, so the impact of the PV selection is expected to be small. Nevertheless, in this analysis the PV selection mainly affects the number of prompt tracks, which should be larger for higher mass and smaller lifetime, as can be seen from the MX = 1000 GeV row. For lower masses, the impact seems to be buried by the statistical uncertainty.

  • L506: did you include the signal statistical uncertainty?
    • Unfortunately not; thanks for pointing it out. The statistical uncertainties are now included in the limit setting; they are generally small except for MX = 50 GeV. The point (MX = 50 GeV, ctau0 = 1000 mm) was dropped due to a very large statistical uncertainty (~70%; the sketch below shows where a number of that size comes from).
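
    For orientation, the relative binomial uncertainty on a selection efficiency; the event counts in the example are made up, chosen only to show how a ~70% uncertainty arises at a sparsely populated signal point:

        import math

        def rel_stat_uncertainty(n_pass, n_total):
            # relative binomial uncertainty on an efficiency n_pass / n_total
            eff = n_pass / n_total
            return math.sqrt(eff * (1.0 - eff) / n_total) / eff

        # illustrative only: 2 passing events out of 10000 generated gives ~71%
        print(rel_stat_uncertainty(2, 10000))  # ~0.707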

  • L506: did you include any systematic for the level of closure for the background prediction?
    • I think that should go into the systematics of the background prediction rather than the signal efficiencies.

  • L506: a table reviewing the different systematics is useful here. It would also be useful to review in words what is dominant for which phase space.
    • Yes, that would be good; added to the AN.

  • L518-L519: "If we bin" and "will be" are a little confusing here. Make explicit what are the search regions.
    • The search regions are the four bins; we changed the wording here. (The analysis is almost background free, so the binning doesn't actually help much; it was designed in case the total background turned out to be large.)

  • Fig 35: is there a possibility to extend to higher and especially lower ctau by reweighting, or would there be little interest to do so, or too little statistics such that you anyway need new samples? Would it be worth adding a sample at lower and higher ctau?
    • It's worth trying. We have now added new lifetime points at 0.5 mm and 5 m using ctau reweighting (sketched below).
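
    The reweighting uses the ratio of exponential proper-decay-time densities, with one factor per long-lived particle. A minimal sketch, assuming the per-particle proper times (c*t) are available in mm:

        import math

        def ctau_weight(proper_times_mm, ctau_old_mm, ctau_new_mm):
            # ratio of exponential decay densities, one factor per particle:
            # w = prod_i (ctau_old / ctau_new) * exp(t_i / ctau_old - t_i / ctau_new)
            w = 1.0
            for t in proper_times_mm:
                w *= (ctau_old_mm / ctau_new_mm) * math.exp(t / ctau_old_mm - t / ctau_new_mm)
            return w

        # e.g. moving a ctau0 = 1 mm sample to the new 0.5 mm point:
        # weight = ctau_weight([t1, t2], ctau_old_mm=1.0, ctau_new_mm=0.5)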
