RA2 preapproval questions & answers (COLIN's SANDBOX, NOT THE OFFICIAL PAGE!)

On this page we collect all questions and answers from the discussions during the preapproval of the RA2 SUS-10-005 analysis on 25 & 27 January 2011. The material of the presentation can be found here.

General Issues

Updated documentation with results changed in the talk, including explanations (e.g., in an appendix) of what changed.

Link to the updated note: SUS-10-005-v?
Changes wrt preapproval freeze:
  • several updates went in before the preapproval freeze and the preapproval; these are marked blue
  • some updates came after the preapproval; these are marked black
  • all were for the better

Provide detailed analysis cut flow tables for data and MC. Suggested cleaning cuts at the end.
Colin / Sue Ann / Steven / ChristianS

The following table has the breakdown of the cleaning cuts after different selection steps on data.

| Consecutive cuts | MHT > 150 | MHT > 150, Δφ | MHT > 250, Δφ | MHT > 150, HT > 500, Δφ |
| all events (trigger, ≥ 3 jets, HT > 300) | 601 | 200 | 37 | 69 |
| muon vetoed | 539 (62) | 155 (45) | 28 (9) | 54 (15) |
| electron vetoed | 496 (43) | 131 (24) | 24 (4) | 51 (3) |
| EE noise filtered | 495 (1) | 130 (1) | 23 (1) | 50 (1) |
| inconsistent PF μ filtered | 479 (16) | 121 (9) | 18 (5) | 43 (7) |
| greedy PF μ filtered | 475 (4) | 120 (1) | 17 (1) | 43 (0) |
| beam halo filtered | 474 (1) | 119 (1) | 16 (1) | 43 (0) |
| tracking failure filtered | 462 (12) | 117 (2) | 16 (0) | 42 (1) |
| HCAL noise filtered | 461 (1) | 116 (1) | 15 (1) | 41 (1) |
| TP filtered | 391 (70) | 113 (3) | 15 (0) | 40 (1) |
| BE filtered | 379 (12) | 111 (2) | 15 (0) | 40 (0) |

(Numbers in parentheses give the events removed by each cut.)

This table contains the breakdown of the MC-relevant cleaning cuts for Pythia QCD MC.

| Consecutive cuts | MHT > 150 | MHT > 150, Δφ | MHT > 250, Δφ | MHT > 150, HT > 500, Δφ |
| # events in 36/pb | 294.9 | 41.3 | 15.2 | 30.2 |
| lepton vetoed | 292.9 | 39.6 | 15.2 | 30.2 |
| inconsistent PF μ filtered | 259.2 | 24.4 | 0.537 | 14.9 |
| greedy PF μ filtered | 258.8 | 24.3 | 0.537 | 14.9 |
| TP filtered | 178.1 | 21.0 | 0.168 | 12.9 |
| BE filtered | 164.9 | 20.4 | 0.053 | 12.6 |

This table contains the breakdown of the MC-relevant cleaning cuts for Madgraph QCD MC.

| Consecutive cuts | MHT > 150 | MHT > 150, Δφ | MHT > 250, Δφ | MHT > 150, HT > 500, Δφ |
| # events in 36/pb | 154.2 | 7.98 | 0.508 | 6.34 |
| lepton vetoed | 154.0 | 7.92 | 0.506 | 6.30 |
| inconsistent PF μ filtered | 149.3 | 7.46 | 0.280 | 5.84 |
| greedy PF μ filtered | 149.0 | 7.44 | 0.280 | 5.82 |
| TP filtered | 108.0 | 5.89 | 0.082 | 4.81 |
| BE filtered | 97.3 | 5.59 | 0.072 | 4.61 |

Provide updated trigger (in)efficiency plot.
Sue Ann

The plot below shows the efficiency of the 2010 mix of lowest unprescaled HT triggers, as a function of the offline PF HT, for events passing the Jet15U trigger, after all cleaning. Of the 131 events passing the Jet15U trigger in the data-taking period in which the HT150U trigger was active, all passed that trigger. From this, a conservative upper limit of 1.0% can be assigned to the inefficiency of the trigger over the whole data-taking range. (missing attachment: turnonHT2010mix.png)
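Such a zero-failure bound can be illustrated with the generic exact-binomial (Clopper-Pearson) upper limit; this is a sketch of the standard formula, not necessarily the exact procedure used in the note:

```python
def zero_failure_upper_limit(n_trials, cl=0.95):
    """Exact binomial (Clopper-Pearson) upper limit on a failure
    probability when 0 failures are observed in n_trials trials:
    p_up = 1 - alpha**(1/n), with alpha = 1 - cl."""
    alpha = 1.0 - cl
    return 1.0 - alpha ** (1.0 / n_trials)

# 0 of the 131 Jet15U events failed the HT150U trigger:
ul95 = zero_failure_upper_limit(131)  # ≈ 0.023, i.e. 2.3% at 95% CL
```

The confidence level is a free parameter here; a lower CL gives a correspondingly tighter bound.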

Make MHT/HT plots for the separated background predictions.

Need to agree on binning and range. How do we show uncertainties?

Lost lepton background prediction

Provide distributions, e.g., isolation, in control samples, to verify that the background is negligible.

Some distributions for the control sample are currently written out, but since some of them also differ slightly between W+jets and ttbar, this alone is not well suited to check for background contamination. Other SM backgrounds are excluded by MC studies running the control-sample selection on the other samples; QCD has already been checked and no event passes. The control-sample selection is slightly harder than Maria's, so no problem is expected for the lost-lepton prediction alone. A transverse W-mass plot will be prepared.

Provide closure test as a ratio to check for bias.

T&P is done in events with limited hadronic activity. For ttbar events with high MHT, boost effects could make the leptons tend to be close to jets and thus more likely to be unidentified. This might induce a bias at high MHT, which seems to be suggested by the MHT closure on slide 18.

For this reason a binning in ΔR is chosen. In principle there might be a small effect from the electron ID, but with the loose ID cuts this is not expected. The plot shows the ratio plots above for ttbar only; even less of a trend is visible than in the ttbar/W+jets case. Therefore this is not a problem.

Hadronic tau background prediction

Study Z to tau tau to see if visible tau templates can be verified and a systematic uncertainty determined.

Our goal is not to derive the template from data, since we would need an unbiased, pure, high-statistics tau control sample. None of these three requirements can be met with Z to tau tau.

But we still want to, and can, validate the MC template with data.
We proceed as follows. We select Z->Tau_{Mu}+Tau_{H} events (following the procedure in PAS-TAU-11-001) in both data and MC. We take the ratio Pt(TauJet)/Pt(mu) and compare it between data and MC. This ratio can be thought of as the probability that a tau lepton turns into a tau jet (our ideal template), convoluted with the difference introduced by the tau ID and with the probability that a tau lepton decays to a muon. If Pt(TauJet)/Pt(mu) agrees between MC and data, we can conclude that the MC describes the tau decay well.

A direct translation from Z->mu+mu to Z->tauH+tauH is not considered, since it would involve the following steps. Ideally one would select Z->mu+mu events in data, smear each muon with a tau-jet template, and predict a Z->tauH+tauH yield. In parallel, Z->tauH+tauH events would be selected in data. This second step is not trivial, since a tau ID needs to be applied: a new, dedicated template would need to be extracted from MC, and one would still rely on some MC extrapolation to move from the validated template to the one needed. One would also need to correct for the fake contamination (the fake rate is a few %) and the tau inefficiency (O(50%)), all numbers from PAS-TAU-11-001. This procedure does not give sufficient precision.

Provide a closure test with error bars to estimate the level of closure. Correct any bias found (if any) and propagate uncertainty.

Check consistency of systematic uncertainties with the procedure used in RA1.
The method used in RA1classic (SUS-10-003) is fundamentally different from what is used in RA2. There, the muon+jets event count measured in data is scaled with a scale factor N(tauH)_{MC}/N(mu)_{MC}. RA1classic assigns a 30% systematic error on the scale factor and has a 50% statistical error in their search region.
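The RA1classic scaling just described (observed muon+jets count times an MC transfer factor, with a 30% systematic on the scale factor) can be sketched as follows; the Poisson statistical treatment and all input numbers below are hypothetical placeholders, not values from either note:

```python
from math import sqrt

def tau_prediction(n_mu_data, n_tau_mc, n_mu_mc, sf_syst_frac=0.30):
    """Scale the observed mu+jets count by the MC transfer factor
    N(tauH)_MC / N(mu)_MC; attach a Poisson stat error on the data
    count and a fractional systematic on the scale factor."""
    sf = n_tau_mc / n_mu_mc
    pred = n_mu_data * sf
    stat = sqrt(n_mu_data) * sf   # Poisson error on the observed count
    syst = pred * sf_syst_frac    # e.g. 30% systematic on the SF
    return pred, stat, syst

# purely illustrative inputs:
pred, stat, syst = tau_prediction(n_mu_data=4, n_tau_mc=10.0, n_mu_mc=20.0)
```

With these toy inputs the transfer factor is 0.5 and the prediction 2.0 events, showing how the statistical error of the control sample propagates through the scale factor.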

The same method is used in RA1b (AN2010_338_v6). They try to apply the same procedure, but the authors do not show enough detail for a comparison of the systematics: what template do they use? How many times do they smear, and what statistical error do they associate with this? What systematics do they assign to the templates (they use a calo tau)? Are they using the muon trigger or the HT trigger? Where does the 95% ± 15% on the muon correction come from?

Fix the systematic due to the mu->tau template. (It is a PF tau, which uses the tracker.)

QCD background prediction

Provide a closure table.
Sue Ann

Correct for the observed bias and propagate the corresponding uncertainty.
Sue Ann

In the tail measurement procedure, set an upper bound for when no events are found in the normalisation window.

If no events are observed, the upper limit at 95% confidence of a Poisson-distributed quantity is three. Therefore, three entries have been added in the tail region in each pt and eta bin and new scaling factors have been derived. The difference to the nominal scaling factors derived using the observed number of entries is covered by the quoted statistical uncertainties of the nominal scaling factors.
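The quoted limit of three follows from requiring the Poisson probability of observing zero events to equal the confidence-level tail; a one-line check:

```python
from math import log

# 95% CL Poisson upper limit for zero observed events:
# solve exp(-mu) = 0.05  =>  mu = -ln(0.05) ≈ 3.0
mu_up = -log(0.05)
```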

The numbers are listed on slide 12 in this presentation.

Examine the effect of completely lost jets on the tails of the resolution function.

If one jet is completely lost due to an extremely low response, the event can still survive the dijet selection if the third jet is back-to-back in the transverse plane to the remaining leading jet with normal response. This would result in a dijet event of high asymmetry.

The asymmetry distribution of all available dijet events (no η binning) has been investigated and no increasing population of events with large asymmetry has been found.
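For reference, the dijet asymmetry used here is A = (pT1 − pT2)/(pT1 + pT2), which approaches 1 when the second jet is lost or has a very low response. A minimal sketch with made-up jet pTs (the 100 GeV leading jet is an assumption for illustration, only the 6 GeV value appears in the text below):

```python
def dijet_asymmetry(pt1, pt2):
    """A = (pT1 - pT2) / (pT1 + pT2); A -> 1 as the softer jet's
    measured pT vanishes (lost jet or extreme low response)."""
    return (pt1 - pt2) / (pt1 + pt2)

# well-balanced dijet event: small asymmetry
a_balanced = dijet_asymmetry(100.0, 95.0)
# leading jet balanced against a 6 GeV jet: very high asymmetry
a_lost = dijet_asymmetry(100.0, 6.0)
```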

(missing attachment: AsymData2010InclCleaned.png)

The observed events with large asymmetry are due to clean dijet events with one low-response jet, except for the event with the highest asymmetry. There, the leading jet is balanced against a 6 GeV jet; this might be due to a jet+Z-->inv event or indeed a lost second jet. In any case, there is no indication of a significant number of lost jets.

When investigating the high-statistics sample, six additional events with very high asymmetry (>0.9) were found, in which one jet originates from HCAL noise and is balanced against a randomly picked-up low-pt jet. An improved filter provided by the HCAL noise experts removes these events. No additional noise event was identified in the RA2 search region.

More information and event displays of the events with high asymmetry are shown in this presentation.

Slide 30 shows no effect on the far tail, but only within the limited statistics. Check whether this is limiting the prediction.

This is covered by the statistical uncertainties on the tail scaling factors (see above).

Provide corrected numbers for the QCD factorisation analysis, so closure can be verified.

Remove unnecessary systematics to check MC closure test.

Try Chi2 test of extrapolated fit against higher MHT points in MC.

Since the extrapolation functions used are approximations, any statistical test would give results that are not meaningful. As a closure test we instead use two different models as lower and upper boundaries, and show that for all possible variations the true value is covered by their difference.

Z to invisible background prediction

Check the effect of using isolation instead of shower shape to discriminate between photons and pi-zeros at high pT.

The determination of the purity is also performed with the combined-isolation method instead of the shower-shape method. Due to the lower purity obtained from the former method, the Z-to-invisible prediction is also lowered. Comparison with MC, however, indicates that the latter method (shower shape) describes the Monte Carlo better. Extract from the appendix of AN-10-403: (missing attachment: CombIsoRes.png)

Show data/MC ID scale factor in bins of Pt, to check for problems at higher Pt.

This was not a trivial question, since both Z->ee data and Monte Carlo run out of statistics above 100 GeV, so no useful prediction can be made there. RA3 (AN-10-271) made some plots up to 125 GeV/c (SUS-10-002: AN 2010/271, fig. 2, p. 7), attached below: (missing attachment: RA3SF.png)

In the table below I compare the scale factors for photon (and electron) identification and their uncertainties used in other analyses that also cover high-energy objects. For this analysis the result of the first row is used, plus a 1% additional systematic uncertainty due to pile-up taken from the second row. Weighting the events according to barrel and endcap results in a systematic uncertainty of 2.6%. (missing attachment: ScaleFactorTable.png)

Question, not in the official pre-approval question list:

It would also be interesting to see the photon-to-jet Δφ (or ΔR) in MC and data after the photon selection, and especially the Z-invisible-to-jet ΔR (or Δφ) for the different selections (high HT, high MHT, baseline). Any chance to add these plots to the AN?

Plots before the Δφ cut, after the Δφ cut, and for the baseline selection are shown below. Plots for the other search selections will be included in the updated version of the note.

Before Δφ cut: (missing attachment: DR_AHC.png)

After Δφ cut: (missing attachment: DR_AAC.png)

Baseline selection: (missing attachment: DR_AMC.png)

Use Pt dependent b-tag efficiency uncertainty to determine ttbar background.
Hongxuan / Sarah

We measure the b-tag efficiencies for both working points, i.e., SSVHEM and SSVHPT, using our ttbar MC sample. What we use now is the average efficiency, but measured at the different RA2 selection stages (where the average jet pt may change). This already takes the jet-pt dependence of the b-tagging efficiency into account to some extent, and in the end we find that the variation of the b-tag efficiencies is small. We are looking further into the efficiencies as a function of jet pt, but we expect the effect to be small.
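The pt-binned efficiency check mentioned above could be sketched as follows; the binning and the toy jets are purely illustrative, not from the analysis:

```python
def btag_efficiency_vs_pt(jets, pt_bins):
    """Tag efficiency per jet-pt bin from (pt, is_tagged) pairs.
    In the analysis the input would be jets from the ttbar MC sample."""
    counts = [[0, 0] for _ in range(len(pt_bins) - 1)]  # [total, tagged]
    for pt, tagged in jets:
        for i in range(len(pt_bins) - 1):
            if pt_bins[i] <= pt < pt_bins[i + 1]:
                counts[i][0] += 1
                counts[i][1] += int(tagged)
                break
    return [tag / tot if tot else 0.0 for tot, tag in counts]

# toy input: (jet pt in GeV, passed the b-tag?)
toy_jets = [(40.0, True), (45.0, False), (120.0, True), (130.0, True)]
eff = btag_efficiency_vs_pt(toy_jets, pt_bins=[30.0, 100.0, 200.0])
```

Comparing such per-bin efficiencies across selection stages would make the residual pt dependence explicit.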

Examine the effect of correlation between Z or W pT and leptons falling below the 20 GeV pT threshold.
Sarah / Jim / Hongxuan (?)

For the W->munu analysis we have studied the effect by lowering the muon pt threshold to 15 GeV. The data-driven prediction is 15.20 ± 7.43 (stat) ± 5.54 (syst) events for MHT > 150 GeV with b-veto. Compared with the result for muon pt > 20 GeV at the same selection stage (14.09 ± 7.37 (stat) ± 5.35 (syst)), the two are close to each other. We therefore conclude that the effect of the correlation on our prediction of the Z-invisible numbers is very small, and the prediction is robust against this lepton pT cut.

Finalise correlation between electron and muon channels in W and Z.
Hongxuan / Sarah / Jim

Update the closure tables and uncertainties
Hongxuan / Sarah / Jim

For W analysis, closure tables, plots and systematics are updated and put into a new version.

Added kinematic distributions for W->enu, updated the tables for yields/systematics from data & MC, and put them into a new version of the note. https://twiki.cern.ch/twiki/pub/CMS/SusyRA2RoundTable/Wenu_06Feb2011_36pb.pdf


Treat correlations between common control samples.
Roberto / Jan / Maria

A list of events passing Maria's selections is in /afs/cern.ch/user/d/dalfonso/public/CorrelationJan.txt

Signal contamination should be subtracted.
Jan / Maria / Piet

Make mSUGRA limit presentation consistent with RA1.
ChristianA / Jim

Add the 1sigma band around the expected limit.
ChristianA / Jim

Compare statistical methods.
Christian A / Jim / Roberto

Propose SMS plots in detail.
Maria / Roberto
A detailed note version is in /afs/cern.ch/user/d/dalfonso/public/AN-10-404_temp.pdf . It has already been shared with RA1 and Razor for synchronization. Things still missing:
  • finalise the procedure for the theoretical systematic on the signal (not a show-stopper, since the error on the signal limit is dominated by the uncertainty on the luminosity); feedback from S. Mrenna is needed
  • signal contamination: feedback is needed on how the theorists will use it later
  • repeat the RA2 search-region optimization (for now only the historical MHT > 250 and HT > 500 regions are used)

Answers to additional questions raised during pre-approval

A bug was recently found in the beam-halo ID code; the fix should make the ID more efficient. What is the impact in your case?

The cleaning was redone, reproducing the beam-halo objects with the bug fix. No difference in rejection was found in the search region: the same single event is rejected, and no additional one.

-- StevenLowette - 31-Jan-2011

-- ColinBernet - 09-Feb-2011
