Re-interpreting ATLAS SUSY search results
Introduction:
This page documents a tutorial on re-interpreting ATLAS SUSY results.
Events for a SUSY model will be generated with the MadGraph generator. It will be shown
how the resulting events can be used to validate the analysis code and to assess the level
of agreement obtained from a simplified detector simulation. The obtained acceptance times
efficiency, together with the cross section, will be used to decide whether the given SUSY model is excluded
by the given search.
Questions/comments to: Till Eifert
1) SETUP
The following tutorial will be based on the virtual machine environment as provided by the Terascale Alliance School and Workshop ("Prejudice meets Reality"). The virtual machine image can be downloaded from here:
http://pi.physik.uni-bonn.de/~wienemann/Prejudice_meets_reality_VM_v3.ova
Obviously, the tutorial also works without the virtual machine; this would, however, require installing certain software packages by hand.
Please start a fresh terminal session (i.e. GNOME -> Accessories -> Terminal Emulator).
Madgraph
The virtual machine comes with the MadGraph5 generator (v1.4.8.2). We will need to install a few small additional packages.
Start madgraph
cd Madgraph5_v1_4_8_2
./bin/mg5
Install the pythia-pgs packages with this command:
mg5> install pythia-pgs
(this will take a few minutes to download, compile, and install everything).
Install the ExRootAnalysis package:
mg5> install ExRootAnalysis
(this package creates ROOT ntuples with the results, and also provides a library for the class structure)
Quit madgraph by pressing CTRL-d.
The installed packages can be seen as sub-directories.
2) GENERATE SUSY EVENTS
Configure a SUSY example model
Start madgraph
./bin/mg5
Switch to the MSSM, and define the process(es) of interest:
mg5>import mssm
generate p p > go go / ul ur dl dr cl cr sl sr t1 t2 b1 b2 ul~ ur~ dl~ dr~ cl~ cr~ sl~ sr~ t1~ t2~ b1~ b2~ @0
add process p p > go go j / ul ur dl dr cl cr sl sr t1 t2 b1 b2 ul~ ur~ dl~ dr~ cl~ cr~ sl~ sr~ t1~ t2~ b1~ b2~ @1
output MSSM_GOGO1
Here we defined p p to gluino (go) pair production, excluding any diagrams containing squarks (everything behind the "/" symbol).
Furthermore, with the "add process" line the same process with one additional jet (from the matrix element) is defined.
The last line dumps this configuration (process information, and other default settings) in a directory with the given name.
Quit madgraph (CTRL-d) and go to the new directory.
cd MSSM_GOGO1
The main configuration files are in the Cards sub-directory.
The file param_card.dat
is the SUSY Les Houches Accord (SLHA) file which describes the SUSY mass spectrum, decays, mixing matrices, etc.
Replace the file (in the MSSM_GOGO1/Cards
directory) with this version: param_card.dat
The gluino mass is now set to about 560 GeV, and the neutralino1 (LSP) mass to about 190 GeV (check the file, and search for the corresponding PDG IDs). Furthermore, the mass of all other SUSY particles is set to 4.5 TeV. This makes them kinematically inaccessible. As a consequence, the generated gluino pairs can only decay to the neutralino1 in association with two SM quarks.
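A quick way to check the edited card is to parse its MASS block directly. The sketch below is a minimal SLHA reader, not part of the tutorial's code; the inline excerpt and the exact mass values (562.5 GeV and 187.5 GeV) are illustrative. Point it at Cards/param_card.dat and look up PDG IDs 1000021 (gluino) and 1000022 (neutralino1) to check your actual file.

```python
# Minimal SLHA MASS-block parser to cross-check the edited param_card.dat.
# PDG IDs: 1000021 = gluino, 1000022 = neutralino1.
# The inline excerpt below is illustrative; read Cards/param_card.dat
# and pass its contents to read_masses() for the real check.

def read_masses(slha_text):
    """Return {pdg_id: mass} from the BLOCK MASS section of an SLHA card."""
    masses = {}
    in_mass_block = False
    for line in slha_text.splitlines():
        stripped = line.split('#')[0].strip()   # drop comments
        if not stripped:
            continue
        if stripped.upper().startswith('BLOCK'):
            in_mass_block = stripped.upper().startswith('BLOCK MASS')
            continue
        if in_mass_block:
            fields = stripped.split()
            if len(fields) >= 2:
                masses[int(fields[0])] = float(fields[1])
    return masses

example_card = """
BLOCK MASS  # mass spectrum
   1000021  5.625000e+02  # gluino
   1000022  1.875000e+02  # neutralino1
   1000001  4.500000e+03  # sdown_L
BLOCK ALPHA
"""

masses = read_masses(example_card)
print("gluino: %.1f GeV, neutralino1: %.1f GeV" % (masses[1000021], masses[1000022]))
```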
The file run_card.dat
contains information regarding the collider parameters, the number of events, and settings for the jet matching (when a parton-shower is run afterwards). Change the following settings:
- nevents to 100 for the first test run,
- ebeam1 and ebeam2 to 3500 (we want to use an analysis which was run on sqrt(s) = 7 TeV data),
- ickkw to 1 (we want to use Pythia to do the parton shower and need to activate the matching),
- drjj to 0 (min distance between jets),
- xqcut to 140.
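These edits can also be scripted instead of done by hand. A minimal sketch, assuming the standard "value = name" layout of MadGraph run cards; the inline card excerpt is illustrative, so in practice you would read and rewrite Cards/run_card.dat:

```python
# Hedged sketch: apply the run_card.dat changes above programmatically.
# Assumes the standard "value = name ! comment" layout of MadGraph run cards.
import re

def set_run_card(text, name, value):
    """Replace the value assigned to `name` in run_card-style text."""
    pattern = re.compile(r'^(\s*)\S+(\s*=\s*%s\b)' % re.escape(name),
                         re.MULTILINE)
    return pattern.sub(r'\g<1>%s\g<2>' % value, text)

# Illustrative excerpt of a default run_card:
card = """\
  10000 = nevents ! Number of unweighted events requested
  7000  = ebeam1  ! beam 1 energy in GeV
  7000  = ebeam2  ! beam 2 energy in GeV
  0     = ickkw   ! 0 = no matching, 1 = MLM matching
  0.4   = drjj    ! min distance between jets
  0     = xqcut   ! minimum kt jet measure between partons
"""

for name, value in [('nevents', 100), ('ebeam1', 3500), ('ebeam2', 3500),
                    ('ickkw', 1), ('drjj', 0), ('xqcut', 140)]:
    card = set_run_card(card, name, value)
print(card)
```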
For Pythia, we need to modify the pythia_card.dat
file. If not already existing, start from the default card:
cp pythia_card_default.dat pythia_card.dat
Then add the following lines to the end of the file:
!...Matching parameters...
IEXCFILE=0
showerkt=T
qcut=140
imss(21)=24
imss(22)=24
Lastly, we configure PGS (Pretty Good Simulation), which will be used to simulate the ATLAS detector response in a very rough way.
For the purpose of this tutorial we will simply use the ATLAS PGS card:
cp pgs_card_ATLAS.dat pgs_card.dat
with the modification of doubling the MET resolution:
0.4 ! MET resolution
antikt ! jet finding algorithm (cone or ktjet)
0.40 ! calorimeter kt cluster finder cone size (delta R)
This should be it! We have configured a simplified model, corresponding to one single point of the grid which was used for the interpretation of one ATLAS SUSY 0-lepton + jets + MET search (with 1 ifb).
Generate events
Going back to our process directory (MSSM_GOGO1
), we generate events, run Pythia, and PGS:
./bin/madevent
MGME5>generate_events
Hit 3 to run MadEvent + Pythia + PGS, and then 0 to keep all configuration cards unmodified.
At this stage, Firefox should open a new window automatically. It shows an overview of the "run", including its status, as well as the process diagrams (in the top part of the window).
As we had set the number of events to 100, this should finish fairly quickly. The status on the webpage shows when everything is done. Back in our terminal session, you might see many messages saying Error in <TTime::operator unsigned long()>: time truncated, use operator unsigned long long. These can be ignored.
If everything went well, then the final message should be similar to this:
finish
=== Results Summary for run: run_05 tag: tag_1 ===
Cross-section : 0.5716 +- 0.003476 pb
Nb of events : 100
Matched Cross-section : 0.4972 +- 0.01946 pb
Nb of events after Matching : 87
Storing Pythia files of Previous run
Done
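As a sanity check on the summary numbers: the matched cross section is, to good approximation, the raw cross section scaled by the fraction of events surviving the MLM matching (87 out of 100 here):

```python
# Quick consistency check of the run summary above: the matched cross
# section should roughly equal the raw cross section scaled by the
# fraction of events surviving MLM matching.
cross_section = 0.5716   # pb, before matching (from the summary above)
n_events = 100
n_matched = 87

matching_eff = n_matched / float(n_events)
matched_xsec = cross_section * matching_eff
print("matching efficiency: %.2f" % matching_eff)
print("matched cross section: %.4f pb" % matched_xsec)
```

This reproduces the quoted 0.4972 pb within its statistical uncertainty.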
Inspect the results
Quit madevent (CTRL-d) and cd to the Events/run_01
directory, which holds the output of our test run.
There should be three .root
files, one for each stage: generation (unweighted_events.root),
pythia showering (tag_1_pythia_lhe_events.root),
and PGS detector simulation (tag_1_pgs_events.root).
To check that our events look OK, we open the final file and plot a few variables:
python
>>> from ROOT import *
>>> f = TFile("tag_1_pgs_events.root")
>>> t = f.Get("LHCO")
>>> t.Print()
The first line in python is needed to get ROOT functionality, the next line opens our root file (after PGS),
then we get the TTree and print all branches. You should see many branches, e.g. MissingET.Phi.
Make some plots, and check that the distributions look OK.
>>> t.Draw("MissingET.MET")
>>> t.Draw("Jet_size")
Note that there are fewer than 100 entries in the MET distribution. This is expected because the jet matching rejected some events (this information is also given on the status webpage and in the final log output). The Jet_size
distribution, on the other hand, has 100 entries; here the un-matched events show up in the 0-bin. We will later ignore events which were not matched.
Run larger dataset
Change the Cards/run_card.dat
file and increase the number of events to ten thousand (10000 = nevents), then repeat the generation (MadEvent + Pythia + PGS). The results should be saved in run_02.
3) SUSY ANALYSIS
Implementing the analysis code
Now that we have generated events, we need to implement the analysis code.
The analysis is described in the paper.
This can be quite some work. For this tutorial you can use the following example code, which implements an approximation of SR D. The python code can be found here: ATLAS_SUSY_0lepJetsMET_search2011data.py.txt.
- download it and place it in your MSSM_GOGO1 directory,
- remove the .txt suffix from the name,
- the script assumes a local ntuple file with the name pgs_events.root. Either put a symbolic link to your pgs root file (e.g. ln -s Events/run_02/tag_1_pgs_events.root pgs_events.root) or change the script to point to your file directly,
- run the script like this: python -i ATLAS_SUSY_0lepJetsMET_search2011data.py
It should open your PGS ntuple file, loop over the events, and apply the event selection. In the end you should see one plot of the final effective mass (meff) distribution, and the event-selection efficiencies printed on the screen:
N_LArVeto: 7095 eff.: 0.80079006772
N_MET130: 5344 eff.: 0.60316027088
N_JetQuality: 5032 eff.: 0.567945823928
N_Jet1Pt: 4730 eff.: 0.533860045147
N_Jet4: 3188 eff.: 0.359819413093
N_dPhi: 2668 eff.: 0.301128668172
N_METoverMeff: 1926 eff.: 0.217381489842
N_Meff: 345 eff.: 0.0389390519187
Info in <TCanvas::MakeDefCanvas>: created default TCanvas with name c1
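The printed efficiencies are cumulative and relative to the total number of matched events, which from the first line is about 7095 / 0.8008 ≈ 8860 (your own run will differ). A small sketch reproducing the numbers:

```python
# Recompute the cutflow efficiencies printed above. The efficiencies are
# cumulative, relative to the number of matched events; the total of 8860
# is inferred from the first cutflow line (7095 / 0.8008) and is specific
# to this example run.
cutflow = [
    ("N_LArVeto",     7095),
    ("N_MET130",      5344),
    ("N_JetQuality",  5032),
    ("N_Jet1Pt",      4730),
    ("N_Jet4",        3188),
    ("N_dPhi",        2668),
    ("N_METoverMeff", 1926),
    ("N_Meff",         345),
]
n_total = 8860  # matched events in this example run

for name, n_pass in cutflow:
    print("%-14s %5d  eff.: %.4f" % (name, n_pass, n_pass / float(n_total)))

# The last line is the acceptance times efficiency for SR D:
accept_times_eff = cutflow[-1][1] / float(n_total)
print("A x eps (SR D) = %.4f" % accept_times_eff)
```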
Validation of analysis code
We can compare the full event-selection efficiency of the above example, i.e. signal region D, with public ATLAS information.
This link
shows, for each grid point of the given model,
the ATLAS acceptance times efficiency values.
Find our SUSY model, and compare the value for SR D (the agreement should be at the level of 10%).
Note that as of 22 Aug 2012 the AxEps numbers on HEPdata are off by a factor of 1000. This will be corrected.
In principle, one can also use the ntuple before PGS (i.e. from MadGraph + Pythia). This gives the pure acceptance. For recent ATLAS SUSY searches, acceptance and efficiency are each made public; thus one can validate the truth-level (or, more precisely, hadron-level) quantities and the detector simulation separately.
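When comparing with the public numbers, it helps to know the statistical precision of your own acceptance-times-efficiency estimate. A binomial-error sketch, using the example counts from the cutflow above (your numbers will differ):

```python
# Statistical (binomial) uncertainty on the acceptance-times-efficiency
# estimate, using the example numbers from the cutflow above:
# 345 out of ~8860 matched events pass SR D.
import math

n_pass, n_total = 345, 8860
eff = n_pass / float(n_total)
err = math.sqrt(eff * (1.0 - eff) / n_total)

print("A x eps = %.4f +- %.4f (%.0f%% relative)" % (eff, err, 100 * err / eff))
```

With 10k generated events the statistical uncertainty is at the few-percent level, i.e. well below the ~10% agreement we are checking for.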
4) INTERPRETATION
Cross sections
Find the next-to-leading-log cross section for our process (gluino pairs, with a gluino mass of 562.5 GeV) on this webpage.
Test for exclusion
Calculate the expected number of signal events in the signal region (i.e. cross section times acceptance times efficiency times integrated luminosity).
This should come out to be about 44.5 predicted events in SR D.
The paper
states that the excluded values of the visible cross section (cross section times acceptance times efficiency)
are
22 fb, 25 fb, 429 fb, 27 fb and 17 fb, respectively, at the 95%
confidence level.
Calculate this quantity yourself, and check whether it is above or below the limit.
Is the point excluded?
Compare your finding with the official exclusion plot for the given simplified model.
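The exclusion test can be put into numbers. In the sketch below, the cross section of 1.1 pb is purely illustrative (look up the real NLL value on the webpage; 1.1 pb happens to reproduce the ~44.5 events quoted above), the integrated luminosity is the 1.04 ifb of the search, and 27 fb is assumed to be the SR D value among the limits quoted above (check the ordering of signal regions against the paper yourself):

```python
# Worked version of the exclusion test, under loudly stated assumptions:
#  - sigma_nll = 1.1 pb is illustrative; take the real NLL value from the
#    cross-section webpage for m(gluino) = 562.5 GeV,
#  - A x eps = 0.0389 is the SR D value from the cutflow above,
#  - lumi = 1.04 ifb is the integrated luminosity of the search,
#  - 27 fb is assumed to be the SR D limit among the quoted values
#    (check the ordering of signal regions in the paper).
sigma_nll = 1.1          # pb (assumed, illustrative)
accept_times_eff = 0.0389
lumi = 1.04              # fb^-1
limit_srd = 27.0         # fb (excluded visible cross section at 95% CL)

sigma_fb = sigma_nll * 1000.0                 # pb -> fb
n_expected = sigma_fb * accept_times_eff * lumi
sigma_vis = sigma_fb * accept_times_eff       # visible cross section, fb

print("expected events in SR D: %.1f" % n_expected)
print("sigma x A x eps = %.1f fb (limit: %.0f fb)" % (sigma_vis, limit_srd))
print("excluded" if sigma_vis > limit_srd else "not excluded")
```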
SUMMARY
In this tutorial you have:
- generated events for one SUSY model,
- run a (too) simple analysis code for one signal selection of the ATLAS SUSY search in the 0-lepton channel,
- validated the analysis code to first order,
- and checked whether the given model is already excluded.
OUTLOOK
Generate and run other BSM model grids to test how much model space is already excluded.
-- TillEifert - 17-Aug-2012