Public tools for recasting LHC searches

Below is an overview of the recasting tools currently on the market and involved in the InterpretingLHCresults forum.


CheckMATE

CheckMATE takes simulated event files in .hep or .hepmc format for any model as input and returns whether the underlying model is "excluded" or "allowed" after performing a detector simulation and testing the various implemented analyses. The embedded AnalysisManager makes it easy to add current and prospective LHC results from ATLAS and CMS that have not yet been implemented. Detector effects are simulated with Delphes, extended by tuned efficiency functions for lepton reconstruction and flavour tagging. The soon-to-be-published version 2.0 adds the possibility of using Pythia 8 to generate SUSY events on the fly or to shower provided .lhe files for any model. The collaboration is currently working on an extension that will enable on-the-fly event simulation for any model.
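The "excluded"/"allowed" verdict boils down to an r-value comparing the predicted signal yield in each signal region against the 95% CL limit on it. A minimal sketch of that criterion, assuming the conservative convention of fluctuating the signal down by 1.64 sigma before the comparison (the numbers in the usage below are invented placeholders, not real analysis inputs):

```python
def r_value(signal, signal_uncertainty, s95_observed):
    """Sketch of a CheckMATE-style exclusion criterion (illustrative only).

    signal             -- predicted events in a signal region
    signal_uncertainty -- its (e.g. Monte Carlo) uncertainty
    s95_observed       -- observed 95% CL upper limit on signal events

    The signal is conservatively fluctuated down by 1.64 sigma before
    comparing to the limit; r >= 1 means "excluded".
    """
    return max(signal - 1.64 * signal_uncertainty, 0.0) / s95_observed


def verdict(r):
    """Map the r-value onto the tool's binary answer."""
    return "excluded" if r >= 1.0 else "allowed"
```

In practice these inputs come from the detector simulation and the implemented analyses; a predicted yield of 100 ± 10 events against a limit of 50 gives r ≈ 1.67, i.e. "excluded".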

MadAnalysis5 PAD

MadAnalysis 5 is a generic, user-friendly framework for phenomenological investigations at particle colliders, i.e. for performing physics analyses of Monte Carlo event files. Its Public Analysis Database (PAD) comprises a growing collection of LHC analyses which have been implemented in the MadAnalysis 5 framework for the purpose of recasting. Delphes 3 is used for the detector simulation. For each implemented analysis, a detailed validation note is provided. The PAD follows an open-source policy; contributed codes are published and citable via Inspire. The framework is currently being extended to provide a full recast chain, from MadGraph to limit setting.
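The limit-setting step at the end of such a recast chain is typically a CLs test on the event counts in a signal region. A self-contained sketch of the idea for a single counting experiment, with no systematic uncertainties (a real recast includes them; the numbers below are invented):

```python
from math import exp, factorial


def poisson_cdf(n_obs, mu):
    """P(N <= n_obs) for a Poisson distribution with mean mu."""
    return sum(exp(-mu) * mu**k / factorial(k) for k in range(n_obs + 1))


def cls(n_obs, background, signal):
    """Simplified CLs statistic: CLs = CL_{s+b} / CL_b.

    CLs < 0.05 excludes the signal hypothesis at 95% CL. This toy
    version ignores systematic uncertainties on the background.
    """
    return poisson_cdf(n_obs, signal + background) / poisson_cdf(n_obs, background)
```

With 5 events observed on an expected background of 5, a 20-event signal is excluded (CLs well below 0.05) while a 1-event signal is not.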


Rivet

Originally developed as a toolkit for the validation of Monte Carlo event generators, Rivet (Robust Independent Validation of Experiment and Theory) has become a standard for documenting unfolded SM measurements. The top and Higgs groups of the LHC experiments are also increasingly providing Rivet routines for their analyses. Rivet analyses are written in a user-friendly subset of C++11 and are picked up at runtime as "plugin libraries"; they can be executed on an event stream either through a Python script interface or by interfacing directly with the C++ API.

The original SM-focused requirement of unfolded observables made Rivet inappropriate for BSM searches (other than those using just jets and MET) until the addition of detector-smearing/efficiency machinery in Rivet 2.5.0. This machinery provides efficiency effects equivalent to a Delphes-type simulation and imitates the less important kinematic smearing of physics objects to within a few percent. A novel feature is that the Rivet detector implementation allows jet algorithms, lepton and b-tagging operating points, fully detailed object-isolation algorithms, and resolutions/efficiencies to be specific to each analysis's procedure and event selection. This allows more accurate detector modelling and more robust analysis preservation than "global" detector simulations, and hence addresses some of the experiments' concerns regarding requests for "official fast-sim" tools. The aim is to encourage the provision of Rivet code directly from BSM data analysers, as is already the case for SM results; additional tools to assist BSM analysis implementation are being added on request.
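In essence, such smearing/efficiency machinery applies a per-object reconstruction efficiency and a kinematic resolution to truth-level objects. A toy illustration of the concept (Rivet's real wrappers are per-analysis and written in C++; the 95% efficiency and 2% resolution here are invented numbers, not any experiment's actual performance):

```python
import random


def apply_detector(truth_energies, efficiency=0.95, resolution=0.02, seed=1):
    """Toy detector emulation: drop each object with probability
    (1 - efficiency), then smear its energy with a Gaussian of
    relative width `resolution`.  Illustrative values only."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    reco = []
    for e in truth_energies:
        if rng.random() > efficiency:
            continue  # object lost to reconstruction inefficiency
        reco.append(e * rng.gauss(1.0, resolution))
    return reco
```

Running 1000 truth objects of 100 GeV through this at 90% efficiency returns roughly 900 reconstructed objects, each within a few percent of 100 GeV.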


GAMBIT

The Global and Modular Beyond-Standard Model Inference Tool (GAMBIT) is a global-fitting code for generic beyond-the-Standard-Model theories, designed to allow fast and easy definition of new models, observables, likelihoods, scanners and backend physics codes. GAMBIT includes the modules ColliderBit (LHC and LEP particle searches and Higgs limits), FlavBit (flavour-physics constraints), DarkBit (astrophysical constraints from the relic abundance, direct and indirect searches, and CMB measurements), SpecBit (spectrum generation), DecayBit (SM and BSM decay rates), PrecisionBit (EW precision tests), and ScannerBit (statistical algorithms, sampling and optimisation). LHC recasting is performed using a custom parallelised version of the Pythia 8 generator, a custom detector simulation (or an interface to Delphes), an LHC event-analysis framework based on the public HepUtils classes, and a series of statistical routines that allow the fast calculation of marginalised Poisson likelihoods. The code includes a selection of Run 1 and Run 2 LHC searches relevant for SUSY and dark-matter effective field theory applications, and became public in May 2017.
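A "marginalised Poisson likelihood" means integrating the Poisson count likelihood over a nuisance parameter such as the background estimate. A toy numerical version, assuming a Gaussian background model for illustration (GAMBIT's actual routines are faster and partly analytic):

```python
from math import exp, factorial, sqrt, pi


def marginalised_poisson(n_obs, signal, b0, db, n_steps=400):
    """Poisson likelihood for n_obs counts given `signal`, marginalised
    over a Gaussian background nuisance N(b0, db) by midpoint-rule
    numerical integration.  A toy stand-in, not GAMBIT's implementation."""
    lo = max(0.0, b0 - 5.0 * db)   # integrate over +-5 sigma, b >= 0
    hi = b0 + 5.0 * db
    step = (hi - lo) / n_steps
    total = 0.0
    for i in range(n_steps):
        b = lo + (i + 0.5) * step
        mu = signal + b
        poisson = exp(-mu) * mu**n_obs / factorial(n_obs)
        gauss = exp(-0.5 * ((b - b0) / db) ** 2) / (db * sqrt(2.0 * pi))
        total += poisson * gauss * step
    return total
```

With 5 observed events and a background of 5 ± 1, the likelihood for zero signal is (as it should be) much larger than for a 10-event signal.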


Contur

Contur is a set of statistical and data-selection scripts that allows the Rivet output of BSM models, defined via FeynRules/UFO and generated in Herwig 7, to be compared to data. The data are fully unfolded particle-level fiducial cross-section measurements and are assumed to be consistent with the Standard Model. Thus Contur is used to set limits on new physics and narrow down the field of BSM possibilities. The tool is rather new; the description and first results are written up here and were presented here.
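The underlying logic: since the measurements are assumed SM-like, any BSM contribution must fit inside the measurement uncertainties, and the most sensitive bin drives the limit. A deliberately oversimplified sketch of that bin scan ("significance" here is just signal over uncertainty; Contur's real statistical treatment is more sophisticated):

```python
def most_sensitive_bin(bsm_counts, data_uncertainties):
    """Toy Contur-style scan over measurement bins.

    bsm_counts         -- predicted BSM contribution per bin
    data_uncertainties -- total measurement uncertainty per bin

    Returns (index, significance) of the bin with the largest
    BSM-over-uncertainty ratio, which drives the limit.
    """
    sig = [s / d for s, d in zip(bsm_counts, data_uncertainties)]
    best = max(range(len(sig)), key=sig.__getitem__)
    return best, sig[best]
```

For invented inputs `[1.0, 10.0, 2.0]` against uniform uncertainties of 5.0, the middle bin wins with significance 2.0.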

ATOM [not public yet]

Atom is a general-purpose framework for reinterpreting existing experimental analyses and designing new ones. Originally started as a fork of Rivet, Atom includes the possibility of describing detector effects by means of transfer functions from truth-level objects to detector objects. In addition to providing a recasting framework, Atom emphasizes checking the validity of the extrapolations intrinsic in recasting a result from one BSM model to another; this is achieved via a flexible system that warns the user of conditions invalidating the results. On top of providing a database of existing BSM ATLAS and CMS analyses and LHC Run I and II detector descriptions, while remaining fully compatible with all Rivet analyses, Atom delivers tools for easily implementing new analyses, including an automatic validation system.


Fastlim

Fastlim is a tool to analyse BSM models (currently the MSSM) based on the mass spectrum and branching ratios, using simplified topologies. Limits from direct SUSY searches at the LHC are obtained from pre-calculated cross-section and efficiency maps, given for each topology and signal region. Fastlim combines all the implemented topologies and estimates the full SUSY contribution to a given signal region. If all the dominant topologies are implemented, this reproduces the full simulation result; otherwise the signal yield is underestimated, providing a conservative limit. Another application is to quickly identify the important decay chains of a model, since Fastlim lists the dominant event topologies ranked by [cross section] x [branching ratio].
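The combination step amounts to summing cross section times branching ratio times efficiency over the implemented topologies, so that missing topologies can only reduce, never inflate, the estimated yield. A minimal sketch (units and numbers are invented placeholders):

```python
def signal_yield(lumi, topologies):
    """Fastlim-style signal-region yield estimate.

    lumi       -- integrated luminosity (e.g. fb^-1, matching the
                  cross-section units below; placeholder convention)
    topologies -- iterable of (cross_section, branching_ratio, efficiency)
                  tuples, one per implemented simplified topology

    Topologies absent from the database simply drop out of the sum,
    which is why the result is a conservative under-estimate.
    """
    return lumi * sum(xsec * br * eff for xsec, br, eff in topologies)
```

For two toy topologies, (0.1, 0.5, 0.2) and (0.05, 1.0, 0.1), at a luminosity of 20, the estimated yield is 0.3 events.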


SModelS

SModelS is a tool for interpreting simplified-model results from the LHC. It is based on a general procedure to decompose the collider signatures of BSM models exhibiting a Z2 symmetry into Simplified Model Spectrum (SMS) topologies, which are then confronted with the relevant experimental constraints. The current SModelS database comprises mostly supersymmetry searches with missing energy, for which a large variety of SMS results from ATLAS and CMS are available. The tool also identifies the most important "missing topologies" for which no experimental result is available. Version 1.0 of SModelS is based on the use of cross-section upper-limit maps. SModelS v1.1, released in January 2017, has several new features, including the use of efficiency maps, likelihood and χ2 calculations, an extended database of experimental results, and major speed upgrades for both the code and the database.
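Once a model is decomposed into SMS topologies with their weights (cross section times branching ratios), the core test is the ratio of each predicted weight to the corresponding experimental upper limit; topologies with no matching result are exactly the "missing topologies". A minimal sketch (the topology names and numbers are invented):

```python
def r_values(predictions, upper_limits):
    """Toy SModelS-style exclusion test.

    predictions  -- {topology: predicted cross section x BR (pb)}
    upper_limits -- {topology: experimental 95% CL upper limit (pb)}

    Returns r = prediction / limit per constrained topology;
    r >= 1 means the model point is excluded by that result.
    Topologies absent from `upper_limits` are the "missing topologies".
    """
    return {topo: predictions[topo] / upper_limits[topo]
            for topo in predictions if topo in upper_limits}
```

For hypothetical inputs `{"T1": 0.2, "T2": 0.01, "TX": 1.0}` against limits `{"T1": 0.1, "T2": 0.05}`, topology "T1" gives r = 2 (excluded) while "TX" has no constraint and would be flagged as missing.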


XQCAT

XQCAT (eXtra Quark Combined Analysis Tool) is a tool for determining exclusion confidence levels for scenarios of new physics characterised by the presence of one or more heavy extra quarks (XQs) which interact with any of the Standard Model quarks. The code uses a database of efficiencies for pre-simulated processes of QCD-induced pair production of the XQs and their subsequent on-shell decays in the narrow-width approximation. In its current implementation, the recasting is performed considering six SUSY-inspired searches and one search for vector-like quarks (all from CMS). Ongoing developments will extend, on the one hand, the database of analyses to include ATLAS searches and, on the other, the range of possible scenarios, including XQs decaying to dark matter, finite-width effects and single-production processes.



Not exactly a public recasting tool, but a service [under development] to run new models or model points through the full experimental machinery upon request; an integral part of the official data and analysis preservation efforts.

-- SabineKraml - 2017-01-25

Topic revision: r14 - 2017-06-21 - AndyBuckley