Documentation for the LHEanalysis program



LHEanalysis is a convenient tool for quickly reading files in the LHE (Les Houches Event) format. Designed to be launched directly after event generation, it adapts to many configurations: it also supports GZIP-compressed files (MadGraph event files are usually compressed to save disk space), whether they are stored on a local disk or on CASTOR/DPM.

Besides, LHEanalysis supplies a framework for quickly and easily writing an analysis with all of ROOT's histogramming and plotting features.

We hope that LHEanalysis will be useful for both students and experts.


How to install LHEanalysis

  • From a sbgli/sbgui connection, getting the source tarball :
     cp ~econte/public/serret/LHEanalysis.tgz . 
  • Extracting the source files :
     tar xzf LHEanalysis.tgz 
  • Entering the LHEanalysis folder :
     cd LHEanalysis 
  • Selecting the Makefile matching your computing platform :
    • If you are working on lxplus :
       cp Makefile_LXPLUS Makefile 
    • If you are working on sbgui :
       cp Makefile_SBGUI Makefile 
    • If you are working on another system (personal desktop or laptop for instance) :
       cp Makefile_DEFAULT Makefile 
      In this case, the LHEanalysis Makefile is configured for a system without CASTOR/DPM access, so the "-castor" option (defined in the next section) will not be available.

  • Compilation with make :

Warning, important note : the LHEanalysis code is based on several libraries (ROOT, BOOST, ZLIB and SHIFT), and the compilation/linking step must take these dependencies into account. The default Makefile corresponds to a "standard" installation of these libraries and should work for most configurations. Nonetheless, you may have to edit the Makefile to specify the paths to these libraries on your system.


How to run LHEanalysis

  • To use LHEanalysis, you first have to create a text file containing the list of the LHE files you would like to process. In the following, we suppose such a file is called “filelist.txt”. Examples of “filelist.txt” creation :
    • data on local disk in a specified directory
           ls /mydirectory/*.lhe > filelist.txt
    • data on DPM in a specified directory
           rfdir /mydirectory > tmp
           awk '{print "/mydirectory/" $9}' tmp | grep .lhe > filelist.txt

  • To read and analyze LHE files from a local disk, simply execute :
     LHEanalysis filelist.txt 
  • If the LHE files are compressed with GZIP, you have to use the “-compress” option :
     LHEanalysis -compress filelist.txt 
  • If the LHE files are stored on CASTOR/DPM, you have to use the “-castor” option :
     LHEanalysis -castor filelist.txt 
  • Of course, the two options can be combined if LHE files are compressed and stored on CASTOR :
     LHEanalysis -castor -compress filelist.txt 

How to change the user analysis

If you select the user analysis, the executed algorithm fills and saves histograms of particle masses. We expect you to write your own analysis :-). Here are some useful instructions to help you with this task :

  • The only files to edit are : Analysis/user.cpp and Analysis/user.h

  • All declarations must be written in the user.h file, inside the analysis class. For instance :
     class analysis
     {
      private :
         TH1D* myhisto;
     };
  • The definition of your histogram must be written in the void initialize() method. For instance :
     void analysis::initialize()
     {
       myhisto = new TH1D("pT muons","pT muons",100,0.,100.);
     }
The initialize() method is called by LHEanalysis only once, before the events are loaded.

  • Filling the histogram must be done in the void execute() method. For instance :
     void analysis::execute()
     {
       for (unsigned int i=0;i<data_->particles.size();i++)
            if (fabs(data_->particles[i].pdgid())==13)  // keeping only mu+ or mu-
                  myhisto -> Fill( data_->particles[i].pt() );   // filling with the muon pT
     }
    The execute() method is called after reading each event. It allows an event-by-event analysis.

  • Displaying and saving histograms can be done in the void finalize() method. For instance :
     void analysis::finalize()
     {
       TCanvas * can = new TCanvas("example");
       myhisto -> Draw();
       can -> Print("");
     }
    The finalize() method is called by LHEanalysis only once, after all the events have been processed.

How to make another analysis

Modifying the pre-existing user analysis may be enough if you only want to make some standard plots. You can also create a new analysis and select it when running LHEanalysis. Step-by-step creation of a new analysis called "Chewbacca" :

  • First copying Analysis/user.cpp and Analysis/user.h into Analysis/Chewbacca.cpp and Analysis/Chewbacca.h

  • Editing the new files and replacing every occurrence of the class name user by Chewbacca. Warning : the class name must be the same as the file name

  • In the header file, replacing the name value by the title of your analysis. For instance : Chewbacca analysis

  • Finally, compiling again LHEanalysis with make

How to access data extracted from LHE files

Data extracted from LHE files are split into several structures :

  • data_->init gives generation information. This structure is defined in the file InitDataFormat.h
  • data_->processes holds the list of all processes used in the generation. This structure is defined in the file ProcessDataFormat.h
  • data_->event contains information concerning the current event. This structure is defined in the file EventDataFormat.h
  • data_->particles is a standard vector (std::vector) of a C structure called ParticleDataFormat. Defined in the file ParticleDataFormat.h, this structure gives access to information related to initial partons, final particles and "intermediate" particles.

The following tables sum up general information related to one LHE file.

| Variable | Returned type | Description |
| data_->init.beamPDGID().first | signed long | particle ID of the first beam |
| data_->init.beamPDGID().second | signed long | particle ID of the second beam |
| data_->init.beamE().first | double | energy of the first beam |
| data_->init.beamE().second | double | energy of the second beam |
| data_->init.beamPDFauthor().first | unsigned int | code identifying the author of the PDF used for the first beam |
| data_->init.beamPDFauthor().second | unsigned int | code identifying the author of the PDF used for the second beam |
| data_->init.beamPDFID().first | unsigned int | code identifying the PDF used for the first beam |
| data_->init.beamPDFID().second | unsigned int | code identifying the PDF used for the second beam |
| data_->init.weightMode() | signed int | code related to the weight calculation mode |
| data_->init.nProcesses() | unsigned int | number of processes used in the generation |

| Variable | Returned type | Description |
| data_->processes[i].xsection() | double | cross-section of the process (in pb after MadGraph, in mb after Pythia) |
| data_->processes[i].xsectionError() | double | integration error on the cross-section |
| data_->processes[i].weightMax() | double | maximum weight allocated to one event of this process |
| data_->processes[i].processId() | unsigned int | code given by the generator to the process |

The following tables sum up all the data accessible for the current event and for the i-th particle.

| Variable | Returned type | Description |
| data_->event.nparts() | unsigned int | number of particles in the event |
| data_->event.processId() | unsigned int | code of the process the event comes from |
| data_->event.weight() | double | weight of the event |
| data_->event.scale() | double | scale of the event |
| data_->event.alphaQED() | double | value of the QED coupling used for the event |
| data_->event.alphaQCD() | double | value of the QCD coupling used for the event |

| Variable | Returned type | Description |
| data_->particles[i].momentum() | TLorentzVector | four-momentum of the particle |
| data_->particles[i].px() | double | shortcut to "data_->particles[i].momentum().Px()" |
| data_->particles[i].py() | double | shortcut to "data_->particles[i].momentum().Py()" |
| data_->particles[i].pz() | double | shortcut to "data_->particles[i].momentum().Pz()" |
| data_->particles[i].e() | double | shortcut to "data_->particles[i].momentum().E()" |
| data_->particles[i].et() | double | shortcut to "data_->particles[i].momentum().Et()" |
| data_->particles[i].m() | double | shortcut to "data_->particles[i].momentum().M()" |
| data_->particles[i].mt() | double | shortcut to "data_->particles[i].momentum().Mt()" |
| data_->particles[i].p() | double | shortcut to "data_->particles[i].momentum().P()" |
| data_->particles[i].pt() | double | shortcut to "data_->particles[i].momentum().Perp()" |
| data_->particles[i].theta() | double | shortcut to "data_->particles[i].momentum().Theta()" |
| data_->particles[i].eta() | double | shortcut to "data_->particles[i].momentum().Eta()" |
| data_->particles[i].rho() | double | shortcut to "data_->particles[i].momentum().Rho()" |
| data_->particles[i].phi() | double | shortcut to "data_->particles[i].momentum().Phi()" |
| data_->particles[i].rapidity() | double | shortcut to "data_->particles[i].momentum().Rapidity()" |
| data_->particles[i].beta() | double | shortcut to "data_->particles[i].momentum().Beta()" |
| data_->particles[i].gamma() | double | shortcut to "data_->particles[i].momentum().Gamma()" |
| data_->particles[i].ctau() | double | invariant lifetime c*tau of the particle |
| data_->particles[i].spin() | double | cosine of the angle between the spin vector and the momentum |
| data_->particles[i].pdgid() | signed long | PDG code of the particle |
| data_->particles[i].statuscode() | signed int | status in the generation (incoming, outgoing or intermediate particle) |
| data_->particles[i].mother1() | ParticleDataFormat* | pointer to the first mother particle |
| data_->particles[i].mother2() | ParticleDataFormat* | pointer to the second mother particle |


If you have some problems or if “you need a problem”, call the C-team (Ch'ti team) : Guillaume or Jean-Éric.

Topic revision: r8 - 2011-02-23 - EricConte