Documentation of the a->mumu analysis

This TWiki page documents the full chain of the a->mumu analysis within the H->ZZ group.

Introduction

The package exists in CVS under UserCode/MABorgia/HiggsAnalysis/littleH.

Setting up the environment

setenv SCRAM_ARCH slc5_amd64_gcc434 
cmsrel CMSSW_4_X_Y
cd CMSSW_4_X_Y/src 
cmsenv
cvs co -d HiggsAnalysis/littleH UserCode/MABorgia/HiggsAnalysis/littleH
cvs co -d PhysicsTools/NtupleUtils UserCode/Bicocca/PhysicsTools/NtupleUtils
cvs co  HiggsAnalysis/CombinedLimit
scram b -c -j 8
Note: If you are using the tcsh shell instead of bash, you should also execute the command rehash after you have compiled the program.

A0 samples generation

In HiggsAnalysis/littleH, under the test directory, there is a directory called GenSimDirectory which contains all the machinery needed to generate and reconstruct the a0 samples in the mass range of interest.

  • ./generateSimTcl.pl
    produces 250 config files for each mass point (from 5 GeV to 13 GeV) to generate and simulate the a0 signal decaying into two muons. The config files produced are stored in the directories gensimTcl/*GeV/.
  • ./genSubmit.csh
    takes the generation config files produced in the previous step and submits them to the lsf queue 1nw. The output RAW files will be stored in a castor directory. Be careful to check the castor path in the generateSimTcl.pl script and to give the correct rights to the castor directory (besides checking that it exists... or creating it). The log files will be saved in the logtcl/*GeV directories.
  • ./reconstructTcl.pl
    produces 250 config files for each mass point (from 5 GeV to 13 GeV), to reconstruct the RAW samples produced in the previous step. The config files are stored in the directories recoTcl/*GeV/.
  • ./recoSubmit.csh
    takes the reconstruction config files produced in the previous step and submits them to the lsf queue 1nw. The output RECO files will be stored in the same castor directory as the RAW files. The log files will be stored in the recologtcl/*GeV directories.
  • execute.csh
    is called by the scripts for the submission to the lsf queues.
  • Reco_skel_cfg.py
    is the base python file which is called by the reconstructTcl.pl file to create the config files for the reconstruction.
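The per-mass-point directory layout used by these scripts can be sketched as follows. This is an illustrative shell loop, not the actual Perl code; only the mass range (5 to 13 GeV) and the directory names are taken from the scripts above:

```shell
#!/bin/sh
# Illustrative only: print the per-mass-point config directories that
# generateSimTcl.pl and reconstructTcl.pl populate (one per mass point).
for M in $(seq 5 13); do
  echo "gensimTcl/${M}GeV  recoTcl/${M}GeV"
done
```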

How to make ntuples

Once the reconstructed files are ready, they can be ntuplized. Running
  •  ./ntuplesTcl.pl
produces the files needed to ntuplize all the a0 samples. In the test directory there will be
  •  makeSimpleNtple_*_cfg.py
    which are the python files for the ntuplization
  •  makeSimpleNtple_*_cfg.lsf
    which are the lsf files for the submission to the queue.
The submission to the lsf queue is done by
  •  ./ntuplize.csh 
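A minimal sketch of what the submission step amounts to: loop over the .lsf files produced by ntuplesTcl.pl and hand each one to bsub. This is not the actual ntuplize.csh; the queue name 1nw and the dry-run switch are assumptions, so check the real script before relying on it:

```shell
#!/bin/sh
# Illustrative sketch (not the actual ntuplize.csh): submit every
# makeSimpleNtple_*_cfg.lsf produced by ntuplesTcl.pl to LSF.
# With SUBMIT unset, the commands are only printed (dry run).
for lsf in makeSimpleNtple_*_cfg.lsf; do
  [ -e "$lsf" ] || continue           # no matches: the glob stays literal
  if [ -n "$SUBMIT" ]; then
    bsub -q 1nw < "$lsf"              # assumed queue; check the real script
  else
    echo "bsub -q 1nw < $lsf"
  fi
done
```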

From ntuples to plots

In the bin directory there is a looper (MuMuLooper.cc) which loops on the ntuples and extracts the interesting plots and the invariant mass of the best candidates. The way to run it is:
  •  gmake RunMuMu
    to compile it and create the executable RunMuMu
  •  ./RunMuMu <ntuple_file> <output_file> MC/Data 
Pass the two file names without the ".root" extension, and give the option MC or Data according to the ntuples you are running on. Note also that the program takes as input every file whose name starts with the given prefix: if you have different files whose names differ only in their final part, rename them, or you may obtain wrong results.
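The prefix-matching pitfall described above can be demonstrated with a plain shell glob: a prefix that is itself a complete file name also picks up any file whose name merely extends it. The file names below are hypothetical:

```shell
#!/bin/sh
# Demonstrate why similarly named ntuples are dangerous: a prefix glob
# matches both files, so a prefix-based reader would silently pick up both.
tmp=$(mktemp -d)
touch "$tmp/ntuple_signal.root" "$tmp/ntuple_signal_v2.root"
ls "$tmp"/ntuple_signal*        # both files match the prefix
```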

Procedure for the extraction of the upper limit

The procedure for the extraction of the upper limit proceeds in several steps, mainly:

  • Create the workspaces with the pdf of signal and background
  • Create the datacards
  • Create the final workspaces that can be passed directly to the "combine" command. These final workspaces contain not only the pdfs but also the uncertainties
  • Create the grid of distributions of the test statistics for various values of the signal strength
  • Submit the jobs
  • Use the grid of values (the output of the crab jobs) to compute the observed limit (and also the expected ones).

Create the workspaces with the pdf of signal and background

The workspaces can be created with the C++ program WorkSpaceCreator.cpp in the test/Macros directory under CVS. To choose the range, it is sufficient to give the program 0 or 1 as second argument (0 for Region 1, 1 for Region 2). Note that the first argument always has to be 1 (which selects running on data rather than on MC). So finally:

./workSpaceCreator  1 0       

will create workspaces for the region 1 in the directory workspaces_1 and

./workSpaceCreator  1 1     

will create workspaces for the region 2 in the directory workspaces_2. These workspaces will contain ONLY the pdfs of signal and background. The additional outputs of this program are two txt files:

datacard_conf_R1.txt 
datacard_conf_R2.txt    
depending on the range option given. These two files contain the values of the mass, the efficiency and the resolution for the different mass points, and are then used in the next step to create the datacards.

Create the datacards

The datacards are created automatically from the datacard_conf_R1.txt and datacard_conf_R2.txt files produced in the previous step.
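For reference, a datacard for a single-channel shape analysis in the HiggsAnalysis/CombinedLimit format typically looks like the fragment below. All numbers, names and the uncertainty value are placeholders, not values from this analysis:

```
imax 1  number of channels
jmax 1  number of backgrounds
kmax 1  number of nuisance parameters
shapes * * workspace.root w:$PROCESS
bin          bin1
observation  -1
bin          bin1    bin1
process      signal  background
process      0       1
rate         1       1
lumi  lnN    1.04    1.04
```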

Create the final workspaces that can be passed directly to the "combine" command. These final workspaces contain not only the pdfs but also the uncertainties

Create the grid of distributions of the test statistics for various values of the signal strength

Submit the jobs

Use the grid of values (the output of the crab jobs) to compute the observed limit (and also the expected ones).

This last step is performed by the script final_UL.sh.
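The last three steps map onto standard HiggsAnalysis/CombinedLimit usage roughly as follows. The flag names below follow the combine documentation and may differ in the CMSSW_4_X-era version; the file names, mass point and signal-strength values are placeholders. Treat this as a sketch of the workflow, not the exact contents of final_UL.sh:

```shell
# 1) Datacard -> final RooWorkspace (pdfs + uncertainties)
text2workspace.py datacard_m8.txt -o workspace_m8.root

# 2) Build the grid: one HybridNew toy job per signal-strength point
#    (in practice these are the jobs submitted to the batch system/crab)
for r in 0.5 1.0 2.0 4.0; do
  combine -M HybridNew --frequentist workspace_m8.root \
          --singlePoint "$r" -T 500 --saveToys --saveHypoTestResult \
          -n "_grid_r${r}" -s -1
done
hadd grid_m8.root higgsCombine_grid_r*.root

# 3) Read the grid back to get the observed and expected limits
combine -M HybridNew --frequentist workspace_m8.root --grid=grid_m8.root
combine -M HybridNew --frequentist workspace_m8.root --grid=grid_m8.root \
        --expectedFromGrid=0.5    # median expected; use 0.16/0.84 for bands
```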

-- MariaAssuntaBorgia - 08-Jul-2011

Topic revision: r8 - 2011-08-19 - unknown