Particle Flow Validation How-To

Overview

  • This page explains how to install and run the particle flow validation package.
  • The following instructions are based on the METBenchmarkGeneric benchmark; they serve as an example for the other benchmarks.

Installation

The validation packages are:

  • RecoParticleFlow/Benchmark
  • Validation/RecoParticleFlow

3_1_X

Please refer to PFlowDevelopers#LatestRecipe.

2_2_5

Please refer to PFlowDevelopers#CMSSW_2_2_5.

Producing the benchmark plots

Warning, important All commands must be run from the benchmark directory, e.g. Validation/RecoParticleFlow/Benchmarks/METBenchmarkGeneric/

Benchmark histograms

Go to the benchmark directory. For example:

     cd $CMSSW_BASE/src/Validation/RecoParticleFlow/Benchmarks/METBenchmarkGeneric

The benchmark CMSSW process runs on AODSIM input files and produces benchmark histograms, stored in benchmark.root.

Warning, important Do not forget to edit benchmark_cfg.py to point to your input file.
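
As an illustration, the input source definition in benchmark_cfg.py typically looks like the sketch below (a minimal sketch, assuming the usual PoolSource layout; the process object is defined earlier in the file, and the file name is a placeholder for your own AODSIM file):

     import FWCore.ParameterSet.Config as cms

     # Point the source to your own AODSIM input file(s).
     # 'file:...' is a placeholder; list as many files as needed.
     process.source = cms.Source("PoolSource",
         fileNames = cms.untracked.vstring(
             'file:/path/to/your/AODSIM_file.root'
         )
     )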

Run:

     cmsRun benchmark_cfg.py

The benchmark histograms are now available in benchmark.root.
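
To quickly check that the file was filled, a minimal PyROOT sketch such as the following can be used (not part of the package):

     import ROOT

     # Open the benchmark output and list its contents:
     # the directories and histograms booked by the benchmark.
     f = ROOT.TFile.Open("benchmark.root")
     f.ls()
     f.Close()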

Benchmark plots

Info The benchmark histograms are just data structures and are not meant to be used as plots directly. This step takes a benchmark.root file as input and produces the plots. Just run:

     root plot.C
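
For reference, the sketch below is a hypothetical PyROOT equivalent of what such a plotting macro does: it reads one benchmark histogram and saves it as an image. The histogram name delta_et is a placeholder; the actual names depend on the benchmark (use f.ls() as shown above to find them).

     import ROOT

     ROOT.gROOT.SetBatch(True)             # no graphics window needed
     f = ROOT.TFile.Open("benchmark.root")
     h = f.Get("delta_et")                 # placeholder histogram name
     c = ROOT.TCanvas("c", "c", 600, 600)
     h.Draw()
     c.SaveAs("delta_et.png")              # save the plot as an image
     f.Close()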


Producing the benchmark web page


Web page generation

First produce the benchmark plots. Then:

     ../Tools/indexGen.py -g ../../python/source_diJets_cfi.py -s ../Tools/aod_PYTHIA_cfg.py

Warning, important Double-check that the configuration files you pass as input are the correct ones for the sample being benchmarked.

The output is a directory:

     cd $CMSSW_BASE/src/Validation/RecoParticleFlow/Benchmarks/METBenchmarkGeneric/METBenchmarkGeneric
     ls

This directory contains:
  • an HTML document summarizing the benchmark results
  • the plots
  • the files (python configuration and root macro) used to produce the benchmark

Caption

The list of plots to be used and the caption for each plot are set in the text file captions.txt.
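
The exact syntax of captions.txt is defined by the package; purely as a hypothetical illustration (the plot names and captions below are made up), entries might pair a plot name with its caption:

     # hypothetical captions.txt entries: plot name, then caption
     delta_et    Difference between reconstructed and true MET
     delta_phi   Azimuthal angle difference between reconstructed and true MET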

Web page submission

If not already done, first generate the benchmark web page. It can then be submitted to the validation website.

Warning, important The first time you want to submit a benchmark, ask Colin to grant you the necessary permissions.

Just run the submission command from your benchmark directory:

     ../Tools/submit.py -e USERTEST

Producing comparison web pages

Tip, idea For this step, no input files are needed in your local area: they are retrieved from the web site.

A comparison web page compares two benchmark web pages already submitted to the web site. More precisely, it compares the benchmark.root files corresponding to the two benchmark web pages.

  • List the available benchmark web pages:
   ../Tools/listBenchmarks.py "*3_1_0_pre7*/METBenchmark*" -a -u 

  • Make a comparison, without submitting it:
   ../Tools/indexGenCompare.py CMSSW_3_1_0_pre7/METBenchmarkGeneric_TEST2 CMSSW_3_1_0_pre7/METBenchmarkGeneric_TEST

  • Re-do the comparison, this time submitting it to the web site.

Caption

As for the benchmark web pages, the list of plots to be used and the caption for each plot are set in the text file captions.txt.

Responsible: ColinBernet - 09 Feb 2009
