b Tag & Vertex Validation Software


For official b-tag validators, please refer to the section #Instructions_for_validators.

For users, please refer to the section #Instructions_for_users.

For the summary of all the validation campaigns' results, please check out BtagValidation.

Contacts

  • Adrien Caudron
  • Petra Van Mulders
  • Sebastien Wertz

  • Alexandre Aubin
  • Tomo Umer
  • Aram Avetisyan
  • Pratima Jindal
  • Francisco Yumiceva
  • Jason Keller

IMPORTANT INFORMATION

  • [04/03/2015] In 74X/75X, the jet flavour definition switched to the new hadron-based definition described in the SWGuideBTagMCTools twiki, which is less generator dependent.

Introduction

In order to validate the b-tagging and primary-vertex reconstruction, we have developed a set of scripts that run analyzers over different samples, do the bookkeeping of ROOT files, logs and plots, and publish the results on a web page. The current status of the packages is the following:

Vertexing validation package: "Validation/RecoVertex":

  • Contains an analyzer for checking primary vertex algorithms: AVF, KVF, TKF.
  • Contains an analyzer for checking the tracking parameters.
  • The output is ROOT files with histograms.
b-tagging validation packages: "Validation/RecoB" and "DQMOffline/RecoB":
  • For each tagger, they produce a folder with several distributions, such as the b-tag performance and discriminator distributions.
Old Validation tools package under CVS: "UserCode/Yumiceva/ValidationTools"
  • Script to run analyzers over several samples, produce plots, and do the bookkeeping of results.
  • Script to produce web pages and publish results on the web.
  • Scripts transferred and updated to https://github.com/cms-btv-pog/Validation-Tools
Current validation tool: https://github.com/cms-btv-pog/Validation-Tools
  • Documentation available soon
You can also use the DQM comparison tools to compare two DQM ROOT files.
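For illustration, the following is a minimal PyROOT sketch (not the official DQM comparison tool) that overlays one b-tag discriminator histogram from two DQM ROOT files; the file names and the histogram path are placeholders and need to be adapted:

import ROOT

# placeholders: adapt to your files and to the histogram you want to compare
hist_path = "DQMData/Run 1/Btag/Run summary/CSV_GLOBAL/discr_CSV_GLOBAL"
file_val = ROOT.TFile.Open("DQM_validation.root")   # release to validate
file_ref = ROOT.TFile.Open("DQM_reference.root")    # reference release

h_val = file_val.Get(hist_path)
h_ref = file_ref.Get(hist_path)

canvas = ROOT.TCanvas("c", "DQM comparison")
h_ref.SetLineColor(ROOT.kRed)
h_ref.Draw("hist")         # reference in red
h_val.Draw("hist same")    # validated release overlaid
canvas.SaveAs("discr_comparison.png")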

Instructions for validators

Procedure to run the validation

The validation procedure uses a set of Python scripts to handle the output and publish the plots on a website. The procedure is described below:

0. Keep track of new releases and samples by subscribing to the HyperNews forums "RelVal Samples and Release Testing" (hn/RelVal@cern.ch) and "Software Release Announcements" (hn-cms-relAnnounce@cern.ch).

1. Set up a CMSSW release to validate (see the Workbook).

2. Checkout the validation packages:

For 700pre1 and higher versions:

git cms-addpkg Validation/RecoB
git cms-addpkg DQMOffline/RecoB

For 620pre7 and higher versions:

git cms-addpkg Validation/RecoB CMSSW_7_0_0_pre1
git cms-addpkg DQMOffline/RecoB CMSSW_7_0_0_pre1

For 610 and higher versions:

git cms-addpkg Validation/RecoB CMSSW_6_2_0_pre6
git cms-addpkg DQMOffline/RecoB CMSSW_6_2_0_pre6

For 324 and higher versions:

git cms-addpkg Validation/RecoB CMSSW_6_0_0
git cms-addpkg DQMOffline/RecoB CMSSW_6_0_0

For CMSSW 3.2.4 and higher (using CVS):

cvs co Validation/RecoB
addpkg DQMOffline/RecoB V01-03-22

For CMSSW 6.1.0 and higher, please consider doing:

cvs co DQMOffline/RecoB

Then compile:

scram b -j 8

3. From 600pre11 onward, two procedures are possible: A or B; A is generally faster when the DQM files are available at CERN. For earlier releases, see C.

3. A.1 In the Validation/RecoB/test/ directory, run the RelValConf.csh script. It will create folders for the 5 samples used for validation with a copy of reco_validation_cfg.py in each folder.

The samples used for b-tag validation are the TTbar and QCD_Pt_80_120 samples with Startup conditions, and the Startup TTbar FastSim sample. For DATA, the 2011A /Jet/ sample has been used so far. Change the source files to use the dataset to be validated (RelVal datasets can be found on DAS, see below).

From 500 onward, only MC samples with Startup conditions have to be validated (ideal MC conditions are no longer used):

  • /RelValQCD_Pt_80_120/CMSSW_*_*_*-START*/DQM
  • /RelValTTbar/CMSSW_*_*_*-START*/DQM
  • /RelValTTbar/CMSSW_*_*_*-START*FastSim*/DQM
If it has been produced, it is also good to look at the TTbar sample with PU:
  • /RelValTTbar/CMSSW_*_*_*-PU*START*/DQM
For DATA:
  • /Jet/CMSSW_*_*_*RelVal_jet2011A*/DQM ; in the cfg file, do not forget to set
runOnMC = False
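As a minimal sketch, assuming reco_validation_cfg.py exposes a runOnMC switch and a standard source block (the source type, process name and file list below are illustrative and depend on the release and dataset):

import FWCore.ParameterSet.Config as cms

runOnMC = False   # set to False only for the /Jet/ data sample

process = cms.Process("BTAGRELVAL")              # illustrative process name
process.source = cms.Source("DQMRootSource",     # or PoolSource, depending on the release and input
    fileNames = cms.untracked.vstring(
        # files of the dataset to be validated, as listed on DAS
    )
)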

To find the samples, query DAS (see above).

3. A.2 Do :

cmsenv
source RelValRun.csh

In a few minutes you will have all your DQM ROOT files.

3. B. It is possible to retrieve the DQM ROOT files directly from here.

3. C. For CMSSW 3.2.4, 4XY and 5XY: caloJets were used in the central DQM offline sequence, so the DQM ROOT files could not be taken centrally and should be produced by the user.

For this, use validation_cfg.py with the RECO files as input (to use RelValConf.csh and RelValRun.csh, use the following tag: Validation/RecoB V01-07-10).

4. Once the jobs are finished, move the output files:

source RelValMove.csh relLabel

where relLabel is typically the release number without underscores (e.g. 600 for CMSSW_6_0_0).

5. Repeat the above steps for the sample you wish to compare with (if the ROOT files do not already exist).

6. Once the files for both releases are there, run the final script:

source RelValFinal.csh valRelLabel refRelLabel workDir valDir refDir

where the valRelLabel (valDir) is the label for the release to be validated and refRelLabel (refDir) is the label for the reference release. For example, when comparing CMSSW_5_2_3 to CMSSW_5_2_2, they would typically be 523 (5_2_3) and 522 (5_2_2) respectively.

workDir is the path to your working directory, for example /afs/cern.ch/work/u/username/.

This will run a Python script called "cuy", which produces nice plots and superimposes distributions from previous releases. It can also be used independently of the other scripts:

cuy.py
   cuy
    A very simple way to make plots with ROOT via an XML file.

   usage: /uscms_data/d1/mbonnett/work/CMSSW_2_1_0_pre6/bin/slc4_ia32_gcc345/cuy.py -x <XML configuration file>
   -b, --batch : run script in batch mode.
   -e, --example = EXAMPLE: generate an example xml file.
   -l, --list    = LIST: list of objects in the ROOT file. 
   -p, --prt     = PRT: print canvas in the format specified png, ps, eps, pdf, etc.
   -x, --xml     = XML: xml configuration file.

   Francisco Yumiceva (yumiceva@fnal.gov)
   Fermilab 2008

This script takes as input a configuration file in XML format. For example, to produce plots you can look at these example XML files.

Adapt the XML file to your specific files and releases. Run the cuy.py script on each XML file to make the plots. NOTE that, due to a Python path problem, you should first open an xterm window and then run the script.

For example, you can run as follows:

cuy.py -x ValidationBtag.xml
   .
   .
   .
   enter: ["q",".q" to quit] ["p" or "print" to print all canvas]:

You can move the legends around to see the plot better before entering "p".

7. The result of the RelValFinal.csh script is a directory with the same name as the validated release (e.g. CMSSW_5_2_3). Copy it to:

/afs/cern.ch/cms/btag/www/validation/packages/RecoB/

The script also creates a summary webpage (listed in the webpage.txt file) that can be used to link the validation results from the 5 samples together. It can be edited to describe the changes and observations (if any).

Afterwards, run:

python ValidationTools/Scripts/scripts/make_webpage.py /afs/cern.ch/cms/btag/www/validation/

A cron job runs at 5 minutes past the hour and will automatically publish the plots in this special directory on the website located here.

8. Once the validation results are available for viewing, send a notice to the Release Validation HyperNews stating what samples were used, the results of the validation, and the links to the plots for viewing.

9. Also, be sure to modify the RECO Validation Table to display the result of the validation for bTagging.

For 53X the RECO validation table is here.

For 60X RECO validation table is here.

For 61X RECO validation table is here.

New validation web page: PdmVvaldb (if you use it, an automatic message is sent to the HN).

Validation of Production Datasets

It is also possible to validate production datasets using the above steps, but it is much faster and easier to do this over the grid with CRAB. There are tools available in CVS to do this.

1. Check out the GridValidation tools from CVS by running:

cvs co Validation/RecoB/test/GridValidation

There should be three files to use: validation_FirstStepOnGrid.py, validation_Harvest.py, and crab.cfg.

2. Modify the crab.cfg file to use the required dataset and any other parameters that need to be changed. Also modify validation_FirstStepOnGrid.py to set the proper options and Global Tag (GT); see also validation_cfg.py.
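For example, the Global Tag could be set in validation_FirstStepOnGrid.py as sketched below (a hedged fragment: the autoCond key is only an illustrative choice, and in the real cfg the process object already exists):

import FWCore.ParameterSet.Config as cms
from Configuration.AlCa.GlobalTag import GlobalTag

process = cms.Process("VALIDATION")   # stand-in; already defined in the real cfg
process.load("Configuration.StandardSequences.FrontierConditions_GlobalTag_cff")
process.GlobalTag = GlobalTag(process.GlobalTag, 'auto:run2_mc', '')   # illustrative GT choice
# in older releases the equivalent line is typically:
# process.GlobalTag.globaltag = 'START53_V7A::All'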

3. Submit jobs with CRAB and retrieve output as needed.

4. Once the jobs are finished, modify the validation_Harvest.py file to use the output EDM files from the CRAB step. Run the harvesting with

cmsRun validation_Harvest.py
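The only change usually needed in validation_Harvest.py is the list of input files; a hedged sketch is given below (the file names are placeholders for the retrieved CRAB outputs, and the source type depends on the release, as explained in the user instructions further down):

import FWCore.ParameterSet.Config as cms

process = cms.Process("HARVESTING")           # already defined in the real validation_Harvest.py
process.source = cms.Source("PoolSource",     # a DQMRootSource from 720 onward
    fileNames = cms.untracked.vstring(
        'file:validation_FirstStep_1.root',   # placeholder names for the CRAB outputs
        'file:validation_FirstStep_2.root',
    )
)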

5. This will produce a DQM file similar to the release validation step. Follow steps 5 onward from the "Procedure to run the validation" guide.

With Old CMSSW versions

The details for running with old CMSSW versions are given below.

1. Setup a CMSSW release to validate (see Workbook ).

2. Checkout the validation packages:

CMSSW 1.8.X and higher:

cvs co -r  Feb-6-18X Validation/RecoB
addpkg Validation/RecoVertex (if you want to validate vertices)
cvs co -r V00-00-06 -d ValidationTools UserCode/Yumiceva/ValidationTools

CMSSW 1.7.X:

cvs co -r Validation-170 Validation/RecoB
cvs co -r V00-02-14 Validation/RecoVertex (if you want to validate vertices)
cvs co -r V00-00-06 -d ValidationTools UserCode/Yumiceva/ValidationTools

CMSSW 1.6.7:

cvs co -r V00-03-08 RecoBTag/Analysis
cvs co -r V00-00-07 Validation/RecoB
cvs co -r V00-02-14  Validation/RecoVertex (if you want to validate vertices)
cvs co -r V00-00-06 -d ValidationTools UserCode/Yumiceva/ValidationTools

Then compile:

scramv1 b

Update the shell's command hash so the scripts can be used from any directory:

rehash

3. Prepare cfg files with datasets, e.g. QCD_pt50_80.cfg:

replace PoolSource.fileNames = {
'/store/RelVal/2007/5/11/RelVal/RelVal_Marcelo_140QCD_pt50_80-1178901791/0000/32022566-5300-DC11-A50B-00304885AEDC.root',
'/store/RelVal/2007/5/11/RelVal/RelVal_Marcelo_140QCD_pt50_80-1178901791/0000/665CEE36-FC01-DC11-94B9-00304855D55A.root',
'/store/RelVal/2007/5/11/RelVal/RelVal_Marcelo_140QCD_pt50_80-1178901791/0000/F4DB27B5-2900-DC11-81D1-00304885AA64.root'
}

NOTE: the scripts require that the filenames of these configuration files are the same across releases so that the plots can be compared between them. The current convention is: BJets_Pt_50_120.cfg CJets_Pt_50_120.cfg QCD_pt50_80.cfg QCD_pt80_120.cfg TTbar.cfg

4. Run the run_validation.py script to process the datasets and make plots. These are the options that the script can take:

run_validation.py

   usage: ./run_validation.py
   -w, --webpath = WEB: path to webpage folder
   -c, --cfg     = CFG: configuration file
   -1, --sample1 = SAMPLE1: cfg sample
   -2, --sample2 = SAMPLE2: cfg sample
   -3, --sample3 = SAMPLE3: cfg sample
   -4, --sample4 = SAMPLE4: cfg sample
   -5, --sample5 = SAMPLE5: cfg sample
   -6, --sample6 = SAMPLE6: cfg sample
   -r, --reference = REFERENCE: CMSSW version of reference plots, default is 1.3.1
   -n, --nocompare : do not compare histograms only produce plots. It can be used to create reference plots.
   -p, --plots : just produce plots.
   -l, --logaxis : produce plots with a logarithm Y-axis scale.

For example, you can run as follows:

Validation/RecoVertex/test > run_validation.py -w /afs/cern.ch/cms/Physics/btau/management/validation/ -c ValidateRecoVertex_Tracking.cfg -1 QCD_pt50_80.cfg -2 QCD_pt80_120.cfg -3 Higgs-ZZ-4Mu.cfg -4 Higgs-ZZ-4E.cfg
Using reference plots from version CMSSW_1_3_1
Running Release Validation on CMSSW_1_5_0_pre2
 Processing ValidateRecoVertex_Tracking.cfg with dataset QCD_pt50_80.cfg
 now producing plots
 root and log file moved to /afs/cern.ch/cms/Physics/btau/management/validation//packages/RecoVertex_Tracking/CMSSW_1_5_0_pre2/QCD_pt50_80
 Processing ValidateRecoVertex_Tracking.cfg with dataset QCD_pt80_120.cfg
 now producing plots
 root and log file moved to /afs/cern.ch/cms/Physics/btau/management/validation//packages/RecoVertex_Tracking/CMSSW_1_5_0_pre2/QCD_pt80_120
 Processing ValidateRecoVertex_Tracking.cfg with dataset Higgs-ZZ-4Mu.cfg
 now producing plots
 root and log file moved to /afs/cern.ch/cms/Physics/btau/management/validation//packages/RecoVertex_Tracking/CMSSW_1_5_0_pre2/Higgs-ZZ-4Mu
 Processing ValidateRecoVertex_Tracking.cfg with dataset Higgs-ZZ-4E.cfg
 now producing plots
 root and log file moved to /afs/cern.ch/cms/Physics/btau/management/validation//packages/RecoVertex_Tracking/CMSSW_1_5_0_pre2/Higgs-ZZ-4E

5. In the case of vertex validation, there are currently two configuration files available that can be run:

  • "ValidateRecoVertex_PrimaryVertex.cfg" used to run the primary vertex validation.
  • "ValidateRecoVertex_Tracking.cfg" used to run the tracking validation.
6. Publish the plots on a web page by running the make_webpage.py script, where the first argument is the location of the web page:

ValidationTools > make_webpage.py /afs/cern.ch/cms/Physics/btau/management/validation/  
 working path: /afs/cern.ch/cms/Physics/btau/management/validation/
 log file: /afs/cern.ch/cms/Physics/btau/management/validation//./make_webpage.log
 copying base html files to /afs/cern.ch/cms/Physics/btau/management/validation/

Validation plots

Instructions for users

Config file to run validation packages

From 720 onward, two examples are provided to run the b-tag DQM sequence on RECO samples:

  • Validation/RecoB/test/validation_fromRECO_cfg.py
  • Validation/RecoB/test/validation_customJet_cfg.py
The first is very basic: it takes the main jet collection (currently ak4PFJetsCHS) from the RECO content and the b-tagging information available in the RECO file to produce the DQM plots; you still have to specify in tagConfig the algorithms or tag infos you are interested in (see the example in the file).

The second, which is used mainly in the release validation procedure, shows how to customize the jet collection on which you want to run the b-tagging sequence and the validation code. A few cases are implemented by default: ak4PFJets and ak4PFJetsCHS, with and without JEC. Loose jet ID is also applied by default.

Some options available in one or both cfg files (see the sketch after this list):

  • choose between MC and DATA with runOnMC
  • choose whether to use a trigger with useTrigger
  • choose your GT with tag
  • change the trigger path with process.bTagHLT.HLTPaths = ["HLT_PFJet80_v*"] (replaced since 74x/75x by triggerPath)
  • apply JEC or not, and which JEC, with applyJEC and corrLabel (easily configurable since 74x/75x)
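A hedged sketch of how these switches might look near the top of validation_fromRECO_cfg.py; the values (in particular the Global Tag and the JEC label) are only illustrative and the exact variable names can differ between releases:

runOnMC    = True                      # False when running on data
useTrigger = False                     # True to require an HLT path
tag        = 'auto:run2_mc'            # illustrative Global Tag
applyJEC   = True
corrLabel  = 'ak4PFCHSL1FastL2L3'      # illustrative JEC correction label
# before 74x/75x the trigger path was set directly on the filter:
# process.bTagHLT.HLTPaths = ["HLT_PFJet80_v*"]
# from 74x/75x it is configured via the triggerPath option instead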

For DATA, it is possible to produce one DQM ROOT file per run by removing the following lines:

process.dqmSaver.saveByRun = cms.untracked.int32(-1)
process.dqmSaver.saveAtJobEnd = cms.untracked.bool(True)
process.dqmSaver.forceRunNumber = cms.untracked.int32(1)

To run over the GRID, examples are provided here: Validation/RecoB/test/GridValidation

Please note that from 720 the way to do the harvesting has evolved (see validation_Harvest.py):

  • in particular, there is no longer any need to go through the EDM converter; the source is now a DQMRootSource
If you want to read the DQMIO outputs produced in the RelVal production, you can use the script Validation/RecoB/test/Harvest_validation_cfg.py:
  • you just have to specify the input files (see the sketch below)
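A hedged sketch of the input section of Harvest_validation_cfg.py, assuming the DQMRootSource mentioned above (the file name is a placeholder):

import FWCore.ParameterSet.Config as cms

process = cms.Process("HARVESTING")        # already defined in the real cfg
process.source = cms.Source("DQMRootSource",
    fileNames = cms.untracked.vstring(
        'file:step_DQMIO.root'             # placeholder for the DQMIO RelVal output file
    )
)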
For more customisation of the sequence, have a look at the description of the modules and the code below.

DQM code description

DATA analyzer

The DATA analyzer "BTagPerformanceAnalyzerOnData" is defined as a plugin in DQMOffline/RecoB/plugins/BTagPerformanceAnalyzerOnData.cc (.h).

An EDAnalyzer, bTagAnalysis, is defined in DQMOffline/RecoB/python/bTagAnalysisData_cfi.py.

Options of the analyzer:

  • bTagCommonBlock : see bTagCommon_cff.py

  • finalizeOnly = cms.bool(False) : if True, the analyzer will not be run. Default is False. Use True only when running on the GRID or within a global sequence like the DQMOffline sequence.
  • finalizePlots = cms.bool(True) : if True, efficiency and performance curves will be produced. Default is True. Always set it to True if finalizeOnly == True. For a global sequence like the DQMOffline sequence, or for the GRID step, it should be set to False.
  • mcPlots = cms.uint32(0) : unsigned-integer flag that chooses which histograms will be created. Default is 0. If one tries to change it to another value it will be reset to 0, except if finalizeOnly is True.
  • tagConfig : list of tag infos to use. Please look at bTagAnalysisData_cfi.py to see how to use it. Important: label refers to the InputTag, so please ensure that the name is correct and that the collection is created. bTagAnalysisData_cfi.py contains the standard entries for the following taggers: TCHE, TCHP, SSVHE, SSVHP, CSV, CSVMVA, JP, JBP, SMT, SMTIP3d, GhTrk, and for the IP and SSV tag infos (a hedged example of adding an entry is sketched below).
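As an illustration only, adding one more tagger entry to tagConfig could look like the sketch below; the PSet content mimics the structure described above, but the exact parameter set and the collection name should be taken from bTagAnalysisData_cfi.py and from your input file:

import FWCore.ParameterSet.Config as cms
from DQMOffline.RecoB.bTagAnalysisData_cfi import bTagAnalysis

# the parameters below are assumptions that mimic the existing entries;
# 'label' must point to a jet-tag collection actually present in the input
bTagAnalysis.tagConfig.append(
    cms.PSet(
        label  = cms.InputTag("combinedSecondaryVertexBJetTags"),
        folder = cms.string("CSV_extra")   # illustrative folder name for the DQM plots
    )
)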

MC analyzer

The MC analyzer is very similar to the DATA analyzer. Only the analyze method is different: it looks at generator information, in particular the jet flavour.

The analyzer "BTagPerformanceAnalyzerMC" is defined as a plugin in Validation/RecoB/plugins/BTagPerformanceAnalyzerMC.cc (.h).

An EDAnalyzer, bTagValidation, is defined in Validation/RecoB/python/bTagAnalysis_cfi.py.

Main options (a configuration sketch follows this list):

  • bTagCommonBlock: see the DATA analyzer options
  • applyPtHatWeight = cms.bool(False) : flag to use the event weight or not. Set it to True if you want to use this reweighting.
  • genJetsMatched: valid option from 710pre3; collection of pairs of reco and gen jets, used to disentangle PU jets from "real" jets
  • doPUid: valid option from 710pre3; if True, look for PU jets; if False, all jets are considered as "real" jets
  • flavPlots : a string which sets the mcPlots flag to choose which (flavour) histograms will be produced.
    • if "dusg" is found in flavPlots, histograms for d, u, s, g and dus jets will be created; otherwise only histograms for b, c and dusg jets will be created
    • if "noall" is found, the histograms for all jets will not be created
    • examples: "dusg", "noall", "dusgnoall"; the default is "allbcl"

  • finalizeOnly, finalizePlots : see the DATA analyzer options
  • differentialPlots = cms.bool(False) : flag to produce the mistag rate versus Pt or Eta at a constant b-tag efficiency. All flavour histograms will be created even if "dusg" is not specified in flavPlots, but the "noall" option is available.
  • tagConfig : see the DATA analyzer options. bTagAnalysis_cfi.py allows you to look at TCHE, TCHP, SSVHE, SSVHP, CSV, CSVMVA, JP, JBP, SMT, SMTIP3d, GhTrk, SETIP3d, SETPt.
    • from 62x: SET and SMT replace all other soft lepton taggers
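A hedged sketch of adjusting these options on the bTagValidation module named above (the values, and in particular the genJetsMatched collection name, are only examples):

import FWCore.ParameterSet.Config as cms
from Validation.RecoB.bTagAnalysis_cfi import bTagValidation

bTagValidation.flavPlots = cms.string("dusg")       # also fill d, u, s, g and dus histograms
bTagValidation.applyPtHatWeight = cms.bool(False)   # no pt-hat reweighting
bTagValidation.doPUid = cms.bool(True)              # separate PU jets from "real" jets (from 710pre3)
bTagValidation.genJetsMatched = cms.InputTag("patJetGenJetMatch")   # illustrative reco-gen match collection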

Harvester modules

Modules to produce the efficiency and performance plots. Based on DQMOffline/RecoB/plugins/BTagPerformanceHarvester.cc(.h).

Data module is defined in: DQMOffline/RecoB/python/bTagAnalysisData_cfi.py

MC module is defined in: Validation/RecoB/python/bTagAnalysis_cfi.py

Main options:

  • flavPlots: see the MC analyzer; it has to match the analyzer setting; do not use it for Data
  • differentialPlots: produce the mistag rate versus Pt at fixed b-tag efficiency

bTagCommonBlock options

Defined in DQMOffline/RecoB/python/bTagCommon_cff.py .

These options are used by both analyzers (see the sketch after this list):

  • ptRecJetMin and ptRecJetMax : minimum and maximum Pt of the selected jets. Default is 30.0 to 40000.0.
  • etaMin and etaMax : minimum and maximum Eta of the selected jets. Default is 0.0 to 2.4.
  • ratioMin and ratioMax : minimum and maximum ratio between the lepton momentum and the jet energy if a lepton is in the jet cone. Default is -inf to inf. It was used for CaloJets with values between -1.0 and 0.8.
  • ptRanges : create histograms for the specified Pt bins. Default is 50-80 and 80-120. Can be disabled by defining ptRanges = cms.vdouble(0.0).
  • etaRanges : create histograms for the specified Eta bins. Default is 0-1.4 and 1.4-2.4. Can be disabled by defining etaRanges = cms.vdouble(0.0).
  • doJetID : check that the jet passes the loose jet ID; only for PF jets
  • doJEC : whether to apply JEC on the fly
  • JECsource : JEC record to be used; no need to add 'Residual' for data, this is handled by the analyzer
  • tagConfig : list of tag infos to use. Please look at bTagAnalysisData_cfi.py to see how to use it. Important: label refers to the InputTag, so please ensure that the name is correct and that the collection is created. bTagAnalysisData_cfi.py contains the standard entries for the following taggers: TCHE, TCHP, SSVHE, SSVHP, CSV, CSVv2+IVF, JP, JBP, SMT, SMTIP3d, GhTrk, and for the IP and SSV tag infos.
  • jetMCSrc and caloJetMCSrc: jet-flavour map sources; the first is the default and makes use of the latest flavour tool, the second is used only if useOldFlavourTool is True and should be used only for CaloJets or for tests (parameters relevant only for MC)
  • useOldFlavourTool: should be False unless you want to use CaloJets (parameter relevant only for MC)
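Since all of these options live in the common parameter set, they can be overridden in one place; a minimal sketch, assuming the bTagCommonBlock PSet from bTagCommon_cff.py (parameter types are inferred from the defaults quoted above):

import FWCore.ParameterSet.Config as cms
from DQMOffline.RecoB.bTagCommon_cff import bTagCommonBlock

bTagCommonBlock.ptRecJetMin = cms.double(30.0)      # default jet pt window
bTagCommonBlock.ptRecJetMax = cms.double(40000.0)
bTagCommonBlock.etaMax      = cms.double(2.4)
bTagCommonBlock.ptRanges    = cms.vdouble(0.0)      # disable the extra pt-binned histograms
bTagCommonBlock.etaRanges   = cms.vdouble(0.0)      # disable the extra eta-binned histograms
bTagCommonBlock.doJetID     = cms.bool(True)        # PF jets only
bTagCommonBlock.doJEC       = cms.bool(False)       # do not re-apply JEC on the fly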

Available histograms

Tagger histograms

Defined in DQMOffline/RecoB/src(interface)/JetTagPlotter.cc(.h):

  • jetPt : Pt of the selected jets
  • jetEta : Eta of the selected jets
  • jetPhi : Phi of the selected jets
  • jetMomentum : Momentum of the selected jets
  • discr : Discriminant distribution
  • jetMultiplicity : Number of selected jets per event
  • jetFlavour : Flavour of the parton matched with the jet *
Defined in DQMOffline/RecoB/src(interface)/EffPurFromHistos.cc(.h):
  • totalEntries : Total number of selected jets
  • effVsDiscrCut : Efficiency corresponding to the discriminant cut, computed for each bin
  • FlavEffVsBEff : Mistag efficiency versus b-tag efficiency *
* MC only

Flavour histograms

Defined in DQMOffline/RecoB/interface/FlavourHistograms(2D).h : all histograms filled by the analyzers are FlavourHistograms; according to the mcPlots value, the histograms will be created and filled with the correct suffix and for the corresponding jets. Available suffixes:

  • ALL : for all selected jets
  • B, C, D, U, S, G : for b-, c-, d-, u-, s-, g-jets (flavour 5, 4, 1, 2, 3, 21)
  • DUS : for d-, u- and s-jets
  • DUSG : for d-, u-, s- and g-jets
  • NI : for non-identified jets (flavour 0)
  • PU : for PU-jets, ONLY from 710pre3 (usable in 70x and 62x)

Impact Parameter tag infos histograms

Defined in DQMOffline/RecoB/interface(src)/TrackIPTagPlotter.h(.cc) :

  • ip : IP value
  • ipe : IP error
  • ips : IP significance
  • decLen : decay length
  • jetDist : track distance to jet axis
  • tkNChiSqr : normalized chi square
  • tkPt : track Pt
  • tkNHits : number of hits
  • tkNPixelHits : number of pixel hits
  • selTrksNbr : number of selected tracks
  • prob : track probability
  • probIPneg : track probability for tracks with IP < 0
  • probIPpos : track probability for tracks with IP >= 0
  • ghostTrackWeight : ghost track weight
  • ghostTrackDist : ghost track distance value
  • ghostTrackSign : ghost track distance significance
  • trackQual : track quality
  • selectedTrackQual : selected track quality
  • trackMultVsJetPt : track multiplicity versus jet Pt
  • selectedTrackMultVsJetPt : selected track multiplicity versus jet Pt
Most of these histograms are available for the 2D and 3D impact parameters. Some of them are also available for the 1st, 2nd, 3rd and 4th tracks sorted by IPS (or by track probability for the track probability histograms).

The selected tracks should have a distance to the jet axis < 0.07 cm and a decay length < 5 cm. These values are defined in DQMOffline/RecoB/python/bTagTrackIPAnalysis_cff.py.

Secondary Vertices tag infos histograms

Variables are taken from reco::btau::TaggingVariableName. Plotted variables are defined in DQMOffline/RecoB/python/bTagCombinedSVVariables_cff.py

Histograms filled only if a vertex is reconstructed (CAT1) :

  • flightDistance2dVal, flightDistance3dVal : flight distance value (distance between the PV and the SV)
  • flightDistance2dSig, flightDistance3dSig : flight distance significance
  • jetNSecondaryVertices : number of SVs per jet
Histograms also filled if a pseudo-vertex could be found (CAT1 & CAT2) :
  • vertexMass : SV mass
  • vertexNTracks : number of tracks at SV
  • vertexJetDeltaR : Delta R between the SV and the jet axis
  • vertexEnergyRatio : Energy at SV / Jet energy
Histograms also filled if no vertex is found (CAT1 & CAT2 & CAT3) :
  • vertexCategory : categories 1, 2, and 3 (RECO, PSEUDO, NO SV)
  • trackSip2dVal, trackSip3dVal : track IP value
  • trackSip2dSig, trackSip3dSig : track IP significance
  • trackSip2dSigAboveCharm, trackSip3dSigAboveCharm : IP significance of first track lifting SV mass above charm
  • trackDeltaR : Delta R between the track and the jet axis
  • trackEtaRel : track eta relative to the jet axis
  • trackDecayLenVal : track decay length
  • trackSumJetDeltaR : Delta R between track 4-vector sum and jet axis
  • trackJetDist : track distance to jet axis
  • trackSumJetEtRatio : track sum Et / jet energy
  • trackPtRel : track Pt relative to jet axis
  • trackPtRatio : track Pt relative to jet axis, normalized to its energy
  • trackMomentum : track momentum
  • trackPPar : track parallel momentum along the jet axis
  • trackPParRatio : track parallel momentum along the jet axis, normalized to its energy

Current Validation Tasks

DATA validation

standard release validation for MC

  • 62x :
    • 620pre5 vs 620pre4
  • 61x :
    • 612 vs 611

external release validation for MC

upgrade release validation for MC

Known differences / checks needed

Improvement of the validation procedure/code

  • Understand the crash of the Data validation sequence when doing only the first step.
  • At the end of a cycle of pre-releases : add the comparison with the previous release. Example : 620 versus 610 when the 620preX cycle finishes.
  • Other ? ...

Review Status

Editor/Review and Date: Comments
JyothsnaK - 14-Oct-2010: Made the instructions for old CMSSW hidden by default
JasonKeller - 10 May 2010: Updated the twiki and added a section for running on the grid
JasonKeller - 14 Aug 2009: Modernized the twiki for the 3.X.Y releases
JasonKeller - 24 Jun 2009: Changed links to the new webserver
tomalini: Contributor
FranciscoYumiceva - 03 Oct 2007: Last content editor
Responsible: Adrien Caudron
Last reviewed by: