*THIS PAGE IS UNDER CONSTRUCTION*

Chapter 6: Analysis of the collision data

Recipes to get started

For the choice of software release, the user should always follow the instructions in WorkBookWhichRelease.

The PVT group recommends several recipes for cleaning up the event sample. The following is a collation of the information presented here that is relevant for 38x analyses.

The following sample cleanups should be applied in most analyses; change them only if you understand the implications.

  • Beam background removal
    process.noscraping = cms.EDFilter("FilterOutScraping",
                                      applyfilter = cms.untracked.bool(True),
                                      debugOn = cms.untracked.bool(True),   # print the per-event decision
                                      numtrack = cms.untracked.uint32(10),  # only test events with at least 10 tracks
                                      thresh = cms.untracked.double(0.25)   # require >= 25% high-purity tracks
                                      )
  • Primary vertex requirement
    process.primaryVertexFilter = cms.EDFilter("GoodVertexFilter",
                                               vertexCollection = cms.InputTag('offlinePrimaryVertices'),
                                               minimumNDOF = cms.uint32(4),  # vertex fit quality
                                               maxAbsZ = cms.double(15),     # |z| < 15 cm
                                               maxd0 = cms.double(2)         # transverse distance < 2 cm
                                               )
  • HBHE event-level noise filtering
    process.load('CommonTools/RecoAlgos/HBHENoiseFilter_cfi')
    
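Note that loading the HBHENoiseFilter configuration only defines the module: each of the three filters above must still be scheduled in a path to take effect. A minimal sketch, assuming the module names defined above (the path and analyzer names here are illustrative placeholders, not part of the recipe):

```python
# Sketch: run the three cleanup filters in front of your own analysis
# module. 'myAnalyzer' is a hypothetical placeholder for your module.
process.cleanupSequence = cms.Sequence(
    process.noscraping *
    process.primaryVertexFilter *
    process.HBHENoiseFilter        # defined by HBHENoiseFilter_cfi
)
process.p = cms.Path(process.cleanupSequence * process.myAnalyzer)
```

Events failing any filter in the sequence are rejected before the analyzer runs.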

Usage of computing resources

The recommended policy for processing collision data with CRAB is:

  • Use the DCSONLY good run lists (in the JSON format) in CRAB as described here.
    • The DCSONLY good run lists are available here.
  • From there, publish the dataset if you intend to use grid resources to access it later. Instructions for CRAB publication can be found here.
  • The final good run lists should be applied at the analysis level. The latest good run lists are available here.
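As a sketch of the last step, a JSON good run list can be applied to a CMSSW job with the standard LumiList utility; the file name below is a placeholder for whichever certification JSON you download:

```python
# Restrict the input source to the certified luminosity sections listed
# in a JSON good run list (the file name is a placeholder).
import FWCore.PythonUtilities.LumiList as LumiList

myLumis = LumiList.LumiList(filename = 'goodrunlist.json')
process.source.lumisToProcess = myLumis.getVLuminosityBlockRange()
```

This restricts processing to the certified lumi sections without requiring a separate filter module.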

Trigger Selection

It is often very useful to apply your trigger selection directly in your skim or PAT-tuple creation step. This reduces the computing load and produces a smaller output. As an illustration, here is how to select HLT_Mu9:

from HLTrigger.HLTfilters.hltHighLevel_cfi import *
process.triggerSelection = hltHighLevel.clone(TriggerResultsTag = "TriggerResults::HLT", HLTPaths = ["HLT_Mu9"])

...


process.anaseq = cms.Sequence(
    process.triggerSelection*
    process.myOtherStuff
)
where myOtherStuff is whatever other modules you want to run.
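The hltHighLevel filter also accepts wildcards in HLTPaths and can be told not to abort when a listed path is absent from the trigger menu of a given run; for example (assuming the same import as above):

```python
# Accept the event if any HLT_Mu* path fired; skip, rather than throw,
# when a listed path is missing from the menu.
process.triggerSelection = hltHighLevel.clone(
    TriggerResultsTag = "TriggerResults::HLT",
    HLTPaths = ["HLT_Mu*"],  # wildcards are expanded against the menu
    throw = False            # do not abort on missing paths
)
```

This is convenient for data spanning several trigger menus, where a specific path name may not exist in every run.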

More information on trigger access in analysis can be found at WorkBookHLTTutorial and WorkBookPATExampleTrigger.

Analysis of the processed data

There are several options for the user to analyze the collision data. An example in FWLite to help get you started is available here.
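For orientation, an FWLite event loop in Python looks roughly like the following. This is a sketch that assumes a CMSSW environment; the input file name and the muon collection label are illustrative, not prescribed by this page:

```python
# Hypothetical FWLite loop: open a file, fetch the reco::Muon collection,
# and print each muon's pT. File name and label are placeholders.
from DataFormats.FWLite import Events, Handle

events = Events('collisions.root')
muonHandle = Handle('std::vector<reco::Muon>')

for event in events:
    event.getByLabel('muons', muonHandle)
    for mu in muonHandle.product():
        print(mu.pt())
```

The same pattern extends to any other reco collection by changing the handle type and label.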

-- SalvatoreRoccoRappoccio - 28-Sep-2010

Topic revision: r3 - 2010-09-30 - SalvatoreRRappoccio