-- OwenLong - 14-Dec-2011

Disclaimer: I am not a trigger or CMSSW expert, though I'm learning a lot as I go. Anything that you see in this web page might be wrong and/or out of date at the time you read it. Use at your own risk. (April 25, 2014)


Useful python tricks

How to use a text file containing a list as an input to a cfg file

Here's a python snippet, for use in a cfg file, that reads a text file containing a list of input files and uses it to configure the PoolSource.

import FWCore.ParameterSet.Config as cms
from FWCore.ParameterSet.VarParsing import VarParsing

options = VarParsing ('analysis')
options.register ('fileListFile',
                  'foo.txt',
                  VarParsing.multiplicity.singleton,
                  VarParsing.varType.string,
                  "File containing the list of input files")
options.parseArguments()

myfilelist = cms.untracked.vstring()
flist = open( options.fileListFile, 'r' )
for line in flist:
        # strip the trailing newline, otherwise it ends up inside the file name
        str1 = "file:%s" % line.strip()
        myfilelist.extend( [ str1 ] )
flist.close()

process.source = cms.Source ("PoolSource",
          fileNames = myfilelist,
)
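
With the VarParsing setup above, the list-file name (and the standard 'analysis' options such as maxEvents) can be overridden on the cmsRun command line. For example, with hypothetical file names my_cfg.py and mylist.txt:

cmsRun my_cfg.py fileListFile=mylist.txt maxEvents=100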

Notes on trigger development work

I'm keeping my notes in this space on various procedures for doing work related to trigger development.


Set up and run a HLT trigger menu

This section has instructions on how to set up and run a custom HLT trigger menu. You will use the ConfDB GUI to create a new trigger menu. You will then use hltGetConfiguration to generate a python cff file and use that to run the HLT code using your menu. Most of what you see here was taken from the SWGuideGlobalHLT instructions and the ConfDB GUI documentation.

Set up your working area

Choose a release in which to work. I have tested this in CMSSW_7_1_0_pre6 using SL6 on lxplus6 on April 25, 2014. Log in to lxplus6, create your working area and check out HLTrigger/Configuration for the release.

setenv SCRAM_ARCH slc6_amd64_gcc481
cmsrel CMSSW_7_1_0_pre6
cd CMSSW_7_1_0_pre6/src
cmsenv
git cms-addpkg HLTrigger/Configuration

Go to the src/HLTrigger/Configuration/python directory and inspect the file HLT_GRun_cff.py. The first line of the file is a comment and will look something like this.

# /dev/CMSSW_7_1_0/GRun/V23 (CMSSW_7_1_0_pre5_HLT1)

This tells you exactly where to find the GRun menu associated with the release in the ConfDB.
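
A quick way to read off that menu name without opening the file, run from the src/HLTrigger/Configuration/python directory:

head -1 HLT_GRun_cff.py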

Use the ConfDB GUI to create a new trigger menu

If you want to make your own HLT menu, you need to do this step. If someone else has already done it for you, you can skip it. Before starting, have a look at the ConfDB GUI documentation. You will need to have Java set up and working on your computer. You will also need the password. If you don't know it, email me. The steps below show you how to start with a blank menu and then import a few trigger paths that you care about into the menu.

  • Start the GUI.
  • Select HLT Development from the Setup menu at the top and enter the password at the bottom.
  • In the Configurations menu, select New. A small window titled Enter Configuration Name will pop up.
    • For the Release: field, I'm using the release name that matches the one given in the first line of src/HLTrigger/Configuration/python, which is CMSSW_7_1_0_pre5_HLT1 in this case. I'm not sure if this is necessary, but it seems like a good idea if all you want is to run a subset of the trigger paths from the GRun menu for the release.
    • For the Name: field, choose what you like. For this example, I'll use rtest1.
    • For the Process: field, this would normally be HLT. However, we will be re-running HLT, so we need to use a different name. If you are going to do the harvesting step later, you need this name to match there. I'll use HLTX.
    • Click OK. After a few seconds, you should see a list in the left-hand-side box: PSets (0), EDSource (0), ...
  • Now, we want to load the standard GRun menu so that we can grab a few paths from it. Go to the Configurations menu and select Import.
    • For consistency, I'm using the one associated with the release, given in the first line of src/HLTrigger/Configuration/python. In this case, that's dev > CMSSW_7_1_0 > GRun > V23.
    • The box on the left hand side should now split in two. The gray box on the right is the official GRun menu. The white box on the left is yours.
  • Select and drag over the paths you want. The SWGuideGlobalHLT web page says to be sure to have HLTriggerFirstPath and HLTriggerFinalPath. The five paths used in this recipe are HLTriggerFirstPath, HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6, HLT_QuadJet75_55_38_20_BTagIP_VBF_v9, HLT_QuadJet75_55_35_20_BTagIP_VBF_v9, and HLTriggerFinalPath.
  • You should now have those 5 paths in your setup. They appear in the reverse order in which you dragged them over, so drag HLTriggerFirstPath last if you want it to appear first.
  • In the Configurations menu, select Save As. Under the users folder, you will need to create one for yourself, if you don't already have one. Please see the instructions at the ConfDB GUI web page here for that step and note that right-click on a Mac is ctrl-click. In this recipe test, I saved mine in the subdirectory dev_7_1_0 of owen with the name rtest1.
  • You are now done with the ConfDB GUI if all you want to do is create a menu with these paths without changing the configuration of those paths. Quit the GUI.

Inspect your trigger menu with the ConfDB browser

This step isn't required, but it's interesting and will confirm that you have successfully created your trigger menu.

[screenshot: ConfDB-browser1.png]

Prepare your python configuration for running HLT

These are the steps from the SWGuideGlobalHLT web page. Look under "To run your own menu containing only HLT paths ...". In the hltGetConfiguration step below, use either the trigger menu you created in the ConfDB or the one for this recipe, which is /users/owen/dev_7_1_0/rtest1/V1. We will be running on MC, not data, so the options for hltGetConfiguration have been set appropriately. Note that we are continuing to use HLTX for the process name. Note that a version number is given in the edmConfigFromDB command (V23). If you don't provide one, it will take the most recent, which will probably be incompatible with the rest of your setup. When writing and testing this recipe, I accidentally left it out and it gave me the setup for V30. When I ran HLT on some events, it gave a runtime exception on the 3rd event and aborted, presumably due to an inconsistency with the rest of the release, which is based on V23.

cd src/HLTrigger/Configuration/test
edmConfigFromDB --cff --configName /dev/CMSSW_7_1_0/GRun/V23 --nopaths > setup_cff.py
hltGetConfiguration /users/owen/dev_7_1_0/rtest1/V1 --full --offline --mc --unprescale --process HLTX --globaltag auto:startup_GRun > run_hlt.py

The unmodified outputs of the two commands are attached to this page as setup_cff.py.txt and run_hlt.py.txt.

In order to have something runnable, you will need to set up the input, output, and a couple of other things. You also need to load setup_cff.py at the beginning. In the following, I have copied the raw hltGetConfiguration file run_hlt.py to the file run_hlt_withio.py, which has the necessary additions. Here's a copy of my version: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/run_hlt_withio.py.txt

Here's a description of what was added. At the top of run_hlt_withio.py, just after the process definition, add the two process.load lines shown below.

process = cms.Process( "HLTX" )

##-- Owen : added these
process.load("setup_cff")
process.load('Configuration.EventContent.EventContent_cff')
##-- Owen : end


process.HLTConfigVersion = cms.PSet(
  tableName = cms.string('/users/owen/dev_7_1_0/rtest1/V1')
)

Just after the HLT path definitions, define the HLT schedule


process.HLTriggerFirstPath = cms.Path( process.hltGetConditions + process.hltGetRaw + process.hltBoolFalse )
process.HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6 = cms.Path( process.HLTBeginSequence + process.hltL1sL1ETM36ORETM40 + process.hltPreDiCentralPFJet30PFMET80BTagCSV07 + process.HLTRecoMETSequence + process.hltMET65 + process.HLTRecoJetSequenceAK4L1FastJetCorrected + process.hltBJetHbb + process.HLTFastPrimaryVertexSequence + process.hltFastPVPixelVertexSelector + process.HLTBtagCSVSequenceL3Hbb + process.hltBLifetimeL3FilterHbbCSV + process.HLTPFL1FastL2L3ReconstructionSequence + process.hltDiCentralPFJet30ZnunuHbb + process.hltPFMETProducer + process.hltPFMET80Filter + process.HLTEndSequence )
process.HLT_QuadJet75_55_38_20_BTagIP_VBF_v9 = cms.Path( process.HLTBeginSequence + process.hltL1sL1TripleJet644424VBFORTripleJet644828VBFORTripleJet684832VBF + process.hltPreQuadJet75553820BTagIPVBF + process.HLTRecoJetSequenceAK4L1FastJetCorrected + process.hltL1FastJetSingle75HbbVBF + process.hltL1FastJetDouble55HbbVBF + process.hltL1FastJetTriple38HbbVBF + process.hltL1FastJetQuad20HbbVBF + process.hltCaloL1FastJetEtaSortedM200VBF + process.hltBJetHbbVBF + process.HLTBTagIPSequenceL25HbbVBF + process.hltBLifetime2p5L25FilterHbbVBF + process.HLTBTagIPSequenceL3HbbVBF + process.hltBLifetime7p9L3FilterHbbVBF + process.hltCaloL1FastJetBTagSortedVBF + process.HLTEndSequence )
process.HLT_QuadJet75_55_35_20_BTagIP_VBF_v9 = cms.Path( process.HLTBeginSequence + process.hltL1sL1TripleJet644424VBFORTripleJet644828VBFORTripleJet684832VBF + process.hltPreQuadJet75553520BTagIPVBF + process.HLTRecoJetSequenceAK4L1FastJetCorrected + process.hltL1FastJetSingle75HbbVBF + process.hltL1FastJetDouble55HbbVBF + process.hltL1FastJetTriple35HbbVBF + process.hltL1FastJetQuad20HbbVBF + process.hltCaloL1FastJetEtaSortedM200VBF + process.hltBJetHbbVBF + process.HLTBTagIPSequenceL25HbbVBF + process.hltBLifetime2p5L25FilterHbbVBF + process.HLTBTagIPSequenceL3HbbVBF + process.hltBLifetime6p8L3FilterHbbVBF + process.hltCaloL1FastJetBTagSortedVBF + process.HLTEndSequence )
process.HLTriggerFinalPath = cms.Path( process.hltGtDigis + process.hltScalersRawToDigi + process.hltFEDSelector + process.hltTriggerSummaryAOD + process.hltTriggerSummaryRAW )

##-- Owen : add schedule.
process.HLTSchedule = cms.Schedule( *(process.HLTriggerFirstPath, process.HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6, process.HLT_QuadJet75_55_38_20_BTagIP_VBF_v9, process.HLT_QuadJet75_55_35_20_BTagIP_VBF_v9, process.HLTriggerFinalPath ))

At the very end, add the input, output, and schedule for the entire job. Of course, you should make appropriate changes to the output file, input file, number of events to run on, etc. I put the input file in my public directory, so you may be able to run on my file interactively at CERN on lxplus6, if you don't yet have your own file. My file is 850 events from this dataset TT_Tune4C_13TeV-pythia8-tauola/Fall13dr-tsg_PU40bx25_POSTLS162_V2-v1/GEN-SIM-RAW. Here's a link to it in DAS.

if 'MessageLogger' in process.__dict__:
    process.MessageLogger.categories.append('TriggerSummaryProducerAOD')
    process.MessageLogger.categories.append('L1GtTrigReport')
    process.MessageLogger.categories.append('HLTrigReport')
    process.MessageLogger.categories.append('FastReport')

##-- owen : adding input
# Input source
process.source = cms.Source("PoolSource",
    secondaryFileNames = cms.untracked.vstring(),
    fileNames = cms.untracked.vstring('file:/afs/cern.ch/work/o/owen/public/hlt-recipe-test/ttbar-13tev-850evts.root')
)

##-- owen : adding output
# Output definition
process.FEVTDEBUGHLToutput = cms.OutputModule("PoolOutputModule",
    splitLevel = cms.untracked.int32(0),
    eventAutoFlushCompressedSize = cms.untracked.int32(1048576),
    outputCommands = process.FEVTDEBUGHLTEventContent.outputCommands,
    fileName = cms.untracked.string('file:/afs/cern.ch/work/o/owen/public/hlt-recipe-test/post-hlt-ttbar-13tev-btagonly-test-v1.root'),
    dataset = cms.untracked.PSet(
        filterName = cms.untracked.string(''),
        dataTier = cms.untracked.string('GEN-SIM-RAW')
    )
)

process.FEVTDEBUGHLToutput_step = cms.EndPath(process.FEVTDEBUGHLToutput)

##-- owen: configure message logger to print a message for every event.
process.MessageLogger.cerr.threshold = cms.untracked.string('INFO')
process.MessageLogger.cerr.FwkReport.reportEvery = 1
process.MessageLogger.cerr.FwkReport.limit = 10000

##-- owen : do the schedule
process.schedule = cms.Schedule(process.HLTSchedule)
process.schedule.extend([process.FEVTDEBUGHLToutput_step])

There's one more fix you need to do if you want to run the Btag validation harvesting (below) on the HLT output. You need to update the list of collections that are saved in the output by grabbing this new version of HLTrigger/Configuration/python/HLTrigger_EventContent_cff.py. Otherwise, the validation harvesting step will fail when it tries to run on the output of this step.

  cd CMSSW_7_1_0_pre6/src
  cp HLTrigger/Configuration/python/HLTrigger_EventContent_cff.py HLTrigger/Configuration/python/HLTrigger_EventContent_cff.py-original
  cp ~alebihan/public/forNatalia/HLTrigger_EventContent_cff.py HLTrigger/Configuration/python/.

Before running, you may want to check for syntax errors in your python by doing this

python run_hlt_withio.py

I find that, if there are mistakes, the output of this is more helpful than what you get with cmsRun.

Test running HLT

If your python compiles cleanly, you are ready to do a test run.

cmsRun run_hlt_withio.py |& tee hlt-try1.log

Here's what my log file looks like: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/hlt-try1.log . If you want to see what's in the output, try using edmDumpEventContent. For example, do this, except pointing to your file:

edmDumpEventContent /afs/cern.ch/work/o/owen/public/hlt-recipe-test/post-hlt-ttbar-13tev-btagonly-test-v1.root > dump.txt

Here's my output: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/dump.txt

Set up and run the HLT validation for btag triggers

The main source for this information is the HLTBTaggingFor2015 Twiki, referenced below.

Add the HLTriggerOffline package to your working area and set it up

The following is a variation of the recipe given at the HLTBTaggingFor2015 Twiki. This one has a bit more detail and a slightly different method. It also builds on the working area that we have already set up for running the HLT step.

In the following, you will need to make some substitutions (e.g. owen -> your username) that I hope are obvious.

  cd /tmp/owen/
  mkdir git
  cd git
  git init
  git clone -b hlt_btagging_v2 https://github.com/igormarfin/UserCode.git src/
  cd src
  cp -rvp HLTriggerOffline ~owen/rel-dirs/CMSSW_7_1_0_pre6/src/
  cd ~owen/rel-dirs/CMSSW_7_1_0_pre6/src/

Next, edit src/HLTriggerOffline/Btag/BuildFile.xml and remove the space near JetMCAlgos. Then, continue with

  cp /afs/cern.ch/user/o/owen/public/hlt-work/ArbitraryType.h-old-version HLTriggerOffline/Btag/src/ArbitraryType.h
  cp HLTrigger/Configuration/python/HLTrigger_EventContent_cff.py HLTrigger/Configuration/python/HLTrigger_EventContent_cff.py-old
  cp ~alebihan/public/forNatalia/HLTrigger_EventContent_cff.py HLTrigger/Configuration/python/.
  cp HLTriggerOffline/Btag/src/RequireModule.cc HLTriggerOffline/Btag/src/RequireModule.cc-original
  cp /afs/cern.ch/user/o/owen/public/hlt-work/RequireModule.cc-hack HLTriggerOffline/Btag/src/RequireModule.cc

The new version of HLTrigger/Configuration/python/HLTrigger_EventContent_cff.py is required in order to save all of the relevant collections needed to do the post-HLT analysis. The hacked version of RequireModule.cc is my way of getting around a runtime error. I don't know if it's safe.

Try building everything, from your CMSSW_7_1_0_pre6/src directory, with

  scramv1 b -j4 |& tee build1.log

There are a few more tweaks necessary. Before proceeding, save the original versions for reference.

cd HLTriggerOffline/Btag/test
cp my.ini my.ini-original
cp hltHarvesting_cfg.py hltHarvesting_cfg.py-original

Edit my.ini and fix the names of the three triggers. These are the ones that we copied from the GRun menu into our reduced menu in the example above. Here's my final copy of my.ini for this recipe: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/my.ini

hltpathnames=HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6
             HLT_QuadJet75_55_35_20_BTagIP_VBF_v9
             HLT_QuadJet75_55_38_20_BTagIP_VBF_v9

Edit hltHarvesting_cfg.py and add this SkipEvent option, which tells it to skip events where inputs are not available. If you don't, the job will abort under these circumstances. Don't forget to add the comma at the end of the previous line. Here's my final version of hltHarvesting_cfg.py for this example: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/hltHarvesting_cfg.py.txt

process.options = cms.untracked.PSet(
    fileMode = cms.untracked.string('FULLMERGE'),
    # owen: add this to prevent runtime abort
    SkipEvent = cms.untracked.vstring('ProductNotFound')
)

Next, a round of fixes is necessary in HLTriggerOffline/Common/python. First, save the originals, then copy my hacked versions.

cd src/HLTriggerOffline/Common/python
cp HLTValidationHarvest_cff.py HLTValidationHarvest_cff.py-original
cp HLTValidationQT_cff.py HLTValidationQT_cff.py-original
cp HLTValidation_cff.py HLTValidation_cff.py-original
cp /afs/cern.ch/user/o/owen/public/hlt-work/HLTValidationHarvest_cff.py .
cp /afs/cern.ch/user/o/owen/public/hlt-work/HLTValidation_cff.py .
cp /afs/cern.ch/user/o/owen/public/hlt-work/HLTValidationQT_cff.py .

For the record, here's an outline of the changes/fixes. I only care about the Btag stuff, so I removed everything but that. The fixes are necessary because the checked-out code won't run without them, since some Top stuff is missing.

  • HLTValidation_cff.py - comment out or remove everything except this
from HLTriggerOffline.Btag.Validation.HLTBTagValidation_cff import *
hltvalidation = cms.Sequence(
      HLTBTagValSeq
      )
  • HLTValidationHarvest_cff.py - At a minimum, comment out all HLTTopPost stuff. Here's my bare-bones version.
from HLTriggerOffline.Common.FourVectorHLTriggerOfflineClient_cfi import *
from HLTriggerOffline.Common.HLTValidationQT_cff import *
from HLTriggerOffline.Btag.Validation.HLTBTagValidationHarvesting_cff import *

hltpostvalidation = cms.Sequence(
    HLTBTagHarvestingSequence
    )

hltpostvalidation_prod = cms.Sequence(
    hltriggerFourVectorClient
    )
  • HLTValidationQT_cff.py - At a minimum, comment out all HLTTopQualityTester stuff. Here's my bare-bones version.
import FWCore.ParameterSet.Config as cms

hltvalidationqt = cms.Sequence(
    )
That should be all.

Run the HLT Btag validation harvesting step

Go to your CMSSW_7_1_0_pre6/src/HLTriggerOffline/Btag/test directory. To set the input file, edit my.ini and point it to your file. For this example, the following should work, even for you, since this file is in my public directory:

files= file:/afs/cern.ch/work/o/owen/public/hlt-recipe-test/post-hlt-ttbar-13tev-btagonly-test-v1.root

Try running it with

cmsRun hltHarvesting_cfg.py |& tee harvest-try2.log

If you have the input file set to the output of the previous HLT step from this recipe or if you are using my file above, it should run successfully on 100 events and produce a root file in the current directory with the name DQM_V0001_R000000001__CMSSW_test__RelVal__TrigVal.root. The easiest way to have a quick look at the plots is to start root, open a TBrowser, and click through these directories: DQMData > Run > HLT > Run summary > BTag. Here's a screenshot of what the first plot should look like for the 100 events we ran on.

[screenshot: TBrowser-view.png]

I put a copy of DQM_V0001_R000000001__CMSSW_test__RelVal__TrigVal.root for the 100 event example in my directory /afs/cern.ch/work/o/owen/public/hlt-recipe-test. I also added the file harvest-ttbar-40k.root to that directory, which is the output for 40k events.
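
If you want a quick look at one of those copies, the same TBrowser procedure applies. For example, assuming the 40k-event file is still in that public directory:

root /afs/cern.ch/work/o/owen/public/hlt-recipe-test/harvest-ttbar-40k.root
new TBrowser

Then click through DQMData > Run > HLT > Run summary > BTag as before.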

Notes on scaling up the statistics with crab

I have tried running the HLT step using crab on 100k TTbar MC events from this dataset: /TT_Tune4C_13TeV-pythia8-tauola/Fall13dr-tsg_PU40bx25_POSTLS162_V2-v1/GEN-SIM-RAW. I first submitted 200 jobs, so 500 events / job, which gives a ~900 MB output file for each job. A little more than half of the jobs failed with exit code 60317, which means "forced timeout for stuck stage out". I don't know if this means the size of the file for each job was too big or if too many jobs finished at around the same time, clogging the pipe. When I reran on 100k events in 400 jobs, so 250 events / job, only 28 percent of the jobs failed with exit code 60317, and the typical file size was 750 MB / job.
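
For reference, the job splitting is controlled by the [CMSSW] section of crab.cfg. Here's a minimal sketch of the settings corresponding to the 400-job configuration above; see my full crab.cfg (linked below) for the real thing, since everything here other than the dataset name is just illustrative:

[CMSSW]
pset                   = hlt_crab_driver.py
datasetpath            = /TT_Tune4C_13TeV-pythia8-tauola/Fall13dr-tsg_PU40bx25_POSTLS162_V2-v1/GEN-SIM-RAW
total_number_of_events = 100000
number_of_jobs         = 400
output_file            = output.root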

My driver python for the crab job is basically the same as the one above (src/HLTrigger/Configuration/test/run_hlt_withio.py) except there are two minor changes:

  • I set the maximum number of events to -1.
  • I renamed the output file to 'output.root'.
I copied run_hlt_withio.py to hlt_crab_driver.py and made those changes. You can see the file here: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/hlt_crab_driver.py.txt

You can find my crab.cfg file here: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/crab.cfg . I have the output directed to T3_US_UCR. On down.ucr.edu, the output root files go to this directory: /mnt/hadoop/cms/store/user/owen/TT_Tune4C_13TeV-pythia8-tauola

To run things, I start with a new shell to ensure that the environment is set up properly. Log in to lxplus6.cern.ch and then do

source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.csh
cd rel-dirs/CMSSW_7_1_0_pre6/src
cmsenv
source /afs/cern.ch/cms/ccs/wm/scripts/Crab/crab.csh

where rel-dirs/CMSSW_7_1_0_pre6/src is the src directory for my working area. After you have hlt_crab_driver.py and crab.cfg appropriately modified in your CMSSW_7_1_0_pre6/src/HLTrigger/Configuration/test directory, do this to create and launch the jobs

cd HLTrigger/Configuration/test
crab -create
crab -submit -c HLT_crab_test5b

where HLT_crab_test5b is whatever you have ui_working_dir set to in your crab.cfg file.
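
Once the jobs are launched, you can monitor them and retrieve the output with the usual CRAB2 commands, for example

crab -status -c HLT_crab_test5b
crab -getoutput -c HLT_crab_test5b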

Reducing the edm output size -- saving only the essential HLT output

If you don't change anything, running HLT, as described above, will produce output that's roughly 3 MB / event, which is huge. If you don't plan on running the RECO step on your HLT output, there's no point in saving most of the output. To save the bare minimum, do this in your main cfg.py file in the output module configuration:

    ###outputCommands = process.FEVTDEBUGHLTEventContent.outputCommands,
    outputCommands = cms.untracked.vstring( *(
        'drop *_*_*_*',
        'keep *_genParticles_*_SIM',
        'keep *_ak5GenJets_*_*',
        'keep *_genMetTrue_*_*',
        'keep *_genMetCalo_*_*',
        'keep *_genMetCaloAndNonPrompt_*_*',
        'keep *_TriggerResults_*_HLTX',
        'keep *_hltSelectorJets20L1FastJet_*_HLTX',
        'keep *_hltL3CombinedSecondaryVertexBJetTags_*_HLTX',
        'keep *_hltMet_*_HLTX',
        'keep *_hltPFMETProducer_*_HLTX',
        'keep *_hltPFchMETProducer_*_HLTX',
        'keep *_hltMetClean_*_HLTX',
        'keep *_hltHtMht_*_HLTX',
        'keep *_hltPFHT_*_HLTX',
        'keep *_hltL1s*_*_HLTX',
        'keep *_hltL1GtObjectMap_*_HLTX',
        'keep *_hltSelector4CentralJetsL1FastJet_*_HLTX',
        'keep *_hltCaloJetL1FastJetCorrected_*_HLTX',
        'keep *_hltAK4PFJetL1FastL2L3Corrected_*_HLTX',
        'keep *_hltSelectorCentralJets20L1FastJet_*_HLTX',
        'keep *_hltSelector4CentralJetsL1FastJet_*_HLTX',
        'keep *_hltAK4PFJetsCorrected_*_HLTX',
        'keep L1GlobalTriggerObjectMapRecord_hltL1GtObjectMap_*_HLTX',
        'keep L1GlobalTriggerReadoutRecord_hltGtDigis_*_HLTX',
        'keep *_hltL1extraParticles_*_HLTX',
        'keep *_ak5PFJetsCHS_*_RECO',
        'keep *_ak4PFJetsCHS_*_*',
        'keep *_combinedSecondaryVertexBJetTags_*_RECO',
        'keep *_offlinePrimaryVertices_*_RECO',
        'keep *_fixedGrid*_*_RECO',
        'keep *_pileupJetId_*_*',
    ) ) ,

If you do this, the size of the output will go down by a factor of about 60, to roughly 50 kB / event. There are some reco collections in the list above, but those don't matter for an HLT job. Depending on how you are looking at the HLT output, there may be things missing from this "opt-in" list, which you may need to add. Last updated on Aug. 28, 2014.
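
One way to check the per-event size of an output file is the edmEventSize utility that ships with CMSSW. For example, with a placeholder file name:

edmEventSize -v my-hlt-output.root

The -v option prints a per-branch breakdown, which makes it easy to spot any remaining large collections.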

How to save only events that pass the Level 1 seeds you care about

Another way to save a lot of space in your output files and subsequent processing is to save only events that pass the Level 1 seeds you are interested in. In the HLT, your trigger path will stop processing if the event doesn't pass the Level 1 seeds for your trigger, so in many cases you don't care about these events. Here's one way to save only the events that pass the Level 1 seeds you are interested in, when running HLT. There may be a more elegant way, but I know this way works.

  • Create HLT paths where the only requirement in the path is the Level 1 seeds. For example, take one of the full HLT paths you are interested in, copy it, and strip out everything that comes after the Level 1 seed requirement in the copy except the HLTEndSequence. A minimal sketch of such a path is given after this list. For a full example, see my /users/owen/dev_7_1_0/add_btag_to_ht_met/V8 menu, either in the ConfDb GUI or the ConfDb browser. In that menu, I have L1_only_ETM36_or_ETM40, which just has one filter (hltL1sL1ETM36ORETM40) that requires L1_ETM36 OR L1_ETM40, which are the Level 1 seeds for the HLT_PFMET150* and HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6 paths.

  • In the parameters of the OutputModule for your HLT job, add a SelectEvents PSet containing SelectEvents = cms.vstring('L1_only_ETM36_or_ETM40'), as shown below. You can specify more than one path, and an event will be saved if it passes any of the listed paths.
process.FEVTDEBUGHLToutput = cms.OutputModule("PoolOutputModule",
    splitLevel = cms.untracked.int32(0),
    eventAutoFlushCompressedSize = cms.untracked.int32(1048576),
    outputCommands = process.FEVTDEBUGHLTEventContent.outputCommands,
    fileName = cms.untracked.string('output.root'),
  #-- only save events that pass a given path
    SelectEvents = cms.untracked.PSet(
         SelectEvents = cms.vstring('L1_only_ETM36_or_ETM40','L1_only_HTT150_or_HTT175')
    ),
    dataset = cms.untracked.PSet(
        filterName = cms.untracked.string(''),
        dataTier = cms.untracked.string('GEN-SIM-RAW')
    )
)
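
Here is the minimal sketch of a Level-1-seed-only path promised in the first bullet above, written as it would appear in the cfg python. It uses the hltL1sL1ETM36ORETM40 seed filter and the begin/end sequences from the menu dump earlier on this page; in practice you would create the path in the ConfDB GUI rather than by hand.

process.L1_only_ETM36_or_ETM40 = cms.Path( process.HLTBeginSequence + process.hltL1sL1ETM36ORETM40 + process.HLTEndSequence )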

In my menu /users/owen/dev_7_1_0/add_btag_to_ht_met/V8, I also have paths set up for saving only events that pass the Level 1 seeds and the calorimeter-based MET, HT, and MHT requirements. If you do something like this, then it will only save events that pass all filters that appear before the btag sequences. This is useful if all you care about are events where the btag sequences run.

Run the offline reconstruction and compare online vs offline CSV output

The agreement between the online and offline tagging output is important. In particular, we will need to know how often the online btagging misses tagging a jet that is tagged by the offline btagging (trigger inefficiency). If you run the offline reconstruction on the output of your HLT jobs, you can do a jet-by-jet comparison of the online and offline btag output. This is one of the standard Btag validation plots. Here are the instructions from Anne-Catherine Le Bihan, as of April 29, 2014.

First, go through the two recipes above: setting up and running HLT, and setting up the HLT Btag validation.

After you have a working HLT and offline validation setup, you can proceed with the rest. In your src/HLTriggerOffline/Btag/test directory, prepare a python cfg file to run the offline reconstruction with this command

cmsDriver.py step.py --step RAW2DIGI,L1Reco,RECO --conditions=auto:startup \
        --pileup_input dbs:/MinBias_TuneA2MB_13TeV-pythia8/Fall13-POSTLS162_V1-v1/GEN-SIM \
        --pileup AVE_20_BX_25ns --datamix NODATA_MIXER --filein=file:hlt-output.root --fileout=output_2.root --mc \
        --no_exec --number=10 --eventcontent FEVTDEBUGHLT --python_filename=reco_my_8e33_13TeV_v1.py --processName=RECO

You will then need to edit the output (reco_my_8e33_13TeV_v1.py) and replace the input file (hlt-output.root) and output file (output_2.root) with whatever you want to use. After a test run, you will probably also want to remove the 10-event limit, and you will probably eventually want to run this step in crab. For this example, I will run on the output of the HLT recipe above, which is /afs/cern.ch/work/o/owen/public/hlt-recipe-test/post-hlt-ttbar-13tev-btagonly-test-v1.root, and send the output to /afs/cern.ch/work/o/owen/public/hlt-recipe-test/post-reco-hlt-ttbar-13tev-btagonly-test-v1.root. Here's the version I used in this example: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/reco_my_8e33_13TeV_v1.py.txt . Run the offline reconstruction with

cmsRun reco_my_8e33_13TeV_v1.py |& tee reco-test-try1a.log

Before you run the offline HLT btag validation on the output, you need to set a couple of input collections for the offline processing in HLTriggerOffline/Btag/python/Validation/HLTBTagValidation_cff.py. Specifically,

ValidationMCBTagIP3DFastPV.TrackIPTagInfo  = cms.InputTag('secondaryVertexTagInfos')
ValidationMCBTagIP3DFastPV.OfflineJetTag   = cms.InputTag('combinedSecondaryVertexBJetTags')

Here's the modified version: https://twiki.cern.ch/twiki/pub/Sandbox/OwenLongSandbox/HLTBTagValidation_cff.py-post-reco-version.txt . You may need to revert to the original version of HLTBTagValidation_cff.py if you switch back to running on HLT output without the offline reconstruction. After making this change, you can run the offline HLT btag validation as described above. Before you do, set the input file to the output of the offline reconstruction step. That is, edit my.ini and, for this example, set files as shown below

files= file:/afs/cern.ch/work/o/owen/public/hlt-recipe-test/post-reco-hlt-ttbar-13tev-btagonly-test-v1.root

Run the harvesting with

cmsRun hltHarvesting_cfg.py |& tee harvest-reco-try1.log

I have taken the output of running the harvesting on the 100 event test file (DQM_V0001_R000000001__CMSSW_test__RelVal__TrigVal.root) and copied it into my public directory here /afs/cern.ch/work/o/owen/public/hlt-recipe-test/hlt-harvest-post-reco.root. The statistics are low, but if you look at the JetTag_OffvsL3 plot, you will see the expected correlation between the online and offline CSV output.

[screenshot: screenshot-on-vs-off.png]

Another way to compare the online vs offline CSV output is to use the TTree made by the analysis module described below.

How to add the pileup jet ID to your RECO output

If you want pileup jet ID (you do), then it makes sense to run the pileup jet ID code in the RECO step. Here's how to add that.

  • Import it into your configuration python with this line
from RecoJets.JetProducers.PileupJetIDParams_cfi import full_5x_chs
  • Configure it with this section. I made a couple of changes that seemed prudent to me at the moment.
process.pileupJetId = cms.EDProducer('PileupJetIdProducer',
     produceJetIds = cms.bool(True),
     jetids = cms.InputTag(""),
   # runMvas = cms.bool(True),
     runMvas = cms.bool(False),
     jets = cms.InputTag("ak5PFJetsCHS"),
     vertexes = cms.InputTag("offlinePrimaryVertices"),
     algos = cms.VPSet(full_5x_chs),
     rho = cms.InputTag("fixedGridRhoFastjetAll"),
     jec = cms.string("AK5PFchs"),
   # applyJec = cms.bool(True),
     applyJec = cms.bool(False),
     inputIsCorrected = cms.bool(False),
     residualsFromTxt = cms.bool(False),
     residualsTxt = cms.FileInPath("RecoJets/JetProducers/data/download.url") # must be an existing file
)
  • Add it to the execution path. See the places where pujetid appears in the lines below
# Path and EndPath definitions
process.raw2digi_step = cms.Path(process.RawToDigi)
process.L1Reco_step = cms.Path(process.L1Reco)
process.reconstruction_step = cms.Path(process.reconstruction)
process.pujetid_step = cms.Path(process.pileupJetId)
process.endjob_step = cms.EndPath(process.endOfProcess)
process.FEVTDEBUGHLToutput_step = cms.EndPath(process.FEVTDEBUGHLToutput)

# Schedule definition
process.schedule = cms.Schedule(process.raw2digi_step,process.L1Reco_step,process.reconstruction_step,process.pujetid_step,process.endjob_step,process.FEVTDEBUGHLToutput_step)
  • Don't forget to save the pileupJetId collection and also the offlinePrimaryVertices collection in the output by adding these lines to the outputCommands of your OutputModule
        'keep *_offlinePrimaryVertices_*_RECO',
        'keep *_pileupJetId_*_*',

More info on pileup jet ID can be found here.

How to save only the essential edm collections in your output after running RECO on your HLT output

After running the offline reconstruction (the RECO step), you probably only care about a few of the collections in the output, so you can discard the rest. In the output module of your RECO configuration python file, replace the outputCommands line as shown below

process.FEVTDEBUGHLToutput = cms.OutputModule("PoolOutputModule",
    splitLevel = cms.untracked.int32(0),
    eventAutoFlushCompressedSize = cms.untracked.int32(1048576),
    fileName = cms.untracked.string('file:/data/down/owen/hlt-work2/post-reco-hlt-ttbar-13tev-btagonly-test6a-v1b.root'),
  # outputCommands = process.FEVTDEBUGHLTEventContent.outputCommands,
    outputCommands = cms.untracked.vstring( *(
        'drop *_*_*_*',
        'keep *_genParticles_*_SIM',
        'keep *_TriggerResults_*_HLTX',
        'keep *_hltSelectorJets20L1FastJet_*_HLTX',
        'keep *_hltL3CombinedSecondaryVertexBJetTags_*_HLTX',
        'keep *_hltMet_*_HLTX',
        'keep *_hltHtMht_*_HLTX',
        'keep *_hltL1s*_*_HLTX',
        'keep L1GlobalTriggerObjectMapRecord_hltL1GtObjectMap_*_HLTX',
        'keep L1GlobalTriggerReadoutRecord_hltGtDigis_*_HLTX',
        'keep *_hltL1extraParticles_*_HLTX',
        'keep *_ak5PFJetsCHS_*_RECO',
        'keep *_combinedSecondaryVertexBJetTags_*_RECO',
        'keep *_offlinePrimaryVertices_*_RECO',
        'keep *_fixedGrid*_*_RECO',
        'keep *_pileupJetId_*_*',
    ) ) ,
    dataset = cms.untracked.PSet(
        filterName = cms.untracked.string(''),
        dataTier = cms.untracked.string('')
    )
)

This tosses away most of the output and keeps everything I'm currently (June 13, 2014) interested in looking at. It reduces the average size of ttbar MC events from 3.7 MB/event to 90 kB/event.

Timing (CPU) studies

I tried the procedure for using the FastTimerService, described in detail here, and it worked. I did it just using one of the lxplus6 machines. That page says to do the study on a specific machine, but as of today (May 12, 2014), that machine is running SL5, so I'm not sure what to do for a release based on SL6 at the moment. The first step generates a root file named DQM.root. To produce the plots from the FastTimerService, you need to run a second harvesting step which takes DQM.root as input. The python files that do the timing study for this HLT recipe are run_hlt_withio_timing.py and harvestTiming_cfg.py (attached to this page).

The input edm file for the HLT step is set to the ttbar MC file in my public afs directory and the edm output is sent to that directory, so you will need to change one or both of those. The timing study output for the first step goes into DQM.root, again in my public afs directory, so change that as well. For the second step, the input file is the DQM.root file in my public afs directory, so point that to your file. The output of the second step will be DQM_V0001_R000000001__HLT__FastTimerService__All.root in the working directory where you executed the cmsRun command for the second step. The two steps are

cmsRun run_hlt_withio_timing.py |& tee hlt-timing-try1a.log
cmsRun harvestTiming_cfg.py |& tee timing-try1a.log

I copied the DQM_V0001_R000000001__HLT__FastTimerService__All.root to my public afs directory (/afs/cern.ch/work/o/owen/public/hlt-recipe-test/) in case you want to look at it. If you start root and open a TBrowser and examine the contents of DQM_V0001_R000000001__HLT__FastTimerService__All.root, the most interesting results are probably in the Paths folder as shown in the screenshot below

[screenshot: screenshot-timing.png]

The plot shown above gives the total execution time for each module in the HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6 path, for the 100 events we ran on. The order of the bins, one for each module, is the order of execution. You can use the interactive zoom feature on the canvas to look at specific regions in a way where you can actually read the bin labels. In addition to the pathname_module_total histogram, the pathname_total histogram is important. This is the total amount of time per event spent in the path, integrating over all modules. Note that I had to adjust process.FastTimerService.dqmTimeRange and process.FastTimerService.dqmPathTimeRange to 14000 ms (14 seconds) in the run_hlt_withio_timing.py file in order not to have any overflows. It takes a long time for some events!
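
For reference, here is roughly what that adjustment looks like in run_hlt_withio_timing.py. This is a sketch, assuming the untracked-double parameter style used by the FastTimerService configuration at the time; the values are in ms:

process.FastTimerService.dqmTimeRange     = cms.untracked.double( 14000. )
process.FastTimerService.dqmPathTimeRange = cms.untracked.double( 14000. )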

Use an analysis module to create a TTree for analyzing your HLT paths

I started with the official validation packages above, but I wanted to better understand what they were doing and have more control over the things I could look at, so I wrote my own analysis module. It's evolving as I go, but this will get you going with a working snapshot of my setup.

I got started by following the Writing your own EDAnalyzer section of the Offline Workbook and just kept going, at first using the HLT Btag validation package as a guide.

Go to your CMSSW_7_1_0_pre6 release directory where you have already set up everything else and do this

cd src
cmsenv
mkdir AnalyzerOfHLT
cd AnalyzerOfHLT
mkedanlzr HLTMyAnalyzer
cd HLTMyAnalyzer/plugins

Download my versions of HLTMyAnalyzer.cc and BuildFile.xml and put them in that (HLTMyAnalyzer/plugins) directory. Then go back to the HLTMyAnalyzer directory and continue

cd ..
scram b

The output should look something like this. (In this log, the package directory was named RecipeTest2 rather than AnalyzerOfHLT; the steps are otherwise identical.)
strange.ucr.edu /home/cms/owen/rel-dirs/CMSSW_7_1_0_pre6/src/AnalyzerOfHLT/HLTMyAnalyzer/plugins :cd ..
strange.ucr.edu /home/cms/owen/rel-dirs/CMSSW_7_1_0_pre6/src/AnalyzerOfHLT/HLTMyAnalyzer :scram b
Reading cached build data
>> Local Products Rules ..... started
>> Local Products Rules ..... done
>> Entering Package RecipeTest2/HLTMyAnalyzer
>> Creating project symlinks
  src/RecipeTest2/HLTMyAnalyzer/python -> python/RecipeTest2/HLTMyAnalyzer
Entering library rule at src/RecipeTest2/HLTMyAnalyzer/plugins
>> Compiling edm plugin /home/cms/owen/rel-dirs/CMSSW_7_1_0_pre6/src/RecipeTest2/HLTMyAnalyzer/plugins/HLTMyAnalyzer.cc
>> Building edm plugin tmp/slc6_amd64_gcc481/src/RecipeTest2/HLTMyAnalyzer/plugins/RecipeTest2HLTMyAnalyzerAuto/libRecipeTest2HLTMyAnalyzerAuto.so
Leaving library rule at src/RecipeTest2/HLTMyAnalyzer/plugins
@@@@ Running edmWriteConfigs for RecipeTest2HLTMyAnalyzerAuto
--- Registered EDM Plugin: RecipeTest2HLTMyAnalyzerAuto
>> Leaving Package RecipeTest2/HLTMyAnalyzer
>> Package RecipeTest2/HLTMyAnalyzer built
>> Local Products Rules ..... started
>> Local Products Rules ..... done
gmake[1]: Entering directory `/home/cms/owen/rel-dirs/CMSSW_7_1_0_pre6'
>> Creating project symlinks
>> Done python_symlink
>> Compiling python modules python
>> Compiling python modules src/RecipeTest2/HLTMyAnalyzer/python
>> All python modules compiled
@@@@ Refreshing Plugins:edmPluginRefresh
>> Pluging of all type refreshed.
gmake[1]: Leaving directory `/home/cms/owen/rel-dirs/CMSSW_7_1_0_pre6'
strange.ucr.edu /home/cms/owen/rel-dirs/CMSSW_7_1_0_pre6/src/RecipeTest2/HLTMyAnalyzer :

Now, go to the test directory.

cd test

Download my versions of test_ttbar_cfg.py and hltJetMCTools_cff.py and put them in that directory. Rename the files, changing "*.py.txt" to "*.py", after downloading. The configuration points to an input file I made and put in my public directory at CERN. If you are logged in to lxplus6 at CERN, you can now try running it.

cmsRun test_ttbar_cfg.py

If all goes well, it will generate test-ttbar-analysis-ttree.root in the current directory. I placed a copy in my public directory here, in case you want to look at it and haven't made your own: /afs/cern.ch/work/o/owen/public/hlt-recipe-test/.

Have a look at the TTree by doing this in root

root
TChain ch("demo/hlt_ttree")
ch.Add("test-ttbar-analysis-ttree.root")
ch.Print("toponly")
ch.Draw("n_jets")
ch.Draw("jet_pt")
ch.Draw("jet_csv")
ch.Draw("jet_csv","abs(jet_flav)==5")
ch.Draw("n_offjets")
ch.Draw("offjet_pt")
ch.Draw("offjet_pt","offjet_passLoose&&offjet_beta>0.2")
ch.Draw("offjet_csv","offjet_passLoose&&offjet_beta>0.2")
ch.Draw("offjet_csv","offjet_passLoose&&offjet_beta>0.2&&abs(offjet_flav)==5")
ch.Draw("jet_csv:offjet_csv[jet_offind]","jet_offind>=0")
ch.SetMarkerStyle(20)
ch.Draw("jet_csv:offjet_csv[jet_offind]","jet_offind>=0&&jet_csv>0&&offjet_csv[jet_offind]>0")

The contents of the TTree are roughly described below

  • The HLT jets that are input to btagging are in the jet_* arrays, which have size n_jets.
  • The offline PF jets, if available, are in the offjet_* arrays, which have size n_offjets.
  • The MC truth info, if available, for all generated b and c quarks is in the true_hq_* arrays, which have size n_true_hq.
  • The associations between online, offline, and truth are in the *ind variables. For example, if an online jet has a matched offline jet, the index of the offline jet in the offjet arrays is jet_offind. See the last root command in the previous section for an example.
For the rest, please have a look at the code.

How to run the UCT2015 simulation in your HLT job (added Sept. 24, 2014).

The general instructions for the Upgrade Calorimeter Trigger (UCT) 2015 simulation are here. You need to do this to get the simulation of the L1 seeds that we will have in 2015. Check there first to see if anything below is now out of date. To set up your release directory, do these steps

git cms-addpkg DataFormats/L1CaloTrigger
git cms-addpkg L1TriggerConfig/L1ScalesProducers
git cms-addpkg L1Trigger/RegionalCaloTrigger     

git clone https://github.com/uwcms/UCT2015.git L1Trigger/UCT2015
cd L1Trigger/UCT2015
git checkout uct2015Core
cd ../..

scramv1 b -j 8

The following are modifications that you need to make to your HLT driver cfg.py file. At the beginning, add

process.load("L1Trigger.UCT2015.emulationMC_cfi")
process.load("L1Trigger.UCT2015.uctl1extraparticles_cfi")

Down near the bottom where the HLT sequences are defined, create this new sequence

process.MyUCTExtraSequence = cms.Sequence( process.emulationSequence *  process.uct2015L1Extra )

Modify the HLTBeginSequence definition so that it will run your MyUCTExtraSequence

process.HLTBeginSequence = cms.Sequence( process.hltTriggerType + process.HLTL1UnpackerSequence + process.MyUCTExtraSequence + process.HLTBeamSpot )

You can see a complete example cfg.py file here (attached as hlt-with-uct2015-example-cfg_py.txt). Please note that this will not change the L1 seeds that your HLT paths will see. Instead, what it will do is create the following collections in the event.

vector<l1extra::L1EmParticle>        "l1extraParticlesUCT"       "Isolated"      "HLTX"
vector<l1extra::L1EmParticle>        "l1extraParticlesUCT"       "NonIsolated"   "HLTX"
vector<l1extra::L1EtMissParticle>    "l1extraParticlesUCT"       "MET"           "HLTX"
vector<l1extra::L1EtMissParticle>    "l1extraParticlesUCT"       "MHT"           "HLTX"
vector<l1extra::L1HFRings>           "l1extraParticlesUCT"       ""              "HLTX"
vector<l1extra::L1JetParticle>       "l1extraParticlesUCT"       "Central"       "HLTX"
vector<l1extra::L1JetParticle>       "l1extraParticlesUCT"       "Forward"       "HLTX"
vector<l1extra::L1JetParticle>       "l1extraParticlesUCT"       "Tau"           "HLTX"
vector<l1extra::L1MuonParticle>      "l1extraParticlesUCT"       ""              "HLTX"

where I used the process name HLTX when I ran HLT. You will want to save these collections in the edm output or access these collections in your HLT job and save the relevant variables so that you can construct your L1 seeds from them. Be sure you save the collections in your HLT edm output if you run your analysis in a second step by doing something like this in your HLT job cfg.py

process.FEVTDEBUGHLToutput = cms.OutputModule("PoolOutputModule",
  blah blah blah
      outputCommands = cms.untracked.vstring( *(
        'drop *_*_*_*',
        blah
        blah
        blahhhh
        'keep *_l1extraParticles*_*_*',
    ) ) ,
    blah
)

See here for a complete example without the blah's. For example, if you wanted to simulate the L1_ETM70 seed, you would do something like this in your analysis code.

     //--: Try accessing UCT info, MET
      edm::Handle< l1extra::L1EtMissParticleCollection > h_uct_met_col ;
      iEvent.getByLabel( edm::InputTag("l1extraParticlesUCT","MET",hltProcess_.label().c_str()), h_uct_met_col ) ;
      if ( h_uct_met_col.isValid() ) {
         if ( verbose_ ) {
            printf("\n\n UCT Level 1 MET object(s):\n") ;
            for ( size_t mi=0; mi<h_uct_met_col->size(); mi++ ) {
               const l1extra::L1EtMissParticle& etmp = (*h_uct_met_col)[mi] ;
               printf("   %2lu : etMiss() = %6.1f\n", mi, etmp.etMiss() ) ;
            } // mi
            printf("\n") ;
         }
         //-- looks like this always has just one entry.
         tv_l1uct_met = 0. ;
         if ( h_uct_met_col->size() > 0 ) {
            const l1extra::L1EtMissParticle& etmp = (*h_uct_met_col)[0] ;
            tv_l1uct_met = etmp.etMiss() ;
         }
      } else {
         if (verbose_) printf("\n\n *** Invalid l1extraParticlesUCT handle for MET.\n\n" ) ;
      }
      if ( tv_l1uct_met >= 70 ) tv_l1uct_etm70 = true ;

You can find a recent (Sept. 24, 2014) version of my HLT analysis module attached as HLTMyAnalyzer-sept24-2014.cc. Look at the l1uct_* variables to see how I'm doing it.

Checklist for setting up a new CMSSW release directory

Here are the main steps. Most recently done for CMSSW_7_1_7 (Aug. 27, 2014). As usual, check SWGuideGlobalHLT to see if this is still the right procedure.

  • Go to the confDB GUI, make a new menu, and import relevant paths from a recent GRun menu. I just created /users/owen/dev_7_1_6/addbtag3/V2 from /dev/CMSSW_7_1_1/AlternativeTrackingScenarios/GRun_TK1B/V28.
  • Adjust thresholds in your copied paths (if you want).
  • Follow the SWGuideGlobalHLT instructions in the "Preparing a CMSSW developer area" section.
  • Set up the configuration python in your HLTrigger/Configuration/test area. Create the general setup file
edmConfigFromDB --cff --configName /dev/CMSSW_7_1_1/AlternativeTrackingScenarios/GRun_TK1B/V28 --nopaths --services -PrescaleService > setup_hlt_trk1b_v28_cff.py
  • Dump your paths
hltGetConfiguration /users/owen/dev_7_1_6/addbtag3/V2 --full --offline --mc --unprescale --process HLTX --globaltag auto:startup_GRun > run_hlt_trk1b_v28_nomod.py
  • Make the by-hand changes to the output of the previous step to get something runnable with input, output, etc...
  • Make sure you have the necessary collections saved in HLTrigger/Configuration/python/HLTrigger_EventContent_cff.py before launching a ton of HLT crab jobs!
  • Set up your analysis module (AnalyzerOfHLT/HLTMyAnalyzer) so that you can check that everything is working.
  • After checking that the analyzer can access all relevant collections from the HLT output, launch a ton of crab HLT jobs.
  • Set up the cfg.py for running the offline reconstruction (for signal MC). Try something like this (inspired by the SWGuideGlobalHLT instructions, except with the global tag set to auto:startup).
cmsDriver.py step3 -n 10 --eventcontent FEVTDEBUGHLT -s RAW2DIGI,L1Reco,RECO --datatier GEN-SIM-RECO --customise SLHCUpgradeSimulations/Configuration/postLS1Customs.customisePostLS1 --geometry Extended2015 --magField 38T_PostLS1 --conditions=auto:startup --no_exec --filein file:hlt-output-reco-input.root --fileout file:reco-output.root
You will want to limit the output collections and perhaps add in the beta variable for pileup jet ID.

Attachments

Topic attachments (name, size, date, author, comment):

BuildFile.xml (0.2 K, 2014-06-14, OwenLong)
ConfDB-browser1.png (511.5 K, 2014-04-25, OwenLong)
HLTBTagValidation_cff.py-post-reco-version.txt (1.1 K, 2014-05-03, OwenLong)
HLTMyAnalyzer-sept24-2014.cc (96.7 K, 2014-09-25, OwenLong)
HLTMyAnalyzer.cc (44.8 K, 2014-06-14, OwenLong) - Snapshot from June 13, 2014
TBrowser-view.png (202.4 K, 2014-04-25, OwenLong)
ToyMCSampler.cxx (27.3 K, 2011-12-14, OwenLong) - Hacked roostats code to save toy results in a TTree.
crab.cfg (4.6 K, 2014-04-25, OwenLong)
dump.txt (25.8 K, 2014-04-25, OwenLong)
harvestTiming_cfg.py.txt (0.7 K, 2014-05-12, OwenLong)
hcal-noise-offline-study-july3-2017.pdf (2026.9 K, 2017-07-03, OwenLong)
hlt-try1.log (135.1 K, 2014-04-25, OwenLong)
hlt-with-uct2015-example-cfg_py.txt (239.6 K, 2014-09-24, OwenLong)
hltHarvesting_cfg.py.txt (8.3 K, 2014-04-25, OwenLong)
hltJetMCTools_cff.py.txt (1.7 K, 2014-06-14, OwenLong)
hlt_crab_driver.py.txt (231.1 K, 2014-04-25, OwenLong)
hltbtag-for-july2014-trigger-ws.pptx (394.9 K, 2014-06-28, OwenLong)
ht5overht-mar2-2017.pdf (2487.6 K, 2017-03-07, OwenLong) - Studies on HT5/HT>2 background
ht5overht-mar2-2017.pptx (2448.8 K, 2017-03-07, OwenLong) - Studies on HT5/HT>2 background
jc_fom_toymc.c (3.9 K, 2015-09-16, OwenLong)
jet-charge-figure-of-merit.pdf (131.6 K, 2015-09-16, OwenLong)
likelihood-estimates-for-observables.pdf (75.2 K, 2012-03-06, OwenLong) - Description of method for making profile likelihood scans of any likelihood parameter.
met-extrapolation-sensitivity-dec04-2014.pdf (1064.6 K, 2014-12-06, OwenLong)
my.ini (1.0 K, 2014-04-25, OwenLong)
pfht-eta-range-june2-2017.pdf (338.7 K, 2017-06-02, OwenLong)
phys14-ra2b-ra2-sept19-2014-v0.pdf (627.2 K, 2014-09-18, OwenLong) - Draft version
phys14-ra2b-ra2-sept19-2014-v1.pdf (633.6 K, 2014-09-19, OwenLong)
plotAllStrongHiggs.c (1.0 K, 2013-09-01, OwenLong)
plotStrongHiggsInput.c (24.6 K, 2013-09-01, OwenLong)
plots-dec09-2014.pdf (1801.3 K, 2014-12-10, OwenLong)
qcd-mdp-slides-for-christian-june9-2015.pptx (655.0 K, 2015-06-09, OwenLong)
ra2b-profile-plots-CLs-comp-dec13-2011.pdf (1753.1 K, 2011-12-14, OwenLong) - Likelihood profile plots and CLs comparison
ra2b-profile-plots-CLs-comp-dec13-2011.pptx (1840.8 K, 2011-12-14, OwenLong) - Likelihood profile plots and CLs comparison
reco_my_8e33_13TeV_v1.py.txt (3.4 K, 2014-05-03, OwenLong)
run_hlt.py.txt (229.5 K, 2014-04-25, OwenLong)
run_hlt_withio.py.txt (231.2 K, 2014-04-25, OwenLong)
run_hlt_withio_timing.py.txt (234.0 K, 2014-05-12, OwenLong)
screenshot-on-vs-off.png (137.6 K, 2014-05-03, OwenLong)
screenshot-timing.png (208.2 K, 2014-05-12, OwenLong)
setup_cff.py.txt (273.8 K, 2014-04-25, OwenLong)
strong-higgs-plots-v130830-all.pdf (474.5 K, 2013-09-01, OwenLong) - Plots of strong SUSY production of 2xH(bb)+MET for RA2b observables. Quark jets are equal BFs for u,d,c,s,b.
strong-higgs-plots-v130830-bb.pdf (449.0 K, 2013-09-01, OwenLong) - Plots of strong SUSY production of 2xH(bb)+MET for RA2b observables. Quark jets are bb.
strong-higgs-plots-v130830-lf.pdf (470.7 K, 2013-09-01, OwenLong) - Plots of strong SUSY production of 2xH(bb)+MET for RA2b observables. Quark jets are equal BFs for u,d,s,c(?).
strong-higgs-plots.pdf (473.1 K, 2013-08-22, OwenLong) - Some plots of strong susy production of 2xH(bb)+MET final state. Aug. 22, 2013
strong-higgs-plots.tex (6.8 K, 2013-09-01, OwenLong)
susy-triggers-hltbtag-meeting-june12-2014-v2.pdf (2654.9 K, 2014-06-28, OwenLong)
susy-triggers-hltbtag-meeting-june12-2014-v2.pptx (2691.8 K, 2014-06-28, OwenLong)
test_ttbar_cfg.py.txt (2.3 K, 2014-06-14, OwenLong)
toylikelihood1ln.c (10.0 K, 2015-01-29, OwenLong)
trigger-and-ht5overht-mar7-2017.pdf (1124.5 K, 2017-03-07, OwenLong) - Studies on HT5/HT>2 background
trigger-and-ht5overht-mar7-2017.pptx (1179.1 K, 2017-03-07, OwenLong) - Studies on HT5/HT>2 background
turn-on-curves-ht350-met120.pdf (255.3 K, 2015-01-07, OwenLong)