Contents:
- CMS approval procedures
- CMS approval procedures
- CondDb
- GlobalTag
- Rucio
- ROOT Friend - sort by index
- Tunnel ssh proxy
- DAS dasgoclient das_client.py dasgoclient with loop
- DQM and DQM harvesting in HLT
- Streamer files
- igProf virtual shared memory RSS
- Install a machine with all CMS services (script used in OpenStack CERN)
- Install CVMFS
- Install and configure Centos7 (EOS, CVMFS, AFS, ...)
- Let a process run after SSH logout
- Make CMSSW Replay at Tier0
- RooFit PDF Parton Distribution Function LHAPDF
- RooFit RooStat
- Timing b-tagging
- GetQuantiles TH1F
- STORM
- How to use a new L1 menu from .xml
- Eras at HLT
- Useful strings in HLT configuration
- Print HLTriggerResults from edmTriggerResults
- Servers
- PSI
- Copy a local folder to PSI T3
- hadd directly from the T2_CH_CSCS or T3_CH_PSI
- Search/find a file recursively using xrdfs (xrootd)
- Open a file from EOS Tier0 T0_CH_CERN
- GIT conflict solving
- Automatic tests in gitlab (PR test, pipeline, CI/CD, ...)
- GIT conflict solving (gitlab recipes)
- GIT solving stupid conflicts
- GIT solving stupid conflicts
- GIT try to merge locally two pull request changing the same file
- GIT delete all old tags
- Useful CMSSW GitHub commands
- GIT setup a cmssw repository locally
- Grid control
- Grid control
- Example of code to run with grid-control (add several variables into TTree)
- How to submit jobs at T3 without grid-control (qsub, bsub)
- Where to find the proxy of T2 and T3
- Command for jobs in batch system queue
- Heppy VHbb Hbb ntuplizer
- VHbb analysis (ETH)
- How to get impact plot (rho, pulls)
- Check nuisance parameters
- Debug
- Redo MHT on-fly
- Redo HT on-fly
- Stack from DC - datacard
- Lecture on ATLAS/CMS Statistical Analysis Methods
- Count number of files in a folder/directory (bash)
- Jet Energy Correction (JEC)
- MET Missing ET Type-I Type-II Type-0 (Type1, Type2, Type0)
- Fake MET check plots
- Paper PAS AN Analysis Note GitLab TdrProcessing
- Download an AN from SVN for editing
- Comparison HLT run-1 vs run-2 Phys14 samples PU30BX50
- Best RelVal HLT b-tag and Higgs performance DQM plots
- Validation frozen menu
- AFS website
- Override L1 menu with XML
- Filter LumiSection and run using JSON (>74X)
- Repeated Event PoolSource
- How to get luminosity ( lumicalc brilcalc )
- How to get luminosity or pile-up event-by-event lumi-by-lumi (JSON, PU, pileup)
- Event asymmetry (boost)
- Homemade JSON filter on FWLite (pyROOT)
- Golden JSON file
- ROOT colors
- TTreeFormula
- pyROOT how to fill a TTree
- use C++ template in pyROOT
- DeltaR in CMSSW
- Program to extract PDF images from PDF
- Run list
- How to add a new branch in the usercode with GIT
- github API graphql
- GIT clone
- Synchronize two folders
- Run over a list of event (by EventNumber) - Remove events with repeated PU collisions
- How to plot a turn-on curve directly with CMSSW ROOT file
- How to pick an event
- CSVv2IVF
- C++ replace/erase/remove
- Slimmed jets offline b-tag
- Delete file from T2
- Integration test
- Output module to keep the online b-tag information to do the performance plot
- ROOT default binning
- Snippet to keep only information useful for trigger development (WH)
- MuEnriched EMEnriched QCD generator
- Initialize
- GIT
- DAS dasgoclient (it was DBS query, das_client.py)
- NANOAOD files examples
- XROOTD
- Less option (updated output)
- hltGetConfiguration
- STEAM stuff
- Candidate generic string filter
- Download a file from CMSSW by GIT
- Forward porting (rebase) CMSSW git
- GIT diff offline (use git as diff or sdiff)
- GIT test a pull request (PR) locally
- BTAG DQM Validation:
- TChain multifiles
- Btag discriminant -1 and -10
- Load standard sequence
- CMSSW Handle is present
- CRAB3 - set up & launch
- CRAB3 - python configuration file
- CRAB 3 - Infos
- launch
- CRAB3 - set up & launch
- CRAB3 - python configuration file
- CRAB 3 - Infos
- Git common commands
- Git download from users
- Simplest python config
- TFileService line
- Crab limits
- How to match the CaloJets with the b-quarks directly by ROOT
- Command to wait for jobs (bsub) do be done
- How to take the two b quarks from the Higgs decay
- How to get a histo (or profile) from a TCanvas
- How to use an external files list in cmsRun
- How to use multi processes
- How to get the Integral of a TH1D
- Trigger objects and trigger filter in MINIAOD
- AWK
- Use perl to replace a word inside a txt
- Draw Feynman diagrams
- Useful python commands
- Useful CMSSW commands
- TLegend
- PileUp PU info (number of true vertices)
- Skim file root
- Example with edm::View and reco::Candidate
- Run a trigger path using an external configuration
- Turn-on functions
- DeltaR and DeltaPhi in ROOT
- File merger (how to merge root CMSSW files)
- Soft kill command (equivalent to CTRL+C)
- Get trigger with L1 emulator
- How to allow some warnings with CMSSW scram b
- How to find the cross-section (crosssection , xsection ) of a sample with DAS
- How to calculate cross sections (xsect)
- Plot on ROOT the hltFastPrimaryVertex error
- Madgraph aMC generate events and simulation
- Event size
- Run CMSSW in a folder/directory
- Get the IP of your machine
- How to install CERN printers on Ubuntu 16.04
- Printer CERN
- Links
- inline LOOP Bash (resubmit folders crab)
- VBF trigger test (Feb 2016)
- Matching b-quark from Higgs to Higgs jets with deltaR
- Get HLT BTag plots from DQM
- Get n-th element after a selection in ROOT tree->Draw() (TTree::Draw)
- CSV
- Modify/override the global-tag (GT,globaltag). Use the old/new btag training
- How to convert certificates from .P12 to .PEM formats?
- Find largest file or directories in a folder
- How to obtain eigenvalue and eigenvector of a fit in ROOT (useful to set Up/Down systematics )
- Get plots for .C ROOT macro (macro.C)
- Mia's plot HIP using my HLT ntuples
- hadd alternative ROOT
- ROOT loop in a directory TDirectory TKey
- Python print the whole history
- Python pyROOT skim and merger of a list of TTree to a file
- Install CERN AFS, CVMS, SSO, printer, ... on Ubuntu
- STEAM rate - QCD Mu Enriched samples
- Example of AOD and RAW matching files
- Which samples are used to simulate pile-up (PU)
- What's my IP? Find the IP address from bash
- Scouting ntuples
- How to convert a TH1F into a TH3F and a TH3F in TH1F
- From pT and eta to energy
- LHC/CMS numbers
- Print run number, lumisection, event number from ROOT file
- Pile-up (PU), instantaneous luminosity, number of bunches
- Skim trigger HLT and L1
- Material budget
CMS approval procedures
cmsRun options inputFiles=
https://twiki.cern.ch/twiki/bin/viewauth/CMS/OnlineWBHowToUpdateCMSRunModes
CMS approval procedures
https://twiki.cern.ch/twiki/bin/viewauth/CMS/PhysicsApprovals https://twiki.cern.ch/twiki/bin/view/CMS/Internal/Publications#PubApproval https://cms-docdb.cern.ch/cgi-bin/DocDB/RetrieveFile?docid=3384
http://cms.web.cern.ch/content/cms-official-documents-rules-regulations-and-guidelines
http://cms.web.cern.ch/content/how-does-cms-publish-analysis
NanoAOD file UL2018
https://cmsweb.cern.ch/das/request?instance=prod/global&input=file+dataset%3D%2FDYJetsToLL_M-50_TuneCP5_13TeV-madgraphMLM-pythia8%2FRunIISummer20UL18NanoAODv9-106X_upgrade2018_realistic_v16_L1v1-v1%2FNANOAODSIM
/DYJetsToLL_M-50_TuneCP5_13TeV-madgraphMLM-pythia8/RunIISummer20UL18NanoAODv9-106X_upgrade2018_realistic_v16_L1v1-v1/NANOAODSIM
/store/mc/RunIISummer20UL18NanoAODv9/DYJetsToLL_M-50_TuneCP5_13TeV-madgraphMLM-pythia8/NANOAODSIM/106X_upgrade2018_realistic_v16_L1v1-v1/120000/0CF0CDED-7582-7A49-84CD-0E5F73DE27B0.root
CondDb
conddb listGTsForTag L1Menu_Collisions2018_v2_1_0-d1_xml | grep Queue | grep 123X
conddb --db /eos/cms/store/group/dpg_trigger/comm_trigger/TriggerStudiesGroup/PF/PFCalibration.db listTags
conddb --db /eos/cms/store/group/dpg_trigger/comm_trigger/TriggerStudiesGroup/PF/PFCalibration.db list PFCalibration_CMSSW_13_0_0_pre4_HLT_126X_mcRun3_2023
conddb --db /eos/cms/store/group/dpg_trigger/comm_trigger/TriggerStudiesGroup/PF/PFCalibration.db dump 27989dc09d261e3832ffa7d421ec3b5532328071
GlobalTag
process.GlobalTag.toGet.append(
cms.PSet(
record = cms.string("PFCalibrationRcd"),
label = cms.untracked.string('HLT'),
connect = cms.string("sqlite_file:/eos/cms/store/group/dpg_trigger/comm_trigger/TriggerStudiesGroup/PF/PFCalibration.db"),
tag = cms.string('PFCalibration_CMSSW_13_0_0_pre4_HLT_126X_mcRun3_2023'),
snapshotTime = cms.string('9999-12-31 23:59:59.000'),
)
)
Rucio
https://twiki.cern.ch/twiki/bin/view/CMS/Rucio
### Setup Rucio
source /cvmfs/cms.cern.ch/cmsset_default.sh &&
source /cvmfs/cms.cern.ch/rucio/setup-py3.sh &&
export RUCIO_ACCOUNT="t2_ch_cern_local_users" &&
voms-proxy-init -voms cms -rfc -valid 192:00
### Add Rucio rule (only FOG conveners and Trigger Coordinators allowed)
rucio add-rule --account t2_ch_cern_local_users --lifetime 12960000 --comment "Copy NanoAOD for TSG studies" cms:/HLTPhysics/Run2022E-PromptNanoAODv10_v1-v1/NANOAOD 1 T2_CH_CERN
### Add Rucio rule
### EOS quota eosquota
eos quota /eos/cms/store/group/dpg_trigger/comm_trigger/TriggerStudiesGroup
### Check status Rucio Rules
rucio rule-info 412fb67bef5b4e4e95c55c8c35c4d90d
### Example of useful commands
rucio help
rucio whoami
rucio list-account-limits t2_ch_cern_local_users
rucio list-account-usage t2_ch_cern_local_users
rucio list-rules-history
rucio list-rules --id 412fb67bef5b4e4e95c55c8c35c4d90d
rucio list-rules --account t2_ch_cern_local_users
ROOT Friend - sort by index
import ROOT
fileGPU = ROOT.TFile.Open("output_GPU_badEvent.root")
fileCPU = ROOT.TFile.Open("output_CPU_badEvent.root")
treeGPU = fileGPU.Get("Events")
treeCPU = fileCPU.Get("Events")
treeGPU.BuildIndex("EventAuxiliary.event()")
treeCPU.BuildIndex("EventAuxiliary.event()")
treeCPU.AddFriend(treeGPU,"gpu")
treeCPU.Scan("EventAuxiliary.event():Max$(recoJetedmRefToBaseProdTofloatsAssociationVector_hltDeepCombinedSecondaryVertexBJetTagsPF_probb_HLTX.obj.data_):Max$(gpu.recoJetedmRefToBaseProdTofloatsAssociationVector_hltDeepCombinedSecondaryVertexBJetTagsPF_probb_HLTX.obj.data_)","abs(Max$(recoJetedmRefToBaseProdTofloatsAssociationVector_hltDeepCombinedSecondaryVertexBJetTagsPF_probb_HLTX.obj.data_)-Max$(gpu.recoJetedmRefToBaseProdTofloatsAssociationVector_hltDeepCombinedSecondaryVertexBJetTagsPF_probb_HLTX.obj.data_))>0.7")
Tunnel ssh proxy
ssh -f -N sdonato@galilinux.pi.infn.it -D 1090
On Firefox: set proxy type SOCKS5, host localhost, port 1090
DAS dasgoclient das_client.py dasgoclient with loop
datasets_query="dataset dataset=/*ZeroBias9*/*2018*/RAW"
second_query="summary dataset=\$item"
dasgoclient -query="$datasets_query" > datasets.txt
while read item; do eval dasgoclient -query=\"$second_query\"; done < datasets.txt
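The same loop can also be done in Python via subprocess (a minimal sketch, assuming dasgoclient is available in the PATH and a valid proxy exists):
import subprocess

def das_query(query):
    # run dasgoclient and return the output lines
    out = subprocess.check_output(["dasgoclient", "-query", query])
    return out.decode().splitlines()

for dataset in das_query("dataset dataset=/*ZeroBias9*/*2018*/RAW"):
    for line in das_query("summary dataset=%s" % dataset):
        print(dataset, line)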
DQM and DQM harvesting in HLT
Useful TWiki:
https://twiki.cern.ch/twiki/bin/view/CMS/HLTValidationAndDQM
Add DQM plot output in HLT menu
# load the DQMStore and DQMRootOutputModule
process.load( "DQMServices.Core.DQMStore_cfi" )
process.dqmOutput = cms.OutputModule("DQMRootOutputModule",
fileName = cms.untracked.string("DQMIO.root")
)
process.DQMOutput = cms.FinalPath( process.dqmOutput )
Run HLT online DQM:
DQM/Integration/python/clients/hlt_dqm_sourceclient-live_cfg.py and overwrite the source file:
process.source = cms.Source( "PoolSource",
fileNames = cms.untracked.vstring( 'file:out.root',),
)
or
cmsrel CMSSW_12_5_1
cd CMSSW_12_5_1/src
cmsenv
git-cms-addpkg DQM/Integration
cmsRun DQM/Integration/python/clients/sistrip_dqm_sourceclient-live_cfg.py runInputDir=/afs/cern.ch/work/r/rosma/public runNumber=360991 runkey=hi_run scanOnce=True
Run harvesting of the HLT online DQM:
cmsDriver.py harvesting -s HARVESTING:@hlt --conditions auto:run3_hlt_relval --data --filein file:DQMIO.root --filetype DQM --scenario pp
Streamer files
Convert streamer files (.dat) to standard ROOT files (.root)
import FWCore.ParameterSet.Config as cms
process = cms.Process("TEST")
import FWCore.ParameterSet.VarParsing as VarParsing
process.source = cms.Source("NewEventStreamFileReader",
fileNames = cms.untracked.vstring("file:/eos/cms/store/t0streamer/Data/PhysicsCommissioning/000/361/475/run361475_ls0058_streamPhysicsCommissioning_StorageManager.dat")
)
process.outp1=cms.OutputModule("PoolOutputModule",
fileName = cms.untracked.string('out.root'),
outputCommands = cms.untracked.vstring('keep *'),
)
process.out = cms.EndPath( process.outp1 )
igProf virtual shared memory RSS
https://twiki.cern.ch/twiki/bin/view/CMS/RecoIntegration
https://github.com/slava77/cms-reco-tools
/afs/cern.ch/work/s/sdonato/public/MemStudy
IgProf server :
http://igprof.org/analysis.html#setting-up-the-web-navigable-reports
https://sdonato.web.cern.ch/sdonato/cgi-bin/igprof-navigator/dev_CMSSW_12_3_0_GRun_V26/
/afs/cern.ch/user/s/sdonato/AFSwork/public/website/cgi-bin/data/dev_CMSSW_12_3_0_GRun_V26.sql3
Install a machine with all CMS services (script used in OpenStack CERN)
yum -y install git
yum -y install puppet-agent
yum -y install locmap-release
yum -y install locmap
#yum -y install rpm-build
locmap --enable all
locmap --list
locmap --configure all
yum -y install voms-clients-java
yum -y install git make cmake gcc-c++ gcc binutils libX11-devel libXpm-devel libXft-devel libXext-devel python39 openssl-devel
echo "export CMS_LOCAL_SITE=T2_CH_CERN" > /etc/cvmfs/config.d/cms.cern.ch.local
echo "CVMFS_HTTP_PROXY='http://cmsmeyproxy.cern.ch:3128;http://ca-proxy.cern.ch:3128'" >> /etc/cvmfs/config.d/cms.cern.ch.local
cvmfs_config reload
### Login as sdonato
sudo scp -r sdonato@lxplus.cern.ch:/etc/grid-security /etc/grid-security
sudo scp -r sdonato@lxplus.cern.ch:/etc/vomses /etc/vomses
ln -s /afs/cern.ch/user/s/sdonato afs
ln -s /afs/cern.ch/user/s/sdonato/.globus .
#locmap --enable all
#locmap --configure all
printf "DONE" #clear screen
source /cvmfs/cms.cern.ch/cmsset_default.sh
cmsrel CMSSW_13_0_0_pre4
cd CMSSW_13_0_0_pre4
cmsenv
voms-proxy-init
runTheMatrix.py -l 140.115
exit 0
Install CVMFS
First check " Install and configure Centos7 (EOS, CVMFS, AFS, ...)"
CVMFS
https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html
sudo yum install https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
sudo yum install -y cvmfs
cvmfs_config setup
In /etc/cvmfs/default.local
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,grid.cern.ch
CVMFS_CLIENT_PROFILE=single #or something else
CVMFS_HTTP_PROXY=DIRECT #or CVMFS_HTTP_PROXY='http://ca-proxy.cern.ch:3128' ?
cvmfs_config probe
cvmfs_config chksetup
cvmfs_config showconfig cms.cern.ch
systemctl restart autofs
Install and configure Centos7 (EOS, CVMFS, AFS, ...)
https://twiki.cern.ch/twiki/bin/view/Sandbox/SetupVmSLC78
See
https://linux.web.cern.ch/centos7/docs/install/
/usr/bin/locmap --list
/usr/bin/locmap --enable cernbox
/usr/bin/locmap --enable eosclient
/usr/bin/locmap --enable cvmfs ???
/usr/bin/locmap --configure all
https://twiki.cern.ch/twiki/bin/view/CvmFS/ClientSetupCERN
See
https://twiki.cern.ch/twiki/bin/view/CMS/TriggerDevelopmentWithGPUs#Configure_the_online_machines_fo
Configure the online machines for connecting to GitHub (proxy, SOCKS5, HTTPS)
wget/curl through the proxy:
curl --preproxy socks5://localhost:18081 https://developer.download.nvidia.com/compute/cuda/11.4.2/local_installers/cuda-repo-rhel7-11-4-local-11.4.2_470.57.02-1.x86_64.rpm --output cuda-repo-rhel7-11-4-local-11.4.2_470.57.02-1.x86_64.rpm
https://www.cyberciti.biz/faq/how-to-install-nvidia-driver-on-centos-7-linux/
About "Valid site-local-config not found at /cvmfs/cms.cern.ch/SITECONF/local/JobConfig/site-local-config.xml"
https://twiki.cern.ch/twiki/bin/view/CvmFS/ClientSetupCERN
/etc/cvmfs/config.d/cms.cern.ch.local
export CMS_LOCAL_SITE=T2_CH_CERN
CVMFS_HTTP_PROXY='http://cmsmeyproxy.cern.ch:3128;http://ca-proxy.cern.ch:3128'
https://twiki.cern.ch/twiki/bin/view/CMSPublic/CernVMFS4cms
cvmfs_config reload
(remove puppet???)
Install voms-proxy-init
yum install voms-clients-java
https://italiangrid.github.io/voms/documentation/voms-clients-guide/3.0.3/
copy /etc/vomses and /etc/grid-security from lxplus
https://twiki.cern.ch/twiki/bin/view/Sandbox/SetupUbuntu1804
needed for ROOT
sudo yum install git make cmake gcc-c++ gcc binutils libX11-devel libXpm-devel libXft-devel libXext-devel python openssl-devel
Let a process run after SSH logout
disown %1 #or PID number
Make CMSSW Replay at Tier0
https://twiki.cern.ch/twiki/bin/viewauth/CMS/CompOpsTier0TeamDeployReplayFromRepository
etc/ReplayOfflineConfiguration.py
run replay please
RooFit PDF Parton Distribution Function LHAPDF
https://github.com/silviodonato/usercode/tree/PartonDistributionFunction_LHAPDF
https://lhapdf.hepforge.org/pdfsets
https://lhapdf.hepforge.org/
See
RooFit and python
https://www.nikhef.nl/~vcroft/GettingStartedWithRooFit.html
RooFit tutorial
https://root.cern.ch/roofit-20-minutes
RooFit manual
https://root.cern.ch/guides/roofit-manual
My code
https://github.com/silviodonato/DijetRootTreeAnalyzer/blob/fromDaniel/silvio/matchingStudyFit.py#L80
https://github.com/silviodonato/usercode/blob/master/plot_toy_from_RooFit_RooWorkspace.py
https://github.com/silviodonato/usercode/blob/master/plots_from_RooFit_WorkSpace_RooStat.py
Plot a dataset
CaloTrijet2016_qq = wCaloTrijet2016->data("CaloTrijet2016_qq")
th1x = wCaloTrijet2016->var("th1x")
frame = th1x->frame()
wCaloTrijet2016->plotOn(frame)
CaloTrijet2016_qq->plotOn(frame)
frame->Draw()
Timing b-tagging
hltGetConfiguration orcoff:/cdaq/physics/Run2016/25ns15e33/v4.2.2/HLT/V1 \
--path HLTriggerFirstPath,DST_HT250_CaloBTagScouting_v3,DST_HT410_BTagScouting_v7,HLTriggerFinalPath \
--input root://eoscms//eos/cms/store/data/Run2016H/ParkingScoutingMonitor/RAW/v1/000/283/407/00000/6058CC14-B794-E611-994E-02163E013456.root,root://eoscms//eos/cms/store/data/Run2016H/ParkingScoutingMonitor/RAW/v1/000/283/407/00000/6058CC14-B794-E611-994E-02163E013456.root,root://eoscms//eos/cms/store/data/Run2016H/ParkingScoutingMonitor/RAW/v1/000/283/407/00000/745CAE0B-B794-E611-A656-FA163E0818D6.root \
--timing --output none \
--offline --data \
--unprescale \
--globaltag auto:hltonline \
--max-events -1 \
> hlt.py
GetQuantiles TH1F
N = 10
#canv.SetTitle(title)
preselect += "&& (%s < %d)"%(varX,varX_max)
file_ = ROOT.TFile(fileName)
# tree = file_.Get("rootTupleTree/tree")
tree = file_.Get("tree")
tree.Draw("dijet_eta >> deta(30,0,3)","%s"%(preselect) ,"")
deta = ROOT.gDirectory.Get("deta")
deta.Draw("HIST")
x = array.array('d',[i*1./N for i in range(N)])
y = array.array('d',[0 for i in range(N)])
deta.GetQuantiles(N,y,x)
bins = list(y)
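For reference, a self-contained sketch of TH1::GetQuantiles on a dummy histogram (histogram and binning invented for illustration), giving N bin edges with equal population:
import array
import ROOT

h = ROOT.TH1F("h", "h", 100, 0.0, 3.0)
h.FillRandom("gaus", 10000)

N = 10
probs = array.array('d', [i * 1.0 / N for i in range(N)])
quantiles = array.array('d', [0.0] * N)
h.GetQuantiles(N, quantiles, probs)  # quantiles[i] = x below which a fraction probs[i] of the entries lies
bins = list(quantiles)
print(bins)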
STORM
hltIntegrationTests /dev/CMSSW_8_0_0/GRun -i root://eoscms.cern.ch//eos/cms/tier0/store/data/Run2016G/MuonEG/RAW/v1/000/280/385/00000/863D6034-FD76-E611-A0D6-02163E01391F.root -n 100 --extra '--globaltag auto:run2_hlt_GRun' &>logIntegrationTest &
hltIntegrationTests /dev/CMSSW_8_0_0/GRun -i file:///mnt/t3nfs01/data01/shome/sdonato/MuonEG_Run281707_RAW.root -n 100 --extra '--globaltag auto:run2_hlt_GRun --l1 L1Menu_Collisions2016_v7_xml' &>logIntegrationTest &
## setup cms-tsg-git cmssw repository locally
git cms-init
git cms-addpkg HLTrigger/Configuration
git cms-addpkg Configuration/HLT
git remote add STORM-cmssw https://github.com/cms-tsg-storm/cmssw.git
git remote update
git checkout 80XHLTAfterMD4Train
scram b -j8
How to use a new L1 menu from .xml
Hi,
For the L1T side, follow:
[[CMSPublic.SWGuideL1TStage2Instructions][https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideL1TStage2Instructions]]
and combine it with the HLT part:
[[CMSPublic.SWGuideGlobalHLT#Running_the_HLT_with_CMSSW_9_0_2][https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideGlobalHLT#Running_the_HLT_with_CMSSW_9_0_2]]
The new L1T menu is described here:
[[CMS.L1TriggerDPG#L1_Trigger_Menus][https://twiki.cern.ch/twiki/bin/view/CMS/L1TriggerDPG#L1_Trigger_Menus]]
The actual L1T menu in xml format for use with HLT should be extracted this way:
<developer area> setup following the above recipe
cd src
git cms-addpkg HLTrigger/Configuration
git cms-addpkg L1Trigger/L1TGlobal
mkdir -p L1Trigger/L1TGlobal/data/Luminosity/startup
scram b -j 4
cd ..
git clone [[https://github.com/cms-l1-dpg/2017-pp-menu-dev][https://github.com/cms-l1-dpg/2017-pp-menu-dev]]
cp 2017-pp-menu-dev/Apr12/L1Menu_20170412.xml src/L1Trigger/L1TGlobal/data/Luminosity/startup/
cd src/HLTrigger/Configuration/test
hltGetConfiguration /dev/CMSSW_9_0_1/HLT --globaltag 90X_upgrade2017_TSG_Hcal_V2 --path HLTriggerFirstPath,HLTriggerFinalPath,HLTAnalyzerEndpath --input root://eoscms.cern.ch//eos/cms/store/mc/PhaseIFall16DR/TT_TuneCUETP8M2T4_13TeV-powheg-pythia8/GEN-SIM-RAW/FlatPU28to62HcalNZSRAW_90X_upgrade2017_realistic_v6_C1-v2/130000/BE521173-FD10-E711-A3FE-02163E0176C2.root --mc --process MYHLT --full --offline --l1-emulator FullSimHcalTP --l1Xml L1Menu_20170412.xml --unprescale --max-events 10 --output none > hlt83X.py
i.e., note in particular --l1Xml L1Menu_20170412.xml
To get L1T prescales, the most recent info I know of is this:
[[https://hypernews.cern.ch/HyperNews/CMS/get/L1TriggerSW/725/1.html][https://hypernews.cern.ch/HyperNews/CMS/get/L1TriggerSW/725/1.html]]
Good luck
Martin
Eras at HLT
from Configuration.StandardSequences.Eras import eras
process = cms.Process( "TEST", eras.phase1Pixel)
[.....]
# Eras-based customisations
from HLTrigger.Configuration.Eras import modifyHLTforEras
modifyHLTforEras(process)
Useful strings in HLT configuration
from Configuration.AlCa.GlobalTag import GlobalTag as customiseGlobalTag
process.GlobalTag = customiseGlobalTag(process.GlobalTag, conditions = 'L1Menu_Collisions2017_dev_r5_m4_patch_921,L1TUtmTriggerMenuRcd,frontier://FrontierProd/CMS_CONDITIONS,,9999-12-31 23:59:59.000')
# enable TrigReport, TimeReport and MultiThreading
process.options = cms.untracked.PSet(
wantSummary = cms.untracked.bool( True ),
numberOfThreads = cms.untracked.uint32( 4 ),
numberOfStreams = cms.untracked.uint32( 0 ),
sizeOfStackForThreadsInKB = cms.untracked.uint32( 10*1024 )
)
from HLTrigger.Configuration.CustomConfigs import L1REPACK
process = L1REPACK(process,"uGT")
process.GlobalTag.toGet.append(
cms.PSet(
record = cms.string("L1TUtmTriggerMenuRcd"),
tag = cms.string("L1Menu_Collisions2017_dev_r5_m4_patch_921")),
)
step-1
process.hltOutput = cms.OutputModule( "PoolOutputModule",
fileName = cms.untracked.string( "trigRes_1.root" ),
outputCommands = cms.untracked.vstring(
'drop *',
'keep *TriggerResults_*_*_*',
)
)
process.Output = cms.EndPath( process.hltOutput )
step-2
import FWCore.ParameterSet.Config as cms
process = cms.Process("TRIGREP")
process.load("FWCore.MessageLogger.MessageLogger_cfi")
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(30)
)
process.options = cms.untracked.PSet(
wantSummary = cms.untracked.bool( True ),
)
process.source = cms.Source("PoolSource",
fileNames = cms.untracked.vstring(
'file:outputFULLNew_1.root'
),
)
process.hltTrigReport = cms.EDAnalyzer( "HLTrigReport",
ReferencePath = cms.untracked.string( "HLTriggerFinalPath" ),
ReferenceRate = cms.untracked.double( 100.0 ),
serviceBy = cms.untracked.string( "never" ),
resetBy = cms.untracked.string( "never" ),
reportBy = cms.untracked.string( "job" ),
HLTriggerResults = cms.InputTag( 'TriggerResults','','HLT' )
)
process.hltTrigReportRECO = cms.EDAnalyzer( "HLTrigReport",
ReferencePath = cms.untracked.string( "HLTriggerFinalPath" ),
ReferenceRate = cms.untracked.double( 100.0 ),
serviceBy = cms.untracked.string( "never" ),
resetBy = cms.untracked.string( "never" ),
reportBy = cms.untracked.string( "job" ),
HLTriggerResults = cms.InputTag( 'TriggerResults','','RECO' )
)
process.hltTrigReportTEST = cms.EDAnalyzer( "HLTrigReport",
ReferencePath = cms.untracked.string( "HLTriggerFinalPath" ),
ReferenceRate = cms.untracked.double( 100.0 ),
serviceBy = cms.untracked.string( "never" ),
resetBy = cms.untracked.string( "never" ),
reportBy = cms.untracked.string( "job" ),
HLTriggerResults = cms.InputTag( 'TriggerResults','','TEST' )
)
#process.TrigR = cms.Path(process.hltTrigReport) # + process.hltTrigReportRECO)
process.MessageLogger.cerr.HLTrigReport = cms.untracked.PSet(
limit = cms.untracked.int32(10000000),
reportEvery = cms.untracked.int32(1)
)
process.MessageLogger.cerr.FwkReport.reportEvery = 1000
from Configuration.AlCa.GlobalTag import GlobalTag
process.GlobalTag.globaltag = '123X_dataRun3_HLT_v7'
### Re-run all alca reco streams
#from Configuration.StandardSequences.AlCaRecoStreams_cff import *
process.load("Configuration.StandardSequences.AlCaRecoStreams_cff")
#process.TrigR = cms.EndPath(process.hltTrigReport + process.hltTrigReportRECO + process.hltTrigReportTEST)
https://github.com/silviodonato/usercode/blob/triggerRates/triggerRates.py
Servers
root://eoscms.cern.ch//eos/cms/store/
root://eoscms.cern.ch//eos/cms/tier0//store/
root://t3dcachedb.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/
root://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/
root://cmseos.fnal.gov//eos/uscms/store/
PSI
https://wiki.chipp.ch/twiki/bin/view/CmsTier3/HowToAccessSe
internal XROOTD:
INFN (Legnaro): xrdfs xrootd-cms.infn.it:1194 ls /store/user/sdonato/
Pisa: xrdfs stormgf1.pi.infn.it ls /store/user/sdonato/
Legnaro: xrdfs t2-xrdcms.lnl.infn.it:7070 ls -l -u /store/user/sdonato/
root://t3dcachedb.psi.ch:1094//
PSI: xrdfs t3se01.psi.ch ls -l -u /store/user/sdonato
CSCS: xrdfs storage01.lcg.cscs.ch ls -l -u /pnfs/lcg.cscs.ch/cms/trivcat/store/ or xrdfs cms02.lcg.cscs.ch ls -l -u /store/user/sdonato
Global: xrdfs cms-xrd-global.cern.ch ls -l -u /store/user/sdonato
---------------------------------------------------------------
=====================================================
SRM
PISA srmls srm://stormfe1.pi.infn.it:8444/srm/managerv2?SFN=/cms/store/user/sdonato/
Delete folder from T2 PISA srmrmdir -recursive=true -2 "srm://stormfe1.pi.infn.it:8444/srm/managerv2?SFN=/cms/store/user/sdonato/pippoepluto"
=========
LEGNARO srmls srm://t2-srm-02.lnl.infn.it:8443/srm/managerv1?SFN=/pnfs/lnl.infn.it/data/cms/store/user/sdonato
===========
Check
http://wlcg-sam-cms.cern.ch/templates/ember/#/plot?flavours=XROOTD&profile=CMS_CRITICAL_FULL&sites=T2_IT_Pisa
Sites->Service Flavour = XROOTD -> Show Results
Link taken from
https://twiki.cern.ch/twiki/bin/view/CMSPublic/CompOpsAAATroubleshootingGuide
Check2
https://cmsweb.cern.ch/sitedb/prod/sites/T2_IT_Pisa
https://gitlab.cern.ch/SITECONF/T2_US_WISCONSIN/-/blob/master/storage.json#L25
From T2 CSCS:
root://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/sdonato/tth/Oct19/BTagCSV/Oct19/161019_194157/0002/tree_2552.root
copy a folder from T3 to local:
xrdfs storage01.lcg.cscs.ch ls -l -u /pnfs/lcg.cscs.ch/cms/trivcat/store/user/sdonato/triggerNtupleAOD_FWLite_2016_v3/JetHT/triggerNtupleAOD_FWLite_2016_v3_JetHT_Run2016C-PromptReco-v2/160713_132405/0000/ |\
awk '{ print "xrdcp "$5" "ENVIRON["PWD"] }' |\
grep ".root" | grep -v "log\|failed" \
> list
parallel -j16 < list >& log &
Copy a local folder to PSI T3
xrdcp --force --recursive -p Jun3/ root://t3dcachedb.psi.ch:1094///pnfs/psi.ch/cms/trivcat/store/user/sdonato/tth_skim_June4/ >& logCopy &
hadd directly from the T2_CH_CSCS or T3_CH_PSI
code
echo "hadd test.root \\" > list && \
xrdfs storage01.lcg.cscs.ch ls -l -u /pnfs/lcg.cscs.ch/cms/trivcat/store/user/sdonato/triggerNtupleAOD_FWLite_2016_v2p3/JetHT/triggerNtupleAOD_FWLite_2016_v2p3_JetHT_Run2016G-PromptReco-v1/160820_164414/0000/ | \
awk '{ print $5 " \\" }' | \
grep ".root" >> list && \
echo "" >> list && \
source list
Search/find a file recursively using xrdfs (xrootd)
See
https://github.com/silviodonato/DijetRootTreeAnalyzer/blob/master/silvio/oldCode/make_list_file.py
and then
echo "hadd test.root \\" > list && \
cat list_JetHT.txt | \
awk '{ print $1 " \\" }' | \
grep ".root" >> list && \
echo "" >> list && \
source list
Open a file from EOS Tier0 T0_CH_CERN
root -l root://eoscms.cern.ch//eos/cms/tier0//store/data/Run2016F/BTagCSV/RAW/v1/000/278/308/00000/B81CBFE0-D85B-E611-915E-02163E011CF7.root
GIT conflict solving
Step 1: From your project repository, check out a new branch and test the changes.
git checkout -b jpata-meanalysis-80x-V24 ttH80X
git pull https://github.com/jpata/tthbb13.git meanalysis-80x-V24
Step 2: Merge the changes and update on
GitHub.
git checkout ttH80X
git merge --no-ff jpata-meanalysis-80x-V24
git push origin ttH80X
Automatic tests in gitlab (PR test, pipeline, CI/CD, ...)
See
https://gitlab.cern.ch/sdonato/xpog-json-test/-/blob/master/.gitlab-ci.yml
GIT conflict solving (gitlab recipes)
Step 1. Fetch and check out the branch for this merge request
git fetch origin
git checkout -b FH_dev_DS origin/FH_dev_DS
Step 2. Review the changes locally
Step 3. Merge the branch and fix any conflicts that come up
git checkout FH_dev_SD
git merge --no-ff FH_dev_DS
Step 4. Push the result of the merge to
GitLab
git push origin FH_dev_SD
GIT solving stupid conflicts
https://easyengine.io/tutorials/git/git-resolve-merge-conflicts/
grep -lr '<<<<<<<' . | xargs git checkout --theirs
grep -lr '<<<<<<<' . | xargs git checkout --ours
GIT solving stupid conflicts
git mergetool
GIT try to merge locally two pull request changing the same file
git cms-init
git fetch official-cmssw pull/30013/head:PR30013
git checkout PR30013
git pull official-cmssw pull/29630/head
GIT delete all old tags
http://www.alwaystwisted.com/articles/deleting-git-tags-locally-and-on-github
For CMSSW I used:
git tag -d `git tag | grep -E '.' | grep CMSSW | grep -v CMSSW_8`
git ls-remote --tags my-cmssw | awk '/^(.*)(\s+)(.*[a-zA-Z0-9])$/ {print ":" $2}' | xargs git push my-cmssw
Useful CMSSW GitHub commands
git cms-fetch-pr
git cms-show-pr
git cms-cherry-pick-pr
git cms-checkdeps -A -a
git cms-init
https://github.com/cms-sw/cms-git-tools/pull/110
https://github.com/fwyzard/cms-git-tools/blob/master/git-cms-checkdeps
GIT setup a cmssw repository locally
## setup cms-tsg-git cmssw repository locally
git cms-init
git remote add STORM-cmssw https://github.com/cms-tsg-storm/cmssw.git
git checkout 80XHLTAfterMD4Train
git cms-checkdeps -a
scram b -j8
OLD
## setup cms-tsg-git cmssw repository locally
git cms-init
git cms-addpkg HLTrigger/Configuration
git cms-addpkg Configuration/HLT
git remote add STORM-cmssw https://github.com/cms-tsg-storm/cmssw.git
git checkout 80XHLTAfterMD4Train
scram b -j8
Grid control
./grid-control/go.py confs/projectSkim.conf -cG ## use -s to skip submission
./grid-control/go.py confs/projectSkim.conf -d ALL
Grid control
try this to see the state of jobs: qstat -u "*" | tail -n +2 | awk '{print $4,$5}' | tail -n +2 | sort | uniq -c | sort
Example of code to run with grid-control (add several variables into TTree)
see
https://github.com/silviodonato/usercode/tree/addCSVtth
How to submit jobs at T3 without grid-control (qsub, bsub)
see
https://wiki.chipp.ch/twiki/bin/view/CmsTier3/HowToSubmitJobs#Queues
qsub -q short.q example_jobscript.sh
Example:
#!/bin/bash
source /cvmfs/cms.cern.ch/cmsset_default.sh
export SCRAM_ARCH=slc6_amd64_gcc530
cd /mnt/t3nfs01/data01/shome/sdonato/scouting/CMSSW_8_0_30/src/CMSDIJET/DijetRootTreeAnalyzer
eval `scramv1 runtime -sh`
./main config/lists_silvio_signal/splitted__20180202_212308/list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1.txt batch/mycutFile_20180202_212308__silvio_cutFile_mainDijetCaloScoutingSelection_mc.txt dijetscouting/events /tmp/rootfile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1 /tmp/cutEfficiencyFile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1 >& /mnt/t3nfs01/data01/shome/sdonato/scouting/CMSSW_8_0_30/src/CMSDIJET/DijetRootTreeAnalyzer/batch/SIGNALS_20180202_212308/logfile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1.log
echo xrdcp --force --recursive /tmp/rootfile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1_reduced_skim.root output_20180202_212308
xrdcp --force --recursive /tmp/rootfile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1_reduced_skim.root output_20180202_212308
xrdcp --force --recursive /tmp/rootfile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1_reduced_skim.root output_20180202_212308
xrdcp --force --recursive /tmp/rootfile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1.root output_20180202_212308
xrdcp --force --recursive /tmp/cutEfficiencyFile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1.dat output_20180202_212308
rm /tmp/rootfile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1_reduced_skim.root
rm /tmp/rootfile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1.root
rm /tmp/cutEfficiencyFile_list_VectorDiJet1Jet_125_13TeV_SIGNALS_20180202_212308_1.dat
Where to find the proxy of T2 and T3
cat /cvmfs/cms.cern.ch/SITECONF/T2_CH_CSCS/PhEDEx/storage.xml
Command for jobs in batch system queue
qstat
qdel -u sdonato
qstat -explain E -j 2239717
Heppy VHbb Hbb ntuplizer
https://twiki.cern.ch/twiki/bin/viewauth/CMS/VHiggsBB
https://twiki.cern.ch/twiki/bin/view/CMS/VHiggsBBCodeUtils
VHbb analysis (ETH)
Install combine and Xbb code
setenv SCRAM_ARCH slc6_amd64_gcc481
cmsrel CMSSW_7_1_5 ### must be a 7_1_X release >= 7_1_5; (7.0.X and 7.2.X are NOT supported either)
cd CMSSW_7_1_5/src
cmsenv
## Install combine (from https://twiki.cern.ch/twiki/bin/viewauth/CMS/SWGuideHiggsAnalysisCombinedLimit#ROOT6_SLC6_release_CMSSW_7_4_X_N)
git clone https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit.git HiggsAnalysis/CombinedLimit
cd HiggsAnalysis/CombinedLimit
git fetch origin
git checkout v5.0.3
git checkout -f slc6-root5.34.17
git pull origin
scramv1 b clean
scramv1 b -j32
scram b
cd ../..
## Install ETH VHbb code
git clone -b V21 https://github.com/silviodonato/Xbb.git Xbb
cd Xbb/python
- Prepare a folder with links to the MC and data samples:
mkdir MCAndDataLinks
cd MCAndDataLinks/
cp -r -s /gpfs/ddn/srm/cms/store/user/arizzi/VHBBHeppyV21/* .
rm -r TT_TuneCUETP8M1_13TeV-powheg-pythia8/VHBB_HEPPY_V21_*
rm -r DYBJetsToLL_M-50_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/VHBB_HEPPY_V21_*
rm -r DYBJetsToNuNu_Zpt-40toInf_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/VHBB_HEPPY_V21_*
rm -r WBJetsToLNu_Wpt-40toInf_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/VHBB_HEPPY_V21_*
cp -r -s /gpfs/ddn/srm/cms/store/user/arizzi/VHBBHeppyV21b/* .
- Edit ZvvHbb13TeVconfig/paths.ini and update the variables *Wdir* and *samplepath*
NB: at the moment, we have to use MC from V14, Run2015C_25ns-05Oct2015-v1 and Run2015D-05Oct2015-v1 from V15, and Run2015D-PromptReco-v4 from V15a. The data correspond to 2.19 fb-1 with an uncertainty of 6%.
- Check ZvvHbb13TeVconfig/paths.ini, in particular the cuts, samples, newprefix, lumi, and cross-sections.
mkdir ../env
mkdir ../weights
mkdir ../env/syst/
root -l ../interface/VHbbNameSpace.h++
- run ZvvHbb13TeVSteps/Step0_prep.sh
Get Scale Factors
To get the scale factors:
1) produce a datacard (unblinded!) in the control regions (TT, W+jets, Z+Jets)
2) combine the three datacards with:
combineCards.py vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_W_13TeV.txt vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_Z_13TeV.txt vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_TT_13TeV.txt > vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_13TeV.txt
3) if needed, change the datacard:
cp vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_13TeV.txt test.txt
##change test.txt
4) run the maximum likelihood fit and get the SFs:
combine -M MaxLikelihoodFit test.txt --saveShapes --saveWithUncertainties -v 3 --forceRecreateNLL --expectSignal=0
5) in one line (if the datacard is already produced properly)
combineCards.py vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_W_13TeV.txt vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_Z_13TeV.txt vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_TT_13TeV.txt > vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_13TeV.txt && cat vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_13TeV.txt SFs > test.txt && combine -M MaxLikelihoodFit test.txt --saveShapes --saveWithUncertainties -v 3 --forceRecreateNLL --expectSignal=0
6) the resulting test.txt:
Combination of vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_W_13TeV.txt vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_Z_13TeV.txt vhbb_DC_TH_BDT_M125_ZnnHbbHighPt_CR_TT_13TeV.txt
imax 3 number of bins
jmax 12 number of processes minus 1
kmax 0 number of nuisance parameters
----------------------------------------------------------------------------------------------------------------------------------
shapes * ch1 vhbb_TH_BDT_M125_ZnnHbbHighPt_CR_W_13TeV.root Znn_13TeV/$PROCESS Znn_13TeV/$PROCESS$SYSTEMATIC
shapes * ch2 vhbb_TH_BDT_M125_ZnnHbbHighPt_CR_Z_13TeV.root Znn_13TeV/$PROCESS Znn_13TeV/$PROCESS$SYSTEMATIC
shapes * ch3 vhbb_TH_BDT_M125_ZnnHbbHighPt_CR_TT_13TeV.root Znn_13TeV/$PROCESS Znn_13TeV/$PROCESS$SYSTEMATIC
----------------------------------------------------------------------------------------------------------------------------------
bin ch1 ch2 ch3
observation 2457.0 6601.0 3803.0
----------------------------------------------------------------------------------------------------------------------------------
bin ch1 ch1 ch1 ch1 ch1 ch1 ch1 ch1 ch1 ch1 ch1 ch1 ch1 ch2 ch2 ch2 ch2 ch2 ch2 ch2 ch2 ch2 ch2 ch2 ch2 ch2 ch3 ch3 ch3 ch3 ch3 ch3 ch3 ch3 ch3 ch3 ch3 ch3 ch3
process ZH ggZH WH s_Top Wj0b TT Wj1b Zj0b Zj1b VVLF Wj2b Zj2b VVHF ZH ggZH WH s_Top Wj0b TT Wj1b Zj0b Zj1b VVLF Wj2b Zj2b VVHF ZH ggZH WH s_Top Wj0b TT Wj1b Zj0b Zj1b VVLF Wj2b Zj2b VVHF
process -2 -1 0 1 2 3 4 5 6 7 8 9 10 -2 -1 0 1 2 3 4 5 6 7 8 9 10 -2 -1 0 1 2 3 4 5 6 7 8 9 10
rate 0.0004872 0.0003357 2.2899 137.2405 1143.1896 774.6907 94.5279 0.0172 2.3509 23.4767 57.1053 1.0825 6.4710 2.5064 0.8310 0.6173 120.2839 1615.2660 774.0349 129.2801 2064.1787 336.0279 70.5631 91.1960 278.5828 32.1379 0.000322 0.0007272 1.1925 356.4863 221.7242 3336.7743 106.1835 0.0338 3.2373 3.6182 103.8597 4.1226 6.0862
----------------------------------------------------------------------------------------------------------------------------------
CMS_vhbb_TT_SF_Znn_13TeV rateParam ch1 TT 1
CMS_vhbb_Wj0b_SF_Znn_13TeV rateParam ch1 Wj0b 1
CMS_vhbb_Wj1b_SF_Znn_13TeV rateParam ch1 Wj1b 1
CMS_vhbb_Wj2b_SF_Znn_13TeV rateParam ch1 Wj2b 1
CMS_vhbb_Zj0b_SF_Znn_13TeV rateParam ch1 Zj0b 1
CMS_vhbb_Zj1b_SF_Znn_13TeV rateParam ch1 Zj1b 1
CMS_vhbb_Zj2b_SF_Znn_13TeV rateParam ch1 Zj2b 1
CMS_vhbb_TT_SF_Znn_13TeV rateParam ch2 TT 1
CMS_vhbb_Wj0b_SF_Znn_13TeV rateParam ch2 Wj0b 1
CMS_vhbb_Wj1b_SF_Znn_13TeV rateParam ch2 Wj1b 1
CMS_vhbb_Wj2b_SF_Znn_13TeV rateParam ch2 Wj2b 1
CMS_vhbb_Zj0b_SF_Znn_13TeV rateParam ch2 Zj0b 1
CMS_vhbb_Zj1b_SF_Znn_13TeV rateParam ch2 Zj1b 1
CMS_vhbb_Zj2b_SF_Znn_13TeV rateParam ch2 Zj2b 1
CMS_vhbb_TT_SF_Znn_13TeV rateParam ch3 TT 1
CMS_vhbb_Wj0b_SF_Znn_13TeV rateParam ch3 Wj0b 1
CMS_vhbb_Wj1b_SF_Znn_13TeV rateParam ch3 Wj1b 1
CMS_vhbb_Wj2b_SF_Znn_13TeV rateParam ch3 Wj2b 1
CMS_vhbb_Zj0b_SF_Znn_13TeV rateParam ch3 Zj0b 1
CMS_vhbb_Zj1b_SF_Znn_13TeV rateParam ch3 Zj1b 1
CMS_vhbb_Zj2b_SF_Znn_13TeV rateParam ch3 Zj2b 1
SFs txt file:
SF_TT rateParam ch1 TT 1
SF_Wjl rateParam ch1 Wj0b 1
SF_Wjb rateParam ch1 Wj1b 1
SF_Wjb rateParam ch1 Wj2b 1
SF_Zjl rateParam ch1 Zj0b 1
SF_Zjb rateParam ch1 Zj1b 1
SF_Zjb rateParam ch1 Zj2b 1
SF_TT rateParam ch2 TT 1
SF_Wjl rateParam ch2 Wj0b 1
SF_Wjb rateParam ch2 Wj1b 1
SF_Wjb rateParam ch2 Wj2b 1
SF_Zjl rateParam ch2 Zj0b 1
SF_Zjb rateParam ch2 Zj1b 1
SF_Zjb rateParam ch3 Zj2b 1
SF_TT rateParam ch3 TT 1
SF_Wjl rateParam ch3 Wj0b 1
SF_Wjb rateParam ch3 Wj1b 1
SF_Wjb rateParam ch3 Wj2b 1
SF_Zjl rateParam ch3 Zj0b 1
SF_Zjb rateParam ch3 Zj1b 1
SF_Zjb rateParam ch3 Zj2b 1
SF_QCD rateParam ch1 QCD 1
SF_QCD rateParam ch2 QCD 1
SF_QCD rateParam ch3 QCD 1
SF_QCD rateParam ch4 QCD 1
SF_TT rateParam ch4 TT 1
SF_Wjl rateParam ch4 Wj0b 1
SF_Wjb rateParam ch4 Wj1b 1
SF_Wjb rateParam ch4 Wj2b 1
SF_Zjl rateParam ch4 Zj0b 1
SF_Zjb rateParam ch4 Zj1b 1
SF_Zjb rateParam ch4 Zj2b 1
ttH FH, use the ddQCD normalizations as free parameters:
ddQCD_norm_fh_j7_t4 rateParam ch1 ddQCD 1
ddQCD_norm_fh_j8_t4 rateParam ch2 ddQCD 1
ddQCD_norm_fh_j9_t4 rateParam ch3 ddQCD 1
ddQCD_norm_fh_j7_t3 rateParam ch4 ddQCD 1
ddQCD_norm_fh_j8_t3 rateParam ch5 ddQCD 1
ddQCD_norm_fh_j9_t3 rateParam ch6 ddQCD 1
or
bgnorm_ddQCD_ch1 lnN - - - - - - - - - - - 1.05 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
bgnorm_ddQCD_ch2 lnN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 1.05 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
bgnorm_ddQCD_ch3 lnN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 1.05 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
bgnorm_ddQCD_ch4 lnN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 1.05 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
bgnorm_ddQCD_ch5 lnN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 1.05 - - - - - - - - - - - - - - - - - - - - - - - - - - - -
bgnorm_ddQCD_ch6 lnN - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 1.05 - - - - - - - -
Link:
https://twiki.cern.ch/twiki/bin/viewauth/CMS/SWGuideHiggsAnalysisCombinedLimit
https://twiki.cern.ch/twiki/bin/view/CMS/HiggsWG/HiggsPAGPreapprovalChecks
https://indico.cern.ch/event/456547/contribution/1/attachments/1188554/1724450/diagnostics.pdf
tutorial:
https://indico.cern.ch/event/456547/
print nuisance:
python $CMSSW_BASE/src/HiggsAnalysis/CombinedLimit/test/systematicsAnalyzer.py shapes_group_group_fh_noMCstat.txt -f html > shapes_group_group_fh_noMCstat.html
combineCards.py for simultaneous fit of SRs and CRs
mkdir SixCRs
cp *_SF_* SixCRs
cd SixCRs
combineCards.py \
vhbb_DC_TH_SF_TTbarTight.txt \
vhbb_DC_TH_SF_WLight.txt \
vhbb_DC_TH_SF_Wbb.txt \
vhbb_DC_TH_SF_ZLight.txt \
vhbb_DC_TH_SF_Zbb.txt \
vhbb_DC_TH_SF_QCD.txt \
> vhbb_DC_TH_SF.txt
combine -M MaxLikelihoodFit vhbb_DC_TH_SF.txt -v 3 --plots >& logMaxLikelihoodFit.txt
cat logMaxLikelihoodFit.txt | grep Iteration -B100 | grep SF_ -A1 > SFs.txt
combine -M Asymptotic -t -1 vhbb_DC_TH_SF.txt >& logAsymptotic.txt
python ../diffNuisances.py mlfit.root -f text >& nuisance.txt
python ../diffNuisances.py mlfit.root -f twiki >& nuisanceTwiki.txt
cd ..
## ... or ...
mkdir FourCRs
cp *_SF_* FourCRs
cd FourCRs
combineCards.py \
vhbb_DC_TH_SF_TTbar.txt \
vhbb_DC_TH_SF_W.txt \
vhbb_DC_TH_SF_Z.txt \
vhbb_DC_TH_SF_QCD.txt \
> vhbb_DC_TH_SF.txt
combine -M MaxLikelihoodFit vhbb_DC_TH_SF.txt -v 3 --plots >& logMaxLikelihoodFit.txt
cat logMaxLikelihoodFit.txt | grep Iteration -B100 | grep SF_ -A1 > SFs.txt
combine -M Asymptotic -t -1 vhbb_DC_TH_SF.txt >& logAsymptotic.txt
python ../diffNuisances.py mlfit.root -f text >& nuisance.txt
python ../diffNuisances.py mlfit.root -f twiki >& nuisanceTwiki.txt
cd ..
https://twiki.cern.ch/twiki/bin/view/CMS/VHiggsBBCodeUtils#Usefull_combine_commands
###########################################################################################
mkdir FitTwoBinsTight
cp *_13TeVTight*Pt_* FitTwoBinsTight
cd FitTwoBinsTight
combineCards.py \
Znn_HighPt_CR_TT=vhbb_DC_TH_Znn_13TeVTightHighPt_TTbarTight.txt \
Znn_HighPt_CR_WLight=vhbb_DC_TH_Znn_13TeVTightHighPt_WLight.txt \
Znn_HighPt_CR_Wbb=vhbb_DC_TH_Znn_13TeVTightHighPt_Wbb.txt \
Znn_HighPt_CR_ZLight=vhbb_DC_TH_Znn_13TeVTightHighPt_ZLight.txt \
Znn_HighPt_CR_Zbb=vhbb_DC_TH_Znn_13TeVTightHighPt_Zbb.txt \
Znn_HighPt_CR_QCD=vhbb_DC_TH_Znn_13TeVTightHighPt_QCD.txt \
Znn_LowPt_CR_TT=vhbb_DC_TH_Znn_13TeVTightLowPt_TTbarTight.txt \
Znn_LowPt_CR_WLight=vhbb_DC_TH_Znn_13TeVTightLowPt_WLight.txt \
Znn_LowPt_CR_Wbb=vhbb_DC_TH_Znn_13TeVTightLowPt_Wbb.txt \
Znn_LowPt_CR_ZLight=vhbb_DC_TH_Znn_13TeVTightLowPt_ZLight.txt \
Znn_LowPt_CR_Zbb=vhbb_DC_TH_Znn_13TeVTightLowPt_Zbb.txt \
Znn_LowPt_CR_QCD=vhbb_DC_TH_Znn_13TeVTightLowPt_QCD.txt \
> CRs.txt
combineCards.py \
Znn_HighPt_CR_TT=vhbb_DC_TH_Znn_13TeVTightHighPt_TTbarTight.txt \
Znn_HighPt_CR_WLight=vhbb_DC_TH_Znn_13TeVTightHighPt_WLight.txt \
Znn_HighPt_CR_Wbb=vhbb_DC_TH_Znn_13TeVTightHighPt_Wbb.txt \
Znn_HighPt_CR_ZLight=vhbb_DC_TH_Znn_13TeVTightHighPt_ZLight.txt \
Znn_HighPt_CR_Zbb=vhbb_DC_TH_Znn_13TeVTightHighPt_Zbb.txt \
Znn_HighPt_CR_QCD=vhbb_DC_TH_Znn_13TeVTightHighPt_QCD.txt \
Znn_HighPt_SR=vhbb_DC_TH_Znn_13TeVTightHighPt_Signal.txt \
Znn_LowPt_CR_TT=vhbb_DC_TH_Znn_13TeVTightLowPt_TTbarTight.txt \
Znn_LowPt_CR_WLight=vhbb_DC_TH_Znn_13TeVTightLowPt_WLight.txt \
Znn_LowPt_CR_Wbb=vhbb_DC_TH_Znn_13TeVTightLowPt_Wbb.txt \
Znn_LowPt_CR_ZLight=vhbb_DC_TH_Znn_13TeVTightLowPt_ZLight.txt \
Znn_LowPt_CR_Zbb=vhbb_DC_TH_Znn_13TeVTightLowPt_Zbb.txt \
Znn_LowPt_CR_QCD=vhbb_DC_TH_Znn_13TeVTightLowPt_QCD.txt \
Znn_LowPt_SR=vhbb_DC_TH_Znn_13TeVTightLowPt_Signal.txt \
> SRandCRs.txt
combine -M MaxLikelihoodFit CRs.txt -v 3 --plots --rMin -20 --rMax 20 --robustFit 1 >& CRlogMaxLikelihoodFit.txt
combine -M Asymptotic -t -1 CRs.txt >& CRlogAsymptotic.txt
combine -M Asymptotic -t -1 SRandCRs.txt | grep 'Expected 50.0%' | awk '{print $5 }' >& SR_expLimit.txt
python ../diffNuisances.py mlfit.root -f text >& CRnuisance.txt
python ../diffNuisances.py mlfit.root -f twiki >& CRnuisanceTwiki.txt
### unblinded commands!! ###
combine -M MaxLikelihoodFit SRandCRs.txt -v 3 --plots --rMin -20 --rMax 20 --robustFit 1 >& SRandCRslogMaxLikelihoodFit.txt
combine -M Asymptotic -t -1 SRs.txt >& SRlogAsymptotic.txt
python ../diffNuisances.py mlfit.root -f text >& SRandCRsnuisance.txt
python ../diffNuisances.py mlfit.root -f twiki >& SRandCRsnuisanceTwiki.txt
combine -M ProfileLikelihood -m 125 --signif --pvalue -t -1 --expectSignal=1 SRandCRs.txt
combine -M ProfileLikelihood -m 125 --signif --pvalue -t -1 --toysFreq --expectSignal=1 SRandCRs.txt
combine -M MaxLikelihoodFit -m 125 -t -1 --expectSignal=1 --robustFit=1 --stepSize=0.05 --rMin=-5 --rMax=5 --saveNorm --saveShapes --plots
cd ..
How to get impact plot (rho, pulls)
See
https://indico.cern.ch/event/456547/contributions/1126035/attachments/1188389/1724407/Combine-Tutorial-Impacts.pdf
Install
http://cms-analysis.github.io/CombineHarvester/
(and
https://twiki.cern.ch/twiki/bin/viewauth/CMS/SWGuideHiggsAnalysisCombinedLimit)
text2workspace.py shapes_group_group_fh_noMCstat.txt && \
combineTool.py -M Impacts -d shapes_group_group_fh_noMCstat.root -m 125 --robustFit 1 --doInitialFit >& logInitFit && \
combineTool.py -M Impacts -d shapes_group_group_fh_noMCstat.root -m 125 --robustFit 1 --doFits --parallel 4 >& logFit && \
combineTool.py -M Impacts -d shapes_group_group_fh_noMCstat.root -m 125 -o impacts.json >& logImpacts && \
plotImpacts.py -i impacts.json -o impacts >& logImpactsPdf &
Check nuisance parameters
cd $CMSSW_BASE/src
git clone https://github.com/cms-analysis/CombineHarvester.git CombineHarvester
cd CombineHarvester
scram b -j 6
cd -
text2workspace.py SRandCRs.txt -m 125
combineTool.py -M Impacts -d SRandCRs.root -m 125 --doInitialFit --robustFit 1 --rMax 100 --rMin -100
combineTool.py -M Impacts -d SRandCRs.root -m 125 --robustFit 1 --doFits --rMax 100 --rMin -100
combineTool.py -M Impacts -d SRandCRs.root -m 125 -o impacts.json
plotImpacts.py -i impacts.json -o impacts
##########################
#likelihood scan of nuisance parameter
combine SRandCRs.txt -M MultiDimFit --algo grid --setPhysicsModelParameterRanges CMS_vhbb_puWeight=-2,2 --redefineSignalPOI CMS_vhbb_puWeight --freezeNuisances r -n MyScan
root -l higgsCombineMyScan.MultiDimFit.mH120.root
limit->Draw("2*deltaNLL:CMS_vhbb_puWeight","quantileExpected<1","L")
#likelihood scan plot
combine SRandCRsnuisance.txt -M MultiDimFit --algo grid --setPhysicsModelParameterRanges r=-2,2 --redefineSignalPOI r -n MyScan
root -l higgsCombineMyScan.MultiDimFit.mH120.root
limit->Draw("2*deltaNLL:r","quantileExpected<1","L")
Debug
combine -M ProfileLikelihood cards_qq_freq//dijet_combine_qq_460_lumi-27.637_CaloTrijet2016.txt --minimizerTolerance 0.000001 --setPhysicsModelParameterRanges r=0,20.000000 --saveWorkspace --freezeNuisances p3_CaloTrijet2016,p4_CaloTrijet2016,jer,jes -v 3 >& log
Redo MHT on-fly
tree->Draw("(Sum$(Jet_pt * sin(Jet_phi) * ( abs(Jet_eta)<2.4 && Jet_pt>30 ) )**2 +Sum$(Jet_pt * cos(Jet_phi) * ( abs(Jet_eta)<2.4 && Jet_pt>30 ) )**2)**0.5 : mhtJet30")
tree->Draw("((Sum$(selLeptons_pt * sin(selLeptons_phi)) + Sum$(Jet_pt * sin(Jet_phi) * ( abs(Jet_eta)<2.4 && Jet_pt>30 )))**2 + (Sum$(selLeptons_pt * cos(selLeptons_phi)) + Sum$(Jet_pt * cos(Jet_phi) * ( abs(Jet_eta)<2.4 && Jet_pt>30 )))**2)**0.5 : mhtJet30")
Redo HT on-fly
tree->Draw("Sum$(Jet_pt * ( abs(Jet_eta)<2.4 && Jet_pt>30 && Jet_puId==1 )) + Sum$(selLeptons_pt): htJet30")
Stack from DC - datacard
python stack_from_dc.py \
-D ../Limit_expertAllnominal/vhbb_DC_TH_ZnnHighPt_13TeV.txt \
--bin ZnnHighPt_13TeV \
-M ../Limit_expertAllnominal/mlfit.root \
-F b \
-V ZvvBDT \
-C ZvvHbb13TeVconfig/general.ini \
-C ZvvHbb13TeVconfig/vhbbPlotDef.ini \
-C ZvvHbb13TeVconfig/plots.ini \
-C ZvvHbb13TeVconfig/paths.ini \
-C ZvvHbb13TeVconfig/datacards.ini
Auto-crop image from bash
convert -trim image.jpg image.jpg
Lecture on ATLAS/CMS Statistical Analysis Methods
combine likelihood p-value Berger
https://indico.cern.ch/event/559774/contributions/2656973/attachments/1510171/2355090/ICNFP2017-Berger.pdf
Count number of files in a folder/directory (bash)
ls -1 /usr/bin/ | wc -l
Jet Energy Correction (JEC)
https://twiki.cern.ch/twiki/bin/view/CMS/IntroToJEC
MET Missing ET Type-I Type-II Type-0 (Type1, Type2, Type0)
https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookMetAnalysis
https://twiki.cern.ch/twiki/bin/view/CMS/METType1Type2Formulae
Fake MET check plots
FakeMET_count->GetBinContent(1)/Count->GetBinContent(1)
5.27882008315988749e-01
tree->Draw("met_pt","FakeMET_met==met_pt"); tree->Draw("met_pt","5.27882008315988749e-01","same")
Paper PAS AN Analysis Note GitLab TdrProcessing
https://twiki.cern.ch/twiki/bin/view/CMS/Internal/TdrProcessing
SSH Key
https://docs.gitlab.com/ee/ssh/README.html#adding-an-ssh-key-to-your-gitlab-account
ssh git@gitlab.cern.ch -T -p 7999
Download an AN from SVN for editing
See the TWiki
https://twiki.cern.ch/twiki/bin/view/CMS/Internal/TdrProcessing (it includes also the gitlab recipes)
svn co -N svn+ssh://sdonato@svn.cern.ch/reps/tdr2
cd tdr2/
svn update utils
svn update -N notes
svn update notes/AN-18-269
cd notes
eval `./tdr runtime -csh`
cd AN-18-269/trunk
tdr -style an b AN-18-269
#COMMITS
cd [papers|notes]/XXX-YY-NNN/trunk/
svn add file1 file2 # tell svn to include these files for next check-in
svn ci # check-in (=save) everything that is new
Comparison HLT run-1 vs run-2 Phys14 samples PU30BX50
hltGetConfiguration \
/dev/CMSSW_7_2_0/2014/V10 \
--unprescale \
--path HLTriggerFirstPath,HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6,HLTriggerFinalPath \
--globaltag PHYS14_ST_V1 \
--l1 L1Menu_Collisions2015_25nsStage1_v4 \
--input /store/mc/Phys14DR/QCD_Pt-170to300_Tune4C_13TeV_pythia8/GEN-SIM-RAW/AVE30BX50_tsg_castor_PHYS14_ST_V1-v1/00000/025271B2-DAA8-E411-BB6E-002590D94F8E.root \
--max-events -1 \
--full --offline --mc --process TEST --no-output --timing \
> hlt.py
Best RelVal HLT b-tag and Higgs performance DQM plots
/RelValTTbar_13_HS/CMSSW_7_6_0-PU25ns_76X_mcRun2_asymptotic_v11_TSGstudy-v1/DQMIO
https://goo.gl/PfWGoU
Validation frozen menu
hltGetConfiguration /online/collisions/2015/25ns14e33/v4.2/HLT/V1 --full --data --process TEST --globaltag auto:run2_hlt_GRun --l1-emulator 'stage1,gt' --l1=L1Menu_Collisions2015_25nsStage1_v5 --input root://cms-xrd-global.cern.ch//store/data/Run2015C/HLTPhysics/RAW/v1/000/254/833/00000/14D08A38-4149-E511-89B9-02163E0146C8.root --max-events -1 --no-output > hlt.py
AFS website
from
https://webservices.web.cern.ch/webservices/Services/ManageSite/Default.aspx?SiteName=sdonato
set the AFS path and then set permissions with (https://espace.cern.ch/webservices-help/websitemanagement/ConfiguringAFSSites/Pages/PermissionsforyourAFSfolder.aspx):
fs setacl webDirectoryRoot webserver:afs read
afind webDirectoryRoot -t d -e "fs setacl -dir {} -acl webserver:afs read"
Browse folder (https://espace.cern.ch/webservices-help/websitemanagement/ConfiguringAFSSites/Pages/DirectoryBrowsingforAFSsites.aspx):
echo "Options +Indexes" > .htaccess
Override L1 menu with XML
git cms-addpkg L1TriggerConfig/L1GtConfigProducers
cp /afs/cern.ch/user/t/tmatsush/public/L1Menu/L1Menu_Collisions2015_25nsStage1_v5/xml/L1Menu_Collisions2015_25nsStage1_v5_L1T_Scales_20141121.xml L1TriggerConfig/L1GtConfigProducers/data/Luminosity/startup/
scram b -j8
hltIntegrationTests /users/sdonato/TriggerWH/5p6E33Menu/V11 -s /dev/CMSSW_7_4_0/HLT -i root://xrootd.ba.infn.it///store/relval/CMSSW_7_4_8_patch1/RelValTTbarLepton_13/GEN-SIM-DIGI-RAW-HLTDEBUG/MCRUN2_74_V11_mulTrh-v1/00000/08570001-3B3C-E511-8F05-0025905A612C.root -x "-globaltag MCRUN2_74_V11 --l1Xml L1Menu_Collisions2015_25nsStage1_v5_L1T_Scales_20141121.xml"
Filter LumiSection and run using JSON (>74X)
import FWCore.PythonUtilities.LumiList as LumiList
process.source.lumisToProcess = LumiList.LumiList(filename = 'goodList.json').getVLuminosityBlockRange()
Repeated Event PoolSource
https://github.com/cms-sw/cmssw/blob/master/IOPool/Input/test/test_repeating_cfg.py
import FWCore.ParameterSet.Config as cms
process = cms.Process("TEST")
process.source = cms.Source("RepeatingCachedRootSource", fileName = cms.untracked.string("file:PoolInputRepeatingSource.root"), repeatNEvents = cms.untracked.uint32(2))
process.maxEvents.input = 10000
process.checker = cms.EDAnalyzer("OtherThingAnalyzer")
#process.dump = cms.EDAnalyzer("EventContentAnalyzer")
process.p = cms.Path(process.checker)
#process.o = cms.EndPath(process.dump)
How to get luminosity ( lumicalc brilcalc )
From lxplus:
eval `scramv1 unsetenv -sh` ## or -csh
#this avoids any conflict with CMSSW. This command is the opposite of cmsenv
export PATH=$HOME/.local/bin:/afs/cern.ch/cms/lumi/brilconda-1.1.7/bin:$PATH
To install brilcalc:
pip install --install-option="--prefix=$HOME/.local" brilws
To upgrade brilcalc:
pip install --install-option="--prefix=$HOME/.local" --upgrade brilws
To run brilcalc:
brilcalc lumi -b "STABLE BEAMS" -i /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/Final/Cert_271036-284044_13TeV_PromptReco_Collisions16_JSON.txt --normtag /cvmfs/cms-bril.cern.ch/cms-lumi-pog/Normtags/normtag_PHYSICS.json --hltpath "DST_L1HTT_CaloScouting_PFScouting_v*" --end 280385
For updates and details, look at:
- http://cms-service-lumi.web.cern.ch/cms-service-lumi/brilwsdoc.html
- https://twiki.cern.ch/twiki/bin/viewauth/CMS/PdmV2016Analysis
- https://twiki.cern.ch/twiki/bin/view/CMS/TWikiLUM (for normtag)
How to get luminosity or pile-up event-by-event lumi-by-lumi (JSON, PU, pileup)
In
https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions17/13TeV/PileUp/
you can find a JSON file. Its format is:
...
"306091": [
[45,2.7535e+05,1.4005e-04,1.0648e-03],
[46,4.9120e+05,1.3976e-04,1.0637e-03],
[47,4.9046e+05,1.3942e-04,1.0619e-03]],
"306092": [
[45,2.7535e+05,1.4005e-04,1.0648e-03],
[46,4.9120e+05,1.3976e-04,1.0637e-03],
[47,4.9046e+05,1.3942e-04,1.0619e-03],
...
The numbers correspond to:
"run": [
[LS, integrated luminosity (/ub), RMS of the bunch instantaneous luminosity (/ub/Xing), average bunch instantaneous luminosity (/ub/Xing)],
...
]
And so you can get the pile-up from:
average bunch inst. lumi (/ub/Xing) * minBias cross section (ub) = 1.0648e-03 * 69200 = 73.6842 (example above: run=306091, LS=45)
The minBias cross section for 2016 data (13 TeV) is 69.2 mb (= 69200 ub) and is taken from
https://twiki.cern.ch/twiki/bin/viewauth/CMS/PileupJSONFileforData#Pileup_JSON_Files_For_Run_II
More info:
https://twiki.cern.ch/twiki/bin/viewauth/CMS/PileupJSONFileforData#Calculating_the_Relevant_Pileup
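A minimal Python sketch of the same per-lumisection calculation (assuming the pile-up JSON above has been saved locally as pileup_latest.txt, a placeholder name):
import json

MINBIAS_XSEC_UB = 69200.0  # 69.2 mb in ub, see the TWiki above

with open("pileup_latest.txt") as f:
    pu_json = json.load(f)

# <PU> per lumisection = average bunch inst. lumi (/ub/Xing) * minBias cross section (ub)
for run, lumisections in sorted(pu_json.items()):
    for ls, int_lumi, rms_lumi, avg_inst_lumi in lumisections:
        print(run, ls, avg_inst_lumi * MINBIAS_XSEC_UB)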
Event asymmetry (boost)
Sum$(Jet_pt/(tan(2*atan(exp(-Jet_eta)))))
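The expression above is the sum of the jet longitudinal momenta, since pT / tan(2*atan(exp(-eta))) = pT*sinh(eta) = pz. A quick numerical cross-check (pure math, no CMSSW needed):
import math

# check that pt / tan(2*atan(exp(-eta))) equals pt*sinh(eta) = pz
for pt, eta in [(30.0, 0.5), (50.0, -1.2), (100.0, 2.3)]:
    expr = pt / math.tan(2.0 * math.atan(math.exp(-eta)))
    pz = pt * math.sinh(eta)
    print(pt, eta, expr, pz)  # the two numbers agree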
Homemade JSON filter on FWLite (pyROOT)
def goodEvent(run,lumi):
JSONlist={"251244": [[85, 86], [88, 93], [96, 121], [123, 156], [158, 428], [430, 442]],
"251251": [[1, 31], [33, 97], [99, 167]],
"251252": [[1, 283], [285, 505], [507, 554]],
"251561": [[1, 94]],
"251562": [[1, 439], [443, 691]],
"251643": [[1, 216], [222, 606]],
"251721": [[21, 36]],
"251883": [[56, 56], [58, 60], [62, 144], [156, 437]]}
if str(run) in JSONlist.keys():
for rg in JSONlist[str(run)]:
if len(rg) ==2:
if lumi>=rg[0] and lumi<=rg[1]:
return True
return False
JSON = goodEvent(event.eventAuxiliary().run(),event.eventAuxiliary().luminosityBlock())
Golden JSON file
https://twiki.cern.ch/twiki/bin/view/CMS/PdmVDataReprocessingUL2018
/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions18/13TeV/Legacy_2018/
ROOT colors
colors = [
ROOT.kRed +3,
ROOT.kRed +1,
ROOT.kRed -4,
ROOT.kRed -7,
ROOT.kRed -9,
ROOT.kGreen +3,
ROOT.kGreen +1,
ROOT.kGreen -4,
ROOT.kGreen -7,
ROOT.kGreen -9,
ROOT.kBlue +3,
ROOT.kBlue +1,
ROOT.kBlue -4,
ROOT.kBlue -7,
ROOT.kBlue -9,
]
colors = [
ROOT.kBlack,
ROOT.kYellow+1,
ROOT.kRed,
ROOT.kMagenta,
ROOT.kBlue,
ROOT.kCyan+1,
ROOT.kGreen+1,
ROOT.kOrange,
ROOT.kPink,
ROOT.kViolet,
ROOT.kAzure,
ROOT.kTeal,
ROOT.kSpring,
ROOT.kGray,
]
#include"TChain.h"
#include"TFile.h"
#include"TTreeFormula.h"
void skim(){
auto fileout = new TFile("test.root","recreate");
auto mchain = new TChain("tree");
mchain->Add("Nov3/Oct19-__QCD_HT1000to1500_TuneCUETP8M1_13TeV-madgraphMLM-pythia8.root");
mchain->SetBranchStatus("*",0);
mchain->SetBranchStatus("HLT*",1);
mchain->SetBranchStatus("HLT_PFHT800",1);
mchain->SetBranchStatus("caloJet_*",1);
mchain->SetBranchStatus("pfJet_*",1);
mchain->SetBranchStatus("offJet_*",1);
mchain->SetBranchStatus("hltQuadCentralJet45",1);
mchain->SetBranchStatus("hltBTagCaloCSVp087Triple",1);
mchain->SetBranchStatus("hltBTagPFCSVp016SingleWithMatching",1);
auto HLT_BIT_HLT_PFHT450_SixJet40_BTagCSV_p056_v = new TTreeFormula("HLT_BIT_HLT_PFHT450_SixJet40_BTagCSV_p056_v","jets_pt[0]",mchain);
fileout->cd();
auto newTree = mchain->CloneTree(0);
bool trigger = false;
for(int i=0; i<mchain->GetEntries(); i++){
HLT_BIT_HLT_PFHT450_SixJet40_BTagCSV_p056_v->GetNdata();
trigger = HLT_BIT_HLT_PFHT450_SixJet40_BTagCSV_p056_v->EvalInstance();
if(trigger)
{
newTree->Fill();
}
}
newTree->Write();
}
pyROOT how to fill a TTree
http://wlav.web.cern.ch/wlav/pyroot/tpytree.html
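A minimal pyROOT sketch (branch name and values are just examples):
import ROOT
from array import array
fout = ROOT.TFile("tree.root", "recreate")
tree = ROOT.TTree("tree", "example tree")
pt = array('f', [0.])                 # one-element buffer bound to the branch
tree.Branch("pt", pt, "pt/F")
for i in range(1000):
    pt[0] = ROOT.gRandom.Exp(50.)     # fill the buffer with a dummy value
    tree.Fill()
tree.Write()
fout.Close()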
use C++ template in pyROOT
aa = getattr(ROOT,"TMatrixTSym<double>")()
aa
#include "DataFormats/Math/interface/deltaR.h"
Events->Draw("Min$(reco::deltaR(recoCaloJets_hltAK4CaloJetsCorrectedIDPassed__reHLT.obj[0].eta(),recoCaloJets_hltAK4CaloJetsCorrectedIDPassed__reHLT.obj[0].phi(),recoGenJets_ak4GenJets__reHLT.obj.eta(),recoGenJets_ak4GenJets__reHLT.obj.phi()))>0.5")
Program to extract images from a PDF
gpdfx
Run list
248009 -> 0Tesla
How to add a new branch in the usercode with GIT
## setup cms-tsg-git cmssw repository locally
git cms-init
git cms-addpkg HLTrigger/Configuration
git cms-addpkg Configuration/HLT
git remote add STORM-cmssw https://github.com/cms-tsg-storm/cmssw.git
git checkout 80XHLTAfterMD4Train
scram b -j8
## update a GIT area
git remote update
git pull
##git clone a specific branch
git clone -b NtuplerHLTdata3 git@github.com:silviodonato/usercode.git
## set a new area as origin
git remote add origin git@github.com:silviodonato/usercode.git
## push branch on the new origin
git push origin NtuplerHLTdata3
## getting an existing branch as master
git clone https://github.com/silviodonato/Xbb.git
cd Xbb/
git branch -a
git checkout -b fromLuca remotes/origin/fromLuca
## revert one modified, uncommitted file
git checkout -- example.txt
## revert all uncommitted files
git checkout -f    # or: git reset --hard
## view commit history
git log
## revert a single file
git reset 32928ed960e85d732549bf3624cdfe7d9fad1ad9 -- subtables.sh
git checkout subtables.sh
## revert all files to a previous commit and remove it also online
git log
git reset --hard 9402f76b25a1
git push --force origin fromLuca
## revert all files to a previous commit only locally
git log
git reset 9402f76b25a1
git checkout .
## create a new repository
https://help.github.com/articles/create-a-repo/
## create a new branch from scratch
mkdir test4
cd test4
git init
git clone git@github.com:silviodonato/usercode.git
cd usercode/
git fetch origin master
git checkout -b test4
echo test4 > test4
git add test4
git commit -m test4
git push origin test4
## create a new branch from a previous branch
mkdir test5
cd test5
#git init
git clone git@github.com:silviodonato/usercode.git
cd usercode/
git fetch origin test4
git checkout -b test5
echo test5 > test5
git add test5
git commit -m test5
git push origin test5
## quick change in the master
mkdir test6
cd test6
git clone git@github.com:silviodonato/usercode.git
cd usercode/
echo testMaster > testMaster
git add testMaster
git commit -m testMaster
git push origin master
## get an existing branch
mkdir test5
cd test5
#git init
git clone git@github.com:silviodonato/usercode.git
cd usercode/
git checkout test4
echo test6 > test6
git add test6
git commit -m test6
git push origin test4
## merge a branch into master
git clone git@github.com:silviodonato/RateEstimate.git RateEstimate
cd RateEstimate
git merge HLTRate ## HLTRate=new-branch
git push origin master
## GIT how to add a new branch in the CMSSW usercode
# (if needed) git init .
git checkout -b newBranch
# git add ........ and git commit ..........
git remote add origin git@github.com:silviodonato/usercode.git
git push origin newBranch
## GIT merge commits in one commit (git squash)
git log
## to merge the latest 4 commits
git rebase -i HEAD~4
## write squash instead of pick, eg.
pick d0bb066 bugfix1
s 8b89249 bugfix2
s 9dea47f improvement
s 6fb4b58 final bug fix
## replace the previous commit text with the new one
# This is a combination of 4 commits.
bug fixes and improvements
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# ....
## Finally push (--force if needed)
git push .........
## git cherry-pick
git remote add makortel https://github.com/makortel/cmssw.git
git fetch makortel
git cherry-pick 336f8afcc7ef839fd973539af2a31ebe7a3a138a
rootcore rootmath rootmathcore ...
cat /cvmfs/cms-ib.cern.ch/week0/slc7_amd64_gcc900/cms/cmssw-patch/CMSSW_11_2_X_2020-11-23-2300/config/toolbox/slc7_amd64_gcc900/tools/selected/rootcore.xml
cat /cvmfs/cms-ib.cern.ch/week0/slc7_amd64_gcc900/cms/cmssw-patch/CMSSW_11_2_X_2020-11-23-2300/config/toolbox/slc7_amd64_gcc900/tools/selected/rootmath.xml
Check dependencies (dependency violations)
see
https://cmssdt.cern.ch/SDT/cgi-bin/buildlogs/slc7_ppc64le_gcc820/CMSSW_11_2_X_2020-11-23-2300/depViolationLogs/Geometry/CSCGeometryBuilder
see
https://github.com/cms-sw/cms-bot/blob/master/runTests.py
Run: ReleaseDepsChecks.pl
github API
https://api.github.com/repos/cms-sw/cmssw/issues/28854
https://api.github.com/repos/cms-sw/cmssw/issues?state=open&per_page=100&page=0&access_token=145d0c4edb06fa43755400d4dafa712e146eafe4&milestone=82
Diff:
https://patch-diff.githubusercontent.com/raw/cms-sw/cmssw/pull/28854.diff
Patch:
https://patch-diff.githubusercontent.com/raw/cms-sw/cmssw/pull/28854.patch
github API graphql
curl -i -H 'Accept: application/json' -H 'Content-Type: application/json' -H "Authorization: bearer ghp_xxxxxxxxxMYTOKENxxxxxxxxxxx" -X POST -d @cmssw.graphql https://api.github.com/graphql
with cmssw.graphql containing:
{
"query": "{
repository(owner: \"cms-sw\", name:\"cmssw\"){
description
url
pullRequest(number: 33097){
number
id
title
milestone {
title
}
headRepository {
owner {
login
}
}
headRefName
baseRefName
headRef {
target {
oid
}
}
baseRef {
target {
oid
}
}
labels(first: 20) {
nodes {
name
}
}
}
}
}"
}
(for the token see
https://docs.github.com/en/graphql/guides/forming-calls-with-graphql#authenticating-with-graphql
)
GIT clone
git clone https://github.com/degrutto/VHbbUF.git
git clone git@github.com:cms-steam/HLTrigger temp
Synchronize two folders
rsync -r -v --size-only /gpfs/ddn/srm/cms/store/user/sdonato/HLTRates_74X_50ns_Phys14_V8/ HLTRates_74X_50ns_Phys14_V8
Run over a list of event (by EventNumber) - Remove events with repeated PU collisions
count = 0
eventsToProcessFiltered = cms.untracked.VEventRange()
folder = '/scratch/sdonato/STEAM/PUsorting/CMSSW_7_4_0_patch1/src/RemovePileUpDominatedEvents/RemovePileUpDominatedEvents/test/makeGoodEventList/files/'
for fileName in process.source.fileNames:
    goodEventsFileName = 'store'+(fileName.split("store")[1]).split(".root")[0]
    goodEventsFileName = goodEventsFileName.replace("/","_")
    goodEventsFileName = folder + goodEventsFileName + '.txt'
    try:
        goodEventsFile = open(goodEventsFileName, 'r')
    except:
        print
        print "*"*30
        print goodEventsFileName," not found. All events will be skimmed."
        continue
    for line in goodEventsFile.readlines():
        if not line.find("\n"): continue
        eventsToProcessFiltered.append("1:"+line.replace("\n", ""))
        count = count + 1
process.source.eventsToProcess = eventsToProcessFiltered
How to plot a turn-on curve directly with CMSSW ROOT file
Events->Draw("edmTriggerResults_TriggerResults__HLTX.obj.accept(0):recoGenJets_ak5GenJets__DIGI2RAW.obj[3].pt()","triggerTriggerFilterObjectWithRefs_hltTripleCSV0p5__HLTX.obj.jetSize()>=3","prof")
How to pick an event
https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookPickEvents
edmPickEvents.py "/QCD_HT700to1000_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/RunIISpring15DR74-Asympt25ns_MCRUN2_74_V9-v1/AODSIM" 1:66537:197626056
edmCopyPickMerge outputFile=pickevents.root \
  eventsToProcess=1:197626056 \
  inputFiles=/store/mc/RunIISpring15DR74/QCD_HT700to1000_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/AODSIM/Asympt25ns_MCRUN2_74_V9-v1/70000/C073061E-DD15-E511-9902-047D7B10618E.root
CSVv2IVF = -10 means #selected tracks <= 2. You can check it immediately with:
Events->Draw("recoTracksRefsrecoJTATagInforecoIPTagInfos_hltBLifetimeTagInfosPF__HLTX.obj.selectedTracks().size():recoJetedmRefToBaseProdTofloatsAssociationVector_hltCombinedSecondaryVertexBJetTagsPF__HLTX.obj.data_","","COLZ")
C++ replace/erase/remove
bool ok;
do{
  ok=true;
  if(newName.find("/")<newName.size()) {newName[newName.find("/")]='_'; ok=false;}
  if(nameFile.find(";")<nameFile.size()) {nameFile.erase(nameFile.find(";"),1); ok=false;}
}while(!ok);
Slimmed jets offline b-tag
.....bDiscriminator("combinedInclusiveSecondaryVertexV2BJetTags")
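For example, in FWLite (pyROOT) the discriminator of the slimmed jets in a MINIAOD file can be read like this (the input file name is a placeholder):
import ROOT
from DataFormats.FWLite import Events, Handle
events = Events("file:miniAOD.root")        # placeholder input file
jets = Handle("std::vector<pat::Jet>")
for event in events:
    event.getByLabel("slimmedJets", jets)
    for jet in jets.product():
        print jet.pt(), jet.bDiscriminator("combinedInclusiveSecondaryVertexV2BJetTags")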
Delete file from T2
lcg-del -D srmv2 -b -l srm://stormfe1.pi.infn.it:8444/srm/managerv2?SFN=/cms/store/user/cvernier/CombinedSVIVFNoVertex_B_7_2_Rl2.root
Integration test
hltIntegrationTests -s /dev/CMSSW_7_4_0/GRun/V41 /users/sdonato/HLT_PFMET_IDTight/V3 --extra '--globaltag auto:run2_hlt_GRun' -i /store/relval/CMSSW_7_4_0/RelValTTbarLepton_13/GEN-SIM-DIGI-RAW-HLTDEBUG/MCRUN2_74_V7_gensim_740pre7-v1/00000/08499872-26DD-E411-9D16-0025905B8562.root
Output module to keep the online b-tag information to do the performance plot
process.hltOutputFULL = cms.OutputModule( "PoolOutputModule",
fileName = cms.untracked.string( "outputFULLNew_1.root" ),
fastCloning = cms.untracked.bool( False ),
dataset = cms.untracked.PSet(
dataTier = cms.untracked.string( 'RECO' ),
filterName = cms.untracked.string( '' )
),
outputCommands = cms.untracked.vstring(
'drop *',
'keep *Jet*_*_*_*',
'keep *_genParticles_*_*',
'keep *Vertex*_*_*_*',
'keep *Trigger*_*_*_*',
'keep recoJetedmRefToBaseProdTofloatsAssociationVector_*_*_*',
)
)
ROOT default binning
gEnv->SetValue("Hist.Binning.2D.Prof",20)
#Hist.Binning.1D.x: 100
#Hist.Binning.2D.x: 40
#Hist.Binning.2D.y: 40
#Hist.Binning.2D.Prof: 100
#Hist.Binning.3D.x: 20
#Hist.Binning.3D.y: 20
#Hist.Binning.3D.z: 20
#Hist.Binning.3D.Profx: 100
#Hist.Binning.3D.Profy: 100
Snippet to keep only information useful for trigger development (WH)
process.hltOutputFULL = cms.OutputModule( "PoolOutputModule",
fileName = cms.untracked.string( "outputFULL.root" ),
fastCloning = cms.untracked.bool( False ),
dataset = cms.untracked.PSet(
dataTier = cms.untracked.string( 'RECO' ),
filterName = cms.untracked.string( '' )
),
outputCommands = cms.untracked.vstring(
'drop *',
'keep l1extra*_*_*_*',
'keep recoRecoEcalCandidates_*_*_*',
'keep trigger*_*_*_*',
'keep *Jets_*_*_*',
'keep *METs_*_*_*',
'keep recoJetedmRefToBaseProdTofloatsAssociationVector_*_*_*',
)
)
process.FULLOutput = cms.EndPath( process.hltOutputFULL )
MuEnriched EMEnriched QCD generator
EMEnriched:
https://raw.githubusercontent.com/cms-sw/genproductions/7a54b5a/python/ThirteenTeV/QCD_Pt_30to80_EMEnriched_Tune4C_13TeV_pythia8_cff.py
MuEnriched:
https://cms-pdmv.cern.ch/mcm/public/restapi/requests/get_fragment/BTV-Fall13-00031
Initialize
source /cvmfs/cms.cern.ch/cmsset_default.csh
or
source /cvmfs/cms.cern.ch/cmsset_default.sh
GIT
setenv CMSSW_GIT_REFERENCE /scratch/sdonato/.cmsgit-cache
DAS dasgoclient (it was DBS query, das_client.py)
dasgoclient --query "dataset dataset=/MinBias_TuneCUETP8M1_13TeV-pythia8/RunIISummer15GS-MCRUN2_71_V1_ext1-v1/GEN-SIM"
das_client.py --query "dataset dataset=/MinBias_TuneCUETP8M1_13TeV-pythia8/RunIISummer15GS-MCRUN2_71_V1_ext1-v1/GEN-SIM"
------------------------ OLD -----------------------------------------------------
wget --no-check-certificate https://cmsweb.cern.ch/das/cli -O ./das_client.py
dasgoclient --query "find dataset where dataset=*/VBF_HToBB_M-125_8TeV-powheg-pythia6_ext* "
dasgoclient --query "find sum(file.numevents) where dataset=/Neutrino_Pt-2to20_gun/Fall13dr-tsg_PU40bx25_POSTLS162_V2-v1/GEN-SIM-RAW"
NANOAOD files examples
das_client --query "find dataset dataset=/QC*/*/NANOAODSIM"
/QCD_HT300to500_TuneCP5_13TeV-madgraph-pythia8/RunIIFall17NanoAOD-PU2017_12Apr2018_94X_mc2017_realistic_v14-v1/NANOAODSIM
/store/mc/RunIIFall17NanoAOD/QCD_HT300to500_TuneCP5_13TeV-madgraph-pythia8/NANOAODSIM/PU2017_12Apr2018_94X_mc2017_realistic_v14-v1/20000/3E022B93-3442-E811-B494-549F3525A64C.root
XROOTD
root://cms-xrd-global.cern.ch//store/express/Commissioning2015/ExpressCosmics/FEVT/Express-v1/000/233/942/00000/76B732A7-41B2-E411-91DF-02163E0121BF.root
Less option (updated output)
less -B log.txt
hltGetConfiguration
hltGetConfiguration /frozen/2015/25ns14e33/v3.0/HLT/V1 --input '/store/mc/RunIISpring15Digi74/QCD_Pt_800to1000_TuneCUETP8M1_13TeV_pythia8/GEN-SIM-RAW/AVE_40_BX_25ns_tsg_MCRUN2_74_V7-v1/00000/0C82E19F-A0F0-E411-A5F5-20CF300E9ED3.root' --path HLTriggerFirstPath,HLT_CaloMHTNoPU90_PFMET90_PFMHT90_IDTight_v1 --open --no-prescale --output full --globaltag 74X_HLT_mcRun2_asymptotic_fromSpring15DR_v0 > hlt.py
STEAM stuff
hltGetConfiguration /dev/CMSSW_7_3_0/GRun/V47 --input '/store/mc/Phys14DR/QCD_Pt-30to50_Tune4C_13TeV_pythia8/GEN-SIM-RAW/AVE30BX50_tsg_castor_PHYS14_ST_V1-v2/10000/CEF7EB2E-AB8D-E411-A336-001E673986B0.root' --no-output --globaltag PHYS14_50_V1 > hlt.py
hltGetConfiguration /dev/CMSSW_7_3_0/GRun/V47 --input '/store/mc/Phys14DR/QCD_Pt-30to50_Tune4C_13TeV_pythia8/GEN-SIM-RAW/AVE20BX25_tsg_castor_PHYS14_25_V3-v2/00000/002008F5-8698-E411-BE62-002481E15104.root' --no-output --globaltag PHYS14_25_V1 > hlt.py
hltGetConfiguration /dev/CMSSW_7_3_0/GRun/V47 --input '/store/mc/Phys14DR/QCD_Pt-30to50_Tune4C_13TeV_pythia8/GEN-SIM-RAW/AVE20BX25_tsg_castor_PHYS14_25_V3-v2/00000/002008F5-8698-E411-BE62-002481E15104.root' --no-output --globaltag PHYS14_25_V1 --prescale 7e33 --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml > hlt25ns.py
https://cmsweb.cern.ch/das/request?view=list&limit=10&instance=prod%2Fglobal&input=dataset+dataset%3D%2FQCD_Pt-30to50_Tune4C_13TeV_pythia8%2FPhys14DR-AVE30BX50_tsg_castor_PHYS14_ST_V1-v2%2FGEN-SIM-RAW
dataset=/QCD_Pt-30to50_Tune4C_13TeV_pythia8/Phys14DR-AVE30BX50_tsg_castor_PHYS14_ST_V1-v2/GEN-SIM-RAW
Candidate generic string filter
process.selector30 = cms.EDFilter("CandViewSelector",
src = cms.InputTag("genParticles"),
cut = cms.string("pt > 30.0")
)
process.filter = cms.EDFilter("PATCandViewCountFilter", maxNumber = cms.uint32(999), src = cms.InputTag("selector30"), minNumber = cms.uint32(1))
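To actually skim events with them, put both modules in a path (a minimal sketch using the module names above; the path name is arbitrary):
process.genPt30Skim = cms.Path(process.selector30 + process.filter)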
Download a file from CMSSW by GIT
wget https://raw.githubusercontent.com/cms-sw/cmssw/CMSSW_7_4_0_pre6/HLTriggerOffline/Btag/test/py
Forward porting (rebase) CMSSW git
cmsrel *daily version of CMSSW*
cd CMSSW**/src
cmsenv
git cms-merge-topic silviodonato:*****
git diff
[solve the problems...]
git push my-cmssw HEAD:*****
Details:
http://cms-sw.github.io/tutorial-forward-port.html#rewriting-history-and-cleaning-up-your-changes
GIT diff offline (use git as diff or sdiff)
git diff --no-index configs/Jan18_ddQCD.py configs/Jan18_mcQCD.py
GIT test a pull request (PR) locally
git fetch origin pull/8/head:dijet_81x
git checkout dijet_81x
See
https://help.github.com/en/articles/checking-out-pull-requests-locally#modifying-an-inactive-pull-request-locally
BTAG DQM Validation:
https://cmsweb.cern.ch/dqm/relval/start?runnr=1;dataset=/RelValTTbar/CMSSW_7_3_0-PU_MCRUN1_73_V2_FastSim-v1/DQMIO;sampletype=offline_relval;filter=all;referencepos=overlay;referenceshow=customise;referenceobj1=refobj;referenceobj2=none;referenceobj3=none;referenceobj4=none;search=;striptype=object;stripruns=;stripaxis=run;stripomit=none;workspace=MC%20HLT;size=M;root=HLT/BTag/Discrimanator/HLT_PFMET120_NoiseCleaned_BTagCSV07_/efficiency;focus=;zoom=no
TChain multifiles
TChain chain("l1ExtraTreeProducer/L1ExtraTree");
chain.Add("eos/cms/store/group/dpg_trigger/comm_trigger/L1Trigger/L1Menu2015/v10/62X/13TeV/40PU/25bx/ReEmul2015/L1Tree*");
chain.Draw("","met>60")
Btag discriminant -1 and -10
http://cmslxr.fnal.gov/lxr/source/RecoBTau/JetTagComputer/src/GenericMVAJetTagComputer.cc
Load standard sequence
process.load("Configuration.StandardSequences.Services_cff")
process.load("Configuration.StandardSequences.Geometry_cff")
process.load("Configuration.StandardSequences.MagneticField_cff")
process.load("Configuration.StandardSequences.FrontierConditions_GlobalTag_cff")
CMSSW Handle is present
isValid()
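The same check is also available in FWLite (pyROOT) on the python Handle wrapper; a minimal sketch, with placeholder file and collection names:
import ROOT
from DataFormats.FWLite import Events, Handle
events = Events("file:file.root")                     # placeholder input file
vertices = Handle("std::vector<reco::Vertex>")
for event in events:
    event.getByLabel("offlinePrimaryVertices", vertices)
    if vertices.isValid():                            # the product is present in the event
        print "number of vertices:", vertices.product().size()
    break                                             # only the first event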
AFS public
/afs/pi.infn.it/user/vernieri/public
/afs/pi.infn.it/user/sdonato/public
process.load("FWCore.MessageLogger.MessageLogger_cfi")
process.MessageLogger.cerr.FwkReport.reportEvery = 1000
All Moriond 2016 (Moriond16) samples
https://raw.githubusercontent.com/silviodonato/usercode/master/allSamples_PUMoriond17_80X_mcRun2_asymptotic_2016_TrancheIV.txt
(i.e.
https://cmsweb.cern.ch/das/request?view=plain&limit=50&instance=prod%2Fglobal&input=dataset+dataset%3D%2F*%2FRunIISummer16MiniAODv2-PUMoriond17_80X_mcRun2_asymptotic_2016_TrancheIV*%2FMINIAODSIM
dataset dataset=/*/RunIISummer16MiniAODv2-PUMoriond17_80X_mcRun2_asymptotic_2016_TrancheIV*/MINIAODSIM
)
Clone method in CMSSW
newName = oldName.clone (changedParameter = 42)
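For example, reusing the selector30 module defined in the candidate filter section above (the new cut value is arbitrary):
process.selector50 = process.selector30.clone(cut = "pt > 50.0")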
Timing studies in vocms003 or in vocms004
Download the trigger:
hltGetConfiguration \
/frozen/2015/25ns14e33/v3.3/HLT/V1 \
--prescale 1e34 \
--globaltag 74X_HLT_mcRun2_asymptotic_fromSpring15DR_v0 \
--l1 L1Menu_Collisions2015_25nsStage1_v4 \
--input file:/data/samples/Neutrino_Pt-2to20_gun_PU40bx25_Stage1v4Menu/Neutrino_Pt-2to20_gun_Spring15_PU40bx25_L1-Stage1v4_CMSSW748p1_part0.root,file:/data/samples/Neutrino_Pt-2to20_gun_PU40bx25_Stage1v4Menu/Neutrino_Pt-2to20_gun_Spring15_PU40bx25_L1-Stage1v4_CMSSW748p1_part1.root,file:/data/samples/Neutrino_Pt-2to20_gun_PU40bx25_Stage1v4Menu/Neutrino_Pt-2to20_gun_Spring15_PU40bx25_L1-Stage1v4_CMSSW748p1_part2.root,file:/data/samples/Neutrino_Pt-2to20_gun_PU40bx25_Stage1v4Menu/Neutrino_Pt-2to20_gun_Spring15_PU40bx25_L1-Stage1v4_CMSSW748p1_part3.root \
--max-events -1 \
--full --offline --mc --process TEST --no-output --timing \
> hlt.py
##50ns menu
hltGetConfiguration \
/frozen/2015/50ns_5e33/v3.4/HLT/V1 \
--globaltag SPR1574_STV1 \
--l1 L1Menu_Collisions2015_50nsGct_v4 \
--input file:/data/samples/Neutrino_Pt-2to20_gun_PU30bx50_L1_2015_50nsCollisionsMenu/Neutrino_Pt-2to20_gun_Spring15Digi74_PU30bx50_L1_2015_50nsCollisionsMenu_v4_part0.root,file:/data/samples/Neutrino_Pt-2to20_gun_PU30bx50_L1_2015_50nsCollisionsMenu/Neutrino_Pt-2to20_gun_Spring15Digi74_PU30bx50_L1_2015_50nsCollisionsMenu_v4_part1.root,file:/data/samples/Neutrino_Pt-2to20_gun_PU30bx50_L1_2015_50nsCollisionsMenu/Neutrino_Pt-2to20_gun_Spring15Digi74_PU30bx50_L1_2015_50nsCollisionsMenu_v4_part2.root,file:/data/samples/Neutrino_Pt-2to20_gun_PU30bx50_L1_2015_50nsCollisionsMenu/Neutrino_Pt-2to20_gun_Spring15Digi74_PU30bx50_L1_2015_50nsCollisionsMenu_v4_part3.root \
--max-events -1 \
--full --offline --data --process TEST --no-output --timing --type=50nsGRun \
> hlt.py
### only one path
hltGetConfiguration \
/frozen/2015/25ns14e33/v3.3/HLT/V1 \
--prescale 1e34 \
--globaltag 74X_HLT_mcRun2_asymptotic_fromSpring15DR_v0 \
--l1 L1Menu_Collisions2015_25nsStage1_v4 \
--input /store/mc/RunIISpring15Digi74/QCD_Pt_170to300_TuneCUETP8M1_13TeV_pythia8/GEN-SIM-RAW/AVE_40_BX_25ns_tsg_MCRUN2_74_V7-v1/00000/16CC3FB0-FDF3-E411-82A7-485B39800BC7.root \
--max-events -1 \
--full --offline --mc --process TEST --no-output --timing \
--paths HLTriggerFirstPath,HLT_CaloMHTNoPU90_PFMET90_PFMHT90_IDTight_BTagCSV0p72_v1,HLTriggerFinalPath \
--open \
> hlt.py
### run-1 menu
## download with CMSSW_7_3_2_patch1 and run with CMSSW_7_2_1
hltGetConfiguration \
/dev/CMSSW_7_2_0/2014/V10 \
--unprescale \
--path HLTriggerFirstPath,HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6,HLTriggerFinalPath \
--globaltag PHYS14_ST_V1 \
--input /store/mc/RunIISpring15Digi74/QCD_Pt_300to470_TuneCUETP8M1_13TeV_pythia8/GEN-SIM-RAW/AVE_40_BX_25ns_tsg_MCRUN2_74_V7-v1/00000/02E6D6EB-15F4-E411-A90E-00259073E3A6.root \
--max-events -1 \
--full --offline --mc --process TEST --no-output --timing \
> hlt.py
Add fasttimerservice
With data HLTPhysics
Download the trigger:
hltGetConfiguration \
/frozen/2015/25ns14e33/v3.3/HLT/V1 \
--prescale 1e34 \
--globaltag auto:run2_hlt_GRun \
--l1 L1Menu_Collisions2015_25nsStage1_v4 \
--input /store/data/Run2015D/HLTPhysics4/RAW/v1/000/256/843/00000/FECB8DB8-265F-E511-AD86-02163E011CF1.root \
--max-events -1 \
--full --offline --data --process TEST --no-output --timing \
> hlt.py
Add fasttimerservice
Plot running timing from DQM plots FastTimerService
add (see
https://twiki.cern.ch/twiki/bin/view/CMS/FastTimerService):
# remove any instance of the FastTimerService
if 'FastTimerService' in process.__dict__:
del process.FastTimerService
# instrument the menu with the FastTimerService
process.load( "HLTrigger.Timer.FastTimerService_cfi" )
# print a text summary at the end of the job
process.FastTimerService.printEventSummary = False
process.FastTimerService.printRunSummary = False
process.FastTimerService.printJobSummary = True
# enable DQM plots
process.FastTimerService.enableDQM = True
# enable per-path DQM plots (starting with CMSSW 9.2.3-patch2)
process.FastTimerService.enableDQMbyPath = True
# enable per-module DQM plots
process.FastTimerService.enableDQMbyModule = True
# enable DQM plots vs lumisection
process.FastTimerService.enableDQMbyLumiSection = True
process.FastTimerService.dqmLumiSectionsRange = 2500 # lumisections (23.31 s)
# set the time resolution of the DQM plots
process.FastTimerService.dqmTimeRange = 1000. # ms
process.FastTimerService.dqmTimeResolution = 5. # ms
process.FastTimerService.dqmPathTimeRange = 100. # ms
process.FastTimerService.dqmPathTimeResolution = 0.5 # ms
process.FastTimerService.dqmModuleTimeRange = 40. # ms
process.FastTimerService.dqmModuleTimeResolution = 0.2 # ms
# set the base DQM folder for the plots
process.FastTimerService.dqmPath = "HLT/TimerService"
process.FastTimerService.enableDQMbyProcesses = False
# save the DQM plots in the DQMIO format
process.dqmOutput = cms.OutputModule("DQMRootOutputModule",
fileName = cms.untracked.string("DQM.root")
)
process.FastTimerOutput = cms.EndPath( process.dqmOutput )
# DQMStore service
process.load('DQMServices.Core.DQMStore_cfi')
# FastTimerService client
process.load('HLTrigger.Timer.fastTimerServiceClient_cfi')
process.fastTimerServiceClient.dqmPath = "HLT/TimerService"
# DQM file saver
process.load('DQMServices.Components.DQMFileSaver_cfi')
process.dqmSaver.workflow = "/HLT/FastTimerService/All"
process.DQMFileSaverOutput = cms.EndPath( process.fastTimerServiceClient + process.dqmSaver )
Ratio
TH1F* total;
TH1F* counter;
TH1F* Ratio;
total = (TH1F*) gDirectory->Get("DQMData/Run 1/HLT/Run summary/TimerService/Running 1 processes/process TEST/Paths/HLT_CaloMHTNoPU90_PFMET90_PFMHT90_IDTight_BTagCSV0p72_v1_module_total");
counter = (TH1F*) gDirectory->Get("DQMData/Run 1/HLT/Run summary/TimerService/Running 1 processes/process TEST/Paths/HLT_CaloMHTNoPU90_PFMET90_PFMHT90_IDTight_BTagCSV0p72_v1_module_counter");
Ratio = (TH1F*) total->Clone("Ratio")
for(int i=1; i<=counter->GetNbinsX(); i++){
float num = total->GetBinContent(i);
float den = counter->GetBinContent(i);
if(den<=0) den=0.000001;
Ratio->SetBinContent(i,num/den);
}
Ratio->Draw("HIST")
How to download an HLT trigger path
edmConfigFromDB --cff --configName /dev/CMSSW_5_2_6/GRun --nopaths --nooutput --noservices > setup_cff.py
hltGetConfiguration /dev/CMSSW_5_2_1/GRun/V4 --offline --no-output --path HLT_DiCentralJet20_BTagIP_MET65_HBHENoiseFiltered_dPhi1_v1 --timing --data --globaltag auto:hltonline --l1-emulator --l1 L1GtTriggerMenu_L1Menu_Collisions2012_v0_mc --timing > HLT_DiCentralJet20_BTagIP_MET65_HBHENoiseFiltered_dPhi1_v1.py
hltGetConfiguration orcoff:/cdaq/physics/Run2012/7e33/v2.5/HLT/V1 --no-output --path HLT_QuadPFJet78_61_44_31_BTagCSV_VBF_v1 --mc --globaltag START53_V7C::All --unprescale > hlt3.py
hltGetConfiguration /dev/CMSSW_6_2_0/GRun/V24 --output full --path HLT_DoubleJet20_ForwardBackward_v4 --mc --globaltag auto:upgradePLS1 --unprescale --input file:/gpfs/ddn/srm/cms/store/mc/Fall13dr/Neutrino_Pt-2to20_gun/GEN-SIM-RAW/tsg_PU40bx25_POSTLS162_V2-v1/00000/1EE8B655-0378-E311-9905-0030486791F2.root --max-events 100
How to use process.source with keep and drop
process.source = cms.Source( "PoolSource",
fileNames = cms.untracked.vstring(
'file:skimL1TripleJetCVBFfromPhys198609.root',
),
secondaryFileNames = cms.untracked.vstring(
),
dropDescendantsOfDroppedBranches=cms.untracked.bool(False),
inputCommands=cms.untracked.vstring(
'drop *',
'keep *_genParticles_*_*',
'keep *_ak5GenJets_*_*',
'keep *_g4SimHits_*_*',
'keep FEDRawDataCollection_*_*_*'
)
)
Download a folder/directory for GIT GITHUB
svn export http://github.com/silviodonato/usercode.git/branches/NtuplerHLTdata2016/NtupleAOD/ntuple/25ns
#in case of problems, try
sudo svn export http://github.com/silviodonato/usercode.git/branches/NtuplerHLTdata2016/NtupleAOD/ntuple/25ns
Output Module
process.outp1=cms.OutputModule("PoolOutputModule",
fileName = cms.untracked.string('savep1.root'),
outputCommands = cms.untracked.vstring('keep *'),
# SelectEvents = cms.untracked.PSet(
# SelectEvents = cms.vstring('p1')
# )
)
process.out = cms.EndPath( process.outp1 )
How to set options in CMSSW
process.options = cms.untracked.PSet(
wantSummary = cms.untracked.bool( True ),
SkipEvent = cms.untracked.vstring('ProductNotFound')
)
From files.root to JSON file
edmLumisInFiles.py files.root
From JSON file to PileUp histo
pileupCalc.py -i lumis.txt --inputLumiJSON /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions12/8TeV/PileUp/pileup_JSON_DCSONLY_190389-208686_All_2012_pixelcorr.txt --calcMode observed --minBiasXsec 69400 --maxPileupBin 50 --numPileupBins 50 MyDataPileupHistogram.root
Lumi mask / where to find JSON file for data / pileup ...
/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions12/8TeV/Prompt/Cert_190456-208686_8TeV_PromptReco_Collisions12_JSON.txt
Where to find all PileUp PU Scenarios (e.g. S10 = 2012_Summer_50ns_PoissonOOTPU)
https://twiki.cern.ch/twiki/bin/view/CMS/PdmVPileUpDescription#S10
cmsDriver example (REDIGI of VBF dataset GEN-SIM -> RAW,HLT...)
cmsDriver.py REDIGI --step DIGI,L1,DIGI2RAW,HLT:7E33v2 --conditions START53_V7C::All --pileup 2012_Summer_50ns_PoissonOOTPU --datamix NODATAMIXER --eventcontent RAWSIM --datatier GEN-SIM-RAW --filein=file:/gpfs/ddn/cms/user/donato/FastPV/CMSSW_5_3_10_patch2/src/VBF/test2012/04DE2282-3384-E211-B29F-60EB69BACA86.root --fileout=file:VBFrawS10_withHLT.root -n -1
cmsDriver RECO (AOD)
cmsDriver.py Step_RECO.py -s RAW2DIGI,RECO --conditions=92X_dataRun2_HLT_v7 --eventcontent=AOD -n 10 --no_exec --runUnscheduled --filein=file:/scratch/sdonato/ScoutingPFHT_Run2017E_RAW_304797.root --fileout=RECO.root
cmsDriver example (MINIAOD)
cmsDriver.py Step_MINIAOD.py -s PAT --conditions=92X_dataRun2_HLT_v7 --eventcontent=MINIAOD -n 10 --no_exec --runUnscheduled
https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookMiniAOD2017#Producing_MiniAOD
How to access EOS from lxplus
eg.
root -l root://eoscms//eos/cms/store/cmst3/group/vbfhbb/CMG/VBF1Parked/Run2012B/22Jan2013/PAT_CMG_V5_17_0/cmgTuple_973_1_XoS.root
How to mount EOS from lxplus
eosmount $HOME/eos
#umount
eosumount $HOME/eos
http://eos.cern.ch/index.php?option=com_content&view=article&id=87:using-eos-at-cern&catid=31:general&Itemid=41
Useful edm tools
edmConfigDump pippo.py > dump.py
print process.dumpPython()
CRAB example in MC
[CMSSW]
total_number_of_events = -1
number_of_jobs = 40
#events_per_job = 1
#number_of_jobs = 1
pset = HLT_QuadPFJet78_61_44_31_BTagCSV_VBF_v7.py
datasetpath = /Neutrino_Pt-2to20_gun/Fall13dr-tsg_PU40bx25_POSTLS162_V2-v1/GEN-SIM-RAW
output_file = AfterOldTrigger_64_44_24_CaloJetSelection.root
[USER]
return_data = 0
copy_data = 1
storage_element = T2_IT_Pisa
user_remote_dir = MinBias_PU40_25ns_HLT-VBF_GEN-SIM-RAW
publish_data = 0
[CRAB]
use_server = 0
scheduler = remoteGlidein
#scheduler = glite
jobtype = cmssw
CRAB example in Data
[CMSSW]
lumis_per_job = 1000
number_of_jobs = 25
#events_per_job = 1
#number_of_jobs = 1
pset = HLT_QuadPFJet78_61_44_31_BTagCSV_VBF_v7_data.py
datasetpath = /HLTPhysicsParked/Run2012D-v1/RAW
output_file = AfterOldTrigger_64_44_24_CaloJetSelection.root
lumi_mask = /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions12/8TeV/Prompt/Cert_190456-208686_8TeV_PromptReco_Collisions12_JSON.txt
[USER]
return_data = 0
copy_data = 1
storage_element = T2_IT_Pisa
user_remote_dir = Run2012_CMSSW_6_2_5_HLT-VBF_GEN-SIM-RAW
publish_data = 0
[CRAB]
use_server = 0
scheduler = remoteGlidein
#scheduler = glite
jobtype = cmssw
CRAB3 - set up & launch
cmsenv
source /cvmfs/cms.cern.ch/crab3/crab.csh
voms-proxy-init --voms cms --valid 168:00
crab submit -c my_crab_config_file.py
http://glidemon.web.cern.ch/glidemon/user.php?userid=1001&range=30days&type=crab3
http://dashb-cms-job.cern.ch/dashboard/templates/task-analysis/#user=SilvioDonato&refresh=0&table=Mains&p=1&records=25&activemenu=2&pattern=&task=&from=&till=&timerange=lastWeek
CRAB3 - python configuration file
from WMCore.Configuration import Configuration
config = Configuration()
config.section_("General")
config.General.requestName = 'L1Ntuple_FH_TTH_80X_v1'
config.General.workArea = 'crab_projects'
config.section_("JobType")
config.JobType.pluginName = 'Analysis'
config.JobType.psetName = 'l1Ntuple_RAW2DIGI.py'
config.JobType.outputFiles = ['L1Ntuple.root']
config.section_("Data")
config.Data.inputDataset = '/ttHTobb_M125_TuneCUETP8M2_ttHtranche3_13TeV-powheg-pythia8/RunIISummer16DR80-FlatPU28to62HcalNZSRAWAODSIM_80X_mcRun2_asymptotic_2016_TrancheIV_v6-v2/RAWAODSIM'
#config.Data.inputDBS = 'global'
config.Data.splitting = 'FileBased'
config.Data.publication = True
config.Data.unitsPerJob = 12*6/10
config.Data.totalUnits = 10
#config.Data.publishDBS = 'test'
config.Data.outputDatasetTag = config.General.requestName
config.section_("Site")
config.Site.storageSite = 'T2_CH_CSCS'
config.Data.ignoreLocality = False #use remote files by xrootd?
CRAB 3 - Infos
/afs/cern.ch/user/s/sdonato/AFSwork/NtupleL1July14/CMSSW_7_0_4/src/CRAB3
https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookCRAB3Tutorial
(MultiCRAB3)
https://twiki.cern.ch/twiki/bin/view/CMSPublic/CRABClientLibraryAPI
MultiCRAB3 example:
https://twiki.cern.ch/twiki/bin/view/Sandbox/HLTNtupleProductionSTEAM#MultiCRAB3_configuration
Test timing on CRAB:
crab verify
Test splitting jobs with CRAB:
crab submit --dryrun
Event based splitting (estimated):
config.Data.splitting = 'EventAwareLumiBased'
and
config.Data.unitsPerJob = N
where N is the number of events per job
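i.e., in the CRAB3 python configuration (a sketch; 50000 is just an example value):
config.Data.splitting = 'EventAwareLumiBased'
config.Data.unitsPerJob = 50000   # ~50k events per job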
find on DAS the files published with CRAB3: example
dataset=/VBF_HToBB_M-125_13TeV-powheg-pythia6/sdonato-VBF_L1HadronicSkim-47f7d60cdcd1e3c6897b4a4791426d8d/USER
https://cmsweb.cern.ch/das/request?view=list&limit=10&instance=prod%2Fphys03&input=dataset%3D%2F*%2Fsdonato*%2FUSER
Git common commands
git cms-addpkg DataFormats/TestObjects
git branch
git remote show
git checkout -b my-new-branch
git commit -m "Test feature" BuildFile.xml
git push my-cmssw my-new-branch
Git download from users
cmsrel CMSSW_5_3_11
cd CMSSW_5_3_11/src
cmsenv
git cms-merge-topic arizzi:splitterTests
from PR:
git cms-merge-topic 6326
Simplest python config
import FWCore.ParameterSet.Config as cms
process = cms.Process("SKIMEVENT")
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(30)
)
process.source = cms.Source("PoolSource",
#skipEvents = cms.untracked.uint32(58),
fileNames =
cms.untracked.vstring("file:/data/arizzi/NewTrackingPlusIVF/CMSSW_5_3_12_patch1/src/Validation/RecoB/test/qcd-quick-retracking-withsplitting/trk_0.root"),
)
process.outp1=cms.OutputModule("PoolOutputModule",
fileName = cms.untracked.string('skimmed30ev.root'),
)
process.out = cms.EndPath( process.outp1 )
process.TFileService = cms.Service("TFileService", fileName = cms.string("histo.root") )
Crab limits
# LIMITS USED BY CRAB WATCHDOG:
# RSS (KBytes) : 2300000
# VSZ (KBytes) : 100000000
# DISK (MBytes) : 19000
# CPU TIME : 720h:0m:0s
# WALL CLOCK TIME : 21h:50m:0s
How to get a histo (or profile) from a TCanvas
TProfile* a = (TProfile*)c1->FindObject("profs")
How to use an external files list in cmsRun
mfile = open("files.txt",'r')
names = mfile.read().split('\n')
readFiles = cms.untracked.vstring()
secFiles = cms.untracked.vstring()
names=names[4:]
print names[0]
print names[1]
for i in range(1,len(names)-1):
    names[i]="file:////gpfs/ddn/srm/cms/"+names[i]
    readFiles.extend([names[i]])
process.source = cms.Source ("PoolSource",fileNames = readFiles, secondaryFileNames =secFiles)
How to use multi processes
process.options = cms.untracked.PSet(
wantSummary = cms.untracked.bool( True ),
multiProcesses=cms.untracked.PSet(
maxChildProcesses=cms.untracked.int32(10),
maxSequentialEventsPerChild=cms.untracked.uint32(1))
)
How to use multithreading
process.options = cms.untracked.PSet(
wantSummary = cms.untracked.bool( True ),
# multiProcesses=cms.untracked.PSet(
# maxChildProcesses=cms.untracked.int32(10),
# maxSequentialEventsPerChild=cms.untracked.uint32(1))
)
NTHREADS =8
process.options.numberOfThreads = cms.untracked.uint32( NTHREADS )
process.options.numberOfStreams = cms.untracked.uint32( NTHREADS ) #same number as above
process.options.sizeOfStackForThreadsInKB = cms.untracked.uint32( 10*1024 )
How to get the Integral of a TH1D
TH1F* h2 = (TH1F*)histoIter_->Clone("h2")
h2->ComputeIntegral();
Double_t *integral = h2->GetIntegral();
h2->SetContent(integral);
h2->Draw("same");
AWK
cat redirectout_28762_*.log | grep " 0 HLT_Q" | awk '{a+=$4; b+=$5; print b"\t"a"\t"100*b/a }'
cat log3 | grep Tim | grep "Modules in Path: HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6" -A100 | awk '{a+=$2; print $2"\t"a }'
Convert a list of .pdf in .png
ls -1 | awk '{print "convert -trim ",$1,"\\"; gsub(/.pdf/,".png");print $1}' >convertMe
Two columns on bash
pr -mts, file1 file2
Two columns on bash
pr -mts, checkNewMenuV8 checkSTORM8013 --width=1000 | awk '{print $9-$4,"(",100*($9-$4)/(0.001+$4),"%)\t",$8,"\t",$4,"\t",$2,"\t",$1}'
Use perl to replace a word inside a txt
perl -pi -e 's/HLTSchedule/#HLTSchedule/g' hlt_cff.py
Common errors - how to make timing plots
An exception of category 'NotFound' occurred while
[0] Calling beginJob
Exception Message:
Service Request unable to find requested service with compiler type name ' 8DQMStore'.
Add process.DQMStore and process.DQM !!
process.load("DQMServices.Core.DQM_cfg")
process.load('HLTrigger.Timer.FastTimerService_cff')
process.DQMFileSaverOutput = cms.EndPath( process.dqmFileSaver )
Useful python commands
dir(object) (eg. dir(process.p) )
raise Exception("This is a fatal error")
assert a!=None
Useful CMSSW commands
process.mypath.insert(0,process.newobject)
TGraph in pyROOT
import ROOT
import array
grs = {}
for matching in matchings:
grs[matching] = {}
for mass in masses:
grs[matching] = ROOT.TGraph(2,array.array('d',mu_error["jets12"].keys()),array.array('d',mu_error["jets12"].values()))
Divide using bayesian error 68% CL
ratio = total.Clone("ratio")
ratio.Divide(passed, total, 1, 1, "cl=0.683 b(1,1) mode")   # NB: "pass" is a python keyword, use e.g. "passed" for the numerator histogram
NB. You must use the TGraphAsymmErrors::Divide function!! (see the pyROOT sketch below)
See:
https://root.cern.ch/doc/master/classTGraphAsymmErrors.html#a78cd209f4da9a169848ab23f539e1c94
See:
GetEffienciencyTHNF: https://github.com/silviodonato/usercode/blob/master/GetEffienciencyTHNF.py
to convert a THnF (e.g. TH3F) -> TH1F -> TEfficiency -> TH1F -> THnF
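A minimal pyROOT sketch of the TGraphAsymmErrors version of the bayesian division above (histogram names are placeholders):
eff = ROOT.TGraphAsymmErrors()
eff.Divide(passed, total, "cl=0.683 b(1,1) mode")   # bayesian 68.3% CL errors
eff.Draw("AP")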
PileUp PU info (number of true vertices)
Events->Scan("PileupSummaryInfos_addPileupInfo__HLT.obj.getTrueNumInteractions()","PileupSummaryInfos_addPileupInfo__HLT.obj.getBunchCrossing()==0")</pre>
Skim file root
{
TFile *oldfile = new TFile("/data2/p/pellicci/L1DPG/root/RegionCalib_V43/v4_62X_40PU_25bx_ReEmul2015/L1Tree.root");
TDirectoryFile* dir = (TDirectoryFile*) gDirectory->Get("l1NtupleProducer");
dir->cd();
TTree *oldtree = (TTree*)dir->Get("L1Tree");
TFile *newfile = new TFile("Skim.root","recreate");
TTree *newtree = oldtree->CloneTree(10000);
newtree->Print();
newtree->AutoSave();
delete oldfile;
delete newfile;
}
Example with edm::View and reco::Candidate
edm::Handle< edm::View<reco::Candidate> > pfMetH;
iEvent.getByLabel(edm::InputTag("pfMet"), pfMetH);
const edm::View<reco::Candidate> & pfMets = *pfMetH.product();
cout<<"pfMet="<<pfMets.begin()->et()<<endl;
Run a trigger path using an external configuration
edmConfigFromDB --cff --configName /users/sdonato/ZnnHbb710pre7/V3 --nopaths --nooutput > setup_cff.py
hltGetConfiguration /users/sdonato/ZnnHbb710pre7/V3 --no-output --path HLT_DiCentralPFNoPUJet30_PFMET80_BTagCSV07_v1 --mc --globaltag auto:startup_GRun --input file:/afs/pi.infn.it/user/sdonato/mystore/ZnnHbb_PU40_25ns_GEN-SIM-RAW_12May/ZeroBias13TeVSkimMET130_1_1_sEr.root --max-events -1 --cff --open > hlt_cff.py
import FWCore.ParameterSet.Config as cms
process = cms.Process("SKIMEVENT")
process.load("setup_cff")
process.load("hlt_cff")
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(30)
)
process.source = cms.Source("PoolSource",
#skipEvents = cms.untracked.uint32(58),
fileNames =
cms.untracked.vstring(
"file:/afs/pi.infn.it/user/sdonato/mystore/ZnnHbb_PU40_25ns_GEN-SIM-RAW_12May/ZeroBias13TeVSkimMET130_10_1_ZEI.root",
),
)
process.outp1=cms.OutputModule("PoolOutputModule",
fileName = cms.untracked.string('skimmed30ev.root'),
)
#process.out = cms.EndPath( process.outp1 )
# Enable HF Noise filters in GRun menu
if 'hltHfreco' in process.__dict__:
process.hltHfreco.setNoiseFlags = cms.bool( True )
# customise the HLT menu for running on MC
from HLTrigger.Configuration.customizeHLTforMC import customizeHLTforMC
process = customizeHLTforMC(process)
# CMSSW version specific customizations
import os
cmsswVersion = os.environ['CMSSW_VERSION']
# customization for 6_2_X
# none for now
# adapt HLT modules to the correct process name
if 'hltTrigReport' in process.__dict__:
process.hltTrigReport.HLTriggerResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreExpressCosmicsOutputSmart' in process.__dict__:
process.hltPreExpressCosmicsOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreExpressOutputSmart' in process.__dict__:
process.hltPreExpressOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreDQMForHIOutputSmart' in process.__dict__:
process.hltPreDQMForHIOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreDQMForPPOutputSmart' in process.__dict__:
process.hltPreDQMForPPOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreHLTDQMResultsOutputSmart' in process.__dict__:
process.hltPreHLTDQMResultsOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreHLTDQMOutputSmart' in process.__dict__:
process.hltPreHLTDQMOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreHLTMONOutputSmart' in process.__dict__:
process.hltPreHLTMONOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltDQMHLTScalers' in process.__dict__:
process.hltDQMHLTScalers.triggerResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
process.hltDQMHLTScalers.processname = 'HLTX'
if 'hltDQML1SeedLogicScalers' in process.__dict__:
process.hltDQML1SeedLogicScalers.processname = 'HLTX'
# limit the number of events to be processed
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32( -1 )
)
# enable the TrigReport and TimeReport
process.options = cms.untracked.PSet(
wantSummary = cms.untracked.bool( True )
)
# override the GlobalTag, connection string and pfnPrefix
if 'GlobalTag' in process.__dict__:
from Configuration.AlCa.GlobalTag import GlobalTag as customiseGlobalTag
process.GlobalTag = customiseGlobalTag(process.GlobalTag, globaltag = 'auto:startup_GRun')
process.GlobalTag.connect = 'frontier://FrontierProd/CMS_COND_31X_GLOBALTAG'
process.GlobalTag.pfnPrefix = cms.untracked.string('frontier://FrontierProd/')
for pset in process.GlobalTag.toGet.value():
pset.connect = pset.connect.value().replace('frontier://FrontierProd/', 'frontier://FrontierProd/')
# Fix for multi-run processing:
process.GlobalTag.RefreshEachRun = cms.untracked.bool( False )
process.GlobalTag.ReconnectEachRun = cms.untracked.bool( False )
#
if 'MessageLogger' in process.__dict__:
process.MessageLogger.categories.append('TriggerSummaryProducerAOD')
process.MessageLogger.categories.append('L1GtTrigReport')
process.MessageLogger.categories.append('HLTrigReport')
process.MessageLogger.categories.append('FastReport')
Turn-on functions
TF1* turnonPt = new TF1("turnonPt","(0.5+0.5*erf( (x-[0])*(x-[0]>[5])/[1] + (x-[0])*(x-[0]<[5])/[2] + [5]*(1/[1]-1/[2])*(x-[0]<[5]) ))*[4]+[3] ");
TF1* turnonPt = new TF1("turnonPt","(0.5+0.5*erf( (x-[0])/[1]))*[3]+[2] ");
TMath::ATan([1](*x-[0]))/(TMath::Pi())+0.5
1/(1+exp(-[1]*(x-[0])))
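Example of fitting an efficiency graph with the simpler erf-based function above, in pyROOT (the graph eff and the starting values are placeholders):
turnonPt = ROOT.TF1("turnonPt", "(0.5+0.5*TMath::Erf((x-[0])/[1]))*[3]+[2]")
turnonPt.SetParameters(50., 10., 0., 1.)   # [0]=threshold, [1]=width, [2]=offset, [3]=plateau
eff.Fit(turnonPt)                          # eff: e.g. a TGraphAsymmErrors with the turn-on curve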
tree->Draw("sqrt(TVector2::Phi_mpi_pi(jetPhiOffline[0]-cjetPhi[0])**2 + (jetEtaOffline[0]-cjetEta[0])**2)","","")
File merger (how to merge root CMSSW files)
import FWCore.ParameterSet.Config as cms
import FWCore.ParameterSet.VarParsing as VarParsing
process = cms.Process( "MERGER" )
options = VarParsing.VarParsing ('analysis')
options.register('name','aaa',options.multiplicity.singleton,options.varType.string,'the filenames')
options.parseArguments()
name=options.name
process.source = cms.Source( "PoolSource",
fileNames = cms.untracked.vstring(
'file:'+name+'_00.root',
'file:'+name+'_01.root',
'file:'+name+'_02.root',
'file:'+name+'_03.root',
'file:'+name+'_04.root',
'file:'+name+'_05.root',
'file:'+name+'_06.root',
'file:'+name+'_07.root',
'file:'+name+'_08.root',
'file:'+name+'_09.root',
'file:'+name+'_10.root',
'file:'+name+'_11.root',
'file:'+name+'_12.root',
'file:'+name+'_13.root',
'file:'+name+'_14.root',
'file:'+name+'_15.root',
'file:'+name+'_16.root',
'file:'+name+'_17.root',
'file:'+name+'_18.root',
'file:'+name+'_19.root',
),
secondaryFileNames = cms.untracked.vstring(
),
inputCommands = cms.untracked.vstring(
'keep *'
)
)
process.hltOutput = cms.OutputModule( "PoolOutputModule",
fileName = cms.untracked.string( name+"_merged.root" ),
outputCommands = cms.untracked.vstring(
'keep *',
'drop *_*_*_MERGER',
)
)
process.Output = cms.EndPath( process.hltOutput )
launch
cmsRun merger.py name=file:XToHHTo4b_GEN-SIM
Soft kill command (equivalent to CTRL+C)
kill -INT %1
Plot on ROOT the hltFastPrimaryVertex error
Events->Draw("abs(recoVertexs_hltFastPrimaryVertex__CHECK2.obj[0].position_.fCoordinates.fZ - SimVertexs_g4SimHits__SIM.obj[0].theVertex.fCoordinates.fZ)<1.5")
Events->Draw("abs(recoVertexs_hltFastPrimaryVertex__HLT.obj[0].position_.fCoordinates.fZ - recoGenParticles_genParticles__HLT.obj[2].vertex().z())<1.5")
Get the IP of your machine
wget -qO- http://ipecho.net/plain ; echo
Links
https://twiki.cern.ch/twiki/bin/view/CMS/HLTinCMSSW71X#CMSSW_7_1_0_pre8 (which HLT dev to use for CMSSW samples)
https://twiki.cern.ch/twiki/bin/view/CMS/SoftToolsOnlRelMenus
https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideGlobalHLT#Trigger_development_for_Run_2 (SW guide to HLT)
https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookXrootdService (How to take remote data files)
DBS query
dbs search --query "find dataset where dataset=*/VBF_HToBB_M-125_8TeV-powheg-pythia6_ext* "
dbs search --query "find sum(file.numevents) where dataset=/Neutrino_Pt-2to20_gun/Fall13dr-tsg_PU40bx25_POSTLS162_V2-v1/GEN-SIM-RAW"
process.load("FWCore.MessageLogger.MessageLogger_cfi")
process.MessageLogger.cerr.FwkReport.reportEvery = 1000
Clone method in CMSSW
newName = oldName.clone (changedParameter = 42)
How to dowload an HLT trigger path
edmConfigFromDB --cff --configName /dev/CMSSW_5_2_6/GRun --nopaths --nooutput --noservices > setup_cff.py
hltGetConfiguration /dev/CMSSW_5_2_1/GRun/V4 --offline --no-output --path HLT_DiCentralJet20_BTagIP_MET65_HBHENoiseFiltered_dPhi1_v1 --timing --data --globaltag auto:hltonline --l1-emulator --l1 L1GtTriggerMenu _L1Menu_Collisions2012_v0_mc --timing > HLT_DiCentralJet20_BTagIP_MET65_HBHENoiseFiltered_dPhi1_v1.py
hltGetConfiguration orcoff:/cdaq/physics/Run2012/7e33/v2.5/HLT/V1 --no-output --path HLT_QuadPFJet78_61_44_31_BTagCSV_VBF_v1 --mc --globaltag START53_V7C::All --unprescale > hlt3.py
hltGetConfiguration /dev/CMSSW_6_2_0/GRun/V24 --output full --path HLT_DoubleJet20_ForwardBackward_v4 --mc --globaltag auto:upgradePLS1 --unprescale --input file:/gpfs/ddn/srm/cms/store/mc/Fall13dr/Neutrino_Pt-2to20_gun/GEN-SIM-RAW/tsg_PU40bx25_POSTLS162_V2-v1/00000/1EE8B655-0378-E311-9905-0030486791F2.root --max-events 100
How to use process.source with keep and drop
process.source = cms.Source( "PoolSource",
fileNames = cms.untracked.vstring(
'file:skimL1TripleJetCVBFfromPhys198609.root',
),
secondaryFileNames = cms.untracked.vstring(
),
dropDescendantsOfDroppedBranches=cms.untracked.bool(False),
inputCommands=cms.untracked.vstring(
'drop *',
'keep *_genParticles_*_*',
'keep *_ak5GenJets_*_*',
'keep *_g4SimHits_*_*',
'keep FEDRawDataCollection_*_*_*'
)
)
Output Module
process.outp1=cms.OutputModule("PoolOutputModule",
fileName = cms.untracked.string('savep1.root'),
outputCommands = cms.untracked.vstring('keep *'),
SelectEvents = cms.untracked.PSet(
SelectEvents = cms.vstring('p1')
)
)
process.out = cms.EndPath( process.outp1 )
How to use X2GO with Pisa cmsanalysis
(install x2go with 'sudo apt-get install x2goclient')
From x2go, define a session with:
- Host: cmsanalysis
- Login: sdonato
- SSH port: 22 (default)
- Use Proxy server for SSH connection:
  - Type: SSH
  - Host: galilinux.pi.infn.it
  - Port: 22
  - Same login as on X2Go server
  - Same password as on X2Go server
- Session Type -> Single application -> Terminal
or /gpfs/ddn/users/sdonato/xterm_mod
with
/gpfs/ddn/users/sdonato/xterm_mod
containing
xterm -ls -xrm 'xterm*VT100.Translations: #override \ Ctrl Shift <Key>V: insert-selection(CLIPBOARD) \n\ Ctrl Shift <Key>C: copy-selection(CLIPBOARD)'
(chmod +x /gpfs/ddn/users/sdonato/xterm_mod)
Tunnel HLT on-call DOC
See
https://twiki.cern.ch/twiki/bin/viewauth/CMS/HLTOnCallGuide#Tunnel_Instructions_for_Remote_S
ssh -f -N cmsusr
using as ~/.ssh/config file:
Host *.cern.ch
user sdonato
Host lxplus lxplus.cern.ch
# Host name
HostName lxplus.cern.ch
# Username
User sdonato
# Use SSHv2 only
Protocol 2
# Forward your SSH key agent so that it can be used on further hops
ForwardAgent yes
# For X11
#ForwardX11 yes
#ForwardX11Trusted no
Host cmspisa001.cern.ch
ProxyCommand ssh lxplus.cern.ch /usr/bin/nc %h %p 2> /dev/null
GSSAPITrustDNS no
Host techlab-arm64-thunderx2-02.cern.ch
ProxyCommand ssh lxplus.cern.ch /usr/bin/nc %h %p 2> /dev/null
ForwardAgent yes
Host techlab-arm64-thunderx2-02.cern.ch
user cmsbld
HostName techlab-arm64-thunderx2-02.cern.ch
ProxyCommand ssh lxplus.cern.ch -W %h:%p 2> /dev/null
Host galilinux.pi.infn.it
user sdonato
HostName galilinux.pi.infn.it
Host cmsanalysis.pi.infn.it
user sdonato
HostName cmsanalysis.pi.infn.it
ProxyCommand ssh galilinux.pi.infn.it -W %h:%p 2> /dev/null
######################
# CMS Network Access #
######################
Host cmsusr cmsusr* cmsusr*.cms
# Username (replace by your cmsusr username if different from your LXPLUS one)
User sdonato
# Use SSHv2 only
Protocol 2
# Forward your SSH key agent so that it can be used on further hops
ForwardAgent yes
# For X11
#ForwardX11 yes
#ForwardX11Trusted no
# Go through lxplus so that it works from wherever you are
ProxyCommand ssh lxplus nc %h 22
# Setup a SOCKS5 proxy on local port 1080 so that you can easily browse internal CMS web sites
DynamicForward 1080
# DAQ OnCall settings (DB and daqweb)
# For connection to the DB, from outside use the tnsnames.ora file provided where this file was provided
LocalForward 10121 cmsonr1-v.cms:10121
#LocalForward 10122 cmsonr2-v.cms:10121
#LocalForward 10123 cmsonr3-v.cms:10121
#LocalForward 10124 cmsonr4-v.cms:10121
#LocalForward 10125 cmsintr1-v.cms:10121
#LocalForward 10126 cmsintr2-v.cms:10121
#LocalForward 45679 cmsdaqweb.cms:45679
From CERN network:
ssh -f -N sdonato@cmsusr.cern.ch -L 10121:cmsonr1-v.cms:10121
or
ssh -f -N sdonato@cmsusr.cern.ch -D 1081
How to use X2GO from .cms (cms-online)
See
https://twiki.cern.ch/twiki/bin/view/CMS/ClusterUsersGuide (cmsusr twiki)
First 2 steps
pass ssh -f -N -L 5022:cmsusr.cern.ch:22 sdonato@lxplus.cern.ch # Connect to the private network via lxplus
passDOC ssh -f -N -L 5122:x2go06:22 sdonato@localhost -p 5022 # To use the X2GO client
passDOC ssh -f -N -L 10121:cmsrac42-v.cms:10121 -p 5022 sdonato@localhost # Allow ConfDB to connect to the online database
passDOC ssh -f -N -D 1081 -p 5022 sdonato@localhost # Allow Firefox to connect assuming you have properly setup FoxyProxy
#passDOC ssh -f -N -L 5122:cmsnx2:22 sdonato@localhost -p 5022 # To use the NX client
pass ssh -f -N sdonato@lxplus.cern.ch -L 10122:cmsr3-s.cern.ch:10121 # Allow ConfDB to connect to the offline database (if necessary)
Then use x2go client with localhost 5122
How to run ConfDB directly from a .jnlp file (icedtea, javaws)
From terminal:
itweb-settings
Then: -> Policy settings -> Simple Editor -> Permissions: add a tick to
- ) Access the network
- ) Execute unowned code
-> Apply, Ok
How to use Windows Terminal Services Client at CERN - Remote Desktop
xfreerdp -a 16 -u sdonato -d CERN -g 1920x1080 cernts.cern.ch ### xfreerdp -a 16 -u sdonato -d CERN -g 1024x768 cernts.cern.ch
or simply use Remmina:
- select RDP
- address: cernts.cern.ch
- press ENTER
- username: sdonato
- password: (CERN password)
- domain: CERN
- press ENTER
How to use VNC at CERN
Example done with '4'. If it does not work, try with another number
Tab-1:
ssh sdonato@lxplus.cern.ch -L5804:localhost:5804 -L5904:localhost:5904
vncserver -geometry 1600x800 :4
Tab-2:
xvncviewer localhost:4
(or xvnc4viewer)
Tab-1:
vncserver -kill :4
CRAB3 - set up & launch
cmsenv
source /cvmfs/cms.cern.ch/crab3/crab.sh
voms-proxy-init --voms cms --valid 168:00
crab submit -c my_crab_config_file.py
crab status -t crab_projects/crab_MinBias_PU40_Sept14_CRAB3/
CRAB3 - pyhton configuration file
from WMCore.Configuration import Configuration
config = Configuration()
config.section_("General")
config.General.requestName = 'NeutrinoL1FlatPU_CRAB3'
config.General.workArea = 'crab_projects'
config.section_("JobType")
config.JobType.pluginName = 'Analysis'
config.JobType.psetName = 'ntuplerL1MinBiasFlatPU.py'
config.section_("Data")
config.Data.inputDataset = '/Neutrino_Pt-2to20_gun/Spring14dr-PU20bx50_POSTLS170_V6-v1/GEN-SIM-RAW'
config.Data.dbsUrl = 'global'
config.Data.splitting = 'FileBased'
config.Data.publication = True
config.Data.unitsPerJob = 30
config.Data.totalUnits = -1
config.Data.publishDbsUrl = 'test'
config.Data.publishDataName = 'NeutrinoL1FlatPU_CRAB3'
config.section_("Site")
config.Site.storageSite = 'T2_IT_Pisa'
Git download from users
cmsrel CMSSW_5_3_11
cd CMSSW_5_3_11/src
cmsenv
git cms-merge-topic silviodonato/hlt-validation-2
[to modify the branch]
git checkout silviodonato/hlt-validation-2
How to match the CaloJets with the b-quarks directly by ROOT
Events->Draw("recoJetedmRefToBaseProdTofloatsAssociationVector_hltCombinedSecondaryVertexBJetTagsCalo__TEST.obj[0].data_","Min$(reco::deltaR(recoCaloJets_hltSelector8CentralJetsL1FastJet__TEST.obj[0].eta(),recoCaloJets_hltSelector8CentralJetsL1FastJet__TEST.obj[0].phi(),recoGenParticles_genParticles__HLT.obj.eta(),recoGenParticles_genParticles__HLT.obj.phi()) + (recoGenParticles_genParticles__HLT.obj.pt()<2)*1000 + (abs(recoGenParticles_genParticles__HLT.obj.pdgId())==5)*1000)<0.3")
Command to wait for jobs (qsub/bsub) to be done
bash while loop:
i="1"
while [ $i -gt 0 ]
do
i=`qstat -u sdonato | wc -l`
echo $i
sleep 1
done
for ITER in 0 1 2 3 4 5 6 7 8 9
do
hadd -f Run2016B_$ITER.root /mnt/t3nfs01/data01/shome/dbrzhech/DijetScouting/CMSSW_8_0_30/src/DijetRootTreeAnalyzer/output_20190607_181317/*_Run2016B-*${ITER}_reduced_skim.root >& logRun2016B_$ITER &
done
How to take the two b quarks from the Higgs decay
Events->Scan("recoGenParticles_genParticles__SIM.obj.pdgId_","abs(recoGenParticles_genParticles__SIM.obj.pdgId_)==5 && recoGenParticles_genParticles__SIM.obj[recoGenParticles_genParticles__SIM.obj.mom.refVector_.keys_].pdgId_==25 && recoGenParticles_genParticles__SIM.obj.mom.refVector_.keys_")
How to get a histo (or profile) from a TCanvas
TProfile* a = (TProfile*)c1->FindObject("profs")
How to use an external files list in cmsRun
# assumes "import FWCore.ParameterSet.Config as cms" and a cms.Process defined above
mfile = open("files.txt",'r')
names = mfile.read().split('\n')
readFiles = cms.untracked.vstring()
secFiles = cms.untracked.vstring()
names = names[4:]   # skip the first 4 lines of files.txt
print names[0]
print names[1]
for i in range(1,len(names)-1):
    names[i] = "file:////gpfs/ddn/srm/cms/" + names[i]
    readFiles.extend([names[i]])
process.source = cms.Source("PoolSource", fileNames = readFiles, secondaryFileNames = secFiles)
How to use multi processes
process.options = cms.untracked.PSet(
wantSummary = cms.untracked.bool( True ),
multiProcesses=cms.untracked.PSet(
maxChildProcesses=cms.untracked.int32(10),
maxSequentialEventsPerChild=cms.untracked.uint32(1))
)
How to get the Integral of a TH1D
TH1F* h2 = (TH1F*)histoIter_->Clone("h2")
h2->ComputeIntegral();
Double_t *integral = h2->GetIntegral();
h2->SetContent(integral);
h2->Draw("same");
Trigger objects and trigger filter in MINIAOD
See
https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookMiniAOD2017#Trigger
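A minimal FWLite sketch along the lines of that TWiki (the input file name is an assumption; older samples use selectedPatTrigger instead of slimmedPatTrigger, and unpackFilterLabels may not exist in releases before 9_4_X):
from DataFormats.FWLite import Events, Handle
triggerBits, bitLabel = Handle("edm::TriggerResults"), ("TriggerResults", "", "HLT")
triggerObjects, objLabel = Handle("std::vector<pat::TriggerObjectStandAlone>"), "slimmedPatTrigger"
events = Events("miniAOD.root")   # assumed input file
for event in events:
    event.getByLabel(bitLabel, triggerBits)
    event.getByLabel(objLabel, triggerObjects)
    names = event.object().triggerNames(triggerBits.product())
    # accepted HLT paths
    for i in range(triggerBits.product().size()):
        if triggerBits.product().accept(i):
            print names.triggerName(i)
    # trigger objects and the filters they passed
    for obj in triggerObjects.product():
        obj.unpackFilterLabels(event.object(), triggerBits.product())
        print obj.pt(), obj.eta(), obj.phi(), list(obj.filterLabels())
    break   # first event only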
AWK
cat redirectout_28762_*.log | grep " 0 HLT_Q" | awk '{a+=$4; b+=$5; print b"\t"a"\t"100*b/a }'
cat log3 | grep Tim | grep "Modules in Path: HLT_DiCentralPFJet30_PFMET80_BTagCSV07_v6" -A100 | awk '{a+=$2; print $2"\t"a }'
Use perl to replace a word inside a txt
perl -pi -e 's/HLTSchedule/#HLTSchedule/g' hlt_cff.py
Draw Feynman diagrams
JaxoDraw
http://jaxodraw.sourceforge.net/download/index.html
Useful python commands
dir(object) (eg. dir(process.p) )
Useful CMSSW commands
process.mypath.insert(0,process.newobject)
TLegend
TLegend* leg = new TLegend(0.1,0.7,0.48,0.9);
leg->SetHeader("The Legend Title");
leg->AddEntry(h1,"Histogram filled with random numbers","f");
leg->AddEntry("f1","Function abs(#frac{sin(x)}{x})","l");
leg->AddEntry("gr","Graph with error bars","lep");
leg->Draw();
==================================
leg = ROOT.TLegend(0.1,0.7,0.48,0.9)
leg.SetHeader("")
leg.AddEntry(h1, "histogram label", "l") # option: "l", "lep" or "f"
PileUp PU info (number of true vertices)
Events->Scan("PileupSummaryInfos_addPileupInfo__HLT.obj.getTrueNumInteractions()","PileupSummaryInfos_addPileupInfo__HLT.obj.getBunchCrossing()==0")
Skim file root
{
TFile *oldfile = new TFile("/data2/p/pellicci/L1DPG/root/RegionCalib_V43/v4_62X_40PU_25bx_ReEmul2015/L1Tree.root");
TDirectoryFile* dir = (TDirectoryFile*) gDirectory->Get("l1NtupleProducer");
dir->cd();
TTree *oldtree = (TTree*)dir->Get("L1Tree");
TFile *newfile = new TFile("Skim.root","recreate");
TTree *newtree = oldtree->CloneTree(10000);
newtree->Print();
newtree->AutoSave();
delete oldfile;
delete newfile;
}
Example with edm::View and reco::Candidate
edm::Handle< edm::View<reco::Candidate> > pfMetH;
iEvent.getByLabel(edm::InputTag("pfMet"), pfMetH);
const edm::View<reco::Candidate> & pfMets = *pfMetH.product();
cout<<"pfMet="<<pfMets.begin()->et()<<endl;
Run a trigger path using an external configuration
edmConfigFromDB --cff --configName /users/sdonato/ZnnHbb710pre7/V3 --nopaths --nooutput > setup_cff.py
hltGetConfiguration /users/sdonato/ZnnHbb710pre7/V3 --no-output --path HLT_DiCentralPFNoPUJet30_PFMET80_BTagCSV07_v1 --mc --globaltag auto:startup_GRun --input file:/afs/pi.infn.it/user/sdonato/mystore/ZnnHbb_PU40_25ns_GEN-SIM-RAW_12May/ZeroBias13TeVSkimMET130_1_1_sEr.root --max-events -1 --cff --open > hlt_cff.py
import FWCore.ParameterSet.Config as cms
process = cms.Process("SKIMEVENT")
process.load("setup_cff")
process.load("hlt_cff")
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(30)
)
process.source = cms.Source("PoolSource",
#skipEvents = cms.untracked.uint32(58),
fileNames =
cms.untracked.vstring(
"file:/afs/pi.infn.it/user/sdonato/mystore/ZnnHbb_PU40_25ns_GEN-SIM-RAW_12May/ZeroBias13TeVSkimMET130_10_1_ZEI.root",
),
)
process.outp1=cms.OutputModule("PoolOutputModule",
fileName = cms.untracked.string('skimmed30ev.root'),
)
#process.out = cms.EndPath( process.outp1 )
# Enable HF Noise filters in GRun menu
if 'hltHfreco' in process.__dict__:
process.hltHfreco.setNoiseFlags = cms.bool( True )
# customise the HLT menu for running on MC
from HLTrigger.Configuration.customizeHLTforMC import customizeHLTforMC
process = customizeHLTforMC(process)
# CMSSW version specific customizations
import os
cmsswVersion = os.environ['CMSSW_VERSION']
# customization for 6_2_X
# none for now
# adapt HLT modules to the correct process name
if 'hltTrigReport' in process.__dict__:
process.hltTrigReport.HLTriggerResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreExpressCosmicsOutputSmart' in process.__dict__:
process.hltPreExpressCosmicsOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreExpressOutputSmart' in process.__dict__:
process.hltPreExpressOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreDQMForHIOutputSmart' in process.__dict__:
process.hltPreDQMForHIOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreDQMForPPOutputSmart' in process.__dict__:
process.hltPreDQMForPPOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreHLTDQMResultsOutputSmart' in process.__dict__:
process.hltPreHLTDQMResultsOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreHLTDQMOutputSmart' in process.__dict__:
process.hltPreHLTDQMOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltPreHLTMONOutputSmart' in process.__dict__:
process.hltPreHLTMONOutputSmart.hltResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
if 'hltDQMHLTScalers' in process.__dict__:
process.hltDQMHLTScalers.triggerResults = cms.InputTag( 'TriggerResults', '', 'HLTX' )
process.hltDQMHLTScalers.processname = 'HLTX'
if 'hltDQML1SeedLogicScalers' in process.__dict__:
process.hltDQML1SeedLogicScalers.processname = 'HLTX'
# limit the number of events to be processed
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32( -1 )
)
# enable the TrigReport and TimeReport
process.options = cms.untracked.PSet(
wantSummary = cms.untracked.bool( True )
)
# override the GlobalTag, connection string and pfnPrefix
if 'GlobalTag' in process.__dict__:
from Configuration.AlCa.GlobalTag import GlobalTag as customiseGlobalTag
process.GlobalTag = customiseGlobalTag(process.GlobalTag, globaltag = 'auto:startup_GRun')
process.GlobalTag.connect = 'frontier://FrontierProd/CMS_COND_31X_GLOBALTAG'
process.GlobalTag.pfnPrefix = cms.untracked.string('frontier://FrontierProd/')
for pset in process.GlobalTag.toGet.value():
pset.connect = pset.connect.value().replace('frontier://FrontierProd/', 'frontier://FrontierProd/')
# Fix for multi-run processing:
process.GlobalTag.RefreshEachRun = cms.untracked.bool( False )
process.GlobalTag.ReconnectEachRun = cms.untracked.bool( False )
#
if 'MessageLogger' in process.__dict__:
process.MessageLogger.categories.append('TriggerSummaryProducerAOD')
process.MessageLogger.categories.append('L1GtTrigReport')
process.MessageLogger.categories.append('HLTrigReport')
process.MessageLogger.categories.append('FastReport')
Turn-on functions
TMath::ATan([1]*x-[0])/(TMath::Pi())+0.5
1/(1+exp(-[0]*(x-[1])))
tree->Draw("sqrt(TVector2::Phi_mpi_pi(jetPhiOffline[0]-cjetPhi[0])**2 + (jetEtaOffline[0]-cjetEta[0])**2)","","")
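These functional forms can be used directly as a TF1 to fit a measured efficiency; a minimal pyROOT sketch (the efficiency graph eff and the starting parameter values are assumptions):
import ROOT
# sigmoid turn-on: [0] = slope, [1] = threshold (half-efficiency point)
turnOn = ROOT.TF1("turnOn", "1/(1+exp(-[0]*(x-[1])))", 0, 200)
turnOn.SetParameters(0.1, 80.)   # assumed starting values
# arctan variant of the same shape
turnOnAtan = ROOT.TF1("turnOnAtan", "TMath::ATan([1]*x-[0])/(TMath::Pi())+0.5", 0, 200)
# eff is assumed to be a TGraphAsymmErrors (or TH1) with the measured efficiency vs pT
# eff.Fit(turnOn, "R")
# eff.Draw("AP"); turnOn.Draw("same")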
File merger (how to merge root CMSSW files)
import FWCore.ParameterSet.Config as cms
import FWCore.ParameterSet.VarParsing as VarParsing
process = cms.Process( "MERGER" )
options = VarParsing.VarParsing ('analysis')
options.register('name','aaa',options.multiplicity.singleton,options.varType.string,'the filenames')
options.parseArguments()
name=options.name
process.source = cms.Source( "PoolSource",
fileNames = cms.untracked.vstring(
'file:'+name+'_00.root',
'file:'+name+'_01.root',
'file:'+name+'_02.root',
'file:'+name+'_03.root',
'file:'+name+'_04.root',
'file:'+name+'_05.root',
'file:'+name+'_06.root',
'file:'+name+'_07.root',
'file:'+name+'_08.root',
'file:'+name+'_09.root',
'file:'+name+'_10.root',
'file:'+name+'_11.root',
'file:'+name+'_12.root',
'file:'+name+'_13.root',
'file:'+name+'_14.root',
'file:'+name+'_15.root',
'file:'+name+'_16.root',
'file:'+name+'_17.root',
'file:'+name+'_18.root',
'file:'+name+'_19.root',
),
secondaryFileNames = cms.untracked.vstring(
),
inputCommands = cms.untracked.vstring(
'keep *'
)
)
process.hltOutput = cms.OutputModule( "PoolOutputModule",
fileName = cms.untracked.string( name+"_merged.root" ),
outputCommands = cms.untracked.vstring(
'keep *',
'drop *_*_*_MERGER',
)
)
process.Output = cms.EndPath( process.hltOutput )
Soft kill command (equivalent to CTRL+C)
kill -INT %1
Get trigger with L1 emulator
hltGetConfiguration /dev/CMSSW_7_1_2/HLT --path HLT_PFMHT100_SingleCentralJet60_BTagCSV0p6_v1 --full --offline --mc --unprescale --process TEST --globaltag auto:startup_GRun --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v1_L1T_Scales_20101224_Imp0_0x102f.xml --open --input root://xrootd.ba.infn.it///store/data/Run2012C/ZeroBias/RAW/v1/000/198/609/58A2ABD6-70CA-E111-9327-0030486780E6.root --output none > hltNew.py
How to allow some warnings with CMSSW scram b
setenv USER_CXXFLAGS "-Wno-delete-non-virtual-dtor -Wno-error=unused-but-set-variable -Wno-error=unused-variable"
How to find the cross-section (crosssection , xsection ) of a sample with DAS
check /mnt/t3nfs01/data01/shome/zucchett/SFrameAnalysis/BatchSubmission/xsections.py on the Tier3
https://cmsweb.cern.ch/das/request?view=list&limit=50&instance=prod%2Fglobal&input=mcm++dataset%3D%2FTT_TuneCUETP8M1_13TeV-powheg-pythia8%2FRunIIWinter15GS-MCRUN2_71_V1-v1%2FGEN-SIM++|+grep+mcm.generator_parameters.cross_section
How to calculate cross sections (xsect)
https://twiki.cern.ch/twiki/bin/viewauth/CMS/HowToGenXSecAnalyzer
cern-get-sso-cookie -u https://cms-pdmv-dev.cern.ch/mcm/ -o ~/private/dev-cookie.txt --krb --reprocess
cern-get-sso-cookie -u https://cms-pdmv.cern.ch/mcm/ -o ~/private/prod-cookie.txt --krb --reprocess
source /afs/cern.ch/cms/PPD/PdmV/tools/McM/getCookie.sh
voms-proxy-init -voms cms
cmsrel CMSSW_9_4_9
cd CMSSW_9_4_9/src
cmsenv
scram b -j8
cd ../../
# download the genproductions repository
git clone https://github.com/cms-sw/genproductions.git
cd genproductions/test/calculateXSectionAndFilterEfficiency
./calculateXSectionAndFilterEfficiency.sh -f datasets.txt -c RunIIFall17 -d MINIAODSIM -n 1000000
Plot on ROOT the hltFastPrimaryVertex error
Events->Draw("abs(recoVertexs_hltFastPrimaryVertex__CHECK2.obj[0].position_.fCoordinates.fZ - SimVertexs_g4SimHits__SIM.obj[0].theVertex.fCoordinates.fZ)<1.5")
Madgraph aMC generate events and simulation
See
https://twiki.cern.ch/twiki/bin/view/CMSPublic/MadgraphTutorial
wget https://launchpadlibrarian.net/446898081/MG5_aMC_v2.6.7.tar.gz
tar -xf MG5_aMC_v2.6.7.tar.gz
cd MG5_aMC_v2_6_7
./bin/mg5_aMC
install pythia-pgs
install Delphes
Uncomment "# delphes_path = ./Delphes" in input/mg5_configuration.txt
generate p p > j j
output myTest
Check myTest folder index.html
launch myTest
PRESS 2 to enable Delphes and Pythia (needed to have the final GenJets variables)
ENTER
Check/modify the cards (param_card.dat defines the particles, run_card.dat defines the initial/final state, e.g. jet pt and eta cuts)
ENTER to start the event generation
cd Delphes
gunzip ../myTest/Events/run_04/tag_1_pythia_events.hep.gz
./DelphesSTDHEP cards/delphes_card_ATLAS.tcl ../myTest_03.root ../myTest/Events/run_03/tag_1_pythia_events.hep
Event size
https://cmsweb.cern.ch/das/request?input=dataset%3D%2FScoutingCaloHT%2FRun2016H-v1%2FRAW&instance=prod/global
https://cmsweb.cern.ch/das/request?input=dataset%3D%2FJetHT%2FRun2016H-v1%2FRAW&instance=prod/global
Run2016H-v1
ScoutingCaloHT: 4381382768519 B / 3684448435 events = 1189 B/event = 1.2 KB
JetHT: 72413390813294 B / 125958413 events = 574899 B/event = 574.9 KB
or from OMS:
https://cmsoms.cern.ch/cms/runs/report?cms_run=283408
PhysicsHadronsTaus: 630255.830 B /event = 630 KB
ScoutingCalo: 2871.805 B/event = 2.87 KB
Run CMSSW in a folder/directory
import FWCore.ParameterSet.Config as cms
import FWCore.ParameterSet.VarParsing as VarParsing
process = cms.Process( "HLTX" )
options = VarParsing.VarParsing ('analysis')
options.register('name','aaa',options.multiplicity.singleton,options.varType.string,'the filenames')
options.parseArguments()
name=options.name
from os import walk
fileList = []
for (dirpath, dirnames, filenames) in walk(name):
    for filename in filenames:
        if filename.endswith(".root"):
            fileList.extend(["file:" + dirpath + "/" + filename])
    if len(fileList) >= 256: break
process.source = cms.Source( "PoolSource",
fileNames = cms.untracked.vstring(
fileList
# 'file:ZnnHbb_222A3D89-FC6F-E311-82DA-00266CF9B8B0.root',
),
secondaryFileNames = cms.untracked.vstring(
),
inputCommands = cms.untracked.vstring(
'keep *'
)
)
Get the IP of your machine
wget -qO- http://ipecho.net/plain ; echo
How to install CERN printers on Ubuntu 16.04
Follow the instructions for Ubuntu 12.04 in
https://twiki.cern.ch/twiki/bin/view/Main/UbuntuPrinting
It works in Ubuntu 16.04!
Printer CERN
Color ATLAS: 40-4D274-CANON
B/W
CMS: 40-4B-COR
Color: 161-R402-HPCOL
B/W: 161-1009-HP
Building 15
B/W: 231-1201-CANON
Links
https://twiki.cern.ch/twiki/bin/view/CMS/HLTinCMSSW71X#CMSSW_7_1_0_pre8 (which HLT dev to use for CMSSW samples)
https://twiki.cern.ch/twiki/bin/view/CMS/SoftToolsOnlRelMenus
https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideGlobalHLT#Trigger_development_for_Run_2 (SW guide to HLT)
https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookXrootdService (How to take remote data files)
java -jar confdb-gui-all-inclusive.jar
Inline bash loop (resubmit crab folders)
for file in ./crab_PROD_4_3_*; do echo -d $file; done
for file in ./crab_PROD_4_3_*; do crab resubmit -d $file; done
VBF trigger test (Feb 2016)
cmsrel CMSSW_8_0_0
cd CMSSW_8_0_0/src
cmsenv
git cms-addpkg L1TriggerConfig/L1GtConfigProducers
cp /afs/cern.ch/user/t/tmatsush/public/L1Menu/L1Menu_Collisions2015_25nsStage1_v5/xml/L1Menu_Collisions2015_25nsStage1_v5_L1T_Scales_20141121.xml L1TriggerConfig/L1GtConfigProducers/data/Luminosity/startup/L1Menu_Collisions2015_25nsStage1_v5_L1T_Scales_20141121.xml
cp /afs/cern.ch/work/g/georgia/public/L1prescales/Stage1-v5_prescales/7e33/* L1TriggerConfig/L1GtConfigProducers/python/.
scram build -j 4
hltGetConfiguration \
/dev/CMSSW_8_0_0/HLT/V16 \
--prescale 7e33 \
--input root://xrootd.ba.infn.it//store/mc/RunIIFall15DR76/QCD_Pt_15to30_TuneCUETP8M1_13TeV_pythia8/GEN-SIM-RAW/25nsFlat10to50NzshcalRaw_76X_mcRun2_asymptotic_v12-v1/00000/000AB15C-AAAB-E511-9988-0025904A891A.root \
--max-events 100 \
--full --offline --mc --process TEST \
--open \
--path HLT_QuadPFJet_VBF_v4,HLT_QuadPFJet_BTagCSV_p037_p11_VBF_Mqq200_v1,HLT_QuadPFJet_BTagCSV_p037_VBF_Mqq460_v1 \
--output full \
> hlt.py
Matching b-quark from Higgs to Higgs jets with deltaR
tree->Draw("max(Min$(abs(GenBQuarkFromH_eta-Jet_eta[hJCidx[1]])**2+(TVector2::Phi_mpi_pi(GenBQuarkFromH_phi-Jet_phi[hJCidx[1]]))**2)**0.5,Min$(abs(GenBQuarkFromH_eta-Jet_eta[hJCidx[0]])**2+(TVector2::Phi_mpi_pi(GenBQuarkFromH_phi-Jet_phi[hJCidx[0]]))**2)**0.5)","((Sum$(abs(GenBQuarkFromH_eta)<2.4 && GenBQuarkFromH_pt>20)==2) && Vtype==4) ","");
Get HLT BTag plots from DQM
((TDirectory*) gDirectory->Get("DQMData/Run 1/HLT/Run summary/BTag/Discrimanator/HLT_QuadPFJet_SingleBTagCSV_VBF_Mqq460_/efficiency"))->cd()
Get n-th element after a selection in ROOT tree->Draw() (TTree::Draw)
#include <iostream>
#include <vector>
std::vector<float> pts;
float Pt4(float pt, float eta, int element, int iteration, int length){
float value = 0;
if(iteration==0){
pts.clear();
}
if (abs(eta)<2.4){
pts.push_back(pt);
}
if (iteration==length-1){
// pts.sort();
if (pts.size()>=4){
value= pts[element];
pts.clear();
}
}
return value;
}
// tree->Scan("Sum$(Pt4(Jet_pt,Jet_eta,3,Iteration$,Length$)):Jet_pt[3]")
CSV
#include <iostream>
#include <vector>
#include <algorithm>
std::vector<float> pts;
float CSV3(float csv, int iteration, int length){
using namespace std;
float value = 0;
if(iteration==0){
pts.clear();
}
pts.push_back(csv);
if (iteration==length-1){
std::sort(pts.begin(),pts.end());
std::reverse(pts.begin(),pts.end());
if (pts.size()>=3){
value= pts[2];
pts.clear();
}
}
return value;
}
// tree->Scan("Sum$(Pt4(Jet_pt,Jet_eta,3,Iteration$,Length$)):Jet_pt[3]")
Modify/override the global-tag (GT,globaltag). Use the old/new btag training
process.GlobalTag.toGet.append(
cms.PSet(
record = cms.string("SiPixelGainCalibrationForHLTRcd"),
tag = cms.string("SiPixelGainCalibrationHLT_2009runs_hlt"),
connect = cms.string("frontier://FrontierProd/CMS_CONDITIONS")
)
)
to be added just below:
if 'GlobalTag' in process.__dict__:
from Configuration.AlCa.GlobalTag import GlobalTag as customiseGlobalTag
process.GlobalTag = customiseGlobalTag(process.GlobalTag, globaltag = '100X_dataRun2_relval_v2')
from CondCore.DBCommon.CondDBSetup_cfi import *
process.BTauMVAJetTagComputerRecord = cms.ESSource("PoolDBESSource",CondDBSetup,
connect = cms.string("frontier://FrontierProd/CMS_CONDITIONS"),
pfnPrefix = cms.untracked.string('frontier://FrontierProd/'),
toGet = cms.VPSet(cms.PSet(record = cms.string("BTauGenericMVAJetTagComputerRcd"),
label = cms.untracked.string("HLT"),
tag = cms.string("MVAComputerContainer_75X_JetTags_v5_online")),
)
)
process.es_prefer_BTauMVAJetTagComputerRecord = cms.ESPrefer("PoolDBESSource","BTauMVAJetTagComputerRecord")
or if you have a local file:
process.load("CondCore.DBCommon.CondDBSetup_cfi")
process.BTauMVAJetTagComputerRecord = cms.ESSource("PoolDBESSource",
process.CondDBSetup,
timetype = cms.string('runnumber'),
toGet = cms.VPSet(cms.PSet(
record = cms.string('BTauGenericMVAJetTagComputerRcd'),
label = cms.untracked.string("HLT"),
tag = cms.string('MVAJetTags')
)),
connect = cms.string("sqlite_"),
BlobStreamerName = cms.untracked.string('TBufferBlobStreamingService')
)
process.es_prefer_BTauMVAJetTagComputerRecord =
cms.ESPrefer("PoolDBESSource","BTauMVAJetTagComputerRecord")
How to convert certificates from .P12 to .PEM formats?
https://twiki.cern.ch/twiki/bin/view/ETICS/HowToConvertCertificatesFromP12ToPem
https://twiki.cern.ch/twiki/bin/viewauth/CMS/DQMGUIGridCertificate
Find largest file or directories in a folder
du -ahx --max-depth=1 /var 2>/dev/null | sort -k1 -rh
How to obtain the eigenvalues and eigenvectors of a fit in ROOT (useful to set Up/Down systematics)
# assumes: import ROOT; ratio_pt is the fitted histogram, funct_pt the TF1, c6 the TCanvas
ratio_pt.Draw()
fitResults = ratio_pt.Fit(funct_pt,"ESV0")
res = fitResults.Get()
matr = res.GetCovarianceMatrix()
matr.Print()
funct_pt.Draw("same")
funct_pt_err = []
eigenvalues = getattr(ROOT,"TVectorT<double>")()
eigenvectors = matr.EigenVectors(eigenvalues)
for i in range(funct_pt.GetNpar()):
    funct_pt_err.append(funct_pt.Clone("funct_pt_err"+str(i)))
    for j in range(funct_pt.GetNpar()):
        centralValue = funct_pt.GetParameter(j)
        delta = eigenvectors[j][i] * (eigenvalues[i])**0.5   # 1-sigma shift along the i-th eigenvector
        funct_pt_err[i].SetParameter(j, centralValue + delta*2)   # shift by 2 sigma
        print eigenvectors[j][i],"\t",eigenvalues[i],"\t",delta,"\t",centralValue,"\t",centralValue+delta
    funct_pt_err[i].SetLineColor(1+i)
    funct_pt_err[i].Draw("same")
c6.SaveAs("pt_errors.png")
Get plots from a .C ROOT macro (macro.C)
# assumes macro1.C draws a histogram named "histo" and macro2.C fills the TFile f4 with a canvas "c1"
import copy
from ROOT import gROOT, gDirectory
gROOT.ProcessLine(".x macro1.C")
function = gDirectory.Get("histo")
function = copy.copy(function)    # detach the object from the macro's directory
gROOT.ProcessLine(".x macro2.C")
expoRatio = f4.Get("c1").GetPrimitive("expoRatio")
expoRatio = copy.copy(expoRatio)
Mia's plot HIP using my HLT ntuples
tree->Draw("caloJet_hltBTagCaloCSVp087Triple>=1:bx","HLT_PFHT800_v2 && caloJet_offmatch>0 && offJet_csv[caloJet_offmatch]>0.8 ","prof")
hadd alternative ROOT
TChain tree("tree");
tree.Add("tree_*.root");
tree.SetBranchStatus("*",0);
tree.SetBranchStatus("offJet*",1);
tree.SetBranchStatus("caloJet_hltBTagCaloCSVp067Single",1);
tree.SetBranchStatus("caloJet_offmatch*",1);
tree.SetBranchStatus("run",1);
tree.SetBranchStatus("bx",1);
tree.SetBranchStatus("lumi",1);
tree.SetBranchStatus("event",1);
merged = (TTree*) tree.CloneTree();
merged->SaveAs("merged.root")
ROOT loop in a directory TDirectory TKey
for i in dir.GetListOfKeys(): print i.ReadObj()
Python print the whole history
import readline
for i in range(readline.get_current_history_length()):print readline.get_history_item(i +1)
Python/pyROOT: skim and merge a list of TTrees into a file
from ROOT import *
fileout = TFile("test.root","recreate")
chain = TChain("tree")
chain.Add("tree_134.root")
chain.Add("tree_699.root")
chain.SetBranchStatus("HLT*",0)
chain.SetBranchStatus("HLT_PFHT800",1)
chain.SetBranchStatus("caloJet_*",1)
chain.SetBranchStatus("pfJet_*",1)
chain.SetBranchStatus("offJet_*",1)
chain.SetBranchStatus("hltQuadCentralJet45",1)
chain.SetBranchStatus("hltBTagCaloCSVp087Triple",1)
chain.SetBranchStatus("hltBTagPFCSVp016SingleWithMatching",1)
fileout.cd()
newTree = chain.CloneTree(0)
for entry in chain:
    if chain.HLT_PFHT800:
        newTree.Fill()
newTree.Write()
Install CERN AFS, CVMFS, SSO, printer, ... on Ubuntu
https://gitlab.cern.ch/snippets/623
CERN SSO Cookies (wget cern pages)
https://linux.web.cern.ch/docs/cernssocookie/
STEAM rate - QCD Mu Enriched samples
Take MCM code from DAS
https://cmsweb.cern.ch/das/request?view=list&limit=50&instance=prod%2Fglobal&input=mcm+dataset%3D%2FQCD_Pt-30to50_MuEnrichedPt5_TuneCUETP8M1_13TeV_pythia8%2FRunIISummer15GS-MCRUN2_71_V1-v1%2FGEN-SIM
https://cmsweb.cern.ch/das/request?view=list&limit=50&instance=prod%2Fglobal&input=mcm+dataset%3D%2FQCD_Pt-30to50_MuEnrichedPt5_TuneCUETP8M1_13TeV_pythia8%2FPhaseIFall16GS-81X_upgrade2017_realistic_v26-v1%2FGEN-SIM
Look at the MCM request details
https://cms-pdmv.cern.ch/mcm/requests?prepid=BTV-RunIISummer15GS-00038&page=0&shown=127
Get setup command
https://cms-pdmv.cern.ch/mcm/public/restapi/requests/get_setup/BTV-RunIISummer15GS-00038
Get fragment
https://raw.githubusercontent.com/cms-sw/genproductions/3c3b5e0c80c506d9623885f1e651c9dee9d04cab/python/ThirteenTeV/QCD_Pt-30to50_MuEnrichedPt5_TuneCUETP8M1_13TeV_pythia8_cff.py
See the lines
'130:mayDecay = on',
'211:mayDecay = on',
'321:mayDecay = on'
Find the corresponding particles (pdg, page 365,
http://pdg.lbl.gov/2010/download/rpp-2010-JPhys-G-37-075021.pdf
)
130: K-long
211: Pi+
321: K+
Conclusion
The MuEnriched samples include the K and pi decays.
Example of AOD and RAW matching files
process.source = cms.Source("PoolSource",
secondaryFileNames = cms.untracked.vstring('root://cms-xrd-global.cern.ch//store/data/Run2017F/JetHT/RAW/v1/000/305/045/00000/C85CC1D9-3BB1-E711-869A-02163E01A4FF.root'),
fileNames = cms.untracked.vstring('root://cms-xrd-global.cern.ch//store/data/Run2017F/JetHT/AOD/PromptReco-v1/000/305/045/00000/2E74B4E1-5DB2-E711-9ABF-02163E0123C6.root'),
inputCommands = cms.untracked.vstring('keep *')
)
Which samples are used to simulate pile-up (PU)
Go on DAS and take the mcm code
https://cmsweb.cern.ch/das/request?view=list&limit=50&instance=prod%2Fglobal&input=mcm+dataset%3D%2FQCD_Pt_15to30_TuneCUETP8M1_13TeV_pythia8%2FRunIISummer16DR80-FlatPU28to62HcalNZSRAW_80X_mcRun2_asymptotic_2016_TrancheIV_v6-v1%2FGEN-SIM-RAW
https://cmsweb.cern.ch/das/request?view=list&limit=50&instance=prod%2Fglobal&input=mcm+dataset%3D%2FQCD_Pt_15to30_TuneCUETP8M1_13TeV_pythia8%2FPhaseIFall16DR-FlatPU28to62HcalNZSRAW_81X_upgrade2017_realistic_v26-v1%2FGEN-SIM-RAW
(TSG-PhaseIFall16DR-00002 and TSG-RunIISummer16DR80-00016)
Look at the MCM request details: https://cms-pdmv.cern.ch/mcm/requests?prepid=TSG-PhaseIFall16DR-00002
Get setup command: https://cms-pdmv.cern.ch/mcm/public/restapi/requests/get_setup/TSG-PhaseIFall16DR-00002
See the cmsDriver configuration. The samples used to generate pile-up are:
/MinBias_TuneCUETP8M1_13TeV-pythia8/PhaseIFall16GS-81X_upgrade2017_realistic_v26-v1/GEN-SIM
/MinBias_TuneCUETP8M1_13TeV-pythia8/RunIISummer15GS-MCRUN2_71_V1_ext1-v1/GEN-SIM
What's my IP? Find the IP address from bash
curl -s http://whatismijnip.nl |cut -d " " -f 5
Scouting ntuples
/ecms/store/group/phys_exotica/dijet/Dijet13TeVScouting
How to convert a TH3F into a TH1F and a TH1F into a TH3F
import ROOT
import copy

def TH3FtoTH1F(histo3D):
    # flatten the 3D histogram into a 1D histogram with one bin per global bin
    nx = histo3D.GetNbinsX()
    ny = histo3D.GetNbinsY()
    nz = histo3D.GetNbinsZ()
    nbins = (nx+2)*(ny+2)*(nz+2) - 2
    histo1D = ROOT.TH1F("histo1D","",nbins,0,nbins)
    histo1D.Sumw2()
    if histo3D.fN != histo1D.fN:
        print("histo3D.fN",histo3D.fN)
        print("histo1D.fN",histo1D.fN)
        print("must be identical!")
        return None
    for i in range(histo1D.fN):
        histo1D.GetSumw2()[i] = histo3D.GetSumw2()[i]
        histo1D.GetArray()[i] = histo3D.GetArray()[i]
    return copy.copy(histo1D)

def TH1FtoTH3F(histo1D,histo3D):
    # copy the flattened contents back into a 3D histogram with the same global binning
    if histo3D.fN != histo1D.fN:
        print("histo3D.fN",histo3D.fN)
        print("histo1D.fN",histo1D.fN)
        print("must be identical!")
        return
    for i in range(histo1D.fN):
        histo3D.GetSumw2()[i] = histo1D.GetSumw2()[i]
        histo3D.GetArray()[i] = histo1D.GetArray()[i]
    return histo3D
https://github.com/silviodonato/usercode/tree/master
From pT and eta to energy
See:
https://en.wikipedia.org/wiki/Pseudorapidity
|p| = pT * cosh(eta)
and so:
tree->Draw("jet1_pt*TMath::CosH(jet1_eta):jet1_energy")
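The same relation in pyROOT (assumed example values; with the jet mass set to zero, E = |p|):
import math
import ROOT
pt, eta, phi, mass = 50.0, 1.2, 0.3, 0.0   # assumed example values (massless jet)
p = pt * math.cosh(eta)                    # |p| = pT * cosh(eta)
v = ROOT.TLorentzVector()
v.SetPtEtaPhiM(pt, eta, phi, mass)
print p, v.P(), v.E()                      # for mass = 0, E = |p|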
LHC/CMS numbers
ZeroBias rate (single bunch): 11.24545 kHz
nominal number of bunches: 2808 [ZeroBias rate: 32 MHz]
theoretical number of bunches: 26659 m / (25E-9 s * 3E8 m/s) = 3558 [ZeroBias rate: 40 MHz]
lumisection definition: 2^18 LHC orbits = 2^18 / 11245.5 Hz = 23.31 s
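The same numbers computed explicitly (a small python sketch using the constants quoted above):
f_bunch = 11245.5        # ZeroBias rate of a single bunch = LHC revolution frequency [Hz]
n_bunches = 2808         # nominal number of colliding bunches
print f_bunch * n_bunches / 1e6   # total ZeroBias rate ~31.6 MHz ("32 MHz" above)
print 2**18 / f_bunch             # lumisection duration ~23.3 s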
Print run number, lumisection, event number from ROOT file
Events->Scan("EventAuxiliary.run():EventAuxiliary.luminosityBlock():EventAuxiliary.event()")
Pile-up (PU), instantaneous luminosity, number of bunches
At 13 TeV:
L = 1.5915E34 cm^-2 s^-1
n coll. bunches = 1866
PU = 60.678
sigma(pp) = R_ZeroBias * nB * PU / L = 11.24 kHz * 1866 * 60.678 / 1.5915E34 = 8.0E-26 cm^2 = 8.0E-2 b = 80 mb
Luminosity of 1 colliding bunch with PU = 1: L = 11.24 kHz / 8.0E-26 cm^2 = 1.405E29 cm^-2 s^-1
LHC luminosity, given the number of colliding bunches (nB) and PU: L = 1.4E29 * nB * PU (e.g. nB=2808, PU=60 -> L = 2.35E34)
Pile-up, given the number of colliding bunches (nB) and the LHC luminosity: PU = L / (1.4E29 * nB) (e.g. L = 2.0E34, nB=2808 -> PU = 50.9)
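The two rules of thumb above as a small python sketch (1.405E29 is the single-bunch luminosity at PU = 1 derived above):
L_bunch_PU1 = 1.405e29   # cm^-2 s^-1, luminosity of one colliding bunch at PU = 1
def lumi(nB, PU):
    # LHC luminosity for nB colliding bunches at average pile-up PU
    return L_bunch_PU1 * nB * PU
def pileup(L, nB):
    # average pile-up for luminosity L [cm^-2 s^-1] and nB colliding bunches
    return L / (L_bunch_PU1 * nB)
print lumi(2808, 60)        # ~2.4E34 cm^-2 s^-1
print pileup(2.0e34, 2808)  # ~50.7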
Dataset size given rate and event size
The event size at PU~50 is ~0.85 MB/event.
Rate [Hz] * 0.85 MB/event * L[fb-1] * (1E39 cm-2/fb-1) / (2E34 cm-2 s-1) = Rate [Hz] * L[fb-1] * 42.5 GB .
Example:
SingleMuon dataset (~300 Hz) in 2022 (~35 fb-1) will have a size of ~446 TB of RAW data.
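The same estimate wrapped in a small python helper (0.85 MB/event and the 2E34 reference luminosity as above; the parameter names are assumptions):
def raw_dataset_size_TB(rate_hz, lumi_fb, mb_per_event=0.85, ref_lumi=2e34):
    # RAW dataset size [TB] for a trigger rate [Hz] and integrated luminosity [fb-1],
    # assuming the rate is quoted at the reference instantaneous luminosity ref_lumi [cm-2 s-1]
    seconds = lumi_fb * 1e39 / ref_lumi            # effective live time
    return rate_hz * seconds * mb_per_event / 1e6  # MB -> TB
print raw_dataset_size_TB(300, 35)   # ~446 TB, the SingleMuon example above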
Skim trigger HLT and L1
https://github.com/silviodonato/usercode/blob/master/skim_edm_HLT_L1_bits_external.py
Rho, pile-up correction, effective area
From 2015 JINST 10 P06005, "Performance of electron reconstruction and selection with the CMS detector in proton-proton collisions at √s = 8 TeV"
Material budget
L1 EG prefiring
https://indico.cern.ch/event/734407/contributions/3049707/
colors = [ROOT.kRed +3,ROOT.kRed +1,ROOT.kRed -4,ROOT.kRed -7,ROOT.kRed -9,ROOT.kGreen +3,ROOT.kGreen +1,ROOT.kGreen -4,ROOT.kGreen -7,ROOT.kGreen -9,ROOT.kBlue +3,ROOT.kBlue +1,ROOT.kBlue -4,ROOT.kBlue -7,ROOT.kBlue -9,]
Create BuildFile.xml
https://twiki.cern.ch/twiki/bin/viewauth/CMS/BuildFileCreator
(Korbinian is working on the fully hadronic ttH analysis with the 2017 data)
### Setup Rucio
source /cvmfs/cms.cern.ch/cmsset_default.sh && source /cvmfs/cms.cern.ch/rucio/setup-py3.sh && export RUCIO_ACCOUNT="t2_ch_cern_local_users" && voms-proxy-init -voms cms -rfc -valid 192:00
### Add Rucio rule (only FOG conveners and Trigger Coordinators allowed)
rucio add-rule --account t2_ch_cern_local_users --lifetime 12960000 --comment "Copy NanoAOD for TSG studies" cms:/HLTPhysics/Run2022E-PromptNanoAODv10_v1-v1/NANOAOD 1 T2_CH_CERN
### Check status Rucio Rules
rucio rule-info 412fb67bef5b4e4e95c55c8c35c4d90d
### Example of useful commands
rucio help
rucio whoami
rucio list-account-limits t2_ch_cern_local_users
rucio list-account-usage t2_ch_cern_local_users
rucio list-rules-history
rucio list-rules --id 412fb67bef5b4e4e95c55c8c35c4d90d
rucio list-rules --account t2_ch_cern_local_users
Read a streamer file (.dat) with NewEventStreamFileReader
import FWCore.ParameterSet.Config as cms
process = cms.Process("TEST")
process.source = cms.Source("NewEventStreamFileReader",
    fileNames = cms.untracked.vstring("file:run360991_ls0117_streamPhysicsHICommissioning_hilton-c2e36-35-04.dat")
)
process.outp1 = cms.OutputModule("PoolOutputModule",
    fileName = cms.untracked.string('out.root'),
    outputCommands = cms.untracked.vstring('keep *'),
)
process.out = cms.EndPath( process.outp1 )