Cristina's Sandbox

Commands

Vim

https://www.fprintf.net/vimCheatSheet.html

Notes

ttH multileptons

Documentation: https://gitlab.cern.ch/ttH_leptons/doc

LLR framework

https://github.com/LLRCMS/LLRHiggsTauTau/tree/94X_ttH

Installation:

https://github.com/LLRCMS/LLRHiggsTauTau/tree/94X_ttH#instructions-for-94x_tth

Ntuple producer:

https://github.com/LLRCMS/LLRHiggsTauTau/blob/063c6c3be223e0322c2b60a842fae8f26b040449/NtupleProducer/plugins/HTauTauNtuplizer.cc

Run:

cmsRun analyzer.py

Output:

HTauTauAnalysis.root
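
To quickly inspect the output, the file can be opened in PyROOT (a minimal sketch; the tree layout inside the file depends on the ntuplizer configuration):

import ROOT
# open the ntuplizer output and list its contents
f = ROOT.TFile.Open("HTauTauAnalysis.root")
f.ls()   # prints the directories and trees stored in the file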

Private production

Run interactively, setting the input file in analyzer.py:

'file:/data_CMS/cms/mperez/ttH_2017/THW/merged_THW001.root'
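
For reference, a minimal sketch of the corresponding input-source block in a CMSSW configuration (the process name and the exact layout of the LLR analyzer.py are assumptions here):

import FWCore.ParameterSet.Config as cms
process = cms.Process("HTauTauNtuplizer")   # illustrative process name
# local input file for an interactive test run
process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring(
        'file:/data_CMS/cms/mperez/ttH_2017/THW/merged_THW001.root'
    )
)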

Crab production

Create a Python config file for the sample to process: crab3_XXX.py

Modify the following entries:

config.General.requestName = 'HTauTau_MSSM_GGH300_21_09_15'

config.Data.inputDataset = '/SUSYGluGluToHToTauTau_M-300_TuneCUETP8M1_13TeV-pythia8/RunIISpring15DR74-Asympt25ns_MCRUN2_74_V9-v1/MINIAODSIM'

config.Data.outLFNDirBase = '/store/user/davignon/EnrichedMiniAOD/MSSM_GGH300_pfMET_prod_21_09_2015/'

config.Data.publishDataName = 'MSSM_GGH300_HTauTau_21_09_2015'

config.Site.storageSite = 'T2_FR_GRIF_LLR'

RUN_NTUPLIZER = False

SVFITBYPASS = False

IsMC = True

Is25ns = True

For data, specify the golden JSON:

config.Data.lumiMask = 'https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions17/13TeV/ReReco/Cert_294927-306462_13TeV_EOY2017ReReco_Collisions17_JSON.txt'
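
Putting the entries above together, crab3_XXX.py looks roughly like the sketch below (the values are the examples quoted above; the psetName and the handling of the RUN_NTUPLIZER / SVFITBYPASS / IsMC / Is25ns switches are assumptions, and later CRAB3 versions use config.Data.outputDatasetTag instead of publishDataName):

from CRABClient.UserUtilities import config
config = config()
config.General.requestName = 'HTauTau_MSSM_GGH300_21_09_15'
config.JobType.pluginName = 'Analysis'
config.JobType.psetName   = 'analyzer.py'   # ntuplizer configuration (assumed)
config.Data.inputDataset    = '/SUSYGluGluToHToTauTau_M-300_TuneCUETP8M1_13TeV-pythia8/RunIISpring15DR74-Asympt25ns_MCRUN2_74_V9-v1/MINIAODSIM'
config.Data.outLFNDirBase   = '/store/user/davignon/EnrichedMiniAOD/MSSM_GGH300_pfMET_prod_21_09_2015/'
config.Data.publishDataName = 'MSSM_GGH300_HTauTau_21_09_2015'
# for data only: restrict to the certified (golden JSON) luminosity sections
# config.Data.lumiMask = 'https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions17/13TeV/ReReco/Cert_294927-306462_13TeV_EOY2017ReReco_Collisions17_JSON.txt'
config.Site.storageSite = 'T2_FR_GRIF_LLR'
# RUN_NTUPLIZER, SVFITBYPASS, IsMC, Is25ns are the additional switches listed above
# (set at the top of this file or in the analyzer configuration, depending on the framework version)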

Set up a grid proxy and source CRAB:

voms-proxy-init -voms cms

source /cvmfs/cms.cern.ch/crab3/crab.sh

Launch production:

crab submit -c crab3_XXX.py

Monitor jobs:

crab status -d crab3/<task_directory>

or in:

http://dashb-cms-job-task.cern.ch/dashboard/request.py/taskmonitoring

Relaunch failed jobs:

crab resubmit -d crab3/<task_directory>

Delete jobs:

crab kill -d crab3/<task_directory>

Grid space:

rfdir /dpm/in2p3.fr/home/cms/trivcat/store/user/cmartinp/

Helpers convert

This step adds additional variables on top of those provided by the LLRHiggsTauTau NtupleProducer; see the sketch below.

https://github.com/cmarper/ttH/tree/master/macros/

https://github.com/cmarper/ttH/blob/master/macros/Helpers_convert_ttH_2017_v7.C

To run on LLR Tier3:

https://github.com/tstreble/MEM_Analysis/blob/master/ttH/macros/Helpers_convert_ttH_v6.C
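
The macro itself is a ROOT/C++ helper; schematically the pattern is to clone the input tree and fill extra branches computed from the existing ones. A PyROOT sketch of that pattern, with assumed tree and branch names:

import array
import ROOT
fin  = ROOT.TFile.Open("HTauTauAnalysis.root")
tin  = fin.Get("HTauTauTree")                        # tree name is an assumption
fout = ROOT.TFile("HTauTauAnalysis_conv.root", "RECREATE")
tout = tin.CloneTree(0)                              # same branches, no entries yet
new_var = array.array('f', [0.])                     # buffer for the additional variable
tout.Branch("my_new_var", new_var, "my_new_var/F")   # hypothetical new branch
for ev in tin:
    new_var[0] = ev.met                              # hypothetical computation from an existing branch
    tout.Fill()
fout.Write()
fout.Close()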

Tree splitter

This step skims the existing ntuples, building separate trees for the different regions used in the signal and background estimations (both for the ttH multilepton and tau categories); see the sketch below the links.

https://github.com/cmarper/ttH/blob/master/macros/tree_splitter_2017_v9.C

To run on LLR Tier3:

https://github.com/tstreble/MEM_Analysis/blob/master/ttH/macros/launch_split_jobs_tier3.C
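
The linked macro implements this; schematically, each region tree is a skim of the converted ntuple with the region-defining selection. A PyROOT sketch of that pattern (file, tree and selection names here are assumptions):

import ROOT
fin  = ROOT.TFile.Open("HTauTauAnalysis_conv.root")   # converted ntuple (name assumed)
tin  = fin.Get("HTauTauTree")                         # tree name assumed
fout = ROOT.TFile("HTauTauAnalysis_split.root", "RECREATE")
# hypothetical region definitions: signal region and fake-application region
regions = {
    "2lSS1tau_SR":     "is_2lSS1tau && lep1_isTight && lep2_isTight",
    "2lSS1tau_fakeAR": "is_2lSS1tau && !(lep1_isTight && lep2_isTight)",
}
for name, cut in regions.items():
    tskim = tin.CopyTree(cut)   # keep only the entries passing the region selection
    tskim.SetName(name)         # one output tree per region
    tskim.Write()
fout.Close()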

Datacard computation

Compute datacards combining the yields/systematics from all the ntuples (one per category):

https://github.com/cmarper/ttH/blob/master/macros/datacard_maker_2lSS1tau_2017_antiEle_v2.C

Combine

Installation:

https://github.com/cms-analysis/CombineHarvester

https://twiki.cern.ch/twiki/bin/viewauth/CMS/SWGuideHiggsAnalysisCombinedLimit

Install the datacard-writing code in CMSSW_7_4_7/src/CombineHarvester/ttH_htt/bin/, following e.g.:

https://github.com/tstreble/MEM_Analysis/blob/master/ttH/macros/WriteDatacards_2lss_1tau_ttH_comb.cpp

This can then be used with standard combine commands like the ones in:

https://github.com/tstreble/MEM_Analysis/blob/master/ttH/macros/make_ttH_htt_ttH_comb.sh

MEM

Installation

Package:

https://llrgit.in2p3.fr/mem-group/CMS-MEM/tree/OpenCL_ttH_Run2_2017

Instructions:

mkdir MEM-Project

cd MEM-Project

git clone git@llrgit.in2p3.fr:mem-group/CMS-MEM.git

cd CMS-MEM

git checkout OpenCL_ttH_Run2_2017

ln -s Env/CC_slc7_amd_amd64_gcc530.env cms-mem.env

ln -s Env/make.inc.cc make.inc

cd xxx/CMS-MEM

. ./cms-mem.env

cd MGMEM/

make clean; make

Adding new variables:

- Scalar:

EventReader_impl_Run2.cpp: ok = ok && ! tchain_->SetBranchAddress( "bTagSF_weight_up", &_bTagSF_weight_up );

EventReader_impl_Run2.cpp: eventData._bTagSF_weight_up = _bTagSF_weight_up;

IntegralsOutputs_Run2.cpp: ttree_->Branch("bTagSF_weight_up", &_bTagSF_weight_up, "bTagSF_weight_up/F");

IntegralsOutputs_Run2.cpp: _bTagSF_weight_up = ev->_bTagSF_weight_up;

Run2EventData_t.cpp: _bTagSF_weight_up = evData->_bTagSF_weight_up;

Run2EventData_t.h: float _bTagSF_weight_up;

- Vectorial:

EventReader_impl_Run2.cpp: ok = ok && ! tchain_->SetBranchAddress( "recotauh_sel_phi", &p_recotauh_sel_phi);

EventReader_impl_Run2.cpp: eventData._recotauh_sel_phi = _recotauh_sel_phi;

IntegralsOutputs_Run2.cpp: ttree_->Branch("recotauh_sel_phi", &_recotauh_sel_phi);

IntegralsOutputs_Run2.cpp: _recotauh_sel_phi = ev->_recotauh_sel_phi;

Run2EventData_t.cpp: _recotauh_sel_phi = evData->_recotauh_sel_phi;

Run2EventData_t.cpp: p_recotauh_sel_phi = &_recotauh_sel_phi;

Run2EventData_t.h: vector<float> _recotauh_sel_phi;

Run2EventData_t.h: vector<float>* p_recotauh_sel_phi;

Commit to Git

git status

git commit -a -m "comment"

git push -v -u origin OpenCL_ttH_Run2_2017

GPU Platform @ CC-IN2P3

Twiki: https://llrgit.in2p3.fr/mem-group/CMS-MEM/wikis/batch-CC-cluster

Log-in

ssh -XY mperez@cca.in2p3.fr

. /usr/local/shared/bin/ge_env.sh

qlogin -l GPU=1 -q mc_gpu_interactive -pe multicores_gpu 4

Log-in with cmsf group:

groups

newgrp cmsf

Config file:

MGMEM/cristina.py

Input file: InputFileList

Output file: FileOfIntegrals
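
A sketch of how these two parameters appear in MGMEM/cristina.py (the paths are placeholders; the real configuration contains many more MEM settings):

# I/O part of the MEM job configuration (sketch)
InputFileList   = ["/sps/cms/mperez/ntuples/skimmed_2lSS1tau.root"]      # input ntuple(s), path hypothetical
FileOfIntegrals = "/sps/cms/mperez/mem_output/integrals_2lSS1tau.root"   # output file with the MEM integrals, path hypothetical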

sps space:

/sps/cms/mperez

Run interactively

cd MGMEM/

mpirun -n 2 ./MG-MEM-MPI cristina.py

with:

OCLConfig.py:

SelectedQueues = [ True, False, False, False, False, False, False]

KernelExecMode = 1

Run on batch

cd Batch/Model/

qsub -l os=cl7,GPU=4 -q pa_gpu_long -pe openmpigpu 32 sge_debug.sh

with:

OCLConfig.py:

SelectedQueues = [ True, True, True, True, False, False, False]

Check jobs:

qstat

More info about batch submission:

https://doc.cc.in2p3.fr/utiliser_le_systeme_batch_ge_depuis_le_centre_de_calcul#jobs_gpu_paralleles

LSI @ LLR

https://llrlsi-git.in2p3.fr/llrlsi/for-users/wikis/home

Log-in

ssh -XY cmartinp@cmsusr.cern.ch

sps space:

/sps/mperez

Polui @ LLR

https://llrgit.in2p3.fr/mem-group/CMS-MEM/wikis/git-commands

Installed in /home/llr/cms/mperez/MEM-Project/CMS-MEM

MEM output

No missing jet:

T->Draw("Integral_ttH/(Integral_ttH+1e-18*(Integral_ttbar_DL_fakelep_tlep+Integral_ttbar_DL_fakelep_ttau)+1e-1*Integral_ttZ+2e-1*Integral_ttZ_Zll)","integration_type==0")

Missing jet:

T->Draw("Integral_ttH/(Integral_ttH+5e-15*(Integral_ttbar_DL_fakelep_tlep+Integral_ttbar_DL_fakelep_ttau)+5e-2*Integral_ttZ+5e-1*Integral_ttZ_Zll)”, “integration_type==1”)

L1 Tau Trigger

L1 CMSSW:

https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideL1TStage2Instructions

Tau Tag&Probe package:

https://github.com/davignon/TauTagAndProbe

Production of ntuples

- Offline: cmsRun test_noTagAndProbe.py

- L1 (with re-emulation): cmsRun reEmulL1_MC_L1Only.py

- ZeroBias: cmsRun reEmulL1_ZeroBias.py

Merging offline and L1 taus

- config file: run/VBFStage2_WithJune2017_Jets_05_10_17.config

- compile under CMSSW: make clean; make

- run: ./merge.exe run/VBFStage2_WithJune2017_Jets_05_10_17.config

Matching offline and L1 taus

- script: MakeTreeForCalibration.C

Create compressed tree

- need the files: LUTs_06_09_16_NewLayer1_SK1616 and compressionLuts

- run: python produceTreeWithCompressedIetaEShape_NewFormat.py

Produce the calibration LUT

- directory: /home/llr/cms/mperez/RegressionTraining/CMSSW_7_6_0/src/RegressionTraining

- BDT config file: GBRFullLikelihood_Trigger_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_SandeepCristina_MC_VBF.config

- compile: make clean; make

- run: ./regression.exe GBRFullLikelihood_Trigger_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_SandeepCristina_MC_VBF.config

- make histo with calibration constants: python makeTH4_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_VBF

- result in corrections/

- produce LUT: MakeTauCalibLUT_MC_NewCompression_WithMarch2017Layer1.C

Apply the calibration LUT

- apply calibration: ApplyCalibration.C

Produce the isolation LUT

- get isolation cuts: Build_Isolation_WPs_MC_NewCompression_Thomas_nTT_OlivierFlatWP_With2017Layer1.C

- perform the relaxation: Fill_Isolation_TH3_MC_2017Layer1Calibration.C

- produce LUT: MakeTauIsoLUT_MC_NewCompression_WithMarch2017Layer1.C

Rate studies

- Use ZeroBias ntuples.

- Apply calibration: ApplyCalibrationZeroBias.C

- Compute rates: Rate_ZeroBias_Run305310.C

- Plot rate comparison and get thresholds for a certain rate: CompareRates_Run305310.C

Apply the isolation LUT

- Apply isolation: ApplyIsolationForTurnOns.C

- Plot turnons: CompareTurnOns_2017Layer1Calibration_ShapeVeto_AdaptedThreshold.C

-- CristinaMartinPerez - 2018-12-04
