Cristina's Sandbox

Commands

Git

Commit

git branch # check which branch you are on

git status # get list of files that changed

git add <files> (or . for all files modified in the current directory)

git commit -m <comment> <files> (or without <files>)

git push origin <branch>

New branch

git checkout <oldbranch> # make sure you are on the old branch

git checkout -b <newbranch> # create the new branch and switch to it

### do your work

git commit -m <comment>

git push origin <newbranch>

Screen (for analyzer.py)

screen

/opt/exp_soft/vo.gridcl.fr/singularity/ui_sl6

voms-proxy-init -voms cms

source /cvmfs/cms.cern.ch/cmsset_default.sh

cd /home/llr/cms/mperez/CMSSW_10_2_14/src/LLRHiggsTauTau/NtupleProducer/test

cmsenv

cmsRun *.py

Ctrl+A Ctrl+D # detach from the screen session

screen -list

screen -r <screen>

Kill screen: once connected to the session (screen -r) press Ctrl + A then type :quit.

Polgrid LLR

rfdir /path/to/folder

rfdir /dpm/in2p3.fr/home/cms/trivcat/store/user/cmartinp/

rfrm -r /path/to/folder

Remove a folder without timeout errors (run in the background and log the output):

gfal-rm -r -v 'srm://polgrid4.in2p3.fr:8446/srm/managerv2?SFN=/dpm/in2p3.fr/home/cms/trivcat/store/user/cmartinp/ttH_Legacy/Data_2016_v1/SingleElectron/' > deletion.log &

Check how much space I use in dpm:

/opt/exp_soft/cms/t3/gfal-du --path /dpm/in2p3.fr/home/cms/trivcat/store/user/cmartinp
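For scripted checks, a rough Python sketch of the same accounting with the gfal2 python bindings (assumes the gfal2-python package is available; the recursive helper is hypothetical):

import stat
import gfal2

def dpm_du(ctx, url):
    # Recursively sum file sizes under a directory.
    total = 0
    for entry in ctx.listdir(url):
        child = url.rstrip('/') + '/' + entry
        st = ctx.stat(child)
        if stat.S_ISDIR(st.st_mode):
            total += dpm_du(ctx, child)
        else:
            total += st.st_size
    return total

ctx = gfal2.creat_context()
print(dpm_du(ctx, 'srm://polgrid4.in2p3.fr:8446/srm/managerv2?SFN=/dpm/in2p3.fr/home/cms/trivcat/store/user/cmartinp'))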

Polgrid IRFU

export DPM_HOST=node12.datagrid.cea.fr

export DPNS_HOST=node12.datagrid.cea.fr

rfdir /dpm/datagrid.cea.fr/home/cms/trivcat/store/user/cmartinp/

root -l root://node12.datagrid.cea.fr:1094//dpm/datagrid.cea.fr/home/cms/trivcat/store/user/cmartinp/

Dasgoclient

dasgoclient --query="file dataset=/DYJetsToLL_M-50_TuneCP5_13TeV-amcatnloFXFX-pythia8/RunIIFall17MiniAODv2-PU2017_12Apr2018_94X_mc2017_realistic_v14-v1/MINIAODSIM"
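To consume the file list from a script, a small sketch wrapping the same query with subprocess (assumes dasgoclient is in the PATH, e.g. after sourcing cmsset_default.sh; the helper name is made up):

import subprocess

def das_files(dataset):
    # Run a dasgoclient file query and return the list of LFNs.
    out = subprocess.check_output(['dasgoclient', '--query', 'file dataset=' + dataset])
    return out.decode().split()

for lfn in das_files('/DYJetsToLL_M-50_TuneCP5_13TeV-amcatnloFXFX-pythia8/RunIIFall17MiniAODv2-PU2017_12Apr2018_94X_mc2017_realistic_v14-v1/MINIAODSIM'):
    print(lfn)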

Xrootd

root root://xrootd-cms.infn.it///store/mc/RunIISummer16MiniAODv3/ttHToNonbb_M125_TuneCUETP8M2_ttHtranche3_13TeV-powheg-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/120000/F24F2D5E-DDEC-E811-AF50-90B11C08AD7D.root
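The same file can also be opened from PyROOT instead of the ROOT prompt (assumes a ROOT build with XRootD support, e.g. inside a CMSSW environment; Events is the standard MiniAOD tree name):

import ROOT

f = ROOT.TFile.Open('root://xrootd-cms.infn.it///store/mc/RunIISummer16MiniAODv3/ttHToNonbb_M125_TuneCUETP8M2_ttHtranche3_13TeV-powheg-pythia8/MINIAODSIM/PUMoriond17_94X_mcRun2_asymptotic_v3-v2/120000/F24F2D5E-DDEC-E811-AF50-90B11C08AD7D.root')
events = f.Get('Events')  # MiniAOD event tree
print(events.GetEntries())
f.Close()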

Vim

https://www.fprintf.net/vimCheatSheet.html

Notes

ttH multileptons

Documentation: https://gitlab.cern.ch/ttH_leptons/doc

LLR framework

https://github.com/LLRCMS/LLRHiggsTauTau/tree/94X_ttH

Installation:

https://github.com/LLRCMS/LLRHiggsTauTau/tree/94X_ttH#instructions-for-94x_tth

Ntuple producer:

https://github.com/LLRCMS/LLRHiggsTauTau/blob/063c6c3be223e0322c2b60a842fae8f26b040449/NtupleProducer/plugins/HTauTauNtuplizer.cc

Run:

cmsRun analyzer.py

Output:

HTauTauAnalysis.root

Private production

Run interactively, with the following input file set in analyzer.py:

'file:/data_CMS/cms/mperez/ttH_2017/THW/merged_THW001.root'
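For orientation, a hypothetical excerpt of the corresponding block in analyzer.py, following the usual CMSSW config pattern (the process name is made up):

import FWCore.ParameterSet.Config as cms

process = cms.Process('NTUPLES')  # hypothetical process name
process.source = cms.Source('PoolSource',
    fileNames = cms.untracked.vstring(
        'file:/data_CMS/cms/mperez/ttH_2017/THW/merged_THW001.root'
    )
)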

Crab production

Create a Python config file for the sample to process: crab3_XXX.py

Modify the following entries (a minimal sketch of the full file follows after this list):

config.General.requestName = 'HTauTau_MSSM_GGH300_21_09_15'

config.Data.inputDataset = '/SUSYGluGluToHToTauTau_M-300_TuneCUETP8M1_13TeV-pythia8/RunIISpring15DR74-Asympt25ns_MCRUN2_74_V9-v1/MINIAODSIM'

config.Data.outLFNDirBase = '/store/user/davignon/EnrichedMiniAOD/MSSM_GGH300_pfMET_prod_21_09_2015/'

config.Data.publishDataName = 'MSSM_GGH300_HTauTau_21_09_2015'

config.Site.storageSite = 'T2_FR_GRIF_LLR' # or 'T2_FR_GRIF_IRFU'

RUN_NTUPLIZER = False

SVFITBYPASS = False

IsMC = True

Is25ns = True

For data, specify the golden JSON:

config.Data.lumiMask = 'https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions17/13TeV/ReReco/Cert_294927-306462_13TeV_EOY2017ReReco_Collisions17_JSON.txt'
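Put together, a minimal sketch of what crab3_XXX.py can look like; the JobType and splitting lines are assumed CRAB3 boilerplate, not taken from this page:

from CRABClient.UserUtilities import config
config = config()

config.General.requestName  = 'HTauTau_MSSM_GGH300_21_09_15'

config.JobType.pluginName   = 'Analysis'     # assumption: standard analysis job
config.JobType.psetName     = 'analyzer.py'  # assumption: the cmsRun config above

config.Data.inputDataset    = '/SUSYGluGluToHToTauTau_M-300_TuneCUETP8M1_13TeV-pythia8/RunIISpring15DR74-Asympt25ns_MCRUN2_74_V9-v1/MINIAODSIM'
config.Data.splitting       = 'FileBased'    # assumption
config.Data.unitsPerJob     = 1              # assumption
config.Data.outLFNDirBase   = '/store/user/davignon/EnrichedMiniAOD/MSSM_GGH300_pfMET_prod_21_09_2015/'
config.Data.publishDataName = 'MSSM_GGH300_HTauTau_21_09_2015'
# For data only, also set config.Data.lumiMask to the golden JSON above.

config.Site.storageSite     = 'T2_FR_GRIF_LLR'  # or 'T2_FR_GRIF_IRFU'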

Source crab:

voms-proxy-init -voms cms

source /cvmfs/cms.cern.ch/crab3/crab.sh

Launch production:

crab submit -c crab3_XXX.py

Monitor jobs:

crab status -d crab3/<task>/

or in:

http://dashb-cms-job-task.cern.ch/dashboard/request.py/taskmonitoring

Relaunch failed jobs:

crab resubmit -d crab3/<task>/

Delete jobs:

crab kill -d crab3/<task>/

Grid space (LLR):

rfdir /dpm/in2p3.fr/home/cms/trivcat/store/user/cmartinp/

Grid space (IRFU):

gfal-ls root://node12.datagrid.cea.fr//dpm/datagrid.cea.fr/home/cms/trivcat/store/user/cmartinp/

Helpers convert

This step adds additional variables on top of the output of the LLRHiggsTauTau NtupleProducer.

https://github.com/cmarper/ttH/blob/master/macros/Helpers_convert_ttH_2017_v7.C

To run on LLR Tier3:

https://github.com/tstreble/MEM_Analysis/blob/master/ttH/macros/Helpers_convert_ttH_v6.C

Tree splitter

This step skims the existing ntuples, building separate trees for the regions used in the signal and background estimations (for both the ttH multilepton and tau categories).

https://github.com/cmarper/ttH/blob/master/macros/tree_splitter_2017_v9.C

To run on LLR Tier3:

https://github.com/tstreble/MEM_Analysis/blob/master/ttH/macros/launch_split_jobs_tier3.C

Datacard computation

Compute datacards combining the yields/systematics from all the ntuples (one per category):

https://github.com/cmarper/ttH/blob/master/macros/datacard_maker_2lSS1tau_2017_antiEle_v2.C

Combine

Installation:

https://github.com/cms-analysis/CombineHarvester

https://twiki.cern.ch/twiki/bin/viewauth/CMS/SWGuideHiggsAnalysisCombinedLimit

Install the code in CMSSW_7_4_7/src/CombineHarvester/ttH_htt/bin/, like in:

https://github.com/tstreble/MEM_Analysis/blob/master/ttH/macros/WriteDatacards_2lss_1tau_ttH_comb.cpp

This can then be used with standard combine commands like the ones in:

https://github.com/tstreble/MEM_Analysis/blob/master/ttH/macros/make_ttH_htt_ttH_comb.sh

Editing the AN

AN: AN-19-111 (https://gitlab.cern.ch/tdr/notes/AN-19-111)

Configure git client:

scl enable rh-git29 bash # this allows you to access a recent version of git. It will place you in a bash shell.

git config --global user.name "Cristina Martin Perez"

git config --global user.email "cmartinp@cern.ch"

# failure to set the next option can lead to the message

# 'Basic: Access denied'

# if you use KRB access (http)

git config --global http.emptyAuth true

Edit the AN:

git clone --recursive https://:@gitlab.cern.ch:8443/tdr/notes/AN-19-111.git

cd /afs/cern.ch/user/c/cmartinp/Legacy/AnalysisNote/AN-19-111

eval $(utils/tdr runtime -sh) # -sh for bash; -csh for csh; -fish for fish. Default is csh (for now).

# edit the template; then, to build the document:

./utils/tdr --style=note b # the local document with the name of the directory is the default build target

# we also recommend setting the output directory using either the command line option --temp_dir or the env var TDR_TMP_DIR (new from svn version)

# to commit changes back...

git add .                           # add all files modified in current directory

git commit -m "add my new changes"  # commit your staged changes

git push                            # to send them back to the repo

MEM

Installation

Package:

https://llrgit.in2p3.fr/mem-group/CMS-MEM/tree/OpenCL_ttH_Run2_2017

Instructions:

mkdir MEM-Project

cd MEM-Project

git clone git@llrgit.in2p3.fr:mem-group/CMS-MEM.git

cd CMS-MEM

git checkout OpenCL_ttH_Run2_2017

#ln -s Env/CC_slc7_amd_amd64_gcc530.env cms-mem.env #this will create the wrong CUDA environment for compilation!

#ln -s Env/make.inc.cc make.inc #this will create the wrong CUDA environment for compilation!

cd xxx/CMS-MEM

. ./cms-mem.env

cd MGMEM/

Request an interactive GPU node before compiling: qlogin -l GPU=1 -l GPUtype=K80 -q mc_gpu_interactive -pe multicores_gpu 4

make clean; make

Adding new variables:

Changes to be made in the directory ./IOLib/:

- Scalar variables:

EventReader_impl_Run2.cpp: ok = ok && ! tchain_->SetBranchAddress( "bTagSF_weight_up", &_bTagSF_weight_up );

EventReader_impl_Run2.cpp: eventData._bTagSF_weight_up = _bTagSF_weight_up;

IntegralsOutputs_Run2.cpp: ttree_->Branch("bTagSF_weight_up", &_bTagSF_weight_up, "bTagSF_weight_up/F");

IntegralsOutputs_Run2.cpp: _bTagSF_weight_up = ev->_bTagSF_weight_up;

Run2EventData_t.cpp: _bTagSF_weight_up = evData->_bTagSF_weight_up;

Run2EventData_t.h: float _bTagSF_weight_up;

- Vector variables:

EventReader_impl_Run2.cpp: ok = ok && ! tchain_->SetBranchAddress( "recotauh_sel_phi", &p_recotauh_sel_phi);

EventReader_impl_Run2.cpp: eventData._recotauh_sel_phi = _recotauh_sel_phi;

IntegralsOutputs_Run2.cpp: ttree_->Branch("recotauh_sel_phi", &_recotauh_sel_phi);

IntegralsOutputs_Run2.cpp: _recotauh_sel_phi = ev->_recotauh_sel_phi;

Run2EventData_t.cpp: _recotauh_sel_phi = evData->_recotauh_sel_phi;

Run2EventData_t.cpp: p_recotauh_sel_phi = &_recotauh_sel_phi;

Run2EventData_t.h: vector<float> _recotauh_sel_phi; // element type assumed float

Run2EventData_t.h: vector<float>* p_recotauh_sel_phi;

Commit to Git

git status

git commit -a -m "comment"

git push -v -u origin OpenCL_ttH_Run2_2017

GPU Platform @ CC-IN2P3

Twiki: https://llrgit.in2p3.fr/mem-group/CMS-MEM/wikis/batch-CC-cluster

Log-in

ssh -XY mperez@cca.in2p3.fr

qlogin -l GPU=1 -l GPUtype=K80 -q mc_gpu_interactive -pe multicores_gpu 4

. /usr/local/shared/bin/ge_env.sh

. ./cms-mem.env

Log-in with cmsf group:

groups

newgrp cmsf

Config file:

MGMEM/cristina.py

Input file: InputFileList

Output file: FileOfIntegrals

sps space:

/sps/cms/mperez

Run interactively

cd MGMEM/

mpirun -n 2 ./MG-MEM-MPI cristina.py

with:

OCLConfig.py:

SelectedQueues = [ True, False, False, False, False, False, False]

KernelExecMode = 1

Run on batch

cd BatchModel/

2 nodes:

qsub -l GPU=4 -l GPUtype=K80 -q pa_gpu_long -pe openmpigpu_4 8 batch.sh

1 node:

qsub -l GPU=4 -l GPUtype=K80 -q pa_gpu_long -pe openmpigpu_4 4 batch.sh

with:

OCLConfig.py:

SelectedQueues = [ True, True, True, True, False, False, False]

KernelExecMode = 1

Check jobs:

qstat

More info about batch submission:

https://doc.cc.in2p3.fr/utiliser_le_systeme_batch_ge_depuis_le_centre_de_calcul#jobs_gpu_paralleles


To run multiple jobs:

cd CMS-MEM/MGMEM

cp -rf BatchModel BatchModel_XXX

cd BatchModel_XXX

#change cristina.py, OCLConfig.py

cp batch.sh batch_XXX.sh (a distinct name is useful for keeping track of jobs)

To run interactively: ./batch_XXX.sh

To run on batch (1 node): qsub -l GPU=4 -l GPUtype=K80 -q pa_gpu_long -pe openmpigpu_4 4 batch_XXX.sh

To run on batch (2 nodes): qsub -l GPU=4 -l GPUtype=K80 -q pa_gpu_long -pe openmpigpu_8 4 batch_XXX.sh

LSI @ LLR

https://llrlsi-git.in2p3.fr/llrlsi/for-users/wikis/home

Log-in

ssh -XY cmartinp@cmsusr.cern.ch

sps space:

/sps/mperez

Polui @ LLR

https://llrgit.in2p3.fr/mem-group/CMS-MEM/wikis/git-commands

Installed in /home/llr/cms/mperez/MEM-Project/CMS-MEM

MEM output

No missing jet:

T->Draw("Integral_ttH/(Integral_ttH+1e-18*(Integral_ttbar_DL_fakelep_tlep+Integral_ttbar_DL_fakelep_ttau)+1e-1*Integral_ttZ+2e-1*Integral_ttZ_Zll)","integration_type==0")

Missing jet:

T->Draw("Integral_ttH/(Integral_ttH+5e-15*(Integral_ttbar_DL_fakelep_tlep+Integral_ttbar_DL_fakelep_ttau)+5e-2*Integral_ttZ+5e-1*Integral_ttZ_Zll)”, “integration_type==1”)

L1 Tau Trigger

L1 CMSSW:

https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideL1TStage2Instructions

Tau Tag&Probe package:

https://github.com/davignon/TauTagAndProbe

Production of ntuples

- Offline: cmsRun test_noTagAndProbe.py

- L1 (with re-emulation): cmsRun reEmulL1_MC_L1Only.py

- ZeroBias: cmsRun reEmulL1_ZeroBias.py

Merging offline and L1 taus

- config files in run/VBFStage2_WithJune2017_Jets_05_10_17.config

- compile under CMSSW: make clean; make

- run: ./merge.exe run/VBFStage2_WithJune2017_Jets_05_10_17.config

Matching offline and L1 taus

- script: MakeTreeForCalibration.C

Create compressed tree

- need the files: LUTs_06_09_16_NewLayer1_SK1616 and compressionLuts

- run: python produceTreeWithCompressedIetaEShape_NewFormat.py

Produce the calibration LUT

- directory: /home/llr/cms/mperez/RegressionTraining/CMSSW_7_6_0/src/RegressionTraining

- BDT config file: GBRFullLikelihood_Trigger_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_SandeepCristina_MC_VBF.config

- compile: make clean; make

- run: ./regression.exe GBRFullLikelihood_Trigger_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_SandeepCristina_MC_VBF.config

- make the histogram with the calibration constants: python makeTH4_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_VBF.py

- result in corrections/

- produce LUT: MakeTauCalibLUT_MC_NewCompression_WithMarch2017Layer1.C

Apply the calibration LUT

- apply calibration: ApplyCalibration.C

Produce the isolation LUT

- get isolation cuts: Build_Isolation_WPs_MC_NewCompression_Thomas_nTT_OlivierFlatWP_With2017Layer1.C

- perform the relaxation: Fill_Isolation_TH3_MC_2017Layer1Calibration.C

- produce LUT: MakeTauIsoLUT_MC_NewCompression_WithMarch2017Layer1.C

Rate studies

- Use ZeroBias ntuples.

- Apply calibration: ApplyCalibrationZeroBias.C

- Compute rates: Rate_ZeroBias_Run305310.C

- Plot rate comparison and get thresholds for a certain rate: CompareRates_Run305310.C

Apply the isolation LUT

- Apply isolation: ApplyIsolationForTurnOns.C

- Plot turnons: CompareTurnOns_2017Layer1Calibration_ShapeVeto_AdaptedThreshold.C

Combine - control analysis

Setup

On lxplus, increase the stack memory:

cmsenv; ulimit -s unlimited

Combined card

Make combined card:

combineCards.py $(for fil in *.txt; do echo -n "${fil/.txt/}=$fil "; done) > combined_cards.dat

For FitDiagnostics (used in the plots), declare the naming of the subcategories with the script here.

Workspaces

Inclusive:

text2workspace.py combined_cards.dat -o ttHmultilep_WS.root -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO verbose --PO 'map=.*/TTW.*:r_ttW[1,0,6]' --PO 'map=.*/TTWW.*:r_ttW[1,0,6]' --PO 'map=.*/TTZ.*:r_ttZ[1,0,6]' --PO 'map=.*/ttH.*:r_ttH[1,-1,3]'

Per category:

text2workspace.py combined_cards.dat -o ttHmultilep_WS_perchannel.root -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO verbose --PO 'map=.*/TTW.*:r_ttW[1,0,6]' --PO 'map=.*/TTWW.*:r_ttW[1,0,6]' --PO 'map=.*/TTZ.*:r_ttZ[1,0,6]' --PO 'map=.*ttH_2lss_0tau.*/ttH.*:r_ttH_2lss_0tau[1,-5,10]' --PO 'map=.*ttH_3l_0tau.*/ttH.*:r_ttH_3l_0tau[1,-5,10]' --PO 'map=.*ttH_4l.*/ttH.*:r_ttH_4l[1,-5,10]' --PO 'map=.*ttH_2lss_1tau.*/ttH.*:r_ttH_2lss_1tau[1,-5,10]'

Significance (the options in parentheses give the expected result on an Asimov dataset):

Inclusive:

combineTool.py -M Significance --signif ttHmultilep_WS.root  --redefineSignalPOI r_ttH (-t -1 --setParameters r_ttH=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.all

Per category:

combineTool.py -M Significance --signif ttHmultilep_WS_perchannel.root  --redefineSignalPOI r_ttH_2lss_0tau  (-t -1 --setParameters r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_2lss_1tau=1,r_ttH_4l=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.2lss0tau

combineTool.py -M Significance --signif ttHmultilep_WS_perchannel.root  --redefineSignalPOI r_ttH_2lss_1tau  (-t -1 --setParameters r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_2lss_1tau=1,r_ttH_4l=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.2lss1tau

combineTool.py -M Significance --signif ttHmultilep_WS_perchannel.root  --redefineSignalPOI r_ttH_3l_0tau  (-t -1 --setParameters r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_2lss_1tau=1,r_ttH_4l=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.3l0tau

combineTool.py -M Significance --signif ttHmultilep_WS_perchannel.root  --redefineSignalPOI r_ttH_4l  (-t -1 --setParameters r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_2lss_1tau=1,r_ttH_4l=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.4l0tau

Signal strength

Inclusive:

combine -M MultiDimFit --algo singles ttHmultilep_WS.root (-t -1 --setParameters r_ttW=1,r_ttZ=1,r_ttH=1) -m 125 -n .mu.all

Per category:

combine -M MultiDimFit --algo singles ttHmultilep_WS_perchannel.root (-t -1 --setParameters r_ttW=1,r_ttZ=1,r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_4l=1,r_ttH_2lss_1tau=1) -m 125 -n .mu.cats

Likelihood scan

Inclusive likelihood scan with syst and stats:

combineTool.py -M MultiDimFit --algo grid --points 100 --rMin 0 --rMax 3 ttHmultilep_WS.root --alignEdges 1 --floatOtherPOIs=1 -P r_ttH (--setParameters r_ttH=1,r_ttZ=1,r_ttW=1  -t -1) -n .likelihoodscan --saveWorkspace

Plot inclusive likelihood scan:

plot1DScan.py all.root --POI r_ttH --y-cut 50 --y-max 50

Get statistical only component:

combine -M MultiDimFit higgsCombine.likelihoodscan.MultiDimFit.mH125.root -n .likelihoodscan.freezeAll -m 125 --rMin 0 --rMax 3  --algo grid --points 30 --freezeParameters allConstrainedNuisances --snapshotName MultiDimFit --alignEdges 1 --floatOtherPOIs=1 -P r_ttH

Plot breakdown stat and syst:

plot1DScan.py higgsCombine.likelihoodscan.MultiDimFit.mH125.root --POI r_ttH --y-cut 50 --y-max 50 --breakdown syst,stat --others "higgsCombine.likelihoodscan.freezeAll.MultiDimFit.mH125.root:Stat only:2"

Impacts

a) Initial fit for each POI:

combineTool.py -M Impacts -d ttHmultilep_WS.root  --doInitialFit --robustFit 1 (-t -1 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1 -m 125) -n t1 --redefineSignalPOIs r_ttH --floatOtherPOIs 1

b) Comment "FixAll()" in CombineHarvester/CombineTools/python/combine/Impacts.py and CombineHarvester/CombineTools/combine/utils.py

c) Fit scan for each nuisance:

combineTool.py -M Impacts -d ttHmultilep_WS.root --robustFit 1 --doFits (-t -1 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1) -m 125 -n t1 --redefineSignalPOIs r_ttH --job-mode condor

d) Kill the submitted jobs:

condor_rm cmartinp

e) Add in condor_combine_task.sub the following lines before "queue":

periodic_remove = False

+JobFlavour = "tomorrow" (or "nextweek")

f) Submit the jobs:

condor_submit condor_combine_task.sub

g) Monitor the jobs:

condor_q

h) Check the failed impacts with this script

Re-run failed impacts with the options: --cminDefaultMinimizerStrategy 0 or --X-rtd MINIMIZER_MaxCalls=999999999

i) Collect outputs when the jobs are done:

combineTool.py -M Impacts -d ttHmultilep_WS.root -o impactst1.json (-t -1 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1) -m 125 -n t1 --redefineSignalPOIs r_ttH

j) Plot impacts:

plotImpacts.py -i impactst1.json  -o impactst1

Table of systematics

Take the script here.

Step 1: break down into the different types of systematics

Step 2: plotting

To run:

python table_systs.py > commands_table_systs.sh

chmod +x  commands_table_systs.sh

./commands_table_systs.sh

2D contours

a) Run central fit:

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttZ_central --fastScan --algo grid --points 1800 --redefineSignalPOIs r_ttH,r_ttZ --setParameterRanges r_ttH=-2,3:r_ttZ=-2,3 (--setParameters r_ttH=1,r_ttZ=1,r_ttW=1)

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttW_central --fastScan --algo grid --points 1800 --redefineSignalPOIs r_ttH,r_ttW --setParameterRanges r_ttH=-2,3:r_ttW=-2,3 (--setParameters r_ttH=1,r_ttZ=1,r_ttW=1)

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttZ_ttW_central --fastScan --algo grid --points 1800 --redefineSignalPOIs r_ttZ,r_ttW --setParameterRanges r_ttZ=-2,3:r_ttW=-2,3 (--setParameters r_ttH=1,r_ttZ=1,r_ttW=1)

For the 1sigma and 2sigma contours, use the condor submission scripts here and here.

b) Run 1sigma contours (68% CL):

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttZ_cl68 (--fastScan --cminDefaultMinimizerStrategy 0) --cl=0.68 --algo contour2d --points=10 --redefineSignalPOIs r_ttH,r_ttZ --setParameterRanges r_ttH=-2,3:r_ttZ=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttW_cl68  (--fastScan --cminDefaultMinimizerStrategy 0) --cl=0.68 --algo contour2d --points=10 --redefineSignalPOIs r_ttH,r_ttW --setParameterRanges r_ttH=-2,3:r_ttW=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttZ_ttW_cl68  (--fastScan --cminDefaultMinimizerStrategy 0)  --cl=0.68 --algo contour2d --points=10 --redefineSignalPOIs r_ttZ,r_ttW --setParameterRanges r_ttZ=-2,3:r_ttW=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

c) Run 2sigma contours (95% CL):

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttZ_cl95  (--fastScan --cminDefaultMinimizerStrategy 0)  --cl=0.95 --algo contour2d --points=10 --redefineSignalPOIs r_ttH,r_ttZ --setParameterRanges r_ttH=-2,3:r_ttZ=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttW_cl95  (--fastScan --cminDefaultMinimizerStrategy 0)  --cl=0.95 --algo contour2d --points=10 --redefineSignalPOIs r_ttH,r_ttW --setParameterRanges r_ttH=-2,3:r_ttW=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttZ_ttW_cl95  (--fastScan --cminDefaultMinimizerStrategy 0)  --cl=0.95 --algo contour2d --points=10 --redefineSignalPOIs r_ttZ,r_ttW --setParameterRanges r_ttZ=-2,3:r_ttW=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

d) Plot with the script here:

python plot2Dcontours.py --first "ttH" --second "ttZ" --label " " --plotName "contour_ttH_ttZ" --outputFolder "plots" --input "higgsCombinettH_ttZ_central.MultiDimFit.mH120.root" --input68 "higgsCombinettH_ttZ_cl68.MultiDimFit.mH120.root" --input95 "higgsCombinettH_ttZ_cl95.MultiDimFit.mH120.root"

python plot2Dcontours.py --first "ttH" --second "ttW" --label " " --plotName "contour_ttH_ttW" --outputFolder "plots" --input "higgsCombinettH_ttW_central.MultiDimFit.mH120.root" --input68 "higgsCombinettH_ttW_cl68.MultiDimFit.mH120.root" --input95 "higgsCombinettH_ttW_cl95.MultiDimFit.mH120.root"

python plot2Dcontours.py --first "ttZ” --second "ttW” --label " " --plotName "contour_ttZ_ttW” --outputFolder "plots" --input "higgsCombinettZ_ttW_central.MultiDimFit.mH120.root" --input68 "higgsCombinettZ_ttW_cl68.MultiDimFit.mH120.root" --input95 "higgsCombinettZ_ttW_cl95.MultiDimFit.mH120.root"

Prefit plots

a) Run fit diagnostics for each subcategory:

cd /afs/cern.ch/user/c/cmartinp/Legacy/combine/CMSSW_10_2_13/src/CombineHarvester/fits/CA_7Apr_unblind_step2/

combineTool.py -M FitDiagnostics ttH_2lss_1tau_nomiss_2016.txt --saveShapes --saveWithUncertainties --skipBOnlyFit -n _ttH_2lss_1tau_nomiss_2016 --job-mode condor

b) Plot:

cd ~/Legacy/combine/CMSSW_10_2_13/src/HiggsAnalysis/CombinedLimit/signal_extraction_tH_ttH/

python test/makePlots.py --input /afs/cern.ch/user/c/cmartinp/Legacy/combine/CMSSW_10_2_13/src/CombineHarvester/fits/CA_7Apr_unblind_step2/fitDiagnostics_ttH_2lss_1tau_miss_2016.root --odir /afs/cern.ch/user/c/cmartinp/Legacy/combine/CMSSW_10_2_13/src/CombineHarvester/fits/CA_7Apr_unblind_step2/outputs/ --original /afs/cern.ch/user/c/cmartinp/Legacy/combine/CMSSW_10_2_13/src/CombineHarvester/fits/CA_7Apr_unblind_step2/ttH_2lss_1tau_miss_2016.root --era 2016 --nameOut ttH_2lss_1tau_miss_2016 --channel 2lss_1tau --nameLabel " missing jet" --do_bottom --unblind --binToRead ttH_2lss_1tau_miss --binToReadOriginal ttH_2lss_1tau_miss

Postfit plots

a) Run fit diagnostics in the inclusive datacards:

combineTool.py -M FitDiagnostics ttHmultilep_WS_naming.root --saveShapes --saveWithUncertainties --saveNormalization  (--cminDefaultMinimizerStrategy 0) --skipBOnlyFit  -n _tttHmultilep_WS_standard --job-mode condor

b) Plot:

cd ~/Legacy/combine/CMSSW_10_2_13/src/HiggsAnalysis/CombinedLimit/signal_extraction_tH_ttH/

python test/makePlots.py --input /afs/cern.ch/user/c/cmartinp/Legacy/combine/CMSSW_10_2_13/src/CombineHarvester/fits/CA_7Apr_unblind_step3_v2/fitDiagnostics_ttHmultilep_WS_naming.root --odir /afs/cern.ch/user/c/cmartinp/Legacy/combine/CMSSW_10_2_13/src/CombineHarvester/fits/CA_7Apr_unblind_step3_v2/outputs/ --era 2016 --nameOut ttH_2lss_1tau_miss_2016 --channel 2lss_1tau --nameLabel " missing jet" --do_bottom --unblind --doPostFit --binToRead ttH_2lss_1tau_miss_2016 --original /afs/cern.ch/user/c/cmartinp/Legacy/combine/CMSSW_10_2_13/src/CombineHarvester/fits/CA_7Apr_unblind_step3_v2/ttH_2lss_1tau_miss_2016.root --binToReadOriginal ttH_2lss_1tau_miss
