Cristina's Sandbox




git branch

git status # get list of files that changed

git add <files> (or . for all files modified in the current directory)

git commit -m <comment> <files> (or without <files>)

git push origin <branch>

New branch

git checkout <oldbranch> # make sure you are on the old branch

git checkout -b <newbranch> # create the new branch and move into it

### do your work

git commit -m <comment>

git push origin <newbranch>
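The branch workflow above can be exercised end to end in a throwaway repository (the repository contents and the branch name below are illustrative examples, not from any framework):

```shell
# Create a scratch repo, branch off, do some work, commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "initial"
git checkout -q -b mybranch        # create the new branch and move into it
echo "work" > notes.txt
git add notes.txt                  # stage the modified file
git -c user.name=me -c user.email=me@example.com commit -q -m "add notes"
git rev-parse --abbrev-ref HEAD    # prints: mybranch
```

The final `git push origin mybranch` step is omitted here since the scratch repo has no remote.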

Screen



voms-proxy-init -voms cms

source /cvmfs/

cd /home/llr/cms/mperez/CMSSW_10_2_14/src/LLRHiggsTauTau/NtupleProducer/test


cmsRun *.py

Ctrl+A then Ctrl+D # detach from the screen session, leaving it running

screen -list

screen -r <screen>

Kill screen: once connected to the session (screen -r), press Ctrl+A then type :quit.

Polgrid LLR

rfdir /path/to/folder

rfdir /dpm/

rfrm -r /path/to/folder

Remove folder without timeout errors (leave in the background):

gfal-rm -r -v srm:// > deletion.log &

Check how much space I use in dpm:

/opt/exp_soft/cms/t3/gfal-du --path /dpm/

Polgrid IRFU



rfdir /dpm/

root -l root://


dasgoclient --query="file dataset=/DYJetsToLL_M-50_TuneCP5_13TeV-amcatnloFXFX-pythia8/RunIIFall17MiniAODv2-PU2017_12Apr2018_94X_mc2017_realistic_v14-v1/MINIAODSIM"


root root://



ttH multileptons


LLR framework


Ntuple producer:





Private production

Run interactively, having in :


Crab production

Create a python config file for your sample to process:

Modify the following entries:

config.General.requestName = 'HTauTau_MSSM_GGH300_21_09_15'

config.Data.inputDataset = '/SUSYGluGluToHToTauTau_M-300_TuneCUETP8M1_13TeV-pythia8/RunIISpring15DR74-Asympt25ns_MCRUN2_74_V9-v1/MINIAODSIM'

config.Data.outLFNDirBase = '/store/user/davignon/EnrichedMiniAOD/MSSM_GGH300_pfMET_prod_21_09_2015/'

config.Data.publishDataName = 'MSSM_GGH300_HTauTau_21_09_2015'

config.Site.storageSite = 'T2_FR_GRIF_LLR' # or 'T2_FR_GRIF_IRFU'



IsMC = True

Is25ns = True

For data, specify the golden JSON:

config.Data.lumiMask = ''
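The entries above all live in a single CRAB configuration file; a minimal sketch assembling the values from this page (the psetName is a hypothetical placeholder, and the lumiMask path is left blank as above):

```python
# crabConfig.py -- minimal sketch of the entries listed above.
from CRABClient.UserUtilities import config
config = config()

config.General.requestName = 'HTauTau_MSSM_GGH300_21_09_15'

config.JobType.pluginName = 'Analysis'
config.JobType.psetName = 'analyzer_cfg.py'   # hypothetical cmsRun config name

config.Data.inputDataset = '/SUSYGluGluToHToTauTau_M-300_TuneCUETP8M1_13TeV-pythia8/RunIISpring15DR74-Asympt25ns_MCRUN2_74_V9-v1/MINIAODSIM'
config.Data.outLFNDirBase = '/store/user/davignon/EnrichedMiniAOD/MSSM_GGH300_pfMET_prod_21_09_2015/'
config.Data.publishDataName = 'MSSM_GGH300_HTauTau_21_09_2015'
# config.Data.lumiMask = ''                   # for data, point this at the golden JSON

config.Site.storageSite = 'T2_FR_GRIF_LLR'    # or 'T2_FR_GRIF_IRFU'
```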

Source crab:

voms-proxy-init -voms cms

source /cvmfs/

Launch production:

crab submit -c

Monitor jobs:

crab status -d crab3//

or in:

Relaunch failed jobs:

crab resubmit -d crab3//

Delete jobs:

crab kill -d

Grid space (LLR):

rfdir /dpm/

Grid space (IRFU):

gfal-ls root://

Helpers convert

This step will add additional variables with respect to the LLRHiggsTauTau NtupleProducer.

To run on LLR Tier3:

Tree splitter

This step takes care of skimming the existing ntuples, building different trees for the different regions used for the signal and background estimations (both for the ttH multilepton and tau categories).

To run on LLR Tier3:

Datacard computation

Compute datacards combining the yields/systematics from all the ntuples (one per category):



Install in CMSSW_7_4_7/src/CombineHarvester/ttH_htt/bin/ the code like in:

This can be used then with standard combine commands like the ones in:

Editing the AN

AN: AN-19-111

Configure git client:

scl enable rh-git29 bash # this allows you to access a recent version of git. It will place you in a bash shell.

git config --global user.name "Cristina Martin Perez"

git config --global user.email ""

# failure to set the next option can lead to the message

# 'Basic: Access denied'

# if you use KRB access (http)

git config --global http.emptyAuth true
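A consolidated sketch of the git client setup above, run against a throwaway HOME so the real global config is untouched (the e-mail address is a hypothetical placeholder, since the page leaves it blank):

```shell
# Use a scratch HOME so --global writes go to a temporary .gitconfig.
export HOME=$(mktemp -d)
git config --global user.name  "Cristina Martin Perez"
git config --global user.email "user@example.com"   # hypothetical address
git config --global http.emptyAuth true             # avoids 'Basic: Access denied' with KRB (http) access
git config --global --get user.name                 # prints: Cristina Martin Perez
```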

Edit the AN:

git clone --recursive

cd /afs/

eval $(utils/tdr runtime) # add -sh for bash; -csh for csh; -fish for fish. Default is csh (for now).

# (edit the template, then to build the document)

./utils/tdr --style=note b # the local document with the name of the directory is the default build target

# we also recommend setting the output directory using either the command line option --temp_dir or the env var TDR_TMP_DIR (new from svn version)

# to commit changes back...

git add .                           # add all files modified in current directory

git commit -m "add my new changes"  # to stage your changes

git push                            # to send them back to the repo





mkdir MEM-Project

cd MEM-Project

git clone


git checkout OpenCL_ttH_Run2_2017

#ln -s Env/CC_slc7_amd_amd64_gcc530.env cms-mem.env #this will create the wrong CUDA environment for compilation!

#ln -s Env/ #this will create the wrong CUDA environment for compilation!

cd xxx/CMS-MEM

. ./cms-mem.env


qlogin before compiling: qlogin -l GPU=1 -l GPUtype=K80 -q mc_gpu_interactive -pe multicores_gpu 4

make clean; make

Adding new variables:

Changes to be made in directory ./IOLib/ :

- Scalar variables:

EventReader_impl_Run2.cpp: ok = ok && ! tchain_->SetBranchAddress( "bTagSF_weight_up", &_bTagSF_weight_up );

EventReader_impl_Run2.cpp: eventData._bTagSF_weight_up = _bTagSF_weight_up;

IntegralsOutputs_Run2.cpp: ttree_->Branch("bTagSF_weight_up", &_bTagSF_weight_up, "bTagSF_weight_up/F");

IntegralsOutputs_Run2.cpp: _bTagSF_weight_up = ev->_bTagSF_weight_up;

Run2EventData_t.cpp: _bTagSF_weight_up = evData->_bTagSF_weight_up;

Run2EventData_t.h: float _bTagSF_weight_up;

- Vectorial variables:

EventReader_impl_Run2.cpp: ok = ok && ! tchain_->SetBranchAddress( "recotauh_sel_phi", &p_recotauh_sel_phi );

EventReader_impl_Run2.cpp: eventData._recotauh_sel_phi = _recotauh_sel_phi;

IntegralsOutputs_Run2.cpp: ttree_->Branch("recotauh_sel_phi", &_recotauh_sel_phi);

IntegralsOutputs_Run2.cpp: _recotauh_sel_phi = ev->_recotauh_sel_phi;

Run2EventData_t.cpp: _recotauh_sel_phi = evData->_recotauh_sel_phi;

Run2EventData_t.cpp: p_recotauh_sel_phi = &_recotauh_sel_phi;

Run2EventData_t.h: vector<float> _recotauh_sel_phi;

Run2EventData_t.h: vector<float>* p_recotauh_sel_phi;

Commit to Git

git status

git commit -a -m "comment"

git push -v -u origin OpenCL_ttH_Run2_2017

GPU Platform @ CC-IN2P3



ssh -XY

qlogin -l GPU=1 -l GPUtype=K80 -q mc_gpu_interactive -pe multicores_gpu 4

. /usr/local/shared/bin/

. ./cms-mem.env

Log-in with cmsf group:


newgrp cmsf

Config file:


Input file: InputFileList

Output file: FileOfIntegrals

sps space:


Run interactively


mpirun -n 2 ./MG-MEM-MPI


SelectedQueues = [ True, False, False, False, False, False, False]

KernelExecMode = 1

Run on batch

cd BatchModel/

2 nodes:

qsub -l GPU=4 -l GPUtype=K80 -q pa_gpu_long -pe openmpigpu_4 8

1 node:

qsub -l GPU=4 -l GPUtype=K80 -q pa_gpu_long -pe openmpigpu_4 4


SelectedQueues = [ True, True, True, True, False, False, False]

KernelExecMode = 1

Check jobs:


More info about batch submission:

To run multiple jobs:


cp -rf BatchModel BatchModel_XXX

cd BatchModel_XXX


cp (useful for job survey)

To run interactively: ./

To run on batch (1 node): qsub -l GPU=4 -l GPUtype=K80 -q pa_gpu_long -pe openmpigpu_4 4

To run on batch (2 nodes): qsub -l GPU=4 -l GPUtype=K80 -q pa_gpu_long -pe openmpigpu_8 4



ssh -XY

sps space:


Polui @ LLR

Installed in /home/llr/cms/mperez/MEM-Project/CMS-MEM

MEM output

No missing jet:


Missing jet:

T->Draw("Integral_ttH/(Integral_ttH+5e-15*(Integral_ttbar_DL_fakelep_tlep+Integral_ttbar_DL_fakelep_ttau)+5e-2*Integral_ttZ+5e-1*Integral_ttZ_Zll)", "integration_type==1")

L1 Tau Trigger


Tau Tag&Probe package:

Production of ntuples

- Offline: cmsRun

- L1 (with re-emulation): cmsRun

- ZeroBias:

Merging offline and L1 taus

- config files in /run/VBFStage2_WithJune2017_Jets_05_10_17.config

- compile under CMSSW: make clean; make

- run: ./merge.exe run/VBFStage2_WithJune2017_Jets_05_10_17.config

Matching offline and L1 taus

- script: MakeTreeForCalibration.C

Create compressed tree

- need the files: LUTs_06_09_16_NewLayer1_SK1616 and compressionLuts

- run: python

Produce the calibration LUT

- directory: /home/llr/cms/mperez/RegressionTraining/CMSSW_7_6_0/src/RegressionTraining

- BDT config file: GBRFullLikelihood_Trigger_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_SandeepCristina_MC_VBF.config

- compile: make clean; make

- run: ./regression.exe GBRFullLikelihood_Trigger_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_SandeepCristina_MC_VBF.config

- make histo with calibration constants: python makeTH4_Stage2_2017_compressedieta_compressediet_hasEM_isMerged_MC_VBF

- result in corrections/

- produce LUT: MakeTauCalibLUT_MC_NewCompression_WithMarch2017Layer1.C

Apply the calibration LUT

- apply calibration: ApplyCalibration.C

Produce the isolation LUT

- get isolation cuts: Build_Isolation_WPs_MC_NewCompression_Thomas_nTT_OlivierFlatWP_With2017Layer1.C

- perform the relaxation: Fill_Isolation_TH3_MC_2017Layer1Calibration.C

- produce LUT: MakeTauIsoLUT_MC_NewCompression_WithMarch2017Layer1.C

Rate studies

- Use ZeroBias ntuples.

- Apply calibration: ApplyCalibrationZeroBias.C

- Compute rates: Rate_ZeroBias_Run305310.C

- Plot rate comparison and get thresholds for a certain rate: CompareRates_Run305310.C

Apply the isolation LUT

- Apply isolation: ApplyIsolationForTurnOns.C

- Plot turnons: CompareTurnOns_2017Layer1Calibration_ShapeVeto_AdaptedThreshold.C

Combine - control analysis


On lxplus. Increase stack memory:

cmsenv; ulimit -s unlimited

Combined card

Make combined card: combineCards.py $(for fil in `ls *.txt`; do echo -n "${fil/.txt/}=$fil "; done) > combined_cards.dat
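The command substitution above just builds a list of name=file pairs, one per datacard, which combineCards.py then stitches into a single card. A sketch with two dummy datacards (the card names are examples):

```shell
# Build the name=file pairs that combineCards.py expects.
tmp=$(mktemp -d); cd "$tmp"
touch ttH_2lss.txt ttH_3l.txt                 # dummy datacards
args=$(for fil in *.txt; do printf '%s=%s ' "${fil%.txt}" "$fil"; done)
echo "$args"                                  # ttH_2lss=ttH_2lss.txt ttH_3l=ttH_3l.txt
# the pairs are then handed to combineCards.py:
#   combineCards.py $args > combined_cards.dat
```

Naming each card (rather than passing bare filenames) keeps the channel labels in the combined card, which the later per-category fits rely on.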

For FitDiagnostics (used in plots), declare the naming of the subcategories using the script here


Inclusive: text2workspace.py combined_cards.dat -o ttHmultilep_WS.root -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO verbose --PO 'map=.*/TTW.*:r_ttW[1,0,6]' --PO 'map=.*/TTWW.*:r_ttW[1,0,6]' --PO 'map=.*/TTZ.*:r_ttZ[1,0,6]' --PO 'map=.*/ttH.*:r_ttH[1,-1,3]'

Per category: text2workspace.py combined_cards.dat -o ttHmultilep_WS_perchannel.root -P HiggsAnalysis.CombinedLimit.PhysicsModel:multiSignalModel --PO verbose --PO 'map=.*/TTW.*:r_ttW[1,0,6]' --PO 'map=.*/TTWW.*:r_ttW[1,0,6]' --PO 'map=.*/TTZ.*:r_ttZ[1,0,6]' --PO 'map=.*ttH_2lss_0tau.*/ttH.*:r_ttH_2lss_0tau[1,-5,10]' --PO 'map=.*ttH_3l_0tau.*/ttH.*:r_ttH_3l_0tau[1,-5,10]' --PO 'map=.*ttH_4l.*/ttH.*:r_ttH_4l[1,-5,10]' --PO 'map=.*ttH_2lss_1tau.*/ttH.*:r_ttH_2lss_1tau[1,-5,10]'


Significance

Inclusive: combine -M Significance --signif ttHmultilep_WS.root --redefineSignalPOI r_ttH (-t -1 --setParameters r_ttH=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.all

Per category (one command per POI):

combine -M Significance --signif ttHmultilep_WS_perchannel.root --redefineSignalPOI r_ttH_2lss_0tau (-t -1 --setParameters r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_2lss_1tau=1,r_ttH_4l=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.2lss0tau

combine -M Significance --signif ttHmultilep_WS_perchannel.root --redefineSignalPOI r_ttH_2lss_1tau (-t -1 --setParameters r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_2lss_1tau=1,r_ttH_4l=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.2lss1tau

combine -M Significance --signif ttHmultilep_WS_perchannel.root --redefineSignalPOI r_ttH_3l_0tau (-t -1 --setParameters r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_2lss_1tau=1,r_ttH_4l=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.3l0tau

combine -M Significance --signif ttHmultilep_WS_perchannel.root --redefineSignalPOI r_ttH_4l (-t -1 --setParameters r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_2lss_1tau=1,r_ttH_4l=1,r_ttW=1,r_ttZ=1) -m 125 -n .significance.4l0tau

Signal strength


Inclusive: combine -M MultiDimFit --algo singles ttHmultilep_WS.root (-t -1 --setParameters r_ttW=1,r_ttZ=1,r_ttH=1) -m 125 -n .mu.all

Per category:

combine -M MultiDimFit --algo singles ttHmultilep_WS_perchannel.root (-t -1 --setParameters r_ttW=1,r_ttZ=1,r_ttH_2lss_0tau=1,r_ttH_3l_0tau=1,r_ttH_4l=1,r_ttH_2lss_1tau=1) -m 125 -n .mu.cats

Likelihood scan

Inclusive likelihood scan with syst and stats: combine -M MultiDimFit --algo grid --points 100 --rMin 0 --rMax 3 ttHmultilep_WS.root --alignEdges 1 --floatOtherPOIs=1 -P r_ttH (--setParameters r_ttH=1,r_ttZ=1,r_ttW=1 -t -1) -n .likelihoodscan --saveWorkspace

Plot inclusive likelihood scan: plot1DScan.py all.root --POI r_ttH --y-cut 50 --y-max 50

Get statistical only component:

combine -M MultiDimFit higgsCombine.likelihoodscan.MultiDimFit.mH125.root -n .likelihoodscan.freezeAll -m 125 --rMin 0 --rMax 3  --algo grid --points 30 --freezeParameters allConstrainedNuisances --snapshotName MultiDimFit --alignEdges 1 --floatOtherPOIs=1 -P r_ttH

Plot breakdown stat and syst: plot1DScan.py higgsCombine.likelihoodscan.MultiDimFit.mH125.root --POI r_ttH --y-cut 50 --y-max 50 --breakdown syst,stat --others "higgsCombine.likelihoodscan.freezeAll.MultiDimFit.mH125.root:Stat only:2"


Impacts

a) Initial fit for each POI: combineTool.py -M Impacts -d ttHmultilep_WS.root --doInitialFit --robustFit 1 (-t -1 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1 -m 125) -n t1 --redefineSignalPOIs r_ttH --floatOtherPOIs 1

b) Comment "FixAll()" in CombineHarvester/CombineTools/python/combine/ and CombineHarvester/CombineTools/combine/

c) Fit scan for each nuisance: combineTool.py -M Impacts -d ttHmultilep_WS.root --robustFit 1 --doFits (-t -1 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1) -m 125 -n t1 --redefineSignalPOIs r_ttH --job-mode condor

d) Kill the submitted jobs:

condor_rm cmartinp

e) Add in condor_combine_task.sub the following lines before "queue":

periodic_remove = False

+JobFlavour = "tomorrow" (or "nextweek")

f) Submit the jobs:

condor_submit condor_combine_task.sub

g) Monitor the jobs:


h) Check the failed impacts with this script

Re-run failed impacts with the options: --cminDefaultMinimizerStrategy 0 or --X-rtd MINIMIZER_MaxCalls=999999999

i) Collect outputs when jobs are done: combineTool.py -M Impacts -d ttHmultilep_WS.root -o impactst1.json (-t -1 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1) -m 125 -n t1 --redefineSignalPOIs r_ttH

j) Plot impacts: plotImpacts.py -i impactst1.json -o impactst1

Table of systematics

Take script here

Step 1: breakdown in different types of systematics

Step 2: plotting

To run:

python >

chmod +x


2D contours

a) Run central fit:

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttZ_central --fastScan --algo grid --points 1800 --redefineSignalPOIs r_ttH,r_ttZ --setParameterRanges r_ttH=-2,3:r_ttZ=-2,3 (--setParameters r_ttH=1,r_ttZ=1,r_ttW=1)

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttW_central --fastScan --algo grid --points 1800 --redefineSignalPOIs r_ttH,r_ttW --setParameterRanges r_ttH=-2,3:r_ttW=-2,3 (--setParameters r_ttH=1,r_ttZ=1,r_ttW=1)

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttZ_ttW_central --fastScan --algo grid --points 1800 --redefineSignalPOIs r_ttZ,r_ttW --setParameterRanges r_ttZ=-2,3:r_ttW=-2,3 (--setParameters r_ttH=1,r_ttZ=1,r_ttW=1)

For 1sigma and 2sigma contours take the condor submission scripts here and here

b) Run 1sigma contours (68% CL):

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttZ_cl68 (--fastScan --cminDefaultMinimizerStrategy 0) --cl=0.68 --algo contour2d --points=10 --redefineSignalPOIs r_ttH,r_ttZ --setParameterRanges r_ttH=-2,3:r_ttZ=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttW_cl68  (--fastScan --cminDefaultMinimizerStrategy 0) --cl=0.68 --algo contour2d --points=10 --redefineSignalPOIs r_ttH,r_ttW --setParameterRanges r_ttH=-2,3:r_ttW=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttZ_ttW_cl68  (--fastScan --cminDefaultMinimizerStrategy 0)  --cl=0.68 --algo contour2d --points=10 --redefineSignalPOIs r_ttZ,r_ttW --setParameterRanges r_ttZ=-2,3:r_ttW=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

c) Run 2sigma contours (95% CL):

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttZ_cl95  (--fastScan --cminDefaultMinimizerStrategy 0)  --cl=0.95 --algo contour2d --points=10 --redefineSignalPOIs r_ttH,r_ttZ --setParameterRanges r_ttH=-2,3:r_ttZ=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttH_ttW_cl95  (--fastScan --cminDefaultMinimizerStrategy 0)  --cl=0.95 --algo contour2d --points=10 --redefineSignalPOIs r_ttH,r_ttW --setParameterRanges r_ttH=-2,3:r_ttW=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

combine -M MultiDimFit ttHmultilep_WS.root (-t -1) -n ttZ_ttW_cl95  (--fastScan --cminDefaultMinimizerStrategy 0)  --cl=0.95 --algo contour2d --points=10 --redefineSignalPOIs r_ttZ,r_ttW --setParameterRanges r_ttZ=-2,3:r_ttW=-2,3 --setParameters r_ttH=1,r_ttZ=1,r_ttW=1

d) Plot with script here:

python --first "ttH" --second "ttZ" --label " " --plotName "contour_ttH_ttZ" --outputFolder "plots" --input "higgsCombinettH_ttZ_central.MultiDimFit.mH120.root" --input68 "higgsCombinettH_ttZ_cl68.MultiDimFit.mH120.root" --input95 "higgsCombinettH_ttZ_cl95.MultiDimFit.mH120.root"

python --first "ttH" --second "ttW" --label " " --plotName "contour_ttH_ttW" --outputFolder "plots" --input "higgsCombinettH_ttW_central.MultiDimFit.mH120.root" --input68 "higgsCombinettH_ttW_cl68.MultiDimFit.mH120.root" --input95 "higgsCombinettH_ttW_cl95.MultiDimFit.mH120.root"

python --first "ttZ" --second "ttW" --label " " --plotName "contour_ttZ_ttW" --outputFolder "plots" --input "higgsCombinettZ_ttW_central.MultiDimFit.mH120.root" --input68 "higgsCombinettZ_ttW_cl68.MultiDimFit.mH120.root" --input95 "higgsCombinettZ_ttW_cl95.MultiDimFit.mH120.root"

Prefit plots

a) Run fit diagnostics for each subcategory:

cd /afs/

combineTool.py -M FitDiagnostics ttH_2lss_1tau_nomiss_2016.txt --saveShapes --saveWithUncertainties --skipBOnlyFit -n _ttH_2lss_1tau_nomiss_2016 --job-mode condor

b) Plot:

cd ~/Legacy/combine/CMSSW_10_2_13/src/HiggsAnalysis/CombinedLimit/signal_extraction_tH_ttH/

python test/ --input /afs/ --odir /afs/ --original /afs/ --era 2016 --nameOut ttH_2lss_1tau_miss_2016 --channel 2lss_1tau --nameLabel " missing jet" --do_bottom --unblind --binToRead ttH_2lss_1tau_miss --binToReadOriginal ttH_2lss_1tau_miss

Postfit plots

a) Run fit diagnostics in the inclusive datacards: combineTool.py -M FitDiagnostics ttHmultilep_WS_naming.root --saveShapes --saveWithUncertainties --saveNormalization (--cminDefaultMinimizerStrategy 0) --skipBOnlyFit -n _tttHmultilep_WS_standard --job-mode condor

b) Plot:

cd ~/Legacy/combine/CMSSW_10_2_13/src/HiggsAnalysis/CombinedLimit/signal_extraction_tH_ttH/

python test/ --input /afs/ --odir /afs/ --era 2016 --nameOut ttH_2lss_1tau_miss_2016 --channel 2lss_1tau --nameLabel " missing jet" --do_bottom --unblind --doPostFit --binToRead ttH_2lss_1tau_miss_2016 --original /afs/ --binToReadOriginal ttH_2lss_1tau_miss

Topic revision: r33 - 2020-05-12 - CristinaMartinPerez