OlivierDavignonSandbox

Links

CMS

Production of Enriched MiniAOD / LLR Ntuples

      cd LLRHiggsTauTau/NtupleProducer/test
      emacs crab3_XXX.py
Change the name, the output path, and the name of the input file.
      voms-proxy-init -voms cms
      rfdir /dpm/in2p3.fr/home/cms/trivcat/store/user/davignon/

      emacs XXX_files.py&
Add the names of the files from DAS (query the files and use the plain output format).
For analyzer.py, you have to download one file of your DAS dataset and put it somewhere on polui,
then reference it in analyzer.py.
Launch the crab3 commands:
      source /cvmfs/cms.cern.ch/crab3/crab.sh
      crab submit -c crab3_XXX.py
      crab status -d crab3/<NAME PATH>
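If some jobs fail, the standard crab3 commands can be used to resubmit them and to retrieve the logs (same working directory as above):
      crab resubmit -d crab3/<NAME PATH>
      crab getoutput -d crab3/<NAME PATH>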

PAT production

Old location of PAT
     export DPM_HOST=node12.datagrid.cea.fr
     export DPNS_HOST=node12.datagrid.cea.fr
     rfdir /dpm/datagrid.cea.fr//home/cms/trivcat/store/user/lmastrol/

Go to the data_CMS directory:
cd /data_CMS/cms/davignon/
Create a folder for PAT production:
mkdir PATProduction
Check out the cms release:
cmsrel CMSSW_5_3_13
Get the last version of the Tau-ID packages, following this link:
cd src/
cmsenv
git cms-merge-topic -u cms-tau-pog:CMSSW_5_3_X_boostedTaus
Make a temporary folder:
mkdir temp/
cd temp/
Copy all the other packages from Ivo's tar file:
cp /home/llr/cms/ivo/HTauTauAnalysis/NewTauIDsrc2/NewTauIDsrc.tgz .
tar -xvf NewTauIDsrc.tgz
Remove the following list of folders:
rm -rf DataFormats/TauReco
rm -rf RecoTauTag/RecoTau
rm -rf RecoTauTag/Configuration
rm -rf RecoTauTag/ImpactParameter
rm -rf RecoTauTag/TauTagTools
rm -rf PhysicsTools/PatAlgos
rm -rf DataFormats/PatCandidates
Go back into src/ of the new release and copy the contents of temp/:
cd ..
cp -r temp/* .
Fix a few includes. In PhysicsTools/PatAlgos/plugins/PATMHTProducer.h:
replace #include "RecoMET/METAlgorithms/interface/SigInputObj.h" with #include "DataFormats/METReco/interface/SigInputObj.h"
In PhysicsTools/PatAlgos/plugins/BuildFile.xml, add:
<use   name="DataFormats/METReco"/>
In CommonTools/ParticleFlow/python/pfTaus_cff.py, replace all occurrences of:
'pfTauPFJets08Region' with cms.InputTag('pfTauPFJets08Region')
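To apply this replacement in one pass, a sed one-liner such as the following can be used (a sketch; check the resulting diff before compiling, as already-wrapped occurrences would be wrapped twice):
     sed -i "s/'pfTauPFJets08Region'/cms.InputTag('pfTauPFJets08Region')/g" CommonTools/ParticleFlow/python/pfTaus_cff.py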
Copy the LLRAnalysis framework and compile
git clone https://github.com/akalinow/LLRAnalysis.git
scram b -j 4
Sourcing crab
     source /opt/exp_soft/cms/cms_ui_env_crab.2_11_1.sh
To publish a dataset, there is a three-step procedure:
     crab -getoutput -c DIR/
     crab -report -c DIR/
     crab -publish -c DIR/
Then check the availability on DAS.

Trees production

Go to the data_CMS directory:
cd /data_CMS/cms/davignon/
cd NtuplesProduction/
Setup the correct CMS release and packages:
export SCRAM_ARCH=slc5_amd64_gcc462
scram p    -n CMSSW_5_3_11_p6_patch6    CMSSW   CMSSW_5_3_11_patch6
cd CMSSW_5_3_11_patch6/src
cp /afs/cern.ch/user/v/veelken/public/forOlivier/TauSpinnerInterface_2014Aug13.tar.gz .
tar -xzvf TauSpinnerInterface_2014Aug13.tar.gz
cmsenv
cp /home/llr/cms/ivo/HTauTauAnalysis/NewTauIDsrc2/NewTauIDsrc.tgz .
tar -xvf NewTauIDsrc.tgz
git clone https://github.com/akalinow/LLRAnalysis.git
cp EgammaAnalysis/ElectronTools/data/download.url LLRAnalysis/Utilities/data/mvaEleId/
(cd LLRAnalysis/Utilities/data/mvaEleId/; cat download.url | xargs wget ; rm *.root )
cp /data_CMS/cms/ivo/HTauTauAnalysis/CMSSWRelesases/CMSSW_5_3_11_p6_TESOlivier/src/copy_SA_SVFit.sh .
./copy_SA_SVFit.sh
scram b -r -j 4
Go to the TauTauStudies folder (the Ntuple production directory):
cd LLRAnalysis/TauTauStudies/
In test/ are the CMSSW configs to run the Trees (and PAT).
Go to the test folder. The master scripts are:
emacs test/runMuTauStreamAnalyzer_PostMoriond2013_NewTauES_ByPair.py&
emacs test/runMuTauStreamAnalyzer_PostMoriond2013_NewTauES_ByPair_MC.py& // no tau ES shift
To display all the PSets and sequences of the config (provenance):
     edmConfigDump LLRAnalysis/TauTauStudies/test/runMuTauStreamAnalyzer_PostMoriond2013_NewTauES_ByPair_MCTauES.py
The code that actually runs is:
emacs plugins/Mu(Elec)TauStreamAnalyzer.cc
To test the ntuple production, do:
cmsRun cfg.py
To launch the full production, go to prod/, then:
multicrab_run_mutau_09Oct13_MC_HiggsMSSM.cfg
multicrab_run_mutau_09Oct13_MC_Backgrounds.cfg
multicrab_run_mutau_09Oct13_embed_2012ABCD.cfg
Create a Tasks/ directory with one subdirectory per type (Trees_Embed, Trees_MC_Backgrounds, Trees_MC_HiggsMSSM). Copy the multicrab and crab cfg files in there and modify them to point to the right locations of the crab config and of the runMuTauStreamAnalyzer*.py tree producer (see the sketch after the commands below). To create and submit the jobs, go inside Tasks/* and run:
nohup multicrab -create -submit 500 -cfg multicrab_run_etau_09Oct13_data_2012ABCD.cfg &> log_submit_etau_all.txt & 
Or:
multicrab -create -submit 500 -cfg multicrab_run_etau_09Oct13_embed_2012ABCD.cfg
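As a rough sketch of the layout described above (the crab.cfg name and the cfg chosen for copying are assumptions; adapt them to your production):
     mkdir -p Tasks/Trees_Embed Tasks/Trees_MC_Backgrounds Tasks/Trees_MC_HiggsMSSM
     cp multicrab_run_mutau_09Oct13_MC_HiggsMSSM.cfg crab.cfg Tasks/Trees_MC_HiggsMSSM/   # crab.cfg name assumed
     # then edit the copied cfg files so they point to the crab config
     # and to the runMuTauStreamAnalyzer*.py tree producer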

CMS release

The primary release is for the LLRAnalysis code; the secondary one is for the limit-setting code. Set up the slc5 environment and compiler:
export SCRAM_ARCH=slc5_amd64_gcc462
export SCRAM_ARCH=slc5_amd64_gcc472
List the available CMSSW releases:
scramv1 list CMSSW
Download a release
scram p    -n CMSSW_5_3_11_p6_analysis    CMSSW   CMSSW_5_3_11_patch6
scram p    -n CMSSW_6_1_1_limit    CMSSW   CMSSW_6_1_1
CMS environment
cd CMSSW*/src ; cmsenv

LLR Framework

Copy the packages -- a bit metastable
cp /home/llr/cms/ivo/HTauTauAnalysis/NewTauIDsrc2/NewTauIDsrc.tgz .
tar -xvf NewTauIDsrc.tgz
Install LLR Framework analysis
git clone https://github.com/akalinow/LLRAnalysis.git
Compile everything
scram b -j 4

Tree skimming + add variables to trees

Do:
mkdir Configs
in LLRAnalysis/Limits/bin.
The script in which to add the variables is:
~/TAU_ID/TestEnvironement/CMSSW_5_3_11_p6_analysis/src/LLRAnalysis/Limits/bin/treeSkimmerMuTau_Winter13.cc
Modify it accordingly. makeTreeSkimmerMuTau_Winter13.py contains the input and output folders and path to tuples. The script to generate the jobs is (change the path if necessary):
submitJobToBatch_Winter13_MuTau.py
It creates the .sh files in batch/, like:
job_SUSYBBH1000_MuTau_JetDown.sh
Use the
./genSubmit.sh
script to generate the submitAll.sh script (a sketch of what it does is shown below), then send the jobs to the t3:
./submitAll.sh
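For reference, a minimal genSubmit.sh could look like this (a hypothetical sketch following the t3submit pattern used elsewhere on this page; the actual script may differ):
     # hypothetical genSubmit.sh: one t3 submission command per job script
     ls job_*.sh | awk '{print "/opt/exp_soft/cms/t3/t3submit "$1}' > submitAll.sh
     chmod u+x submitAll.sh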

Analysis + create histograms

Master script is:
CMSSW_5_3_11_p6_analysis/src/LLRAnalysis/Limits/bin/doAnalyzeJobsMuTau_Winter13.py
Modify this script so that it creates the correct histograms for the variables of interest.
Analysis script is:
CMSSW_5_3_11_p6_analysis/src/LLRAnalysis/Limits/bin/analyzeMuTau_Winter13.cc
To compile do:
(cd ../../../ ; scram b -j 4)
Then run by doing:
./doAnalyzeJobsMuTau_Winter13.py
which creates the single job files. Copy batch/genSubmit.sh in:
batch/analyze/MuTau/Results_ABCD_AntiMuMVAMedium_AntiEleLoose_HPSMVA3oldDMwLTTight_TauOldDM_OldEleID_TauESDatacards/
Run genSubmit.sh, which gathers all the job submission commands in one file, submitAll.sh. Then send all the jobs:
bash submitAll.sh

Create datacards (root files)

emacs makeMuTauTemplates_Winter13.cc &
Change name of variable, name of folders for input and output, then:
(cd ../../../ ; scram b -j 4)
makeMuTauTemplates_Winter13
Do not forget to remove the datacards before rerunning, as they will not be overwritten!
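A simple guard is to clear the previous outputs first (the datacard path below is a hypothetical example; adapt it to your results folder):
     rm -f datacards/muTauMSSM_*.root   # hypothetical path: remove stale datacards
     makeMuTauTemplates_Winter13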

Limit setting protocol

cp /home/llr/cms/ivo/HTauTauAnalysis/CMSSW_5_3_11_p6_NewTauID/src/LLRAnalysis/Limits/bin/results/ElecTau/Results_ABCD_AntiMu3Loose_AntiEle5Medium_HPSMVA3oldDMwLTTight_TauOldDM_OldEleID_Datacards/datacards/eTauMSSM_diTauNSVfitMass.root auxiliaries/shapes/LLR/htt_et.inputs-mssm-8TeV-0.root 
 cp /home/llr/cms/ivo/HTauTauAnalysis/CMSSW_5_3_11_p6_NewTauID/src/LLRAnalysis/Limits/bin/results/ElecTau/Results_ABCD_AntiMu3Loose_AntiEle5Medium_HPSMVA3oldDMwLTTight_TauOldDM_OldEleID_Datacards/datacards/eTauMSSM_diTauNSVfitMass.root auxiliaries/shapes/LLR/htt_et.inputs-mssm-8TeV.root

doMSSM.py -a bbb --label=-tauptCats --tail-fitting --drop-list="$PWD/auxiliaries/pruning/uncertainty-pruning-drop-131013-mssm.txt" --config="HiggsAnalysis/HiggsToTauTau/data/limits.config-mssm-et-only-010414" --update-all --blind-datacards 
setup-htt.py -i aux-Imperial/bbb -o LIMITS-Imperial/bbb -a mssm -c 'et' 90 130 100-200:20 250-500:50 600-1000:100

mkdir Scripts/tauptCats/
for mass in $(seq 90 90 90; seq 130 130 130; seq 100 20 200 ;seq 250 50 500 ;seq 600 100 1000 ) ; do python genscripts-setup.py LIMITS-tauptCats et ${mass} "" > script-setup_${mass}.sh ; done
chmod u+x script-setup_*.sh
mv script-setup_*.sh Scripts/tauptCats
cd Scripts/tauptCats/
ls script-setup*.sh | awk '{print "/opt/exp_soft/cms/t3/t3submit -V "$1}' > launch-setup.sh ; chmod u+x launch-setup.sh
./launch-setup.sh
cd ../..
for mass in $(seq 90 90 90; seq 130 130 130; seq 100 20 200 ;seq 250 50 500 ;seq 600 100 1000 ) ; do python genscripts.py LIMITS-tauptCats et ${mass} "" > script_${mass}.sh ; done
chmod u+x script_*.sh
mv script_*.sh Scripts/tauptCats
cd Scripts/tauptCats/
ls script_*.sh | awk '{print "/opt/exp_soft/cms/t3/t3submit -V "$1}' > launch.sh ; chmod u+x launch.sh
./launch.sh
cd ../..

cp -r LIMITS-tauptCats/bbb/et LIMITS-tauptCats/bbb/et-tauptCats
plot --tanb HiggsAnalysis/HiggsToTauTau/python/layouts/tanb.py LIMITS-tauptCats/bbb/et-tauptCats expectedOnly=1

mkdir Results/tauptCats
mv et* Results/tauptCats
mv limits_mA-tanb.root Results/tauptCats

Limit setting

Get the packages:
git clone https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit.git HiggsAnalysis/CombinedLimit
git clone https://github.com/cms-analysis/HiggsAnalysis-HiggsToTauTau.git HiggsAnalysis/HiggsToTauTau
git clone https://github.com/roger-wolf/HiggsAnalysis-HiggsToTauTau-auxiliaries.git auxiliaries
Compile everything
scram b -j 4
Copy the datacard for your POI (parameter of interest):
/home/llr/cms/ivo/OlivierAnalysis/CMSSW_6_1_1_limit/src/auxiliaries/shapes/LLR/htt_mt.inputs-mssm-8TeV-0.root
Change parameter file for the analysis:
/home/llr/cms/ivo/OlivierAnalysis/CMSSW_6_1_1_limit/src/HiggsAnalysis/HiggsToTauTau/data/limits.config-mssm-mt-only-Olivier
Kind of README file for limit setting:
/home/llr/cms/ivo/OlivierAnalysis/CMSSW_6_1_1_limit/src/HiggsAnalysis/HiggsToTauTau/data/mssm-protocol.txt
In src/, the command to create the folder structure is:
doMSSM.py -a bbb --label='-TauESVisibleMass-MSSM' --tail-fitting --drop-list="$PWD/auxiliaries/pruning/uncertainty-pruning-drop-131013-mssm.txt" --config="HiggsAnalysis/HiggsToTauTau/data/limits.config-mssm-mt-only-Olivier" --SMHasBackground --update-all --blind-datacards
or
doMSSM.py -a bbb --label='-diTauVisMass-161213-mssm' --tail-fitting --drop-list="$PWD/auxiliaries/pruning/uncertainty-pruning-drop-131013-mssm.txt" --config="HiggsAnalysis/HiggsToTauTau/data/limits.config-mssm-mt-only-161213" --SMHasBackground --update-all --blind-datacards
Setup the datacard
setup-htt.py -i aux-TauESVisibleMass-MSSM/bbb -o LIMITS-TauESVisibleMass-MSSM/bbb -a mssm -c 'mt' 90 130 100-200:20 250-500:50 600-1000:100
or
setup-htt.py -i aux-diTauVisMass-161213-mssm/bbb -o LIMITS-diTauVisMass-161213-mssm/bbb -a mssm -c 'mt' 90 130 100-200:20 250-500:50 600-1000:100
Here is exactly what is needed for the T-ES test hypothesis. First we need to create the datacard. Go inside a datacard directory:
cd LIMITS-diTauVisMass-161213-mssm/bbb/nobtag/100
Link the parsing script into the directory and run it:
ln -fs ~/GenericTools/CreateDatacardForAlternativeHypothesesWithSignal.C .
root -l
?> .L CreateDatacardForAlternativeHypothesesWithSignal.C++
?> CreateDatacardForAlternativeHypothesesWithSignal("htt_mt_8_8TeV",true)
The true/false option specifies whether the Ztautau process should be treated as signal or not. This should create the *_ALT.txt datacard needed.
Check the PhysicsModel and the command script in:
/home/llr/cms/davignon/TAU_ID/TestEnvironement/CMSSW_6_1_1/src/HiggsAnalysis/CombinedLimit/python/TauEnergyScale.py
/home/llr/cms/davignon/TAU_ID/TestEnvironement/CMSSW_6_1_1/src/execute_SignalSeparationCombine.sh
Master command for limit extraction is:
rm -rf LIMITS-diTauVisMass-161213-mssm/bbb/nobtag/100/output_combine/
./execute_SignalSeparationCombine.sh LIMITS-diTauVisMass-161213-mssm/bbb/nobtag/100/ htt_mt_8_8TeV_ALT.txt 1

NEW limit setting

First set the SCRAM architecture:
     export SCRAM_ARCH=slc5_amd64_gcc472
Download a release
     scram p    -n CMSSW_6_1_1_limit    CMSSW   CMSSW_6_1_1
     cd CMSSW*/src ; cmsenv
Get the limit package
     git clone https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit.git HiggsAnalysis/CombinedLimit
     git clone https://github.com/cms-analysis/HiggsAnalysis-HiggsToTauTau.git HiggsAnalysis/HiggsToTauTau 
     git clone https://github.com/roger-wolf/HiggsAnalysis-HiggsToTauTau-auxiliaries.git auxiliaries
In auxiliaries/shapes/LLR are the default datacards, which you can replace with your own.
Take them from:
     ~/HTauTauAnalysis/CMSSW_5_3_11_p6_NewTauID/src/LLRAnalysis/Limits/bin/results/MuTau/
Create the working area:
     python HiggsAnalysis/HiggsToTauTau/scripts/doMSSM_taupt.py -a bbb --label='-140514-mssm' --config="HiggsAnalysis/HiggsToTauTau/data/limits.config-mssm-140507-taupt" --update-all --SMHasBackground --SMHasSignal --blind-datacards --extra-templates='ggH_SM125,qqH_SM125,VH_SM125' --tail-fitting --drop-list="auxiliaries/pruning/uncertainty-pruning-drop-140410-mssm.txt"
Remove the compiled macro to force recompilation, then run it:
     rm HiggsAnalysis/HiggsToTauTau/macros/addFitNuisance_C.so
     ./HiggsAnalysis/HiggsToTauTau/scripts/addFitNuisance.py -s setups-070514-mssm/bbb/ -c et -e 8TeV -b 'QCD_fine_binning' -k '13' --range 140 --rangelast 800 --fitoption 1 --fitmodel 1 --erroroption 1 --testmode
Setup for limits:
     setup-htt.py -i aux-mssm-FixTailFit/plain-asimov -o LIMITS-mssm-FixTailFit/plain-asimov -a mssm --mssm-categories-et '10 11 12 13 14' --mssm-categories-mt '10 11 12 13 14' --mssm-categories-tt '10 11 12 13 14' -c 'mt et tt' 90 130 100-200:20 250-500:50 600-1000:100
Setup for postfits:
     setup-htt.py -i aux-140410-mssm/bbb -o LIMITS-140410-mssm/bbb-mlfit -a mssm -c 'mt et tt' 160 350 500 --mssm-categories-et="10 11 12 13 14" --mssm-categories-mt="10 11 12 13 14" --mssm-categories-tt="10 11 12 13 14"
Setup for goodness of fit:
     To be completed.
For limits, link the scripts that launch the limits on the Tier3:
     ln -fs ~/GenericTools/GenScripts_Asymptotic/genscripts-setup.py .
     ln -fs ~/GenericTools/GenScripts_Asymptotic/genscripts.py .
Modify the paths in the genscripts. Then, for mutau for example, generate the setup scripts:
     mkdir Scripts/mutau-mssm-FixTailFit/
     for mass in $(seq 90 90 90; seq 130 130 130; seq 100 20 200 ;seq 250 50 500 ;seq 600 100 1000 ) ; do python genscripts-setup.py LIMITS-mssm-FixTailFit mt ${mass} "" > script-setup_${mass}.sh ; done
     chmod u+x script-setup_*.sh
     mv script-setup_*.sh Scripts/mutau-mssm-FixTailFit
     cd Scripts/mutau-mssm-FixTailFit/
     ls script-setup*.sh | awk '{print "/opt/exp_soft/cms/t3/t3submit -V "$1}' > launch-setup.sh ; chmod u+x launch-setup.sh
     ./launch-setup.sh
     cd ../..
Repeat for etau and tautau (changing mt to et and tt).
Launch the limits:
     for mass in $(seq 90 90 90; seq 130 130 130; seq 100 20 200 ;seq 250 50 500 ;seq 600 100 1000 ) ; do python genscripts.py LIMITS-mssm-FixTailFit mt ${mass} "" > script_${mass}.sh ; done
     chmod u+x script_*.sh
     mv script_*.sh Scripts/mutau-mssm-FixTailFit
     cd Scripts/mutau-mssm-FixTailFit/
     ls script_*.sh | awk '{print "/opt/exp_soft/cms/t3/t3submit -V "$1}' > launch.sh ; chmod u+x launch.sh
     ./launch.sh
     cd ../..
Copy the fit results into new folder and draw the plot.
     cp -r LIMITS-mssm-FixTailFit/plain-asimov/mt LIMITS-mssm-FixTailFit/plain-asimov/mt-llr-newlimits-fixtailfit
     plot --tanb HiggsAnalysis/HiggsToTauTau/python/layouts/tanb.py LIMITS-mssm-FixTailFit/plain-asimov/mt-llr-newlimits-fixtailfit expectedOnly=1
Copy the plot into Results folder:
     mkdir Results
     mkdir Results/mt-llr-newlimits-fixtailfit/
     mv mt* Results/mt-llr-newlimits-fixtailfit/
     mv limits_mA-tanb.root Results/mt-llr-newlimits-fixtailfit/
Hadd the results files and launch the comparison:
     hadd -f tauPtCat-FW-fixtailfit-mutau.root Results/mt-llr-newlimits-ptweights-nuisance/limits_mA-tanb.root Results/mt-llr-newlimits-fixtailfit/limits_mA-tanb-modified.root
     ln -fs ~/GenericTools/compareLimits.C .
     root -l
     .x compareLimits.C+("$CMSSW_BASE/src/tauPtCat-FW-fixtailfit-mutau.root","mt-llr-newlimits-ptweights-nuisance,mt-llr-newlimits-fixtailfit", true, false, "mssm-tanb", 1,80,"muTau",true)

To obtain postfit plots, start with the genscripts, in order to execute the mlfits:
     mkdir Scripts/mutau-mssm-Postfit/
     for mass in 160 350 500 ; do python genscripts.py LIMITS-140410-mssm mt ${mass} "" > script_${mass}.sh ; done
     chmod u+x script_*.sh
     mv script_*.sh Scripts/mutau-mssm-Postfit
     cd Scripts/mutau-mssm-Postfit/
     ls script_*.sh | awk '{print "/opt/exp_soft/cms/t3/t3submit -V "$1}' > launch-setup.sh ; chmod u+x launch-setup.sh
     ./launch-setup.sh
     cd ../..
Go to the following dir:
     cd HiggsAnalysis/HiggsToTauTau/test/
Do the plot from postfit results:
     python mlfit_and_copy.py -a mssm --skip --mA 160 --tanb 8 $CMSSW_BASE/src/LIMITS-140410-mssm/bbb-mlfit/mt/160
     python produce_macros.py -a mssm --mA 160 --tanb 8 --hww-signal --config $CMSSW_BASE/src/HiggsAnalysis/HiggsToTauTau/data/limits.config-mssm-140429-taupt-mtonly
     python run_macros.py -a mssm --config $CMSSW_BASE/src/HiggsAnalysis/HiggsToTauTau/data/limits.config-mssm-140429-taupt-mtonly
The plots are placed locally.
To commit to git, create a new release, then:
    cd auxiliaries/shapes/LLR
    git add htt*
    git commit -m "message"
    git push   https://github.com/roger-wolf/HiggsAnalysis-HiggsToTauTau-auxiliaries

Parse a txt datacard for hypothesis testing

You can modify and execute the following script:
/home/llr/cms/davignon/GenericTools/CreateDatacardForAlternativeHypotheses.C

Tau Energy Scale Measurement

There are a number of steps to complete in order to measure the Tau Energy Scale (T-ES). The first ones are the production of trees and skimmed trees without prior T-ES correction. Currently (Jan. 2014), they are located here:
      /data_CMS/cms/htautau/PostMoriond/TREES_NewTauIDVariables_NoTauES
      /data_CMS/cms/htautau/PostMoriond/NTUPLES_NewTauIDVariables_NoTauES
In the treeSkimmer, we have defined 61 hypotheses for the T-ES and shifted the hadronic tau energy accordingly. Then one has to produce the histograms for each T-ES hypothesis, following the procedure above. When this is complete, one has to merge two hypotheses into the same root file. This can be done using the GenericTool:
      ln -fs ~/GenericTools/MergeFullHistos.C .
      root -l -q -b MergeFullHistos.C
This will create root files inside the Hypotheses/ folders for all analyses listed in MergeFullHistos. The next step consists of producing the root datacards (see the section above and the makeMuTauTemplates_Winter13.cc script). One needs to specify the variable and the input folder (typically the results folder). The datacards are stored in the results/ folder. Then everything is ready for the "limit" setting step. Move to the limit framework, e.g.:
      /home/llr/cms/davignon/TAU_ID/TauEnergyScale/CMSSW_6_1_1_limit/src
The folder structure has been set up so as to adapt to any of the conditions. A sample root datacard (for example the one with 0% shift) can replace:
      LIMITS-diTauVisMass-090114-mssm/bbb/nobtag/common/htt_mt.inputs-mssm-8TeV-0.root
but in practice this is not needed anymore. Use the following script to copy all the datacards to the folder (you need to modify it to use the correct variables and paths):
      ln -fs ~/GenericTools/CopyFullRootDatacards.C .
      root -l -b -q CopyFullRootDatacards.C
This will create *.root_SVG files from which the final datacards are made. For that, use the following script:
      ln -fs ~/GenericTools/RemoveAllHistosFromDatacards.C .
      root -l -q -b RemoveAllHistosFromDatacards.C
This will create *.root files (the datacards that will be used). Then, we must generate the .txt datacards with the list of nuisance parameters and constraints. Go to the following folder and copy some inputs/scripts:
      cd ../100/
      ln -fs ~/GenericTools/GenerateDatacards.C .
      root -l -q -b GenerateDatacards.C
The CreateDatacardForAlternativeHypothesesWithSignal.C script parses the input datacard as to remove the irrelevant lines (systematics, MSSM Higgs signal, etc.), while CloneDatacard.C creates the 61 datacards with correct paths to root datacards and correct normalizations.
To run the first stage of limit setting (fit), go to the src folder and link the following:
      ln -fs  ~/GenericTools/CreateBatchCombine.C .
This is the master script. You will need to specify which analyses to run inside the script. To run it, do:
     root -l -q -b CreateBatchCombine.C
This will create jobs inside the batch/ folder. To submit the jobs on LLR Tier3, simply do:
      cd batch/
      ./genSubmit.sh
      ./submitAll.sh
The next step consists of copying the scripts that compute the significances and the likelihood profile. The master script is:
      ln -fs ~/GenericTools/CreateBatchAnalysis.C .
      root -l -q -b CreateBatchAnalysis.C
This will create a folder named batch_analyze that contains the analysis jobs to be sent on the t3 using the genSubmit/submitAll mechanics.
The scripts will create a root file (ObservedLHRatio*.root) that serves as input for the drawing script, which is run automatically (it produces eps/root files named LikelihoodProfileData.eps/root containing the formatted likelihood value as a function of the T-ES).
It is possible to run control plots using this script:
      ln -fs ~/GenericTools/DrawControlPlot.C

Jet to Tau pT reweighting

Example code can be found in this folder:
     /home/llr/cms/ivo/HTauTauAnalysis/CMSSW_5_3_11_p6_NewTauID/src/LLRAnalysis/Limits/bin/JetTauFR
First, one should run the mutau analyzers for ptL2, inclusiveHighMt region, for the old and mva isolations, e.g. in
     /home/llr/cms/davignon/TAU_ID/Release_OlivierTES/CMSSW_5_3_11_p6_analysis/src/LLRAnalysis/Limits/bin
Then, create the folder JetTauFR/ and its subfolder plotJetTauFRCorrections/ and change the paths in
     computeJetTauFRScaleFactors.C
which is the script that should be run.

Merge fork with master in git

cd into the fork:
     git remote -v
     git remote add upstream https://github.com/LLRCMS/LLRHiggsTauTau
     git remote -v
     git fetch upstream
     git checkout master
     git merge upstream/master
     git push

Dump truth event information from MiniAOD

Toggle the switch:
     process.printTree
in:
     HiggsTauTauProducer.py

Jobs at LLR Tier3

Check job status
qstat @llrt3
qstat -f @llrt3 | grep Jobs
Delete all jobs
qstat @llrt3 | grep davignon | awk '{print "qdel "$1".in2p3.fr"}'
Pause/make jobs on hold
qstat @llrt3 | grep davignon | awk '{print "qhold "$1".in2p3.fr"}'
Delete a range of jobs
echo `seq -f "qdel %.0f.llrt3.in2p3.fr ;" 152701 152761`
Then add returns at the end of each line.
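Alternatively, a shell loop runs the deletions directly, without any editing:
     for i in $(seq 152701 152761); do qdel ${i}.llrt3.in2p3.fr; done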

To be able to fetch grid files (e.g. through xrootd) in t3 batch jobs, add this to the .sh script (after doing voms-proxy-init -voms cms):
     export X509_USER_PROXY=$HOME/.t3/proxy.cert
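One way to put the proxy there (voms-proxy-info -path prints the location of the current proxy file):
     voms-proxy-init -voms cms
     mkdir -p $HOME/.t3
     cp $(voms-proxy-info -path) $HOME/.t3/proxy.cert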

Jobs on the Grid

Check job status
     crab -status -c crabDir
     crab -status -c PFEmbed_mutau_2012D_ReReco22Jan_HTT_09Oct13_Trees_v1_p1/
To create and submit the jobs
     multicrab -create -submit 500 -cfg multicrab_run_mutau_09Oct13_MC_Backgrounds.cfg
To submit the remaining jobs (created but not submitted, e.g. because there were more than 500 jobs; see the status):
     multicrab -submit 500 -c multicrab_131212_140624/
If jobs are created but not submitted due to a server problem (waiting forever), you can remove the crab dir (PF*) and recreate the jobs:
     multicrab -create -submit 500 -cfg multicrab_run_mutau_09Oct13_MC_Backgrounds.cfg
A job succeeded if its status is 0.
If not, resubmit it by doing:
     crab -forceResubmit listJobs_crashed_from_status -c PF*
To get the output (log files only), do:
     crab -getoutput -c <dir_name>
To list the files on DPM, do:
     voms-proxy-init -voms cms
     rfdir /dpm/in2p3.fr/home/cms/trivcat/store/user/davignon/
To get the files after production, first create the folder Trees_NewTauIDVariables_NoTES/MuTau/Embedded/, then run the script:
     ./copy_all_subfolders_mkdir_L1_noDoublons.sh /dpm/in2p3.fr/home/cms/trivcat/store/user/inaranjo/HTauTau/Analysis/NewTauID/EleTau/PFEmbed/          /data_CMS/cms/htautau/PostMoriond/TREES_NewTauID/EleTau/EmbeddedPF/ Embedded
In the TREES_NewTauID folder, run:
ls | awk '{print "cd .. ; cd "$1" ; nohup ./copy.sh &"}'
To determine the files missing after download:
     ln -fs /home/llr/cms/davignon/GenericTools/checkAllTreesPresence_Generic.sh .
     ./checkAllTreesPresence_Generic.sh
To remove non-root files when finished, do:
ls | awk '{print "cd .. ; cd "$1" ; rm copy.sh ; rm list_dpm.txt ; rm nohup.out ; rm checkPresence.sh; rm blub.txt ;"}'
To ls:
voms-proxy-init -voms cms
dpns-ls  /dpm/in2p3.fr/home/cms/trivcat/store/user/inaranjo/
rfdir  /dpm/in2p3.fr/home/cms/trivcat/store/user/inaranjo/
To copy a single file:
rfcp  /dpm/in2p3.fr/home/cms/trivcat/store/user/inaranjo/FILE .

To cancel and resubmit the jobs, start by checking the status to get the list of jobs:
crab -status -c DYJets-50-madgraph-PUS10_MC_Bkg_HTT_09Oct2013_Trees_MuTau_v2/
Copy the list, then do:
crab -kill LIST_OF_JOBS -c DYJets-50-madgraph-PUS10_MC_Bkg_HTT_09Oct2013_Trees_MuTau_v2/
Remove the folder:
rm -rf DYJets-50-madgraph-PUS10_MC_Bkg_HTT_09Oct2013_Trees_MuTau_v2/
Do a status to check cancellation. Relaunch the jobs:
multicrab -create -submit 500 -cfg multicrab_run_mutau_09Oct13_MC_Backgrounds.cfg
Launch the jobs > 500 doing:
multicrab -submit 500 -c multicrab_131216_112912/
Kill all the jobs for a given multicrab
     multicrab -kill all multicrab_140311_150904/
To change the server used to launch the jobs, open the crab config file and add:
     server_name = cern_vocms83

DAS

To check the status of a published dataset, visit DAS. The syntax to search for a dataset is:
"dataset dataset=NAME_OF_THE_DATASET"
Choose the ph01 or ph02 dbs instance. An example search is:
dataset dataset=/TauPlusX/lmastrol-Data_2012D*/*
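The same query can also be run from the command line with das_client.py (assuming the tool is available in your environment); the instance is selected with the instance keyword:
     das_client.py --query="dataset dataset=/TauPlusX/lmastrol-Data_2012D*/* instance=prod/phys01" --limit=0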
Copying an AOD from the Grid (xrootd)
     source /afs/cern.ch/cms/cmsset_default.sh
     source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
     cmsrel CMSSW_6_0_0
     cd CMSSW_6_0_0
     cmsenv
     xrdcp root://xrootd.unl.edu//store/mc/Summer12_DR53X/SUSYBBHToTauTau_M-1000_8TeV-pythia6-tauola/AODSIM/PU_S10_START53_V7A-v1/0000/4468A238-94F6-E111-968D-00215E93C4A8.root .

To list files in eos:
      xrdfs xrootd-cms.infn.it ls -l -u /store/group/...

Blacklisting a server for crab

~/.cms_crab/0_GET_AvailableServerList

Testing tables

Percentage of SUSY ggH signal entering the 5 categories (mutau channel).

For each mass, the first row (red in the original) is without pT-reweighting; the second row (blue) is with pT-reweighting by POWHEG (pT-rw column).

| mass | pT-rw | Inclusive | No B-Tag Low | No B-Tag Medium | No B-Tag High | B-Tag Low | B-Tag High |
| 80 | no | 100% | 75.3% | 11.9% | 11.6% | 0.8% | 0.4% |
| 80 | yes | 100% | 77.1% | 11.7% | 10% | 0.8% | 0.4% |
| 90 | no | 100% | 73.9% | 15.7% | 9.3% | 0.8% | 0.3% |
| 90 | yes | 100% | 75.3% | 15.6% | 7.9% | 0.8% | 0.3% |
| 100 | no | 100% | 67.9% | 21.4% | 9.4% | 0.7% | 0.6% |
| 100 | yes | 100% | 69% | 21.6% | 8.1% | 0.7% | 0.6% |
| 110 | no | 100% | 61.1% | 28.1% | 10% | 0.4% | 0.4% |
| 110 | yes | 100% | 62.2% | 28.3% | 8.7% | 0.5% | 0.4% |
| 120 | no | 100% | 53.7% | 32.2% | 13.2% | 0.4% | 0.5% |
| 120 | yes | 100% | 54.8% | 32.6% | 11.7% | 0.4% | 0.5% |
| 130 | no | 100% | 47% | 35.2% | 16.7% | 0.5% | 0.5% |
| 130 | yes | 100% | 47.7% | 35.9% | 15.4% | 0.5% | 0.5% |
| 140 | no | 100% | 41.1% | 35.8% | 22% | 0.5% | 0.6% |
| 140 | yes | 100% | 41.5% | 36.7% | 20.7% | 0.5% | 0.6% |
| 160 | no | 100% | 32.2% | 32.4% | 34.2% | 0.4% | 0.8% |
| 160 | yes | 100% | 32.2% | 33.2% | 33.4% | 0.4% | 0.8% |
| 180 | no | 100% | 24.7% | 29.4% | 44.7% | 0.4% | 0.9% |
| 180 | yes | 100% | 24.6% | 30.1% | 44.1% | 0.4% | 0.9% |
| 200 | no | 100% | 20.1% | 24.4% | 54.2% | 0.3% | 1% |
| 200 | yes | 100% | 19.8% | 24.9% | 54% | 0.2% | 1% |
| 250 | no | 100% | 12.5% | 17.1% | 69.1% | 0.1% | 1.1% |
| 250 | yes | 100% | 12.2% | 17.2% | 69.4% | 0.1% | 1.1% |
| 300 | no | 100% | 7.9% | 12.1% | 78.6% | 0.1% | 1.2% |
| 300 | yes | 100% | 7.6% | 12.1% | 79.1% | 0.1% | 1.1% |
| 350 | no | 100% | 5.4% | 8.3% | 85% | 0.06% | 1.3% |
| 350 | yes | 100% | 5% | 8.1% | 85.6% | 0.06% | 1.3% |
| 400 | no | 100% | 4% | 6.3% | 88.3% | 0.08% | 1.2% |
| 400 | yes | 100% | 3.7% | 6.1% | 88.9% | 0.06% | 1.2% |
| 450 | no | 100% | 3.1% | 4.8% | 90.7% | 0.07% | 1.4% |
| 450 | yes | 100% | 2.9% | 4.5% | 91.2% | 0.06% | 1.3% |
| 500 | no | 100% | 2.4% | 3.8% | 92.6% | 0.03% | 1.2% |
| 500 | yes | 100% | 2.2% | 3.7% | 92.9% | 0.03% | 1.2% |
| 600 | no | 100% | 1.5% | 2.4% | 94.7% | 0.05% | 1.3% |
| 600 | yes | 100% | 1.4% | 2.4% | 94.9% | 0.04% | 1.3% |
| 700 | no | 100% | 1% | 1.8% | 95.7% | 0% | 1.5% |
| 700 | yes | 100% | 0.9% | 1.7% | 95.9% | 0% | 1.5% |
| 800 | no | 100% | 0.7% | 1.1% | 96.9% | 0% | 1.4% |
| 800 | yes | 100% | 0.6% | 0.9% | 97.2% | 0% | 1.3% |
| 900 | no | 100% | 0.5% | 1% | 97.2% | 0% | 1.3% |
| 900 | yes | 100% | 0.4% | 1% | 97.3% | 0% | 1.3% |
| 1000 | no | 100% | 0.3% | 0.5% | 98% | 0% | 1.2% |
| 1000 | yes | 100% | 0.2% | 0.5% | 98.1% | 0% | 1.2% |

Yield (for 1 pb-1 of luminosity) of SUSY ggH signal entering the 5 categories (mutau channel).

For each mass, the first row (red in the original) is without pT-reweighting; the second row (blue) is with pT-reweighting by POWHEG (pT-rw column).

| mass | pT-rw | Inclusive | No B-Tag Low | No B-Tag Medium | No B-Tag High | B-Tag Low | B-Tag High |
| 80 | no | 42.8 | 32.2 | 5.1 | 5 | 0.3 | 0.2 |
| 80 | yes | 40.9 | 31.5 | 4.8 | 4.1 | 0.3 | 0.2 |
| 90 | no | 72.2 | 53.4 | 11.4 | 6.7 | 0.6 | 0.2 |
| 90 | yes | 69.8 | 52.6 | 10.9 | 5.5 | 0.6 | 0.2 |
| 100 | no | 104 | 70.6 | 22.3 | 9.8 | 0.7 | 0.6 |
| 100 | yes | 102 | 70.2 | 22 | 8.2 | 0.7 | 0.6 |
| 110 | no | 139 | 84.6 | 38.9 | 13.8 | 0.6 | 0.6 |
| 110 | yes | 136 | 84.5 | 38.5 | 11.8 | 0.6 | 0.5 |
| 120 | no | 170 | 91.2 | 54.6 | 22.5 | 0.7 | 0.8 |
| 120 | yes | 168 | 91.9 | 54.6 | 19.6 | 0.7 | 0.8 |
| 130 | no | 206 | 96.9 | 72.6 | 34.5 | 1 | 1.1 |
| 130 | yes | 204 | 97.2 | 73.1 | 31.4 | 1 | 1 |
| 140 | no | 237 | 97.5 | 85 | 52.1 | 1.1 | 1.5 |
| 140 | yes | 236 | 97.9 | 86.5 | 48.7 | 1.1 | 1.5 |
| 160 | no | 296 | 95.1 | 95.7 | 101 | 1.2 | 2.4 |
| 160 | yes | 295 | 95 | 97.8 | 98.4 | 1.2 | 2.3 |
| 180 | no | 350 | 86.2 | 103 | 156 | 1.3 | 3.1 |
| 180 | yes | 350 | 86 | 105 | 154 | 1.3 | 3 |
| 200 | no | 392 | 78.8 | 95.7 | 212 | 1 | 4 |
| 200 | yes | 395 | 78.3 | 98.6 | 213 | 1 | 4 |
| 250 | no | 475 | 59.6 | 81.3 | 328 | 0.7 | 5.3 |
| 250 | yes | 484 | 59 | 83 | 336 | 0.6 | 5.2 |
| 300 | no | 524 | 41.6 | 63.6 | 412 | 0.7 | 6.2 |
| 300 | yes | 536 | 40.6 | 64.6 | 424 | 0.6 | 6 |
| 350 | no | 528 | 28.3 | 43.9 | 449 | 0.3 | 6.9 |
| 350 | yes | 544 | 27.3 | 44.1 | 466 | 0.3 | 6.9 |
| 400 | no | 534 | 21.5 | 33.8 | 471 | 0.4 | 6.6 |
| 400 | yes | 553 | 20.6 | 33.9 | 492 | 0.4 | 6.7 |
| 450 | no | 521 | 16 | 24.8 | 473 | 0.3 | 7.2 |
| 450 | yes | 542 | 15.6 | 24.5 | 494 | 0.3 | 7.3 |
| 500 | no | 484 | 11.5 | 18.5 | 448 | 0.1 | 5.7 |
| 500 | yes | 505 | 11.2 | 18.6 | 470 | 0.1 | 5.9 |
| 600 | no | 237 | 3.5 | 5.8 | 224 | 0.1 | 3.1 |
| 600 | yes | 247 | 3.3 | 5.9 | 235 | 0.09 | 3.3 |
| 700 | no | 170 | 1.7 | 3 | 163 | 0 | 2.6 |
| 700 | yes | 178 | 1.5 | 3.1 | 171 | 0 | 2.7 |
| 800 | no | 118 | 0.9 | 1.2 | 114 | 0 | 1.6 |
| 800 | yes | 124 | 0.8 | 1.1 | 121 | 0 | 1.6 |
| 900 | no | 79.6 | 0.4 | 0.8 | 77.4 | 0 | 1 |
| 900 | yes | 83.9 | 0.4 | 0.8 | 81.6 | 0 | 1.1 |
| 1000 | no | 54.7 | 0.1 | 0.3 | 53.6 | 0 | 0.7 |
| 1000 | yes | 58.3 | 0.1 | 0.3 | 57.2 | 0 | 0.7 |

Percentage of SUSY ggH signal entering the 5 categories (etau channel).

For each mass, the first row (red in the original) is without pT-reweighting; the second row (blue) is with pT-reweighting by POWHEG (pT-rw column).

| mass | pT-rw | Inclusive | No B-Tag Low | No B-Tag Medium | No B-Tag High | B-Tag Low | B-Tag High |
| 80 | no | 100% | 74.1% | 11.3% | 13.8% | 0.5% | 0.3% |
| 80 | yes | 100% | 76.7% | 10.9% | 11.6% | 0.5% | 0.3% |
| 90 | no | 100% | 71.5% | 16.6% | 10.9% | 0.8% | 0.2% |
| 90 | yes | 100% | 73.5% | 16.3% | 9.1% | 0.9% | 0.2% |
| 100 | no | 100% | 65.9% | 22.7% | 10.8% | 0.4% | 0.2% |
| 100 | yes | 100% | 67.6% | 22.9% | 9% | 0.4% | 0.2% |
| 110 | no | 100% | 59.9% | 27.9% | 11.3% | 0.4% | 0.5% |
| 110 | yes | 100% | 61.2% | 28.4% | 9.6% | 0.4% | 0.4% |
| 120 | no | 100% | 50.8% | 33.3% | 14.7% | 0.5% | 0.6% |
| 120 | yes | 100% | 52.1% | 34% | 12.8% | 0.6% | 0.6% |
| 130 | no | 100% | 44.6% | 37% | 17.1% | 0.5% | 0.7% |
| 130 | yes | 100% | 45.2% | 37.9% | 15.7% | 0.5% | 0.7% |
| 140 | no | 100% | 38.2% | 36.2% | 24.5% | 0.4% | 0.7% |
| 140 | yes | 100% | 38.6% | 37.3% | 22.9% | 0.4% | 0.7% |
| 160 | no | 100% | 29.3% | 33.3% | 36.5% | 0.3% | 0.7% |
| 160 | yes | 100% | 29.3% | 34.1% | 35.7% | 0.3% | 0.6% |
| 180 | no | 100% | 21.5% | 29% | 48.2% | 0.3% | 0.9% |
| 180 | yes | 100% | 21.2% | 29.7% | 47.9% | 0.3% | 0.9% |
| 200 | no | 100% | 17.1% | 24.2% | 57.5% | 0.2% | 1% |
| 200 | yes | 100% | 16.6% | 24.4% | 57.7% | 0.2% | 1% |
| 250 | no | 100% | 10.1% | 15.9% | 72.7% | 0.1% | 1.2% |
| 250 | yes | 100% | 9.5% | 15.7% | 73.6% | 0.1% | 1.1% |
| 300 | no | 100% | 6.4% | 10.7% | 81.6% | 0.1% | 1.1% |
| 300 | yes | 100% | 5.9% | 10.3% | 82.5% | 0.1% | 1.1% |
| 350 | no | 100% | 4.1% | 7.6% | 87% | 0.07% | 1.3% |
| 350 | yes | 100% | 3.7% | 7.2% | 87.9% | 0.06% | 1.2% |
| 400 | no | 100% | 3.1% | 5.2% | 90.5% | 0.06% | 1.2% |
| 400 | yes | 100% | 2.7% | 4.9% | 91.1% | 0.06% | 1.2% |
| 450 | no | 100% | 2.5% | 3.6% | 92.6% | 0.02% | 1.4% |
| 450 | yes | 100% | 2.1% | 3.4% | 93.2% | 0.02% | 1.3% |
| 500 | no | 100% | 1.6% | 3.1% | 93.9% | 0.02% | 1.4% |
| 500 | yes | 100% | 1.4% | 2.9% | 94.4% | 0.01% | 1.3% |
| 600 | no | 100% | 1.2% | 1.8% | 95.7% | 0.02% | 1.3% |
| 600 | yes | 100% | 1% | 1.6% | 96% | 0.02% | 1.3% |
| 700 | no | 100% | 0.8% | 1.3% | 96.4% | 0.02% | 1.5% |
| 700 | yes | 100% | 0.7% | 1.2% | 96.5% | 0.02% | 1.6% |
| 800 | no | 100% | 0.4% | 0.8% | 97.2% | 0% | 1.5% |
| 800 | yes | 100% | 0.4% | 0.8% | 97.4% | 0% | 1.4% |
| 900 | no | 100% | 0.3% | 0.9% | 97.3% | 0% | 1.5% |
| 900 | yes | 100% | 0.1% | 0.8% | 97.5% | 0% | 1.5% |
| 1000 | no | 100% | 0.3% | 0.5% | 97.9% | 0% | 1.2% |
| 1000 | yes | 100% | 0.3% | 0.5% | 97.9% | 0% | 1.2% |

Yield (for 1 pb-1 of luminosity) of SUSY ggH signal entering the 5 categories (etau channel).

For each mass, the first row (red in the original) is without pT-reweighting; the second row (blue) is with pT-reweighting by POWHEG (pT-rw column).

| mass | pT-rw | Inclusive | No B-Tag Low | No B-Tag Medium | No B-Tag High | B-Tag Low | B-Tag High |
| 80 | no | 15.4 | 11.4 | 1.7 | 2.1 | 0.08 | 0.04 |
| 80 | yes | 14.4 | 11 | 1.6 | 1.7 | 0.08 | 0.04 |
| 90 | no | 30.3 | 21.6 | 5 | 3.3 | 0.3 | 0.07 |
| 90 | yes | 28.8 | 21.2 | 4.7 | 2.6 | 0.3 | 0.06 |
| 100 | no | 46.1 | 30.3 | 10.5 | 5 | 0.2 | 0.08 |
| 100 | yes | 44.3 | 30 | 10.1 | 4 | 0.2 | 0.08 |
| 110 | no | 63.4 | 38 | 17.7 | 7.2 | 0.2 | 0.3 |
| 110 | yes | 61.4 | 37.6 | 17.4 | 5.9 | 0.2 | 0.3 |
| 120 | no | 85.5 | 43.4 | 28.5 | 12.6 | 0.5 | 0.5 |
| 120 | yes | 83.5 | 43.5 | 28.3 | 10.7 | 0.5 | 0.5 |
| 130 | no | 103 | 45.8 | 38 | 17.6 | 0.5 | 0.7 |
| 130 | yes | 101 | 45.5 | 38.2 | 15.8 | 0.5 | 0.7 |
| 140 | no | 123 | 47 | 44.4 | 30.1 | 0.5 | 0.9 |
| 140 | yes | 120 | 46.4 | 44.9 | 27.6 | 0.5 | 0.8 |
| 160 | no | 162 | 47.5 | 53.9 | 59.1 | 0.5 | 1.1 |
| 160 | yes | 160 | 47 | 54.7 | 57.3 | 0.5 | 1 |
| 180 | no | 196 | 42.2 | 57.1 | 94.8 | 0.6 | 1.8 |
| 180 | yes | 195 | 41.4 | 57.9 | 93.5 | 0.6 | 1.7 |
| 200 | no | 227 | 38.9 | 54.9 | 131 | 0.4 | 2.4 |
| 200 | yes | 228 | 37.8 | 55.6 | 131 | 0.4 | 2.3 |
| 250 | no | 292 | 29.5 | 46.6 | 212 | 0.3 | 3.4 |
| 250 | yes | 294 | 28 | 46.2 | 216 | 0.3 | 3.2 |
| 300 | no | 334 | 21.3 | 35.7 | 272 | 0.4 | 3.8 |
| 300 | yes | 340 | 20.2 | 35.1 | 281 | 0.4 | 3.8 |
| 350 | no | 342 | 14 | 26 | 298 | 0.2 | 4.3 |
| 350 | yes | 352 | 13 | 25.4 | 309 | 0.2 | 4.2 |
| 400 | no | 357 | 11 | 18.5 | 323 | 0.2 | 4.3 |
| 400 | yes | 368 | 10 | 18 | 335 | 0.2 | 4.3 |
| 450 | no | 349 | 8.6 | 12.4 | 323 | 0.06 | 4.7 |
| 450 | yes | 362 | 7.5 | 12.2 | 338 | 0.06 | 4.9 |
| 500 | no | 323 | 5.2 | 9.9 | 304 | 0.06 | 4.4 |
| 500 | yes | 337 | 4.7 | 9.6 | 318 | 0.05 | 4.5 |
| 600 | no | 160 | 1.9 | 2.8 | 153 | 0.03 | 2.2 |
| 600 | yes | 167 | 1.7 | 2.8 | 161 | 0.04 | 2.2 |
| 700 | no | 117 | 0.9 | 1.5 | 112 | 0.02 | 1.7 |
| 700 | yes | 122 | 0.9 | 1.5 | 118 | 0.02 | 1.9 |
| 800 | no | 79.5 | 0.4 | 0.7 | 77.3 | 0 | 1.2 |
| 800 | yes | 84 | 0.3 | 0.7 | 81.8 | 0 | 1.2 |
| 900 | no | 56 | 0.1 | 0.5 | 54.4 | 0 | 0.8 |
| 900 | yes | 59.5 | 0.09 | 0.5 | 58 | 0 | 0.9 |
| 1000 | no | 36.8 | 0.1 | 0.2 | 36 | 0 | 0.5 |
| 1000 | yes | 38.8 | 0.1 | 0.2 | 38 | 0 | 0.5 |

Useful tips

Version of SVfit

To know the version of SVfit, do:
     cat TauAnalysis/CandidateTools/.admin/CVS/Entries

Dumping content of tree to file

     ########################################## 
     # Read and dump the tree entries 
     ########################################## 
     t.GetPlayer().SetScanRedirect(kTRUE) # You may have to cast t.GetPlayer() to a TTreePlayer*
     t.GetPlayer().SetScanFileName("output.txt")
     t.Scan("*")

DPM

To know the space taken on dpm:
     rfdir -r /dpm/datagrid.cea.fr/home/cms/trivcat/store/user/davignon| awk '{s+=$5}END{print s/1024"TB"}'

Shell commands

Floating point operations:
      MIN=`echo "$hour * 60" | bc`
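Note that bc truncates integer division by default; set a scale to keep decimals:
      echo "scale=2; 125 / 60" | bc   # prints 2.08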
Check if file exists:
     if [ ! -f /tmp/foo.txt ]; then
         echo "File not found!"
     fi

Int_t to TString

      Int_t i = 10 ;
      stringstream ss;
      ss << i;
      TString Appendix = TString(ss.str());// "10"

string to int

      std::istringstream ss(thestring);
      ss >> thevalue;

Open a file with xrootd

     TFile *f =TFile::Open("root://polgrid4.in2p3.fr//store/data/Run2016C/SingleMuon/RAW-RECO/MuTau-PromptReco-v2/000/276/097/00000/52E4953C-FB40-E611-A870-02163E011CE7.root") 

Finding a file

I find this useful for quickly seeing which files contain a search term. I would normally limit the files searched with a command such as:
find . -iname '*' | xargs grep 'string' -sl
Another common search for me, is to just look at the recently updated files:
find . -iname '*' -mtime -1 | xargs grep 'string' -sl
would find only files edited today, whilst the following finds the files older than today:
find . -iname '*' -mtime +1 | xargs grep 'string' -sl
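With GNU grep, the content search can also be done in one command, without find:
     grep -rl 'string' .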

Get current directory

In C++/ROOT
     gSystem->pwd();
In Python
     import os
     print os.getcwd()

Tarring a folder

tar -cvf output.tar /dirname
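To list the contents of the archive without extracting it:
     tar -tvf output.tar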

Getting a file from the Grid

First we need to translate the logical filename (some can be found here) into a physical filename. You need to know the node (it can be determined from DAS). Then the "lfn2pfn" command can be used, e.g.
wget --no-check-certificate http://cmsweb.cern.ch/phedex/datasvc/xml/prod/lfn2pfn\?node=T2_FR_GRIF_IRFU\&protocol=srmv2\&lfn=/store/user/inaranjo/DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/DY1JetsToLL_MC_Bkg_HTT_09Oct2013_PAT_v1/91df7bea76c5e5c57e33843f3c738912/patTuples_LepTauStream_1000_1_5wo.root
This creates a file in which the physical filename is stored. If the file is at IRFU, you can use rfcp to get it:
export DPM_HOST=node12.datagrid.cea.fr
export DPNS_HOST=node12.datagrid.cea.fr
rfcp /dpm/datagrid.cea.fr/home/cms/trivcat/store/user/inaranjo/DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph/DY1JetsToLL_MC_Bkg_HTT_09Oct2013_PAT_v1/91df7bea76c5e5c57e33843f3c738912/patTuples_LepTauStream_1000_1_5wo.root .
If it is elsewhere, you can try to use lcg-cp:
lcg-cp -v -b -D srmv2 file:///tmp/test   srm://fgitb315.fnal.gov:10443/srm/v2/server?SFN=/cache/test/test100

Get the list of the 15 heaviest directories

       du -xhS | sort -h | tail -n15

Procedure for NewNtuplettes at ccage

  • Download the d3pd_skimming package
export CVS_RSH=/usr/bin/ssh;
export CVSROOT=<user>@lpnp110.in2p3.fr:/var/cvs;
cvs -d <user>@lpnp110.in2p3.fr:/var/cvs co -d d3pd_skimming Higgs/d3pd_skimming
  • Source root, for example
source /sps/atlas/d/davignon/setup_root.sh
cd d3pd_skimming/NewNtuplettes
root -l
.L SOW.C++
.q
  • If you intend to run on data, modify main.C (line 137) to point towards the location of the input Ntuples
  • Compile the package
make
  • Create datacards:
for data, an example to be stored as Parameters_data/parameters_data.txt:
data
PYTHIA
ZHvv
140
NN
rel17
periodB
periodM
2011
NewSel
a
a
p868
for MC, an example to be stored as Parameters_gfusion/parameters_120.txt:
MC
PYTHIA
gfusion
120
NN
rel17
periodB
periodM
2011
mc11c
/sps/atlas/d/davignon/Tuples/Skimmed/mc/mc11c/p868/
mc11_7TeV.116610.PowHegPythia_ggH120_gamgam.merge.NTUP_PHOTON.e873_s1310_s1300_r3043_r2993_p868_tid693293_00
p868
  • Run the code
for data:
./Ntuplettes data dataM
for MC:
./Ntuplettes gfusion 120

Procedure for jet categories limit extraction

  • Skim data and MC
  • Define selection for jet categories
  • Produce Ntuplettes by defining a new Category variable
  • Get signal yields with GetAllYieldsUserFriendly.C
  • Do the signal fits using José's global fit macro
  • Get the number of backgrounds in each category
  • Do the background fits using Giovanni's macro
  • Get the parameters for the datacards

Global fit of signal

  • Get the Ntuplettes and put them in InputTuples/
  • ./changeNames.zsh to change the names of the Ntuplettes
  • change the category inside SignalFitSelec.cc and adapt the preselection
  • echo | awk -f ao.awk, then copy/paste the output to generate separate datacards
  • cat *.dat > dataSets/.dat
  • have the correct reference fits in referenceFits/
  • launch the fit:
root -l
?> .L myResolution.cc+
?> myResolution(Cat,"<CATNAME>")

Fitting of background

root -l
?> .L DatacardFit.C+
?> DatacardFit("<CATNAME>","<DATA FILENAME>")
  • Make the necessary changes to run_bkgfit.C and bkgfit.C
  • Run the script
root -l run_bkgfit.C

MC Generators

Setting up LHAPDF

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/sps/atlas/d/davignon/lhapdf/lib
/sps/atlas/d/davignon/HNNLO/hnnlo-v1.3/bin

Downloading PDFs

source lhapdf_env.sh
lhapdf-getdata MSTW2008nlo68cl.LHgrid

Running HNNLO

hnnlo < infile

Useful tools

Running the skimming in a nutshell

exit
lcg_env
voms-proxy-init -voms atlas
zsh
//forget about this source /afs/in2p3.fr/sftgroup/atlassl5/ddm/latest/setup.sh;
source /sps/atlas/d/davignon/dq2_setup.sh;
source /sps/atlas/d/davignon/setup_root53203.sh;
export CVS_RSH=/usr/bin/ssh;
export CVSROOT=davignon@lpnp110.in2p3.fr:/var/cvs;
cd /sps/atlas/d/davignon/NewSkimmer/d3pd_skimming/
cd GoodRunsLists/cmt
make -f Makefile.Standalone
cd ../../
cd PileupReweighting/cmt
make -f Makefile.Standalone
cd ../../
make
source setup.sh
cp /afs/cern.ch/atlas/groups/EGamma/EGammaGRL/Eg_standard_v5/data11_7TeV.periodAllYear_DetStatus-v36-pro10_CoolRunQuery-00-04-08_Eg_standard.xml grl.xml
bash SelectHGGDCache.sh data11_7TeV.00186877.physics_Egamma.merge.NTUP_PHOTON.r2713_p705_p682_p868
bash SelectHGGDCache.sh <container_name>
or
bash SelectHGGDCache.sh container_list.txt

Versions of Athena

ls /afs/cern.ch/atlas/software/builds/AtlasProduction

TruthD3PD Package

from lxplus:
source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh;
voms-proxy-init -voms atlas
source pandaSetup.sh
export AtlasSetup=/afs/cern.ch/atlas/software/dist/AtlasSetup
alias asetup='source $AtlasSetup/scripts/asetup.sh'
asetup 17.6.0.1,AtlasProduction,here,builds
cmt co -r trunk $SVNOFF/PhysicsAnalysis/D3PDMaker/TruthD3PDMaker
cd TruthD3PDMaker/cmt
cmt make
cd ../share
pathena TruthD3PDfromEVGEN_topOptions.py "--athenaTag=17.6.0.1" "--inDS=mc11_7TeV.126389.Sherpa2DP20.evgen.EVNT.e1028/" "--outDS=user.davignon.126389.test"
pathena TruthD3PDfromEVGEN_topOptions.py "--athenaTag=17.6.0.1" "--inDS=mc11_7TeV.113714.SherpaY4JetsPt35.evgen.EVNT.e972/" "--outDS=user.davignon.113714.test"
pathena TruthD3PDfromEVGEN_topOptions.py "--athenaTag=17.6.0.1" "--inDS=mc11_7TeV.126372.SherpaY4JetsPt15.evgen.EVNT.e972/" "--outDS=user.davignon.126372.test"
pathena TruthD3PDfromEVGEN_topOptions.py "--athenaTag=17.6.0.1" "--inDS=mc11_7TeV.105802.JF17_pythia_jet_filter.evgen.EVNT.e825/" "--outDS=user.davignon.105802.test"
pbook

Run Sherpa

source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh;
voms-proxy-init -voms atlas
source pandaSetup.sh
export AtlasSetup=/afs/cern.ch/atlas/software/dist/AtlasSetup
alias asetup='source $AtlasSetup/scripts/asetup.sh'
asetup 17.6.0.1,AtlasProduction,here,builds
asetup 17.1.3.2,AtlasProduction,here,builds
pathena --trf="Generate_trf.py ecmEnergy=7000 runNumber=126389 firstEvent=1 randomSeed=%RNDM:100 jobConfig=MC11.126389.Sherpa2DP20_2j_80GeV.py outputEVNTFile=%OUT.EVNT.pool.root"  --outDS user.davignon.test.Sherpa.2 --split=50
where the jobOption file looks like:
# prepared by Frank Siegert, Lydia Roos November 2011. 
include ( "MC11JobOptions/MC11_Sherpa_Common.py" )

"""
(run){
  SCALES VAR{Abs2(p[2]+p[3])/4.0}
  ME_QED = Off
  QCUT:=7.0
}(run)

(processes){
  Process 21 21 -> 22 22;
  Loop_Generator gg_yy;
  End process;

  Process 93 93 -> 22 22 93{2};
  Order_EW 2;
  CKKW sqr(QCUT/E_CMS)/(1.0+sqr(QCUT/0.6)/(Abs2(p[2]+p[3])/4.0));
  Integration_Error 0.1;
  End process;
}(processes)

(selector){
  PT      22      14.0 E_CMS
  Mass    22  22  80.0 E_CMS
  DeltaR  22  93  0.3  100.0
}(selector)
"""
#-------------------------------------------------------------
# Filter
#-------------------------------------------------------------

from GeneratorFilters.GeneratorFiltersConf import DirectPhotonFilter
topAlg += DirectPhotonFilter()

DirectPhotonFilter = topAlg.DirectPhotonFilter
DirectPhotonFilter.Ptcut = 20000.
DirectPhotonFilter.Etacut =  2.7
DirectPhotonFilter.NPhotons = 2

#---------------------------------------------------------------
# POOL / Root output
#---------------------------------------------------------------

StreamEVGEN.RequireAlgs +=  [ "DirectPhotonFilter" ]

from MC11JobOptions.SherpaFFEvgenConfig import evgenConfig
evgenConfig.efficiency = 0.15
#evgenConfig.minevents = 5000
evgenConfig.weighting = 0

an example can be found here: /afs/cern.ch/user/l/lroos/scratch0/egammaMCProd/17.2.3.2/

Check outs

RootCore
svn co svn+ssh://davignon@svn.cern.ch/reps/atlasoff/PhysicsAnalysis/D3PDTools/RootCore/tags/RootCore-00-00-29 RootCore
svn co svn+ssh://davignon@svn.cern.ch/reps/atlasoff/Reconstruction/Jet/JetUncertainties/tags/JetUncertainties-00-05-02 JetUncertainties
svn co svn+ssh://davignon@svn.cern.ch/reps/atlasgrp/CombPerf/JetETMiss/JetCalibrationTools/ApplyJetCalibration/tags/ApplyJetCalibration-00-01-02 ApplyJetCalibration
svn co svn+ssh://davignon@svn.cern.ch/reps/atlasoff/Reconstruction/Jet/JetResolution/tags/JetResolution-01-00-00 JetResolution
svn co svn+ssh://davignon@svn.cern.ch/reps/atlasgrp/CombPerf/JetETMiss/JetCalibrationTools/ApplyJetResolutionSmearing/tags/ApplyJetResolutionSmearing-00-00-03 ApplyJetResolutionSmearing
svn co svn+ssh://davignon@svn.cern.ch/reps/atlasoff/Reconstruction/egamma/egammaAnalysis/tags/egammaAnalysisUtils-00-03-19 egammaAnalysisUtils
svn co svn+ssh://davignon@svn.cern.ch/reps/atlasphys/Physics/Higgs/HSG1/Notes/trunk/12_April2012
svn co svn+ssh://davignon@svn.cern.ch/reps/atlasoff/PhysicsAnalysis/StandardModelPhys/PhotonAnalysisUtils/tags/PhotonAnalysisUtils-00-03-45 PhotonAnalysisUtils

svn co svn+ssh://davignon@svn.cern.ch/reps/atlasoff/PhysicsAnalysis/D3PDMaker/TruthD3PDMaker/trunk 
cmt co -r trunk $SVNOFF/PhysicsAnalysis/D3PDMaker/TruthD3PDMaker

svn ls REPOSITORY/PATH

Repositories

  /atlasoff
  /atlasperf contains everything from atlasgrp under /CombPerf
  /atlasinst contains everything from atlasgrp under /Institutes
  /atlasphys contains everything from atlasgrp under /Physics (*)
  /atlasgroups contains atlasgrp minus above. 

Setup Athena

export AtlasSetup=/afs/cern.ch/atlas/software/dist/AtlasSetup
alias asetup='source $AtlasSetup/scripts/asetup.sh'
. /afs/cern.ch/sw/lcg/external/gcc/4.3.2/x86_64-slc5/setup.sh
cd /afs/cern.ch/sw/lcg/app/releases/ROOT/5.28.00/x86_64-slc5-gcc43-opt/root/
. bin/thisroot.sh 

export AtlasSetup=/afs/cern.ch/atlas/software/dist/AtlasSetup
alias asetup='source $AtlasSetup/scripts/asetup.sh'
asetup 17.2.1.1,32,here,builds

CVS at Lyon

Setup
export CVS_RSH=/usr/bin/ssh
export CVSROOT=davignon@lpnp110.in2p3.fr:/var/cvs
cvs -d davignon@lpnp110.in2p3.fr:/var/cvs co -d d3pd_skimming Higgs/diphoton_skimming 
Commit
cvs ci -m "<message>"

SVN April notes

export SVNPHYS=svn+ssh://svn.cern.ch/reps/atlasphys
svn co $SVNPHYS/Physics/Higgs/HSG1/Notes/trunk/12_April2012/ workdir
cd workdir/

Setup DQ2 at Lyon

lcg_env
voms-proxy-init -voms atlas -valid 168:00
source /afs/in2p3.fr/sftgroup/atlassl5/ddm/latest/setup.sh

Make a directory

mkdir -m 3775 LeRepertoire 

Make a Workspace from Hfitter's datacard

model = HftModelBuilder::Create("model","../datacards/hfitter_tadatacard.dat")
model->MakeWorkspace("muSignal", "output.root") 

Model inspector

Following the creation of the workspace:
root -l
?> .L ModelInspector.C+
?> ModelInspector("output.root","myWorkspace","ModelConfig::mc")
?> ... 

Significance from p0

TMath::NormQuantile(1.-p0)
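For example, a one-sided p0 of 1.35e-3 gives TMath::NormQuantile(1.-0.00135) ≈ 3, i.e. a 3 sigma significance.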

LADIeS

LArCellEmpty
asetsq
python runBadCellList.py -r 209254
UPD4 flagging of beam-background cells: if a cluster comes with a lot of events, check the fraction of events: 1) if > 85%, flag it as sporadic; 2) if < 85%, flag it as high noise, high gain.

Download from bash

wget http://tmva.sourceforge.net/downloads/TMVA-v4.1.2.tgz

RootCore

source setup_root53203.sh
cd TestRootCore/packages/
RootCore/scripts/build.sh packages.txt
source RootCore/scripts/setup.sh
cd /sps/atlas/d/davignon/NewSkimmerHCPCategories/d3pd_skimming
source $ROOTCOREDIR/scripts/compile.sh
cd /sps/atlas/d/davignon/NewSkimmerHCPCategories/d3pd_skimming
make

Useful commands

int to string

       int Number = 123;       // number to be converted to a string
       string Result;          // string which will contain the result
       ostringstream convert;   // stream used for the conversion
       convert << Number;      // insert the textual representation of 'Number' in the characters in the stream
       Result = convert.str(); // set 'Result' to the contents of the stream

Git stuff

Display url of remote

       git config --get remote.origin.url

-- OlivierDavignon - 06-Jan-2012
