The aim of this page is to collect and provide useful information for the intercalibration of ECAL using electrons.

## The E/p method

In this section a brief explanation of the method is given. Some details are skipped here, since a complete explanation of the E/p method can be found in [1].

The idea behind this method is to select a sample of isolated electrons from W --> ev and Z --> ee decays, and to use them to perform a channel-by-channel intercalibration (IC). This is done by comparing the supercluster energy of the electron with its tracker momentum, and constraining the E/p ratio to be as close as possible to the physical target value of 1. The basic assumption of this method is that the measurement of the tracker momentum is unbiased. Therefore, biases or inaccuracies in the momentum measurement have to be taken into account as systematic effects.

The algorithm used in the calibration with electrons is the so-called "L3 algorithm". It is an iterative method, where at each step the channel intercalibrations are computed as a correction to those calculated at the previous iteration. The formula that gives the IC values is the following:

ic_i^N = ic_i^(N-1) × [ Σ_{j=1..Ne} w_ij (pTk/Esc)_j f(Esc/pTk)_j ] / [ Σ_{j=1..Ne} w_ij f(Esc/pTk)_j ]

where:

• ic_i^N is the intercalibration coefficient of crystal i at iteration N
• w_ij is a weight representing the fraction of the supercluster energy of electron j that is contained in crystal (rechit) i
• (pTk/Esc)_j is the ratio between the tracker momentum and the supercluster energy of electron j
• f(Esc/pTk) is the event weight, read from the distribution of Esc/pTk of the electrons in the eta-ring corresponding to the eta of the electron seed crystal
• Ne is the total number of electrons used in the calibration
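The update above can be sketched in a few lines of numpy. This is only a minimal illustration of the weighted-average step, not the actual FastCalibratorEB/EE implementation; all function and variable names here are hypothetical:

```python
import numpy as np

def l3_iteration(ic, w, p_over_e, f_weight):
    """One iteration of a simplified L3 update.

    ic       : (n_crystals,) current intercalibration constants
    w        : (n_electrons, n_crystals) fraction of each electron's
               supercluster energy deposited in each crystal
    p_over_e : (n_electrons,) tracker momentum / supercluster energy
    f_weight : (n_electrons,) event weight from the f(Esc/pTk)
               distribution of the corresponding eta-ring
    """
    num = (w * (p_over_e * f_weight)[:, None]).sum(axis=0)
    den = (w * f_weight[:, None]).sum(axis=0)
    # crystals with no associated hits keep their previous IC
    corr = np.where(den > 0, num / np.maximum(den, 1e-12), 1.0)
    return ic * corr
```

At each loop the f(Esc/pTk) weights would be recomputed from the updated E/p distribution before calling the function again.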

The supercluster energy (Esc) of each electron is estimated from its rechit collection:

Esc = F_{e,η} × Σ_{i ∈ SC} ic_i × LC_i(t) × A_i

where F_{e,η} is the supercluster energy correction depending on the electron shower and on eta, ic_i is the intercalibration constant of crystal i, LC_i(t) is its laser correction at the time t of the event, and A_i is its rechit amplitude.

The key ingredient of the algorithm is the f(Esc/pTk) distribution. The events are reweighted according to the E/p distribution of the electrons: hence, an electron with an E/p ratio near the peak has a large weight, while the events in the tails of the distribution count less in the method. The E/p distribution is updated at each iteration, since the energies of the single rechits are updated using the new IC constants derived at the previous loop.

### Calibration of momentum (branch: lbrianza/master of ECALELF)

The E/p calibration algorithm is based on the assumption that the measurement of the tracker momentum is unbiased. However, this is not perfectly true. In fact, a modularity of the electron tracker momentum (pTk) is generally observed along phi, especially in the endcap, and it is related to the tracker structures. This behaviour of pTk must be corrected in order to obtain an accurate calibration.

In order to correct the momentum, events from Z --> ee decay are used. The algorithm works in the following way:

• Events are divided into 360 bins in phi. For each phi-bin, the distribution of m(ee)^2 / m(Z)^2 is built
• A reference distribution of m(ee)^2 / m(Z)^2 is built, using all the electrons
• The reference template is used to fit the distribution of each phi-bin, extracting a scale parameter which represents the momentum scale variation
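The scale extraction can be sketched as follows. This is only a toy stand-in for the template fit performed by CalibrationMomentum.cpp: here the scale is found with a simple scan over the mean, and all names are hypothetical:

```python
import numpy as np

def fit_scale(ref_values, bin_values):
    """Find the scale s that brings the mean of the per-phi-bin
    m(ee)^2/m(Z)^2 values back to the reference mean.

    A chi2-like scan over a scale grid, as a crude stand-in for
    the template fit used in the real code.
    """
    scan = np.linspace(0.95, 1.05, 201)
    ref_mean = np.mean(ref_values)
    chi2 = [(np.mean(np.asarray(bin_values)) * s - ref_mean) ** 2 for s in scan]
    return float(scan[int(np.argmin(chi2))])
```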

The code used for the momentum calibration is EOverPCalibration/bin/CalibrationMomentum.cpp. You have to run it separately on the electron and positron samples, because their momentum corrections are different. Here is how to do it:

• Mount eos with the command: "eosmount eos". In this way the code will run much faster.
• Open the file EOverPCalibration/cfg/calibrationMomentum_cfg.py
• Set chargeValue = cms.int32(-1) if you want to run on electrons (or cms.int32(1) if you want to run on positrons)
• Run the code with:
```
CalibrationMomentum cfg/calibrationMomentum_cfg.py
```
• The file with the corrections will be created in the output/ folder.
• Now re-run the code on the positron sample (or on the electron sample, if you ran on positrons first). Warning: the output file will have the same name as before, so be careful not to overwrite it.
• Unmount eos: "eosumount eos"

cfg/calibrationMomentum_cfg.py is a Python file containing the list of options for the code. The most important ones are infileDATA and infileMC, which tell the code the location of the .txt files listing the paths of the data and MC ntuples. (Actually only data are used, so only one file matters; edit it if you want to change the list of ntuples.)

### E/p iterative cut in the algorithm

The momentum calibration procedure is not sufficient to completely remove the phi-modularity observed in the intercalibration map, especially in the endcap. Several studies [2][3], and most usefully [4], performed between Run I and Run II have shown that these phi-structures are mainly due to events where the electrons have an E/p ratio far from 1. In order to further remove these structures, a cut on the E/p distribution is applied in the algorithm. The cut is iterative: at each loop the E/p distribution is updated and the E/p cut is re-applied (always using the same window).

The size of this window is a parameter that can be given as input. Following [4], the chosen values are 0.15 for EB and 0.20 for EE. This means that in the calibration procedure, if an event has an E/p ratio larger than 1.15 or smaller than 0.85 (in the barrel), a weight of 0 is assigned to that event. Otherwise, the weight assigned to the event is determined by the E/p distribution.

In the first iteration the E/p cut is not applied, in order not to remove (through the E/p cut) the structures with IC values very far from 1.
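Put together, the weight assigned to an event (E/p window cut plus f(E/p) reweighting) behaves like this hypothetical helper:

```python
def ep_weight(ep, f_weight, window, iteration):
    """Sketch of the per-event weight in the calibration loop.

    ep        : E/p of the event
    f_weight  : weight read from the f(E/p) distribution
    window    : half-width of the E/p cut (0.15 in EB, 0.20 in EE)
    iteration : loop index; the cut is skipped at the first
                iteration (iteration == 0)
    """
    if iteration > 0 and abs(ep - 1.0) > window:
        return 0.0  # event outside the E/p window: rejected
    return f_weight
```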

## Instructions: how to use the code

The code is integrated inside the ECALELF framework. Detailed instructions on how to use ECALELF can be found at https://twiki.cern.ch/twiki/bin/viewauth/CMS/ECALELF, and the code is at https://github.com/ECALELFS/ECALELF

### ECALELF ntuples. A brief guide

A detailed documentation of the content of the ECALELF ntuples can be found inside the ECALELF documentation in the github package. The electron ntuples are structured into three different trees, namely:

• selected
• extraCalibTree
• eleIDTree

The "selected" tree is the main one: it contains the main observables related to the electrons (pt, eta, energy, etc.). The "extraCalibTree" contains instead quantities related to the single rechits (energy, laser correction values, etc.). Finally, the eleIDTree includes information related to electron ID variables; this tree is not necessary for the calibration with electrons. The ntuples include events from W->ev and Z->ee. Each branch related to the electrons is an array of 3 positions, in order to store information for a maximum of 3 electrons per event (actually, the third position is never filled with an electron; it is used for other purposes). If the event is a W->ev event, only position [0] is filled, with the electron from the W. If it is a Z->ee event, position [0] is filled with the tighter electron from the Z boson, while position [1] is filled with the information of the other electron. If the ntuple contains a mix of W and Z events, the easiest way to select only Z or only W events is to require the following conditions:

• abs(chargeEle[1])==1 for the Z
• abs(chargeEle[1])!=1 for the W
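In a Python-based look at the ntuples, the same selection can be written as the following hypothetical helper, using only the conditions above:

```python
def is_z_event(chargeEle):
    """True for Z->ee events, False for W->ev events, based on the
    chargeEle array of an ECALELF ntuple event: position [1] holds
    a real electron (charge +-1) only for Z events."""
    return abs(chargeEle[1]) == 1
```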

If you want to look at the same time at variables contained in two different trees (e.g. the energy of the first electron, which is in the "selected" tree, and the energies of the electron rechits, which are in the "extraCalibTree"), you can do it using the "AddFriend" method of TTree, as in the following example (suppose ntuple.root is the main ntuple containing the "selected" tree, while extraCalibTree.root contains the "extraCalibTree"):

```
root -l ntuple.root extraCalibTree.root
TTree* selected = (TTree*)_file0->Get("selected")
TTree* extraCalibTree = (TTree*)_file1->Get("extraCalibTree")
selected->AddFriend(extraCalibTree)
selected->Scan("energySCEle[0]:energyRecHitSCEle1")
```

#### Ntuple's location

##### 2016

The list of the most recent ntuples used for the calibration can be found inside the scripts used for the calibration: https://github.com/lbrianza/ECALELF/blob/calibration_2016/ZFitter/submit_calibration_jobs_multifit.py https://github.com/lbrianza/ECALELF/blob/calibration_2016/ZFitter/submit_calibration_jobs_weights.py

##### 2015

Here is the most recent version of the electron ntuples (containing both W-->ev and Z-->ee datasets). They correspond to the entire run2015D (silver json file applied) and are stored on eos:

```
root://eoscms//eos/cms/store/group/dpg_ecal/alca_ecalcalib/ecalMIBI/lbrianza/ntupleEoP/data-Run2015D-25ns-multifit.root
root://eoscms//eos/cms/store/group/dpg_ecal/alca_ecalcalib/ecalMIBI/lbrianza/ntupleEoP/extraCalibTree-data-Run2015D-25ns-multifit.root
```

```
root://eoscms//eos/cms/store/group/dpg_ecal/alca_ecalcalib/ecalMIBI/lbrianza/ntupleEoP/data-Run2015D-25ns-weights.root
root://eoscms//eos/cms/store/group/dpg_ecal/alca_ecalcalib/ecalMIBI/lbrianza/ntupleEoP/extraCalibTree-data-Run2015D-25ns-weights.root
```

For the MC (Z->ee only), use instead the following sample (18M events!):

```
root://eoscms//eos/cms/store/group/dpg_ecal/alca_ecalcalib/ecalelf/ntuples/13TeV/ALCARECOSIM/DYToEE_powheg_13TeV-RunIISpring15DR74-Asym25n-v1/allRange/246908-258750-Prompt_25ns-v1-esPlanes/DYToEE_powheg_13TeV-RunIISpring15DR74-Asym25n-v1-allRange.root
root://eoscms//eos/cms/store/group/dpg_ecal/alca_ecalcalib/ecalelf/ntuples/13TeV/ALCARECOSIM/DYToEE_powheg_13TeV-RunIISpring15DR74-Asym25n-v1/allRange/246908-258750-Prompt_25ns-v1-esPlanes/extraCalibTree-DYToEE_powheg_13TeV-RunIISpring15DR74-Asym25n-v1-allRange.root
```

There is also a W->ev MC sample, with no selection on MET or mT (~7.6M events):

```
root://eoscms//eos/cms/store/group/dpg_ecal/alca_ecalcalib/ecalelf/ntuples/13TeV/MINIAODNTUPLE/76X_mcRun2_asymptotic_v12/WJetsToLNu_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/allRange/WJetsToLNu_TuneCUETP8M1_13TeV-madgraphMLM-pythia8-allRange.root
```

### The calibration code

The calibration code is integrated inside ECALELF.

The calibration algorithm is implemented in two classes (one for the barrel, one for the endcap): https://github.com/lbrianza/ECALELF/blob/master/EOverPCalibration/src/FastCalibratorEB.cc https://github.com/lbrianza/ECALELF/blob/master/EOverPCalibration/src/FastCalibratorEE.cc

These classes are launched by the ZFitter code, which is the main interface of ECALELF: https://github.com/lbrianza/ECALELF/blob/master/ZFitter/bin/ZFitter.cpp

This code has different options (run the fits on the Z, run the E/p monitoring, etc.). One of these options is "EOverPCalib", which runs the calibration with E/p. Several additional options are associated to the E/p algorithm; the full list can be seen in the ZFitter code.

The main ones are the following:

• --doEB or --doEE --> run the barrel or the endcap calibration
• --nLoops (number) --> number of iterations of the L3 algorithm
• --splitStat (0 or 1) --> 0=run on the full statistics, 1=split events in odd/even samples (this is used to estimate the statistical precision of the method)
• --EPMin (number) --> the size of the E/p cut window (see previous sections)
• --applyPcorr (True or False) --> apply momentum calibration (see previous sections)

### How to use the code

In order to use the code, you must use this branch: lbrianza:master

A very straightforward way to use the code is to run the following command:

```
ZFitter.exe -f data/validation/EoverPcalibration_test.dat --EOverPCalib --outputPath output/ --doEB --splitStat 0 --nLoops 15 --EPMin 0.15 --noPU --applyPcorr True
```

Here, data/validation/EoverPcalibration_test.dat is the configuration file containing the list of ntuples on which you want to run the calibration. The meaning of the other options is described in the previous section.

However, with the previous command you will run directly on a set of ntuples stored on EOS, without copying them locally. This is generally a very slow way of running the code: it is better to use the submit_calibration_jobs*.py scripts.

The submit_calibration_jobs*.py code is the main script to be used to run the calibration. There are two versions: submit_calibration_jobs_weights.py and submit_calibration_jobs_multifit.py. Inside the code there are different options to be set in order to run the calibration (e.g. type of energy to be used, type of local reco, size of the E/p cut window, etc.). These options also determine the name of the output. If you want to run the E/p calibration locally, do the following:

• set the options you want to use, then run:
```
python submit_calibration_jobs_multifit.py --generateOnly
```
This command creates a "Job_*" folder containing a .sh file with the list of commands needed to run the calibration. You can then copy this file into a /tmp folder and run it with
```
sh fileName.sh
```
This script copies the ntuples locally and runs the calibration on them, calling the ZFitter.

BE CAREFUL: do not run it in your afs area, because the script copies all the needed ntuples locally. So, remember to ALWAYS work in a /tmp folder (you can copy your output to your afs area later).

If you want to submit the jobs on lxbatch instead of running locally, just do

```
python submit_calibration_jobs_multifit.py
```

This script does the following:

• creates a folder named Job_* containing the .sh files with the commands of the jobs (one for each desired job)
• submits all the jobs (only if the --generateOnly option is not set)
• creates a folder named cfg_*, containing one .py file for each job. These are the configuration files to be used to make the calibration plots
• creates a script named "createAndPlotIC_*". When all the calibration jobs are done, you can run this script in order to make the calibration plots for each job
• the same script puts all the .txt files containing the IC constants in a folder named ICset_*

The ICs produced are relative ICs. This means that they are a correction to the ICs contained in the prompt reconstruction (or in whatever reconstruction your ntuples have been produced from). To produce the absolute ICs, first of all you have to recover the original ICs of the prompt reconstruction (let's call them IC_prompt.txt).

To dump these ICs, run:

```
conddb_dumper -O EcalIntercalibConstants -c frontier://FrontierProd/CMS_CONDITIONS -t EcalIntercalibConstants_2012ABCD_offline
```

where EcalIntercalibConstants_2012ABCD_offline is the name of the tag you want to dump. You will obtain a .dat file containing the IC values.

Then, you can use the CompareICSet.cpp code inside the ECALELF repository to dump the absolute ICs:

```
CompareICSet IC_prompt.txt your_IC.txt
```

This command will produce a file called absolute.txt, which contains the set of absolute ICs.
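The combination itself is a crystal-by-crystal product. Here is a sketch, assuming the absolute IC is the prompt IC multiplied by the relative IC; the dict-based interface is hypothetical, since the real inputs are the .txt files above:

```python
def absolute_ic(ic_prompt, ic_relative):
    """Combine prompt and relative ICs crystal by crystal.

    ic_prompt, ic_relative: dicts mapping a crystal identifier to
    its IC value. Crystals missing from either set are skipped.
    """
    return {xtal: ic_prompt[xtal] * ic_relative[xtal]
            for xtal in ic_prompt if xtal in ic_relative}
```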

The set of ICs contained in the prompt reconstruction of run2015D (tag: 74X_dataRun2_Prompt_v2) is also available here:

/afs/cern.ch/user/l/lbrianza/work/public/EoPtest_december2015/IC_prompt.txt

The same code (CompareICSet.cpp) can also be used to compare two different sets of ICs; you just have to run

```
CompareICSet IC_set1.txt IC_set2.txt
```

The output is a .root file which contains many useful plots (e.g. the ratio map of the two IC sets, difference histograms, etc.).

There is also another version of the code, called CompareICSet_ratio, which is similar but also takes as input the even/odd ICs (you need to give 6 .txt files in input), and produces the same plots plus additional ones (e.g. the statistical precision of the ratio, using even/odd):

```
CompareICSet_ratio IC_set1.txt IC_set2.txt IC_set1_even.txt IC_set1_odd.txt IC_set2_even.txt IC_set2_odd.txt
```

## 2016 most recent results

The following plots show the latest results using about 24.1/fb of 2016 data, using events from W-->ev and Z-->ee decays.

## Validation with the Z (this part needs some updates..)

Once you have derived a new set of IC constants, it has to be validated on data, by checking the impact on the resolution of the Z-peak. In this section the workflow used for the validation is explained.

### Prepare the tag

Now you have to prepare the tag with the IC set you want to test. Instructions to download the code:

```
cmsrel CMSSW_7_5_0
cd CMSSW_7_5_0/src
git cms-merge-topic shervin86:forTags
scram b -j16
cd CondTools/Ecal/test/energycalib
cmsenv
```

At this point, you have to convert the .txt file containing the ICs into an .xml file. NB: YOU MUST USE THE ABSOLUTE ICs, NOT THE RELATIVE ONES!!!

```
sh /afs/cern.ch/cms/CAF/CMSALCA/ALCA_ECALCALIB/RunII-IC/txtToXml.sh fileIC.txt > fileIC.xml
```

Now you can produce the sqlite file:

```
cmsRun testEcalIntercalibConstants.py tag=EcalIntercalibConstants_Example fileName=absolutePathWithIC/fileIC.xml
```

This command produces a file named EcalIntercalibConstants_Example.db. You can check the tags inside this file with

```
conddb --db EcalIntercalibConstants_Example.db listTags
conddb --db EcalIntercalibConstants_Example.db list EcalIntercalibConstants_Example
```

The final step is to create the python file containing the tag. An example can be found here: https://github.com/ECALELFS/ECALELF/blob/newMaster/EcalAlCaRecoProducers/config/reRecoTags/Cal_Nov2015_ICEoP_v1.py

You must change the following two lines:

```
tag = cms.string("EcalIntercalibConstants_Cal_Nov2015_EoP_v1"),
connect = cms.untracked.string("sqlite_file:/afs/cern.ch/cms/CAF/CMSALCA/ALCA_ECALCALIB/RunII-IC/Cal_Nov2015/eop/EcalIntercalibConstants_Cal_Nov2015_EoP_v1.db"),
```

with

```
tag = cms.string("EcalIntercalibConstants_Example"),
connect = cms.untracked.string("sqlite_file:absolutePathWithIC/EcalIntercalibConstants_Example.db"),
```

Rename the python file to something like EcalIntercalibConstants_Example.py; now you are ready to run the rereco.

### Launch the rereco

```
wget --no-check-certificate https://raw.githubusercontent.com/ECALELFS/ECALELF/80X-devel-new/setup_git.sh
chmod +x setup_git.sh
./setup_git.sh CMSSW_8_0_8
cd CMSSW_8_0_8/src/
cmsenv
cd Calibration/ZFitter && make && cd -
cd Calibration/EcalAlCaRecoProducers
source ../initCmsEnvCRAB2.sh
git clone https://github.com/ECALELFS/Utilities.git bin
```

Before launching the rereco on crab, please make sure that your tag works by running the rereco locally:

```
cmsRun EcalAlCaRecoProducers/python/alcaSkimming.py isCrab=0 skim=ZSkim maxEvents=100 type=ALCARERECO files=/store/group/dpg_ecal/alca_ecalcalib/ecalelf/alcaraw/13TeV/DoubleEG-ZSkim-Run2015C-rereco05Oct2015-25nsReco/254227-255031/EcalUncalZElectron_11_1_XHD.root doTree=1 tagFile=/afs/cern.ch/user/l/lbrianza/work/public/EoPtest_december2015/ICset_multifit_20loop_2015_new/Cal_Dec2015_ICEoP_ref_noPcorr.py doTreeOnly=0
```
where you should replace tagFile=... with the path of your .py file.

If everything works fine, you can launch the rereco on crab.

TEMPORARY FIX: There is an issue with one of the main scripts used for the rereco, scripts/RerecoQuick.sh, which has not been properly fixed yet. To avoid the problem, please replace it with the following version:

```
/afs/cern.ch/user/l/lbrianza/work/public/EoP_additionalFiles/RerecoQuick.sh
```

In order to run the rereco, you can use this script

```
/afs/cern.ch/user/l/lbrianza/work/public/EoP_additionalFiles/launchRereco2016_EoP.sh
```

where you should change this line

```
for tagfile in config/reRecoTags/EoP_Aug2016_{0p15,0p20,nocut}_2015corr.py
```
with the list of tags you want to test. You can run it simply with

```
sh launchRereco2016_EoP.sh
```

The first time you run it, it creates (and submits) all the rereco jobs for each tag. Then, you can re-run it from time to time, and it will check whether the jobs are done. When all the jobs for one rereco are done, the script merges all the ntuples of that particular rereco. Keep re-running the script until all the jobs are done.

At the end you will have the ntuples ready for the validation.

### Make the validation (OBSOLETE)

Go into the ZFitter/ folder. For each tag, you have to create a configuration (.dat) file containing the list of ntuples produced at the previous step. The .dat file must be put into the data/validation/ folder. You can use this one as a starting point:

```
data/validation/Run2015B_WF2_ICtransp2012_v6.dat
```
changing "d1", "d2" with the paths of your ntuples.

Now you can run the validation with

```
./script/validation.sh -f data/validation/yourConfigurationFile.dat --invMass_var invMass_SC --baseDir test --validation --slides
```
This command will run the fit of the Z-peak on your data, in several categories, and it will produce some slides (.tex format) containing tables with the results. You can compile these slides with
```
pdflatex test/slides/validation-invMass_SC-slides.tex
```

## Monitoring of laser corrections with electrons

To run the monitoring, you must be in this branch: lbrianza/master

The E/p distribution of the electrons is also useful to monitor the behaviour of the laser correction in ECAL. The algorithm works in the following way:

• Select a sample of electron candidates from W and Z, and build the total E/p distribution. This will be used as reference distribution.
• Fill the E/p distribution for different time periods (tunable), with and without the laser correction applied
• Use the E/p template to fit each of the distributions, extracting a scale parameter which tells how much the E/p peak is changing
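The time binning driven by the --evtsPerPoint option can be sketched like this. The real code fits each point with the E/p template; here the median is a crude stand-in for the fitted scale, and the function name is hypothetical:

```python
import numpy as np

def ep_history(times, ep_values, events_per_point=5000):
    """Group events in chronological order into points of about
    events_per_point events each, returning for each point the mean
    time and the median E/p (stand-in for the fitted scale)."""
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    ep = np.asarray(ep_values, dtype=float)[order]
    points = []
    for start in range(0, len(ep), events_per_point):
        sl = slice(start, start + events_per_point)
        points.append((float(np.mean(t[sl])), float(np.median(ep[sl]))))
    return points
```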

The algorithm is implemented in the class LaserMonitoringEoP, contained in the EoverPcalibration area. The interface to run it is the ZFitter code, and it can be run using the --laserMonitoringEP option:

```
ZFitter.exe -f data/validation/monitoring_LC_with_EP_2015.dat --evtsPerPoint 5000 --laserMonitoringEP
```

As in the case of the calibration, running in this way means running directly on the ntuples stored on eos. As before, this is generally a slow way of running the code.

For this reason, it is better to use the submit_monitoring_jobs*.py scripts (two versions: submit_monitoring_jobs_weights.py and submit_monitoring_jobs_multifit.py). Inside this code you should set the type of local reco you want to use. If you want to run the monitoring locally, do the following:

```
python submit_monitoring_jobs_multifit.py --generateOnly
```
This command creates a "Job_*" folder containing a .sh file with the list of commands needed to run the monitoring. You can then copy this file into a /tmp folder and run it with
```
sh fileName.sh
```
BE CAREFUL: do not run it in your afs area, because the script copies all the needed ntuples locally. So, remember to ALWAYS work in a /tmp folder (you can copy your output to your afs area later).

If you want to submit the jobs on lxbatch, just do

```
python submit_monitoring_jobs_multifit.py
```

The output consists of two folders (one for EB and one for EE) containing various plots showing the stability of the E/p peak.

The following plots show the latest results using the full statistics of run2015D, using events from W-->ev and Z-->ee decays.

## The electron stream

Since 2015, a dedicated stream for electrons has been developed. The idea behind this stream is to store only a subset of the whole event, keeping a regional set of RAW information directly related to the electron. This results in a considerable reduction of the data size, and a gain of a factor 10 in the time needed for the reconstruction. The drawback is that some global event information (e.g. pfMET, vertices) cannot be reconstructed from the stream output.

The RAW datasets for the electron stream (namely 'AlCaElectron') are centrally produced and stored in the T2 at CERN.

The dataset containing the RAW of electron stream for RUN2015D is: /AlCaElectron/Run2015D-v1/RAW

### How to run the HLT path with the stream (2016)

Follow these instructions to setup the environment: https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideGlobalHLT#Preparing_a_80X_CMSSW_developer

Then you can extract the whole HLT table (including the HLT path of the electron stream) and run it as follows (on data):

```
hltGetConfiguration /dev/CMSSW_8_0_0/GRun --full --offline --data --unprescale --process TEST --globaltag auto:run2_hlt_GRun --input /store/relval/CMSSW_8_0_5/SingleElectron/FEVTDEBUGHLT/80X_dataRun2_HLT_relval_v8_RelVal_sigEl2015D-v1/00000/FE6E2CE8-7908-E611-90DD-0025905A60F2.root --max-events=100 --l1-emulator 'Full' > hlt_EleStream_data.py
cmsRun hlt_EleStream_data.py > out.txt
```

If you want to run it on MC, do:

```
hltGetConfiguration /dev/CMSSW_8_0_0/GRun --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun --input root://cms-xrd-global.cern.ch//store/mc/RunIISummer16DR80/WJetsToLNu_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/GEN-SIM-RAW/FlatPU28to62HcalNZSRAW_80X_mcRun2_asymptotic_2016_TrancheIV_v6-v1/120000/060403FF-10AF-E611-99B7-20CF3019DF17.root --max-events=100 --l1-emulator 'Full' > hlt_EleStream_mc.py
cmsRun hlt_EleStream_mc.py > out.txt
```

Be careful: with the above commands, you are running the full HLT menu, so not only the electron stream but also all the other paths. This can take a while.

If you open the hlt_EleStream_data.py file, you can check the content and the structure of the electron stream paths. There are actually 3 paths:

• AlCa_SingleEle_WPVeryLoose_Gsf_v* (the single electron path)
• AlCa_DoubleEle_CaloIdL_TrackIdL_IsoVL_DZ_v* (the double electron path, with a cut on dZ)
• AlCa_DoubleEle_CaloIdL_TrackIdL_IsoVL_v* (the double electron path, without any cut on dZ)

If you check the content of these paths you will find all the information you need. Alternatively, you can use the GUI mentioned above.

### Produce Ntuples from the RAW of the stream - in local (branch: lbrianza/80X-devel-new-electronStream_2016)

In order to run the workflow, you must apply a patch to your CMSSW configuration, to implement the sequences for the electron stream. To do that, go into the src/ directory of your CMSSW release. Then do:

```
git cms-addpkg Configuration/StandardSequences/
scram b -j8
cmsenv
```

These commands will modify your local Configuration/StandardSequences/python/AlCaRecoStreams_cff.py file, adding to it all the sequences required for the electron stream. REMEMBER to re-compile after these steps, otherwise it will not work.

Now you can run the RECO and ALCARECO step. To do that, run the following command:

```
cmsDriver.py reco -s RAW2DIGI,RECO,ALCA:EcalCalWElectronStream -n 100 --filein=/store/data/Run2016B/AlCaElectron/RAW/v1/000/272/775/00000/363B1493-3114-E611-A9C3-02163E011B18.root --data --conditions=80X_dataRun2_Prompt_v8 --era=Run2_2016 --scenario=pp --nThreads=4 --dirout=./ --customise Calibration/EcalAlCaRecoProducers/customElectronStream.StreamReco
```

Now you should have a RECO and an ALCARECO file (EcalCalWElectronStream.root). You can now produce the ntuple with

```
cmsRun EcalAlCaRecoProducers/python/alcaSkimming.py isCrab=0 skim=WSkim maxEvents=100 type=ALCARECO files=file:EcalCalWElectronStream.root doTree=3 tagFile=EcalAlCaRecoProducers/config/reRecoTags/80X_dataRun2_Prompt_v8.py doTreeOnly=1 electronStream=1
```

### On CRAB 3

There is a pre-formatted crab file to run the RAW->RECO->ALCARECO step on CRAB3. Be careful: before using this file, you must create the config file with the following command:

```
cmsDriver.py reco -s RAW2DIGI,RECO,ALCA:EcalCalWElectronStream -n 100 --filein=/store/data/Run2016B/AlCaElectron/RAW/v1/000/272/775/00000/363B1493-3114-E611-A9C3-02163E011B18.root --data --conditions=80X_dataRun2_Prompt_v8 --era=Run2_2016 --scenario=pp --nThreads=4 --dirout=./ --customise Calibration/EcalAlCaRecoProducers/customElectronStream.StreamReco
```

then you can run on crab3:

```
source /cvmfs/cms.cern.ch/crab3/crab.sh
voms-proxy-init
crab submit -c EcalAlCaRecoProducers/test/run_alcareco_alcastreamElectron.py
```

There is a second pre-formatted crab file to run the following step, ALCARECO->NTUPLE:

```
crab submit -c EcalAlCaRecoProducers/test/run_ntuple_alcastreamElectron.py
```

(check carefully the options inside the two files!!!)

An attempt to run everything with CRAB2 (lbrianza/newMaster):

```
./scripts/prodAlcareco.sh `parseDatasetFile.sh alcareco_datasets.dat | grep AlCaElectron | grep VALID | grep 2016B` --type ElectronStream --tag config/reRecoTags/Cal_Jun2016_ref.py -s WSkim --createOnly
crab -c crabDirectory -submit 450   # to be repeated many times
```

```
./scripts/prodNtuples.sh `parseDatasetFile.sh alcareco_datasets.dat | grep AlCaElectron | grep -v database | grep 2016B` -s WSkim --type ALCARECO --json_name Cert_271036-274421_13TeV_PromptReco_Collisions16 --json /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/Cert_271036-274421_13TeV_PromptReco_Collisions16_JSON.txt --isPrivate --doExtraCalibTree --electronStream
```

## References and useful material

Topic revision: r49 - 2017-03-03 - unknown
