The TWiki of vpascuzz (Vincent R. Pascuzzi)
Physics Division, Berkeley Lab
1 Cyclotron Road, Berkeley, CA 94720 USA
Office: 050B-6220
Tel: 510-486-4181
vrpascuzzi-at-lbl.gov
Involvements
Current
- (Simulation) FastChain convenor
- (ASG) acm
- (Analysis/Exotics) JDM dijet+lepton contact
- Accelerators (GPGPU)
Previous
- (Operations) LAr Run Coordinator
- (Simulation) FastCaloSimV2
- (Analysis/Exotics) DBL VV semi-leptonic 2016
- (Luminosity) LAr offline luminosity
- (Analysis/SM) VBS VV semi-leptonic
- (Analysis/Exotics) DBL VV semi-leptonic 2017
LAr Offline Luminosity analysis
Liquid argon (LAr) contributes to the relative luminosity measurements using two detectors: FCal and EMEC.
The gist (in a nutshell)
- High-voltage power supply (HVPS) system provides drift voltage to LAr detectors
- HVPS regulated to compensate for ionisation losses → maintaining a constant potential in gaps (filled with LAr)
- Induced current is proportional to particle flux
- Fit LAr current as a function of preferred luminosity (from LUCID) to get calibration parameters: slope and intercept
- Transform into relative luminosity measurements (i.e. map: current → luminosity)
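The calibration idea above can be sketched numerically. This is a hedged illustration, not the actual codebase: fit the (pedestal-subtracted) LAr current against the preferred luminosity to get a slope and intercept, then invert the fit to map current to relative luminosity. All numbers are made up.

```python
# Sketch of the LAr calibration: linear fit of current vs. luminosity,
# then inversion of the fit to turn currents into luminosity measurements.
import numpy as np

# Hypothetical per-LB inputs (arbitrary units)
lumi_preferred = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g. from LUCID
lar_current    = np.array([2.1, 4.0, 6.2, 7.9, 10.1])  # pedestal-subtracted

# Calibration: current = slope * luminosity + intercept
slope, intercept = np.polyfit(lumi_preferred, lar_current, 1)

def current_to_lumi(current, slope=slope, intercept=intercept):
    """Invert the calibration: map a measured current to luminosity."""
    return (current - intercept) / slope

print("slope = %.3f, intercept = %.3f" % (slope, intercept))
print("lumi at I = 6.0: %.3f" % current_to_lumi(6.0))
```

In the real analysis this fit is done channel by channel, as described below.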
The analysis
Two main steps are required to obtain LAr relative luminosity measurements.
1. treemaker
- python codebase
- retrieves LAr currents and pedestals from the Detector Control System (DCS)
- retrieves luminosity algorithm data from COOL and PerLB files (provided by Benedetto and Sara)
- writes data to TTrees
2. analysis
- C++ codebase; JSON for job configuration
- derives channel-by-channel calibrations using a fit of current vs. algo data
- applies calibrations based on single or multiple runs
- current → relative luminosity measurement
- outputs the "Benedetto/Sara" files
Running LArOflLumiAnalysis
It's assumed you're working from lxplus.
git the code
First things first -- we need the code.
$ setupATLAS && lsetup git
$ git clone ssh://git@gitlab.cern.ch:7999/LArOflLumi/LArOflLumiAnalysis.git
It can take a bit of time, since the PerLB files alone are ~400 MB and the git packs are ~300 MB.
Preparation
If running for the first time, it all begins with treemaker. However, you'll need to take care of some things before starting with it.
Let's say we want to get the latest Run-2 data from 2017. Check the file treemaker/runlists/2017_runs.dat to see where we left off last.
$ tail -3 runlists/2017_runs.dat
336832
336852
336915
Therefore, the last run (at the time of writing) is 336915, so we need every run following.
Sidenote: Back in the day, the ATLAS Run Query was used to look up runs and get the valid ("ready for physics") LBs. These were then entered manually into the run list and custom_LB_bounds files. Lucky for you, there is now a script that can be used to format and print the information you need.
To use this wonderful new tool, you need to set up the master branch of athena. In a new shell:
$ cd ~/some/temp/directory
$ setupATLAS && asetup master,Athena,latest
$ AtlRunQuery.py --run 336916+ --partition "ATLAS" --projecttag "data0*,data1*" --show run --show readyforphysics --verbose
$ pwd
$ cd /path/to/LArOflLumiAnalysis/scripts
where we used 336916, i.e. the number of the last run we previously ran on plus 1. Copy the output from pwd (say it's ~/some/temp/directory), and
$ python ParseAtlRunQueryData.py ~/some/temp/directory/data/QueryResult.txt
...
#337098 < 100 LBs
#337030 < 100 LBs
#337017 < 100 LBs
Copy and paste the following run numbers into the respective `treemaker/runlists`:
...
336927
336944
336998
Copy and paste the following run numbers into the respective `analysis_config/params/custom_LB_bounds` file:
...
"336915": [
376,
1143
],
"336927": [
77,
1011
],
"336944": [
205,
414
],
The output of this script begins with the bad runs, i.e. those with fewer than 100 stable ("good for physics") LBs; we exclude these from the analysis. The second part is the good runs, and last are the "custom LB bounds". The latter are not strictly necessary, but I prefer to be explicit so we know exactly which LBs we are using a priori (it also allows you to choose specific LBs for special runs, e.g. vdM scans).
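The bad/good split and the "custom LB bounds" output can be sketched as follows. This is a hedged illustration of what ParseAtlRunQueryData.py effectively produces, not the script itself; the LB range for run 337017 is invented for the example (the other two ranges appear in the output above).

```python
# Sketch: split runs into "bad" (< 100 good-for-physics LBs) and "good",
# and format the good runs' LB bounds as custom_LB_bounds-style JSON.
import json

# Hypothetical input: run number -> (first ready-for-physics LB, last one)
ready_lbs = {
    336915: (376, 1143),
    336927: (77, 1011),
    337017: (10, 80),    # invented range; too few LBs -> excluded
}

good, bad = {}, []
for run, (first, last) in sorted(ready_lbs.items()):
    n_lbs = last - first + 1
    if n_lbs < 100:
        bad.append(run)              # excluded from the analysis
    else:
        good[str(run)] = [first, last]

print("bad runs:", bad)
print("runlist:", list(good))
print(json.dumps(good, indent=4))    # paste into custom_LB_bounds
```

The 100-LB threshold mirrors the "< 100 LBs" lines printed by the real script.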
Step 1: treemaker
Now that we have the latest set of runs, copied them to the run list we want to use, and copied the "custom LB bounds" to its respective file (phew!), we are finally ready to run treemaker. It retrieves LAr currents and pedestals from DCS, and gets the algorithm data from COOL and the PerLB files from the LumiWG area on afs (if they exist). Begin by setting up the infrastructure needed for treemaker. From an lxplus node:
$ cd LArOflLumiAnalysis/treemaker
$ source setup.sh
Yes, we are using an ancient version of athena, but it ain't broken, so why fix it? (Plus, lack of time to migrate to a more recent release.) Since we already have data for the runs that were in the list before, comment out all of those previous runs (i.e. everything except the runs just added) with a #.
As already mentioned, treemaker retrieves three different kinds of data. Therefore, there are three switches that can be used with treemaker.
The syntax is simple: treemaker [-t,-c,-p] [FCal,EMEC] runlist. Each of -t (trees, for algo data), -c (for LAr currents) and -p (for pedestals) needs to be run on both FCal and EMEC with the same run list. The -t switch typically runs quite fast, ~10 min. However, the -c (currents) and -p (pedestals) retrievals take much longer. For these, it's useful to run this step in a screen session, and prepend krenew -t -- to the commands we are about to see.
An example of running treemaker to retrieve EMEC currents is:
$ ./treemaker.py -c EMEC runlists/2017_runs.dat
The output will be written to $TestArea/output/{trees, currents, pedestals}, depending on the switch. Once you've retrieved all the data for the runs, it's useful to create symlinks from the data to the respective analysis_config/params/ directories. These are used as input to the second step.
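The symlink step can be sketched as below. The source and destination paths are assumptions based on the directories mentioned above (adjust them to your $TestArea and checkout); the actual os.symlink call is left commented out so the sketch only prints what it would do.

```python
# Sketch: link each treemaker output directory into analysis_config/params/
# so the analysis step can pick it up as input.
import os

# Hypothetical locations -- adjust to your setup
test_area = os.path.expanduser("~/testarea")
analysis_params = "LArOflLumiAnalysis/analysis_config/params"

links = {}
for kind in ("trees", "currents", "pedestals"):
    src = os.path.join(test_area, "output", kind)
    dst = os.path.join(analysis_params, kind)
    links[dst] = src
    # os.symlink(src, dst)  # uncomment to actually create the link

for dst, src in links.items():
    print(dst, "->", src)
```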
N.B. When retrieving trees with -t, the detector argument must be FCal (don't ask).
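Putting the switch rules together (including the FCal-only restriction for -t), the full set of treemaker invocations can be sketched like this. The command construction is an illustration of the CLI described above; commands are printed rather than executed.

```python
# Sketch: build every treemaker command needed for one run list:
# -c and -p for both detectors, -t for FCal only.
RUNLIST = "runlists/2017_runs.dat"  # example run list from above

commands = []
for switch in ("-t", "-c", "-p"):
    # N.B. trees (-t) must be retrieved with detector argument FCal
    detectors = ("FCal",) if switch == "-t" else ("FCal", "EMEC")
    for det in detectors:
        commands.append("./treemaker.py %s %s %s" % (switch, det, RUNLIST))

for cmd in commands:
    # For the slow -c/-p jobs, prepend "krenew -t --" and run inside screen
    print(cmd)
```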
Step 2: analysis
Once you have all the necessary data (i.e. trees, currents, and pedestals), the actual analysis code can be run. It's quite simple -- start by examining one of the "param trees" in LArOflLumiAnalysis/analysis_config/params/param_trees. The ones of general interest are lumi_current_{FCal,EMEC}_2017.json and benedetto_{FCal,EMEC}_2017.json. The former are used first, to produce the calibration parameters, and the latter for producing the "Benedetto/Sara" files. The configuration files are easy to figure out; you only need to supply the correct directories for the input files. If you have created the symlinks, as described above, these shouldn't need to be modified.
As a concrete example, assuming your cwd is LArOflLumiAnalysis:
$ cd analysis && source build/setup.sh && cd ../analysis_config
$ ./lumi_analysis params/param_trees/lumi_current_EMEC_2017.json
$ ./lumi_analysis params/param_trees/benedetto_EMEC_2017.json
Output directories will be made on each call to lumi_analysis. The final product, the "Benedetto/Sara" files, is then ready.
Plotting
Now that you've run lumi_analysis, you can validate the data. Lucky for you (again), there are a handful of plotting macros in the macros directory. Feel free to check them out.
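As a minimal validation sketch (not one of the actual macros, and with invented numbers), one can compare the LAr relative luminosity to the preferred luminosity LB by LB via their ratio; a flat ratio near 1 indicates a good calibration.

```python
# Sketch: LB-by-LB ratio of LAr relative luminosity to the preferred
# (LUCID) luminosity as a quick sanity check of the calibration.
import numpy as np

lucid_lumi = np.array([3.2, 3.1, 3.0, 2.9, 2.8])     # invented values
lar_lumi   = np.array([3.22, 3.08, 3.01, 2.88, 2.81])

ratio = lar_lumi / lucid_lumi
print("mean ratio: %.3f" % ratio.mean())
print("spread (%%): %.2f" % (100 * ratio.std()))
```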