Analysing Outer Tracker Threshold Scans
Introduction
The Outer Tracker (OT) threshold layer scan is used to measure gain variations in the OT. Threshold scans are performed during LHC operation, using the LHC beam. Gain variations are monitored by studying hit efficiency as a function of the OT electronics amplifier threshold. This Twiki contains instructions on how to analyse the data from such a threshold scan.
The idea of the procedure is as follows: For every layer of the OT (12 in total, 4 layers per OT station) the amplifier threshold of the OT read-out electronics is changed in 10 steps. The steps are defined as
[800 mV, 1000 mV, 1200 mV, 1250 mV, 1300 mV, 1350 mV, 1400 mV, 1450 mV, 1600 mV, 1800 mV]
Notice that threshold scans recorded before June 2011 did not contain the last two thresholds.
While the threshold of the layer under consideration is varied, all other layers are operated at the nominal threshold of 800 mV and are used to reconstruct charged-particle tracks. The hit efficiency is defined as the number of found hits divided by the total number of predicted hits, for tracks passing within 1.25 mm of the wire. For a given threshold and the corresponding layer under study, the hit efficiency is measured in 85 mm wide bins of the horizontal coordinate x and 56 mm high bins of the vertical coordinate y. The bin size in x corresponds to one quarter of the width of an OT module.
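The per-bin efficiency measurement can be sketched in plain Python/NumPy. Everything below is hypothetical and for illustration only: the function name, the coordinate ranges and the input arrays are not part of the actual Bender script; only the bin widths (85 mm in x, 56 mm in y) come from the text above.

```python
import numpy as np

def hit_efficiency_map(pred_x, pred_y, found_mask,
                       x_range=(-3000.0, 3000.0),   # hypothetical acceptance in x (mm)
                       y_range=(-2500.0, 2500.0)):  # hypothetical acceptance in y (mm)
    """Sketch: hit efficiency in 85 mm (x) by 56 mm (y) bins.

    pred_x, pred_y : positions (mm) where hits are predicted from the track fit.
    found_mask     : boolean array, True where the predicted hit was actually found.
    """
    x_edges = np.arange(x_range[0], x_range[1] + 85.0, 85.0)
    y_edges = np.arange(y_range[0], y_range[1] + 56.0, 56.0)
    # Count predicted and found hits per (x, y) bin.
    predicted, _, _ = np.histogram2d(pred_x, pred_y, bins=(x_edges, y_edges))
    found, _, _ = np.histogram2d(pred_x[found_mask], pred_y[found_mask],
                                 bins=(x_edges, y_edges))
    # Efficiency = found / predicted, with empty bins set to 0.
    with np.errstate(invalid='ignore', divide='ignore'):
        eff = np.where(predicted > 0, found / predicted, 0.0)
    return eff, x_edges, y_edges
```

In the real analysis this bookkeeping is done by the Bender script based on OTHitEfficiencyMonitor, described below.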
Process the raw data
General info
The data taken during a threshold scan is written to disk in the .RAW format. Typically we select 150 000 events per threshold. In total we have 10 thresholds times 12 layers, which comes down to 120 x 150 000 events, i.e. about 18 M events. More information about the operational procedure of a threshold layer scan can be found here:
https://lbtwiki.cern.ch/bin/view/OT/HowToDoThresholdLayerScan
A Bender python script based on the
OTHitEfficiencyMonitor in the Brunel Monitoring is used to calculate the hit efficiency. The script uses Brunel for track reconstruction and needs an additional package containing a .h and an .xml file (attached to this page) to be able to use some C++ classes (for instance the
OTLiteTime class) in the python script. A dictionary relating the threshold settings to the corresponding layers is defined in the script. The step number, which specifies the threshold setting for a particular layer, is stored in the ODIN bank variable calibrationStep().
Setting up environment
lhcb-proxy-init
SetupProject Bender --use-grid --build-env
cd ~/cmtuser/Bender_vXrYpZ
getpack Rec/Brunel
cd ~/cmtuser/Bender_vXrYpZ/Rec/Brunel/cmt
cmt make
Additionally, add the package with the .h and .xml file and compile it.
The dictionary in the Bender script containing the steps and corresponding layers
The dictionary which maps every threshold to its corresponding layer is defined as follows in the python script:
mydict = {}
beginstep = 1
endstep = 120
nthresholdperlayer = 10
firstthreshold = 1
layer = -1  # increases every time the step number modulo nthresholdperlayer equals 1
for i in range(beginstep, endstep + 1):
    if i % nthresholdperlayer == firstthreshold:
        layer += 1
    mydict[i] = layer
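As a quick sanity check, the mapping can be rebuilt and inspected interactively: step 1 should map to layer 0, steps within one block of ten should share a layer, and step 120 should map to layer 11 (the numerical values here are just the boundary cases of the loop above).

```python
# Rebuild the step-to-layer dictionary and check the boundary cases.
mydict = {}
layer = -1
for i in range(1, 121):
    if i % 10 == 1:
        layer += 1
    mydict[i] = layer

print(mydict[1], mydict[10], mydict[11], mydict[120])
# 0 0 1 11
```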
Notice that from June 2011 onwards ten thresholds were defined, instead of eight.
Database Tags
Typically, the .raw data needs to be processed before a snapshot of the conditions and detector databases is available. Therefore the data type and database tags should be set by hand in the python script. The latest database tags can be found here:
http://lhcb-release-area.web.cern.ch/LHCb-release-area/DOC/dbase/conddb/release_notes.html
For example for the March 2011 run this means you should set the following Bender properties
DataType = '2011',
DDDBtag = 'head-20110303' ,
CondDBtag = 'head-20110308',
OutputType = 'None'
In addition one should set
importOptions('$APPCONFIGOPTS/UseOracle.py')
Special t0 Database
The OT readout gate was changed in May 2011. However, when performing a threshold scan, the recipe still uses the old configuration (the same readout gate for all OT stations instead of gates interspaced by 2 ns). This means that when analysing scans taken after 15 May 2011, a dedicated database with module t0's must be used when running the reconstruction:
from Configurables import ( CondDBAccessSvc, CondDB )
AlignmentCondition = CondDBAccessSvc("AlignmentCondition")
AlignmentCondition.ConnectionString = "sqlite_file:2011-05.v2-scan.db/LHCBCOND"
CondDB().addLayer(AlignmentCondition)
Veto Hlt Errors
The Hlt error filter checks whether there are Hlt2 decisions. Since these are absent here, and we are not interested in them, the filtering on Hlt errors should be switched off by setting the Brunel property
VetoHltErrorEvents = False
Testing
It is always a good idea to test the Bender script on a few events to see if it actually works; new software releases and different database tags are typical sources of problems. Testing can be done by running the python script locally, or by submitting a single subjob using the Interactive backend of Ganga.
Local testing
For this, the 'job steering' part of the Bender script needs the PFN of one raw file from the threshold scan run (in this case running over only 100 events). This raw file should be available locally. Note: take a file from the middle of the run; the first files of the run contain only step 0 in their first 100 events, so your histograms would be empty:
files = [
"DATAFILE='castor:/castor/cern.ch/grid/lhcb/data/2011/RAW/FULL/LHCb/CALIBRATION11/91638/091638_0000000121.raw' SVC='LHCb::MDFSelector'"
]
run(100)
For local testing of the script, do
SetupProject Bender --use-grid
python -i [Bender script]
Interactive backend of Ganga
Once the script runs OK locally, it's a useful test to run it on the Interactive backend of Ganga, to see if the LFN's are read correctly.
There are some important lines in the submission script, since we want to send along our own installation of Bender. Notice that when using the Interactive backend of Ganga it is important not to forget the --use-grid option (the Interactive backend simply starts a subshell and sets up the projects as specified in the ganga script):
b = Bender(version = 'vXrYpZ')
b.setupProjectOptions = '--use-grid'
b.user_release_area = '/afs/cern.ch/user/N/Name/cmtuser'
b.module = {PATHTOBENDERSCRIPTS}
Submit Jobs to the Grid
With everything set up properly, and after assuring yourself that the python script works, it is time to submit all the jobs to the Grid using the Dirac backend of Ganga. Keep in mind that Dirac can only handle 100 input files per job, so split the RAW files into option files of 100 each and submit the jobs (with 100 subjobs each) to the Grid. The subjobs typically take about 1.5 days to finish.
lhcb-proxy-init
SetupProject Ganga
ganga SubmitScript.py
An example of a script to submit the jobs to the Grid is attached to this Twiki.
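The splitting into groups of 100 input files can be sketched as follows. The `chunk` helper and the LFN list below are hypothetical, for illustration only; the actual submission script attached to this Twiki does the equivalent bookkeeping.

```python
def chunk(files, size=100):
    """Split a list of LFNs into sublists of at most `size` entries,
    one sublist per Grid job (Dirac accepts at most 100 input files per job)."""
    return [files[i:i + size] for i in range(0, len(files), size)]

# Hypothetical LFNs, for illustration only.
lfns = ['LFN:/lhcb/data/2011/RAW/FULL/file_%04d.raw' % n for n in range(250)]
jobs = chunk(lfns)
print(len(jobs), len(jobs[0]), len(jobs[-1]))
# 3 100 50
```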
Merging the ROOT Files
Single Scan Analysis
Comparing Scans
--
DaanVanEijk - 15-Mar-2011