Hit and Impact Point Alignment Algorithm
Introduction
This twiki describes the Hit and Impact Point (HIP) Alignment Algorithm. Please refer to the paper, "
Sensor Alignment by Tracks
" for the alignment model description.
To see additional and updated instructions, please refer to the
HipPy software guide.
Recipe for installation
For any CMSSW release
Inside the CMSSW release base,
cd src/
cmsenv
git cms-init
git cms-addpkg Alignment/HIPAlignmentAlgorithm
scramv1 b -j 12
General workflow
We assume familiarity with
CMSSW Framework.
Since it is usually necessary to run over millions of events to perform a decent alignment, HIP jobs need to run in parallel and iteratively. This mode starts by running once the scripts main/initial_cfg_X.py prepared for each IOV X. These scripts produce the initial
IOUserVariables_Y.root,
HIPAlignmentAlignables_Y.root,
IOAlignedPositions_Y.root, and IOIteration_Y.root (actually a text file recording 0 as the iteration number) files, where Y is the starting run of IOV X. These files are updated at the end of each iteration.
Once these files are created, the different iterations consist of three steps:
The first step is to calculate the hit residuals from tracks. This is done by running multiple jobs in parallel, with each job running over a subset of events and writing its output to its own directory. This step is achieved by making a copy of align_cfg.py in each job directory. While this script is unique to the job directory and does not change between iterations, the inputs do change. At the beginning of running align_cfg.py, the IOIteration and
IOAlignedPositions, which contain the alignment information from the previous iteration, are copied from the main directory to the working directory of the batch job. An
IOUserVaribles.root file that contains the residuals information from that job is the final product of align_cfg.py.
The second step is to collect all the hit information recorded in the various
IOUserVariables.root files in the parallel jobs, calculate the final alignment parameters and update the ROOT files in the main directory. This step can only be executed after all the jobs in the first step are done, and no events are used at this step.
The final step in each iteration is to upload the information to a database (.db) file, which is executed by the upload_cfg_X.py scripts. The db file is named alignments_iterN.db for the Nth iteration.
Job submission is done by executing batchHippy.py. See
batchHippy.py --help
for the available options.
Preparing inputs
Starting with CMSSW_9_3_X, the HIP algorithm supports options that are customizable through the input files instead of by changing the template python scripts directly.
The input chain still involves several steps:
Starting runs at each IOV
The starting runs of the IOVs need to be stored in a file, one run number per line, e.g.
294668
297215
297281
IOV starting runs have to be consistent with the geometry of the detector (e.g. one cannot specify an IOV starting point at 1 and run over data from Run 294668 because the pixel geometry for 294668 is Phase-1). If the alignment is done on a single IOV, one still needs to specify a run number.
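As an illustration (not part of HIP itself; the function name is hypothetical), mapping a run number to its IOV starting run amounts to a bisect lookup over the sorted boundaries read from this file:

```python
import bisect

def iov_for_run(run, iov_starts):
    """Return the starting run of the IOV containing `run`.

    `iov_starts` is the sorted list of IOV starting runs read from the
    text file described above (one run number per line).
    """
    idx = bisect.bisect_right(iov_starts, run) - 1
    if idx < 0:
        raise ValueError("Run %d precedes the first IOV boundary" % run)
    return iov_starts[idx]

# Example with the boundaries listed above:
iov_starts = [294668, 297215, 297281]
print(iov_for_run(295209, iov_starts))  # 294668
```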
Lists of files to connect
HIP currently does not support connecting to DAS automatically to retrieve a file list for particular datasets or runs. This needs to be done externally as in the example below:
'/store/express/Run2017A/StreamExpress/ALCARECO/TkAlMinBias-Express-v1/000/295/209/00000/000C1C1E-DF41-E711-9E49-02163E01A790.root','/store/express/Run2017A/StreamExpress/ALCARECO/TkAlMinBias-Express-v1/000/295/209/00000/005A20C4-D741-E711-8C6F-02163E019D23.root','/store/express/Run2017A/StreamExpress/ALCARECO/TkAlMinBias-Express-v1/000/295/209/00000/0087CE98-E141-E711-A780-02163E0134EB.root','/store/express/Run2017A/StreamExpress/ALCARECO/TkAlMinBias-Express-v1/000/295/209/00000/02905BD8-E641-E711-8B84-02163E019D72.root','/store/express/Run2017A/StreamExpress/ALCARECO/TkAlMinBias-Express-v1/000/295/209/00000/02D4AFF8-D841-E711-A2A8-02163E0146D8.root'
'/store/express/Run2017A/StreamExpress/ALCARECO/TkAlMinBias-Express-v1/000/295/209/00000/0ACA77CF-E141-E711-928A-02163E01A213.root'
Each line in this file corresponds to one parallel job. The first line specifies a job processing 5 files, and the second line instructs the next job to process only 1 file. Because each job can only process events from a single IOV, please do not mix runs from different IOVs in the same line.
Notice that lines do not end with a comma, but the different files are comma-separated. Notice also the single (or double) quotes. These conventions ensure that each line can be read as a python list of strings.
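Because of these conventions, a single line can be parsed back into a python list of file names, for example with ast.literal_eval (an illustrative sketch; HIP's own scripts may do this differently, and the helper name is hypothetical):

```python
import ast

def parse_filelist_line(line):
    """Parse one line of a HIP file list into a python list of LFNs.

    Each line is a comma-separated sequence of quoted file names with no
    trailing comma, so wrapping it in brackets yields a valid python
    list literal.
    """
    return list(ast.literal_eval("[" + line.strip() + "]"))

line = "'/store/a.root','/store/b.root'"
print(parse_filelist_line(line))  # ['/store/a.root', '/store/b.root']
```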
List of file lists to process
An example list file is given below. The fields supported in each row are [FILES LIST],[TRACK SELECTION CONFIG (optional)],[SELECTION FLAG],[FLAG OPTIONS]. The usually-optional (see below) comma-separated fourth field specifies the option overrides for the mandatory selection flag.
## List of some Run2016 A/B samples (with dimuon constraint example)
# CDCs
cdcsruns_2016AB.txt.dat_294668,,CDCS,DataType:0 APVmode:deco Bfield:3.8T GT:92X_dataRun2_Express_v2
# COSMICS
cosmicsruns_2016AB.txt.dat_294668,,COSMICS,DataType:0 APVmode:deco Bfield:3.8T GT:92X_dataRun2_Prompt_v2
# MBVertex
mbvertexruns_2016AB.txt.dat_297281,mycustommbvertexselection_cff_py.txt,MBVertex,DataType:1
# ZMuMu
zmumuruns_2016AB.txt.dat_297215,,ZMuMu,DataType:2 TBDconstraint:fullconstr
# ZMuMu (no dimuon constraint)
#zmumuruns_2016AB.txt.dat_297215,,ZMuMu,DataType:2
# UpsilonMuMu
ymumuruns_2016AB.txt.dat_297215,,YMuMu,DataType:3 TBDconstraint:fullconstr TBDselection:y1sel
# JpsiMuMu
jpsimumuruns_2016AB.txt.dat_297281,,JpsiMuMu,DataType:4 TBDconstraint:fullconstr
Each entry corresponds to one of the lists of files prepared for the jobs to process. The first field provides the relative or absolute path to these files.
The second field is optional and provides the desired track selection configuration. The batchHippy option --trkselcfg (default: Alignment/HIPAlignmentAlgorithm/python) determines the common directory that contains these files; when nothing is specified, the configuration file defaults to [FLAG NAME LOWERCASE]TrackSelection_cff_py.txt (e.g. zmumuTrackSelection_cff_py.txt).
The third field, the flag name, specifies the default configurations for the track selection and the HIP flag-specific options. The supported flags are (case-insensitive) MBVERTEX (min. bias), COSMICS (cosmics), CDCS (cosmics in collisions), ZMUMU (dimuon events with Z selection), YMUMU (dimuon events with Upsilon selections), and JPSIMUMU (J/psi dimuon events).
The fourth field is usually optional and specifies the options to override. Each option is given as a key:value pair (separated by a colon), and the options are separated by single spaces. Please see
HipPyOptionParser inside python/OptionParser for the full set of options. However, we outline a few of them here:
Mandatory for COSMICS or CDCS:
Bfield: 3.8T or 0T
APVmode: deco or peak
Mandatory for YMUMU:
TBDSelection:
Y1Sel (if a TBDconstraint is specified)
Some other options:
GT: Override global tag name (no colon, :, if auto tags are specified)
GTspecs: Load different conditions, e.g.
GTspecs:record=SiPixelTemplateDBObjectRcd|tag=SiPixelTemplateDBObject_phase1_38T_2017_v3_hltvalidation|connect=FrontierProd/CMS_CONDITIONS;record=...connect=alignments.db;record=...
TrackCollection (e.g.
ALCARECOTkAlMinBias)
DataType (integer used to specify the category of hits in nhits-dependent reweighting per alignable, see use case below)
OverallWeight (double-precision)
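To illustrate the fourth-field format (the actual parsing is done by HipPyOptionParser; the helper below is a hypothetical sketch), the option string can be split into key/value pairs on the first colon of each whitespace-separated token:

```python
def parse_flag_options(optstring):
    """Split a fourth-field option string such as
    'DataType:0 APVmode:deco Bfield:3.8T' into a dict.

    Options are whitespace-separated key:value tokens; the value may
    itself contain colons or '|' separators (e.g. GTspecs), so split
    only on the first colon of each token.
    """
    options = {}
    for token in optstring.split():
        key, _, value = token.partition(":")
        options[key] = value
    return options

print(parse_flag_options("DataType:0 APVmode:deco Bfield:3.8T"))
# {'DataType': '0', 'APVmode': 'deco', 'Bfield': '3.8T'}
```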
Specific comments on align_tpl_py
A typical alignment configuration can be found in Alignment/HIPAlignmentAlgorithm/python/align_tpl_py.txt. Note that these files are
templates, not regular python files: the placeholders inside them are substituted automatically during job preparation.
[... python includes ...]
# "including" common configuration
<COMMON>
During processing, the string COMMON is substituted with the contents of the Alignment/HIPAlignmentAlgorithm/python/common_cff_py.txt file, which contains all general settings (global tag, starting geometry etc.).
if 'COSMICS' =='<FLAG>':
process.source = cms.Source("PoolSource",
fileNames = cms.untracked.vstring(<FILE>)
)
else :
process.source = cms.Source("PoolSource",
fileNames = cms.untracked.vstring(<FILE>)
)
These are the input data divided by categories (only cosmics and non-cosmics are defined, but one can add more, like J/psi, Z etc.)
# "including" selection for this track sample
<SELECTION>
During processing, the string SELECTION is substituted with the Alignment/HIPAlignmentAlgorithm/python/ALCARECO*TrackSelection_tpl_py.txt files, which contain the track selection criteria.
# parameters for HIP
process.AlignmentProducer.tjTkAssociationMapTag = 'TrackRefitter2'
process.AlignmentProducer.hitPrescaleMapTag=''
process.AlignmentProducer.algoConfig.outpath = ''
... etc.
We highlight some of the parameters used by
HIPAlignmentAlgorithm_cfi.py
. For other parameters, refer to
SWGuideAlignmentAlgorithms or
SWGuideMisalignmentTools.
outpath and
uvarFile,
outFile,
outFile2 etc.: path and name of output files to save.
apeSPar: Alignment Position Errors (APE) for linear shifts (
x,
y,
z) given as a set of three numbers {
a,
b,
c }.
- a is the initial APE (usually the expected misalignment for iteration 1)
- b is the final APE
- c is the number of iterations
APE is an additional error applied to the reconstructed hits in order to take misalignment into account in the tracking and refitting. We usually set the APE to the size of the misalignment for iteration 1, and then reduce it as a function of iteration. For example,
apeSPar = {0.01, 0.0, 5}
means an APE of 0.01 cm for iteration 1, shrinking to 0 over 5 iterations.
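The shrinking schedule can be sketched as follows, assuming a linear decrease between the initial and final values; the actual interpolation implemented in HIPAlignmentAlgorithm may differ, and the function name is only illustrative:

```python
def ape_for_iteration(a, b, c, iteration):
    """APE at a given iteration, under a *linear* decrease from the
    initial value `a` (iteration 1) to the final value `b` (iteration c).

    ASSUMPTION: linear interpolation; the functional form actually used
    by HIPAlignmentAlgorithm may differ. This only illustrates the role
    of the three apeSPar/apeRPar parameters {a, b, c}.
    """
    if iteration >= c:
        return b
    frac = (iteration - 1) / float(c - 1)
    return a + (b - a) * frac

# apeSPar = {0.01, 0.0, 5}: 0.01 cm at iteration 1, 0 at iteration 5
for it in range(1, 6):
    print(it, ape_for_iteration(0.01, 0.0, 5, it))
```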
apeRPar: Alignment Position Errors (APE) for angular shifts (
α,
β,
γ) given as above.
applyAPE: if true, apply APE according to the previous 2 settings (
if one needs a specific APE set (e.g. all 0), this can also be done in the common settings by overwriting the TrackerAlignmentError record)
minimumNumberOfHits: only move alignables with at least this number of hits.
eventPrescale: change if you want to process only 1 event every n (can be useful to reduce huge datasets, like minimum bias)
fillTrackMonitoring: if true, save all the track quantities in HIPAlignmentEvents.root
[... ancestral data quality requirements, should probably be removed ...]
[... CMSSW paths, one per data category, they contain an alternation of selection and refitting steps...]
The collection step (collect_tpl_py)
Additional parameters for running the second step are shown below.
collectorActive: true to run HIPAlignmentAlgorithm as a collector.
collectorNJobs: number of parallel jobs.
collectorPath: parent path of parallel jobs' output directories. For example, if this is set to "out/", then it is assumed that the outputs of the parallel jobs are stored in "out/job1/", "out/job2/" and so on.
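The directory convention expected by the collector can be sketched as below (the helper name is hypothetical, not part of HIP):

```python
import os

def parallel_job_dirs(collector_path, n_jobs):
    """Directories where the collector expects the parallel jobs'
    IOUserVariables.root files, following the "out/job1/", "out/job2/"
    convention described above for collectorPath and collectorNJobs.
    """
    return [os.path.join(collector_path, "job%d" % i)
            for i in range(1, n_jobs + 1)]

print(parallel_job_dirs("out/", 3))  # ['out/job1', 'out/job2', 'out/job3']
```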
A part only for survey constraint, used only in old releases.
Run HIP
For all information regarding ALCARECO data samples, general information and alignment group activities refer to
TkAlignment.
To run the scripts, you must work from the HIPAlignmentAlgorithm directory. Modify them according to your needs:
process.AlignmentProducer.algoConfig.fillTrackMonitoring=False
should be changed to True ONLY for testing purposes (e.g. only 1 iteration, to check that track-related quantities make sense); for real alignment jobs it should be set to False to avoid huge ROOT files.
- In python/common_cff_py.txt , change:
- global tags
- initial geometry/APE (you may need to make an "unlimited IOV" DB object for this: see below)
- parameter selection
- In python/collect_tpl.py :
- you can set a minimum number of hits per alignable, changing the line
process.AlignmentProducer.algoConfig.minimumNumberOfHits = 30
- Also here there is a line:
process.AlignmentProducer.algoConfig.fillTrackMonitoring=True
which refers to the merged file (if false in single jobs, this parameter value is irrelevant)
Prepare initial geometry if you don't have one yet. In most cases, you need a DB object with an "unlimited Interval Of Validity (IOV)" (N.B. This is
absolutely necessary in CMSSW 5). Use these instructions (in the following example you obtain an unlimited-IOV DB object called GR10_v4.db with tag "Alignments" from the tag "TrackerAlignment_GR10_v4_offline" in the global tag with interval of validity [firstRun, lastRun])
cmscond_export_iov -s frontier://FrontierProd/CMS_COND_31X_ALIGNMENT -d sqlite_file:GR10_v4.db -t TrackerAlignment_GR10_v4_offline -b <firstRun> -e <lastRun> -l sqlite_file:log.db
cmscond_list_iov -c sqlite_file:GR10_v4.db -t TrackerAlignment_GR10_v4_offline > TrackerAlignment_GR10_v4_offline.dump [CMSSW 4]
or
cmscond_list_iov -c sqlite_file:GR10_v4.db -t TrackerAlignment_GR10_v4_offline -o [CMSSW 5]
Now edit TrackerAlignment_GR10_v4_offline.dump, changing firstRun to 1, lastRun to 4294967295 and replacing TrackerAlignment_GR10_v4_offline with Alignments, and finally do:
cmscond_load_iov -c sqlite_file:GR10_v4.db TrackerAlignment_GR10_v4_offline.dump
Prepare initial geometry - step 2: The second step is to prepare an unlimited-IOV object for the GeometryPositionRcd. This object applies corrections stored in the database to the initial geometry. In order to have a good alignment, this object must have an unlimited IOV. To produce it, follow the same steps as for the initial geometry, i.e.:
cmscond_export_iov -s frontier://FrontierProd/CMS_COND_31X_ALIGNMENT -d sqlite_file:GlobalAlignment.db -t GlobalAlignment_2009_v2_express -b <firstRun> -e <lastRun> -l sqlite_file:log.db
cmscond_list_iov -c sqlite_file:GlobalAlignment.db -t GlobalAlignment_2009_v2_express -o
Now edit the output GlobalAlignment_2009_v2_express.dump, changing firstRun to 1, lastRun to 4294967295 and replacing the tag with GlobalAlignment, and finally do:
cmscond_load_iov -c sqlite_file:GlobalAlignment.db GlobalAlignment_2009_v2_express.dump
Now you have to call this object in Alignment/HIPAlignmentAlgorithm/python/common_cff_py.txt.
Prepare data files : in HIPAlignmentAlgorithm/data prepare one or more data files according to these rules:
- each file name must match a configuration of an alignment track selector in HIPAlignmentAlgorithm/python: [filename].dat must have a corresponding [filename]TrackSelection_cff_py.txt. For example, for a data file named ALCARECOTkAlMinBias.dat, the track selection used for that sample will be ALCARECOTkAlMinBiasTrackSelection_cff_py.txt.
- it must contain lines of filenames separated by commas. The job will be split into a number of jobs corresponding to the number of lines in the file(s), and each job will run on the files grouped in one line.
Prepare a list file: in HIPAlignmentAlgorithm/scripts prepare a list file (.lst extension)
- Each line must be of the type [datafile].dat,[datatype], where [datafile] is one of the files defined above and [datatype] can be (for now) one of the following: MBVertex , Cosmics.
Run HIP: Go in the main HIPAlignmentAlgorithm directory.
- Create a directory for the output: mkdir [myoutputdir]
- ./scripts/iterator_py [n_iterations] [myoutputdir] ./scripts/[listfile].lst ( > [mylogfile].log ) , for example ./scripts/iterator_py 5 MyFirstAlignment ./scripts/cosmin.lst > hip_MyFirstAlignment.log
Tricks:
- Use queues cmscaf1nh or cmscaf1nd: if you observe crashes in the first case due to CPU time limitations (often true with unskimmed MB) move to the second one.
- Try to make job length uniform. E.g. try to make each line of the data files contain approximately the same number of events, if of the same type. Cosmics take much less time to process since there is only one track to refit, so you can make cosmic data lines much longer.
- If the total job takes more than 24h, the AFS token expires and the last iterations are lost. However, an alignment[n].db file is saved at every 5th iteration, so you can restart from that geometry.
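The balancing tip above can be sketched as a small helper that groups a fixed number of files per line, using the file count as a rough proxy for the number of events (helper names are hypothetical, not part of HIP):

```python
def group_files(files, per_line):
    """Group input files into chunks of `per_line` files each, so that
    the parallel jobs (one per line of the data file) have roughly
    equal workloads."""
    return [files[i:i + per_line] for i in range(0, len(files), per_line)]

def format_line(group):
    """Format one group as a line of the HIP data file: single-quoted,
    comma-separated, no trailing comma."""
    return ",".join("'%s'" % f for f in group)

files = ["/store/a.root", "/store/b.root", "/store/c.root"]
for group in group_files(files, 2):
    print(format_line(group))
```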
To run multiple IOV
scripts/iterator_py 20 <outputdir> <scripts/data.lst> <iovrun.dat>
The file contains the IOV boundaries of the input and output geometry; it looks like this:
250985
251521
To enable multiple-IOV mode, one needs to switch on the multiIOV flag in python/common_cff_py.txt:
process.AlignmentProducer.algoConfig.multiIOV= True
If only a single IOV is wanted, set the multiIOV flag to False. One can still use iovrun.dat to configure the initial geometry. For example, if the
TrackerAlignmentRcd is taken from the GT, and the iovrun.dat file looks like this
250985
then the payload in the GT which corresponds to run 250985 will be used as the initial geometry.
Remarks
- The first argument $1 refers to the final iteration to end the alignment and not the number of iterations each time the script is run. If you are recovering an alignment that has crashed at iteration 2, and you do
scripts/submitJobs_py 5 out
, the alignment will continue for three more iterations and stop at iteration 5.
- To check the status of the parallel jobs, use
bjobs -A
.
- The re-submission of failed jobs runs in an endless loop. It will keep retrying until all the parallel jobs are successfully done. If you find an iteration is taking too long to complete, it is likely that a job keeps crashing and you should check its log file for errors.
- The log file for a parallel job is saved in its directory as alignN.out, where N is the iteration number. Similarly, the log file for a collector job is saved in "main/" as collectN.out.
- If you want to continue an alignment (i.e. run more iterations), simply
submitJobs_py
with more iterations.
Disk Usage
The disk usage depends on three factors:
- The number of parallel jobs.
- The number of iterations.
- The number of modules to align.
For each parallel job, a temporary file called IOUserVariables.root is written to its directory. (When the collector runs, it combines information from all the IOUserVariables.root files to determine the alignment parameters.) The size of this file scales with the number of modules to align. The collector's output scales in general with both the number of iterations and the number of modules.
To give an idea, suppose we align all 13300 modules in the Tracker for 10 iterations, and we split the events into 200 jobs. Then, each parallel job will take 3 MB (for 13300 modules in IOUserVariables.root) and the collector will take approximately 7 MB per iteration. Therefore, the total disk space is 3 × 200 + 7 × 10 = 670 MB. This is the disk space required to run the alignment. Since the parallel jobs' output are temporary, the final disk space occupied at the end of the alignment is lower. In this case, the final disk space is 70 MB as occupied by the collector's output.
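The back-of-the-envelope estimate above can be written as a small helper. Note that the default per-job and per-iteration sizes are just the example values for aligning all 13300 Tracker modules, not universal constants, and the function name is illustrative:

```python
def hip_disk_usage_mb(n_jobs, n_iterations,
                      job_file_mb=3.0, collector_mb_per_iter=7.0):
    """Rough disk-usage estimate following the example above
    (13300 modules: ~3 MB of IOUserVariables.root per parallel job,
    ~7 MB of collector output per iteration).

    Returns (space needed while running, final space after the
    temporary per-job files are gone).
    """
    running = job_file_mb * n_jobs + collector_mb_per_iter * n_iterations
    final = collector_mb_per_iter * n_iterations
    return running, final

print(hip_disk_usage_mb(200, 10))  # (670.0, 70.0)
```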
What happens if you run out of space in the middle of an alignment?
When this happens, you can re-configure your setup by reducing the number of jobs so that you can continue the alignment with less disk usage. To do so, follow these steps.
- Edit the ALCARECO*.dat files by putting a few data files per line. (Remember, the number of parallel jobs depends on the number of lines. If you put two data files per line, then the number of jobs will be halved and so the disk usage will be lower. Of course, the time for one iteration will be longer.)
- Delete main/collect_cfg.py and the existing job directories ("job*/").
- Re-configure your setup by running the script
configure_py
on your new ALCARECO*.dat files. (Ignore the warnings about overwriting existing files.)
- Continue the alignment by
submitJobs_py
.
Troubleshooting
When you notice a problem (for example, the script
submitJobs_py
has hung, or if you killed the script), and you need help troubleshooting, immediately list the jobs in the queue before the batch history is lost with these commands:
bjobs -w >& outdir/pend.jobs
bjobs -w -d >& outdir/done.jobs
Put the files pend.jobs and done.jobs in the output directory so that all the log files are in one place and we can easily check for the cause of the problem.
Standard Output Files
IOAlignedPositions.root: global positions of alignables after each iteration. The positions for iteration 1 are stored in the tree "AlignablesAbsPos_1", for iteration 2 in "AlignablesAbsPos_2" and so on.
IOAlignmentParameters.root: shifts (Δ
x, Δ
y, Δ
z, Δ
α, Δ
β, Δ
γ) from the previous iteration in local coordinates. The parameters for iteration 1 are stored in the tree "AlignmentParameters_1", for iteration 2 in "AlignmentParameters_2" and so on.
(this is combined into
IOUserVariables.root)
IOMisalignedPositions.root: initial (misaligned) positions of alignables in global coordinates. The tree is "AlignablesAbsPos_1".
(this is not used anymore)
IOTruePositions.root: true positions (before misalignment) of alignables in global coordinates. The tree is "AlignablesOrgPos_1".
(this is not used anymore. The starting position is recorded in the tree
AlignablesAbsPos _0 of
IOAlignedPositions.root.)
HIPAlignmentAlignables.root: contains info on alignables chosen
HIPAlignmentEvents.root: contains info on tracks used for alignment (will not be created if you did not select fillTrackMonitoring = true).
Alignment validation
For standard track-based validations see:
https://twiki.cern.ch/twiki/bin/viewauth/CMS/TkAlAllInOneValidation
Plotting (updated)
You can define your plots here:
HIPAlignmentAlgorithm/plots/runplot.C
Plot types: convergence, shift, chi2, fitted parameter, and hit-map.
Check the "defined by user" block.
Plotting convergence (original)
These plots can be made
after having run HIP. They simply go into the HIP output directory, take some of the output .root files and plot the quantities saved there. For these instructions, let's say that the output directory of your HIP job was /afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/HIP/covarell/outdir1/
- go there and create a directory called "psfiles". The plots will be saved there.
- go to the main/ directory and do "cp IOTruePositions.root IOIdealPositions.root". This step is just a dummy: the file IOIdealPositions.root is not used, so it does not matter what is inside, but for "historical" reasons the code checks for its existence and will exit with an error if it does not find it.
- copy the following files into your afs area:
/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/HIP/covarell/MyAlignmentPlots/HIPplots.h
/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/HIP/covarell/MyAlignmentPlots/HIPplots.cc
/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/HIP/covarell/MyAlignmentPlots/plotter.C
- edit just the file plotter.C
- You will see that at the top of it there is an assignment to the variable 'base_path'. Here you should put the name of your working dir: sprintf(base_path,"/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/HIP/covarell/");
- the variable dir should have the name of your job dir, i.e. "outdir1"
- as it is now, the code names the output ps files with a standard name plus a tag equal to the name of the output dir. You can change this by giving a different value to the variable "tag".
- the code first produces (a lot of) histograms from the TTrees output by HIP and then plots them. If the histograms are already available and one only wants to change some graphics, the histogram production can be skipped by giving the value "true" as input argument.
- make sure that plotHitMap is commented out. The first argument is the path to the HIP output; do not touch it.
- uncomment the lines:
c->extractAlignParams( i, 0);
c->extractAlignableChiSquare( 0);
and
c->plotAlignParams( "PARAMS", outpath);
c->plotAlignableChiSquare(outpath ,0.1);
- execute it: start a ROOT session. It is better to do it in batch mode, i.e. with graphics turned off, otherwise it will try to open a canvas for each layer and it will take ages (especially if you are overseas).
$> root -b
root[0]> .L HIPplots.cc++
root[1]> .L plotter.C++
root[2]> plotter()
Check that no exceptions or segmentation faults were issued by ROOT.
Two sets of plots are produced: the convergence of the parameters and the convergence of the chi2 of the alignable. Let's focus on the former.
There are at most 6 alignment parameters (3 shifts + 3 rotations) that can be changed when moving an alignable. In the eps file above, there are indeed six pads (the 3 shifts in the top row, the 3 rotations in the bottom one). For each of them, what is plotted is the variation of the parameter w.r.t. the value it had in the previous iteration. If the alignment converges, the corrections to the parameter become smaller and smaller at each iteration because the alignable moves closer and closer to the optimal position that minimizes the chi^2; therefore the line should approach zero. There are many lines for each parameter: if one aligns the pixel barrel modules, there are 768 alignables and the program will plot all of them. If one aligns all the det units of the Tracker, the plot will contain, superimposed on each other, 16588 curves. That's why these .eps files can become huge, and after they are produced it is not a bad idea to convert them to .png or .gif format.
If you look in HIPplots.cc, the method extractAlignParams accepts three arguments.
- Arg #1: which parameter to plot. In plotter.C, the call is inside a loop, meaning the code automatically prepares the convergence plots for all six parameters. If you aligned fewer than six parameters, the pads corresponding to the non-aligned parameters will show only one line at zero (no alignment -> the parameter never changed along the iterations).
- Arg #2: min number of hits; you can choose to say that if an alignable collected fewer than, e.g., 1000 hits, its line is not plotted. Useful if you want to make sure that the convergence is good for the high-statistics modules, but normally we leave it at zero.
- Arg #3: which subdet you want to consider; by default, it plots the convergence curves for all the subdetectors. If you set it explicitly, you can plot only alignables belonging to the subdet you chose. The numbering is the usual one (PXB=1, PXF=2, TIB=3...).
extractAlignParams prepares a root file with many histograms. The .eps is prepared by plotAlignParams. The first input to it is the string "PARAMS": don't change it, it is a special hard-coded string that tells the function what to plot. The second is where to save the eps; it is specified at the top of plotter.C.
The functions extractAlignableChiSquare and plotAlignableChiSquare work in a similar way. What is plotted is the value of the chi2/ndof of each alignable as a function of the iteration. This is the quantity actually minimized by the HIP algorithm: modules are moved according to it. Again, if the alignment converges, the curve should at a certain point reach a minimum (not necessarily zero; actually the best would be 1.0) and stay there for the following iterations. The two input arguments of extractAlignableChiSquare are the same as Arg #2 and Arg #3 of extractAlignParams. In plotAlignableChiSquare, the first argument is the path to the directory where the eps is saved, and the second is the minimum value of chi2/ndof for a curve to be plotted. If some modules did not collect hits (because they are off, for example), their curve will always be at zero, potentially screwing up the scale on the vertical axis; in this case the minimum is set to 0.1, which works in most cases.
Plotting hit maps
If convergence results are weird, it may be a good idea to check that alignables receive a sufficient number of hits. Using the same code as above:
- edit just the file plotter.C
- uncomment plotHitMap. The first argument is the path to the HIP output, do not touch it;
- the second argument is the subdet for which you want to produce the hit maps. The numbering is: BPIX (a.k.a. PXB, a.k.a. TPB) = 1; FPIX (a.k.a. PXF, a.k.a. PXE, a.k.a. TPE) = 2; TIB = 3; TID = 4; TOB = 5; TEC = 6. As it is now, the function is called in a loop such that it makes the plots for all the subdets. The third argument is the minimum number of hits collected by a module in order to display it. If it is 0, no minimum cut is applied.
- Comment the lines used to plot the convergence.
- if you did not produce the histograms previously, comment also the lines "plotAlign..." or you are going to get a bad crash (ROOT would try to read histograms that do not exist).
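For reference, the subdetector numbering quoted above can be captured in a small lookup table (an illustrative sketch, not code from HIPplots):

```python
# Subdetector numbering used by plotter.C / HIPplots, as listed above
SUBDET_ID = {
    "PXB": 1,  # barrel pixel (a.k.a. BPIX, TPB)
    "PXF": 2,  # forward pixel (a.k.a. FPIX, PXE, TPE)
    "TIB": 3,
    "TID": 4,
    "TOB": 5,
    "TEC": 6,
}

print(SUBDET_ID["TIB"])  # 3
```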
The files Hits_[etc etc].ps show the hit population module-by-module for the subdets (PXB, PXF, TIB...) separately and layer-wise. Notice that, due to limitations in plotting histograms with ROOT, for the endcaps it is not true that a cell corresponds to a single module (they have a wedge shape and it is very hard to make them fit in a grid as a TH2). For the barrel subdets, instead, one cell should correspond to one Det. In the case of 2D layers, one cell of the histogram will contain two
DetUnits (i.e. the stereo and the rphi module), so do not be surprised if, for example, layers 1 and 2 of TIB have twice the hit statistics of layers 3 and 4.
The hits counted in the plot are only the ones entering HIP. This means that if a track of the input sample was rejected by the quality cuts, its hits will not enter HIP, will not be used for updating the alignable positions, and will not appear in the hit maps either.
Some warnings: the TTree used as input is the one contained in outdir1/main/HIPAlignmentAlignables.root. This file is created after the first iteration, so you need only one iteration to obtain the hit maps. They can then differ with respect to the n-th iteration because modules move and single hits can be rejected by the outlier rejection, but this is typically a <1% effect and can be neglected. You do not need to run the hit plotting every time you align; it is typically stable unless there are major changes in the data samples or in the selections. Also, if you want to produce meaningful hit maps, you should set the minimum number of hits per alignable to zero in collect_tpl.py. The reason is that the workflow first checks how many hits were collected by an alignable; only if they are above the threshold is the alignable considered for the following steps, like updating its position or filling its entry in
HIPAlignmentAlignables.root. In practice, one typically makes these hit maps only once, in a special alignment run with only one iteration. In order to save time and be efficient, this is also the run where the track monitoring is switched on.
Review Status
Responsible: Main.cklae
Last reviewed by: Reviewer