Tracker Alignment Calibrations and Beamspot for Heavy Ions

Who

Results, Presentations

Date of Meeting

  • Tracker Alignment Meeting - Overall BPIX Shifts vs Run, initial results and status update. Slides

Job Description

Online Data Taking

1. Monitor beamspot and PIXEL detector large structure shifts output by Prompt Calibration Loops (PCL) running on ALCARECO Express Streams. If shifts are within limits (easily identifiable in online DQM plots), then not much needs to be done. Re-check the shifts/beamspot conditions again in the future (~O(1 hr)).

CMS TV, Fill Reports, Web Based Monitoring, Example of Beamspot from Online DQM, PCL Mon, Online Beam Spot, Beam Monitor Shift

2. If large structure shifts are observed by PCL, we notify AlCa/DB (via email, or hypernews) of the observed shifts and request a validation of the newly derived conditions. Tracking reconstruction efficiency depends on the alignment conditions referred to by the online HLT global tag; if the alignment is no longer valid, tracking efficiency can drop, resulting in lost data (and therefore, time and effort).

AlCa/DB Twiki, AlCaRecoMatrix, Alignment/Calibration Hypernews Tracker Alignment Twiki, Tracker Alignment Hypernews

3. If the aforementioned newly derived alignment conditions pass validations, uploading the payload to HLT for online track reconstruction should be considered, and beamspot/large structure shifts derived by PCL afterwards should be closely monitored to verify the anticipated increase/maintenance in performance. After upload, a BPIX barycenter can, and should be derived, though it may not be the final number one settles on as the barycenter for the run (barycenter may change following offline alignment, track refitting, validations).

Condition Uploader, CMS Conditions Database, Get PIXEL Barycenter, Currently Available Alignment Constants, Global Tags

As Prompt Reconstruction Finishes

!!!Important!!!: If online data taking has been running smoothly and for long enough, prompt reconstruction of data earlier in the run will start finishing during the online data-taking period. Online monitoring is to be prioritized due to potential impacts on online tracking efficiency.

1. After significant statistics (i.e. # of tracks) are available via the prompt reconstruction streams, it is good practice to derive a beamspot utilizing the available dataset. This beamspot can be compared against preliminary beamspot fits given by PCL running on the aforementioned express streams (these can be found via the online DQM tool).

2. Discuss uploading conditions derived by PCL express with AlCa; note that this is similar to the step discussed above in that it involves the same workflow. If this has been done as described above, it doesn't necessarily need to be done again. If data taking has finished and there has not yet been any upload of conditions, then it certainly should be done, but not to HLT. An upload to the production database, so that the payload/run(s) are referred to correctly in global/trackerAlignment tags and available for offline workflows, should be sufficient.

3. Once alignment conditions have been uploaded and are available to the various workflows, a PIXEL barycenter should be derived.

4. At this point, if there is idle time and alignment conditions are relatively steady, one can start setting up offline workflows, validations, etc.

Running Offline Alignment Calibrations and Validation

1. After data taking has finished, presumably a few things are already in place: online alignment conditions have been uploaded and can be referred to by offline workflows; the preliminary beamspot, barycenter, and PIXEL detector shifts are on hand; all ALCARECO prompt reconstruction PDs are well defined and available; and a golden JSON is available for lumi-masking. If any of these are not available, look into getting these details smoothed over first.

2. Determine the statistics (i.e. # of tracks and events) available across each of the following tracker alignment ALCARECO PD types: Isolated Muon, Z --> MuMu, Min Bias, and Cosmics. Determine approximate weights for tracks from each PD/tracker alignment collection type.

3. Run MillePedeII (MPII) to re-derive alignment conditions with varying levels of sophistication (in order of least-to-most; large structures only, large structures with surface deformations, and large structures with surface deformations while permitting module-level shifts).

Run MPII for Alignment

4. After MPII spits out results, run validation workflows (Primary Vertex Validation [PV Val], Distribution of Median Residuals Validation [DMR Val], Z Boson Mass Validation) to decide if the new alignment's performance is better or worse than before. Change the alignment settings and re-run MPII and validation workflows as needed until satisfied with tracking performance.

5. Iterate on steps 3-4 until clear intervals of validity (IOVs) can be identified for a given set of alignment conditions, and performance is optimized for all data taken during the period. Because alignment performance can and will vary as a function of run for a given set of conditions, separate alignment conditions often need to be derived for different ranges of runs during the same data taking period.

IOV Determination and Alignment Validations

Derive PIXEL Barycenter

Supplementary info to this section to be added later.

See here for Barycenter.

Upload Alignment Conditions to CondDB (WIP)

Supplementary info to this section to be added later.

See here for how to upload to CondDB.

Derive Beamspot

You should use the data-taking CMSSW release. For pPb in 2016, this was CMSSW_8_0_23. Begin like so:

cmsrel CMSSW_8_0_23
cd CMSSW_8_0_23/src
cmsenv
git cms-init
git cms-addpkg RecoVertex/BeamSpotProducer
scram b -j8

out-dated section, kept for legacy reasons

Additional files are needed to run the fit and perform post-fit merging of results. Copy them to your new workspace like so:

cp -r /afs/cern.ch/user/j/jcastle/public/forIanAndZhen/pPb5TeV RecoVertex/BeamSpotProducer/test/
cp -r /afs/cern.ch/user/j/jcastle/public/forIanAndZhen/BeamspotTools RecoVertex/BeamSpotProducer/python/

They should run out of the box. The standard checkout of these files is full of errors, and I spent a few hours fixing them.

end out-dated section, kept for legacy reasons

The cfg file of interest is located here:

RecoVertex/BeamSpotProducer/test/pPb5TeV/analyze_d0_phi_pPb_cfg.py

Run over StreamExpressPA data as that is all we have available (do this again when prompt reco finishes). We run over ALCARECO data in the beamspot case, specifically TkAlMinBias ALCARECO. Make sure the track collection name reflects this. To run locally, copy a few ALCARECO files from EOS and put the files in the 'source' section like so:

process.source = cms.Source("PoolSource",
                            fileNames = cms.untracked.vstring(
                                "file1.root",
                                "file2.root",
                                "file3.root",
                                ...
                                "fileN.root"
                            )
)

Note: the script can be run over xrootd and will default to doing so for filenames beginning with '/store/'.
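The '/store/' behavior can be mimicked with a small helper (hypothetical code, not the actual script; the redirector name is an assumption):

```python
# Hypothetical helper mirroring the behavior described above: names beginning
# with '/store/' are logical file names read through xrootd via a redirector;
# anything else is treated as a plain local file for PoolSource.
def resolve_input(name, redirector="root://cms-xrd-global.cern.ch/"):
    if name.startswith("/store/"):
        return redirector + name   # xrootd access by logical file name
    return "file:" + name          # plain local file

print(resolve_input("/store/data/PARun2016C/example.root"))
print(resolve_input("local.root"))
```

This is only to illustrate why the two naming conventions behave differently; the cfg itself handles the prefixing.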

out-dated section, kept for legacy reasons

Some relevant lines from the cfg file:

L33: We’re running over express data, so we’re going to use the express GT

L73: This is the input track collection for the beam fitter. It’s ALCARECO data and doesn’t use generalTracks; the name in this case is ALCARECOTkAlMinBias

L83-84: This tells the fitter how many lumisections to use in a fit and when to reset the fit parameters. When you run over data, the conditions can change on a lumi basis, so we must set both values to 1. If you were to run over MC, where conditions never change, you would set these values to -1; this tells the fitter to calculate the beamspot as an average over all lumis and to never reset the fit.

L87: This is our path. Since we’re not doing anything fancy here, we only need to run process.d0_phi_analyzer.

end out-dated section, kept for legacy reasons

When new alignments become available, the beam fit results will change. We’ll have to override the global tag and then refit the tracks before we fit the beamspot. This is why there are commented out features in the path, they should be utilized later. To run the fitter:

cmsRun analyze_d0_phi_pPb_cfg.py

This will print some fit results as it runs; if there are only zeros for the fit values, the fit failed for that lumi. A good sign is when it says beamtype = 2. When the fitter is finished, it will spit out a ROOT file and a text payload. The text payload is the important one; it contains all fits for all lumis run over. This is a pain to read, though, and in this case we’re interested in the average over all runs. Luckily, the people in charge of this have written some nice python code that will take care of this for us.

Now move to the RecoVertex/BeamSpotProducer/python/BeamspotTools/test directory. Recall that text file that the fitter spit out? Put it in the StreamExpressPA_Run285244Test/files directory. In the future you’ll be putting a bunch of text files from crab jobs in there. We need to remove all the underscores in the name of the text file and give it a job number at the end. Run this command:

mv BeamFit_pPb502DATA.txt BeamFitpPb502DATA1.txt
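When many CRAB outputs accumulate, the rename can be scripted; a sketch (the touch line only fabricates a stand-in file so the snippet is self-contained, and the numbering scheme assumes the names described above):

```shell
# Illustrative batch rename: strip underscores from each BeamFit_*.txt and
# append a sequential job number, giving names like BeamFitpPb502DATA1.txt.
touch BeamFit_pPb502DATA.txt   # stand-in for a real fitter output file
i=1
for f in BeamFit_*.txt; do
  [ -e "$f" ] || continue      # skip if the glob matched nothing
  mv "$f" "$(basename "$f" .txt | tr -d '_')${i}.txt"
  i=$((i+1))
done
```

Check the resulting names match the pattern the merging code expects before running it.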

Now have a look in the json directory

/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/

Follow the directory names from there, copy the relevant JSON, and put it in your current directory. Note: it's possible that a golden JSON is not yet available.

Now have a look at the offline workflow file. Head back to the RecoVertex/BeamSpotProducer/python/BeamspotTools/test directory and open up test_workflow.py. The important lines:

L16: This is the name of the directory for each new job

L17: This number is set to the number of files in the “files” directory plus 1.

L19-20: First and last run of the data you ran over.

L28: This will look into the files directory and use every txt file in it to create a payload object. This is why we had to remove the underscores earlier. Just make sure the name in the string matches the names in the files directory. It will loop from 1 to jobrange (like looping over results from a crab job)

L51: This locates the JSON file in the json directory, make sure the names match

I believe that is all you’ll need to change. What this will do is loop over all the fit results and calculate several different things. It will first look at the drifts between lumisections; if they are small, then the payloads will be merged to cut down on the number of IOVs and save some disk space. This will write a txt file in the weightedResults directory for the job that contains these “harvested” IOVs. Look for something like *LumiIOV.txt.

Next it will calculate the average beamspot for every run used in the fit. Look in the weightedResults directory and there will be something like run*_AllRun.txt. Lastly, it will calculate the average over ALL runs in the job; look for aveOverAllRuns.txt.
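Conceptually, those per-run and all-run averages are inverse-variance weighted means of the individual fits. A minimal sketch of that idea (illustrative code and numbers, not the actual BeamspotTools implementation):

```python
# Inverse-variance weighted average: the standard way a set of per-lumi fit
# results (value +/- error) is combined into a single per-run beamspot value.
def weighted_average(values, errors):
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

# e.g. three per-lumi fits of the beamspot x position (cm); numbers made up
mean, err = weighted_average([0.0711, 0.0714, 0.0709], [0.0003, 0.0004, 0.0003])
print(mean, err)
```

Lumis with larger fit errors contribute less, which is why failed (all-zero) fits must be excluded before averaging.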

Now run the offline workflow. First we have to set up a crab2 environment; it’s a pain, but necessary for a python package that this uses. Run these commands:

source /afs/cern.ch/cms/ccs/wm/scripts/Crab/crab.sh
python test_workflow.py

Barring errors, when this is finished look in the weightedResults directory and see what popped out.

Derive Beamspot, XeXe Data 2017

cmsrel CMSSW_9_2_13
cd CMSSW_9_2_13/src
cmsenv
git cms-init
git cms-addpkg RecoVertex/BeamSpotProducer
scram b -j8

The cfg file of interest is located here:

RecoVertex/BeamSpotProducer/test/analyze_d0_phi_pPb_cfg.py

Make sure "SaveNtuple" is set to "True", "Apply3DFit" set to "True", and "fitEveryNLumi"/"resetEveryNLumi" are set to 1 if running over data. "TrackCollection" should match that of the PD of interest (ALCARECOTkAlMinBias). Set "MinimumInputTracks" to 10. No global tag nor specific conditions need to be specified right now; these instructions are to provide a first-pass beamspot measurement using the track and vertex collections directly available in the PD.
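The settings above might look like the following in the cfg (a hedged sketch: the parameter names are from the text, but the exact PSet paths are assumptions, so grep the file for each name):

```python
# Sketch only; locate each parameter in analyze_d0_phi_pPb_cfg.py
process.d0_phi_analyzer.BeamFitter.SaveNtuple         = cms.untracked.bool(True)
process.d0_phi_analyzer.BeamFitter.TrackCollection    = cms.untracked.InputTag("ALCARECOTkAlMinBias")
process.d0_phi_analyzer.BeamFitter.MinimumInputTracks = cms.untracked.int32(10)
process.d0_phi_analyzer.PVFitter.Apply3DFit           = cms.untracked.bool(True)
process.d0_phi_analyzer.BSAnalyzerParameters.fitEveryNLumi   = cms.untracked.int32(1)  # 1 for data, -1 for MC
process.d0_phi_analyzer.BSAnalyzerParameters.resetEveryNLumi = cms.untracked.int32(1)  # 1 for data, -1 for MC
```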

If the PD of interest is available at a site other than T0/T1, one should run the job on CRAB, as the track ntuple output can be large. If it's only available at T0/T1, the filelist can be obtained via DAS and the job run locally with cmsRun by editing the filelist in process.source:

process.source = cms.Source("PoolSource",
                            fileNames = cms.untracked.vstring(
                                "file1.root",
                                "file2.root",
                                "file3.root",
                                ...
                                "fileN.root"
                            )
)

Note: the script can be run over xrootd and will default to doing so for filenames beginning with '/store/'. In this case, running with SaveNtuple off may be prudent. If running on CRAB, copy the golden JSON to your local directory

cp /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/HI/Cert_304899-304907_5TeV_PromptReco_XeXe_Collisions17_JSON.txt .

and point your crab config to it with

config.Data.lumiMask = 'Cert_304899-304907_5TeV_PromptReco_XeXe_Collisions17_JSON.txt'
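A minimal CRAB3 config around that lumiMask line might look like the following sketch (the request name, dataset, units per job, and storage site are placeholders, not values from this workflow):

```python
from CRABClient.UserUtilities import config
config = config()

config.General.requestName = 'BeamFit_XeXe2017'              # placeholder
config.JobType.pluginName  = 'Analysis'
config.JobType.psetName    = 'analyze_d0_phi_pPb_cfg.py'
config.Data.inputDataset   = '/YourPD/XeXeRun2017-TkAlMinBias-PromptReco-v1/ALCARECO'  # placeholder
config.Data.splitting      = 'LumiBased'
config.Data.unitsPerJob    = 20                              # tune to job size
config.Data.lumiMask       = 'Cert_304899-304907_5TeV_PromptReco_XeXe_Collisions17_JSON.txt'
config.Site.storageSite    = 'T2_CH_CERN'                    # placeholder
```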

Running Alignments with MPII

Files

to edit, and double check for syntax errors

alignment_config.ini, universalConfigTemplate.py

Commands

often used commands/scripts

mps_alisetup.py, mps_stat.py, mps_fire.py, mps_fetch.py

step 0: set up workspace

cd /afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction
cd CMSSW_8_0_24/src
cmsenv
cd -
mps_setup_new_align.py -t data -d "${USER} 2016 PA alignment"
cd mp2302

The ‘2302’ at the end of mp2302 is a number assigned by the setup command; 2302 is what was given the first time these commands were used, and future workspaces will have a different number.

step 1: set up MPII job with alignment_config.ini

open the alignment config with your text editor of choice; e.g. emacs alignment_config.ini

1. On L29, point the datasetdir variable to the dataset directory,

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/datasetfiles/HI2016

2. After L29 create a json variable:

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/datasetfiles/HI2016/json/Cert_285090-286520_HI5and8TeV_PromptReco_Pbp_Collisions16_JSON_noL1T_MERGED.txt

3. On L32, set globaltag to the one used to reprocess the PD you wish to run over. Check DAS to make sure you have the correct one for each PD.

4. On L33, set FirstRunForStartGeometry to the first run in the era you’re running over, e.g.

PARun2016A = 284715
PARun2016B = 284756
PARun2016C = 285419
PARun2016D = 286510

5. Now go to the end of the file and comment out L85-104. Make a section for each of your 10 PDs following the same structure as the lines previously commented out. Each section needs the following:

[dataset:<trackType>]
collection = ALCARECO<track type in PD>
inputFileList = ${datasetdir}/<text file you made>
globaltag = <Global tag used to reprocess the data>
njobs = <number of jobs>

Some quick comments: please check that the globaltag field is set to what it should be, i.e. the one listed for the PD according to DAS. If the PD has a lot of files (~O(10^3)), set njobs to somewhere between 25 and 50; if it’s tiny (~O(10^1) or O(10^2)), just set njobs to 1. The text file you made contains the name of the track collection within.

The track collection to be used is one of the following:

ALCARECOTkAlMuonIsolatedPA
ALCARECOTkAlZMuMu
ALCARECOTkAlMinBias

The header needs to match the track collection, it will be one of the following:

dataset:IsoMu
dataset:ZMuMu
dataset:MinBias
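Putting this together, a filled-in section might read as follows (a worked example only: the input file name is illustrative, and the global tag must be taken from DAS for your PD):

```ini
[dataset:MinBias]
collection    = ALCARECOTkAlMinBias
; file name below is illustrative; use the text file you made
inputFileList = ${datasetdir}/PAMinimumBias10_PARun2016C.txt
globaltag     = 80X_dataRun2_Prompt_v15
njobs         = 25
```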

6. Comment out all but one of the sections you made. DANGER! Every section that is uncommented will be run by MPII!

step 2: set up alignment with universalConfigTemplate.py

open the universal config;

emacs universalConfigTemplate.py

1. Uncomment L99-102 and make sure the tabbing is correct. This command will override the global tag; we’ll be adding a few more of these at a later time, but they’re not needed right now.

2. Uncomment L113-126. This defines the level of alignment.

3. In L121-124, change the “111111” to “ffffff”, and likewise for the “rrrrrr”; this tells MPII to fix the positions of these large structures (i.e. don’t run a fit for these sections).

4. Uncomment L138-L151 and make sure the tabbing is correct. This sets the pede settings that we’ll be using later on.

step 3: running MPII

Okay, we’re all set to run an alignment! We’ll tell CMSSW to set up our configuration with the following command:

mps_alisetup.py alignment_config.ini

This will parse through the two files and set up the jobs. When complete it will have created a “jobData” directory and a “mps.db” database. Within the job directories will be subdirectories for each of the njobs you created as well as a jobm (merge step).

Now we need to submit our jobs. To do so, run the following command FROM THE mpXXXX directory:

mps_fire.py <njobs>

You can check to see whether a job has finished with the following command:

mps_stat.py

When jobs finish, retrieve them with the following command:

mps_fetch.py

When all jobs have finished, fire the merge step:

mps_fire.py -m

When the merge is finished, fetch it with

mps_fetch.py

Congratulations! You’ve just run your first alignment.

Now, let’s get some information from these jobs. We need two pieces of info. The first is the number of events that survived the procedure; you can get this from the mps_stat.py command.

Look at the event total at the bottom of the output. The second number we need is the number of tracks used. To get this, navigate to the jobData/jobm directory and do:

gunzip pede.dump.gz

This file contains loads of information about the merge job that you can look through at your leisure. Do a string search for NREC. The number after NREC is the number of tracks used in the pede step.
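The NREC line can also be pulled out directly without decompressing by hand; a sketch (the first two lines only fabricate a stand-in pede.dump.gz so the snippet is self-contained, and on a real workspace only the zcat line is needed):

```shell
# Fabricate a stand-in merge-job output for illustration
mkdir -p jobData/jobm
printf 'some pede output\n NREC = 123456\n' | gzip > jobData/jobm/pede.dump.gz

# Pull the NREC line (number of track records used by pede)
zcat jobData/jobm/pede.dump.gz | grep -m1 NREC   # prints " NREC = 123456"
```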

With these two numbers (#tracks, #events survived), navigate to my working directory (mp2298) and put them in the TracksUsedInAlignment.txt file.

cd $MPPROD/mp2298

emacs TracksUsedInAlignment.txt

The procedure can now be repeated for the remaining PDs in your list. An example setup that works is in James’ working directory. More info:

https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideMillepedeProductionSystem

Alignment Validation Workflows

Primary Vertex (PV) Validation

Important Directories

all following dirs listed here are in /afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/

/data/commonValidation/results/$USER/PVValidation_TEST
/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation/test
/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation/macros
/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation/macros/PPbRun2016C_BPIXMovements

Important Commands

validateAlignments.py -N PVVal_pPb_MB16C_Run<runnumber> -c PVVal_pPb_MB16C_<runnumber>.ini
getPVRootFiles_AND_makePlots_pPb.sh <runnumber>

Important File(s)

testPVValidation_all_ini_one.ini

Motivation

A primary vertex validation for every run in Run2016C is needed, meaning IOV boundaries must be defined. We want to use the primary vertex validation portion of the all-in-one tool to obtain, for example, the relative shift between the BPIX half barrels. Using these shifts, we can decide how many IOVs are needed.

Workflow

When determining the IOVs, we want to only consider the most important runs of 2016C; those in the golden JSON file. Using DAS in addition to the golden JSON, we can determine all of the runs in 2016C that we need to use. Recall that the golden JSON is placed here:

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/datasetfiles/HI2016/json/Cert_285090-286520_HI5and8TeV_PromptReco_Pbp_Collisions16_JSON_noL1T_MERGED.txt

The location of code for the primary vertex validation in the CMSSW release that we’re using for alignments:

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation

Navigate your way to the test directory and have a look at the example file testPVValidation_all_ini_one.ini; you can use it for reference (the analog of alignment_config.ini from the previous alignments). There is very little to change for each run; we want to use the same starting point for each job. This way, when we plot (for example) the half shell movement as a function of time (or run number), the only thing changing is the physical position of the tracker.

Now let’s look at sections of testPVValidation_all_ini_one.ini

first is the initial conditions of the detector:

[alignment:alignment_data]
title                   = Run 285975
globaltag                 = 80X_dataRun2_Prompt_v15
condition TrackerAlignmentRcd       = frontier://FrontierProd/CMS_CONDITIONS,TrackerAlignment_v19_offline
condition TrackerAlignmentErrorExtendedRcd = frontier://FrontierProd/CMS_CONDITIONS,TrackerAlignmentExtendedErrors_v6_offline_IOVs
condition TrackerSurfaceDeformationRcd  = frontier://FrontierProd/CMS_CONDITIONS,TrackerSurfaceDeformations_v9_offline
color                   = 1
style                   = 2

Here we have the global tag (which includes alignment conditions) and a specified set of starting alignment conditions, which will override those in the GT. If you look at the alignment conditions, they are formatted as a comma separated list. The first element is the connection string to the DB, and the second is the tag of the condition.

Next is a list of settings for the PV validation; don’t ever touch these.

[plots:primaryvertex]
doMaps     = true
stdResiduals  = true
autoLimits   = false
m_dxyPhiMax  = 40
m_dzPhiMax   = 40
m_dxyEtaMax  = 40
m_dzEtaMax   = 40
m_dxyPhiNormMax = 0.5
m_dzPhiNormMax = 0.5
m_dxyEtaNormMax = 0.5
m_dzEtaNormMax = 0.5
w_dxyPhiMax  = 150
w_dzPhiMax   = 150
w_dxyEtaMax  = 150
w_dzEtaMax   = 1000
w_dxyPhiNormMax = 1.8
w_dzPhiNormMax = 1.8
w_dxyEtaNormMax = 1.8
w_dzEtaNormMax = 1.8

The following outlines the data to run over.

[primaryvertex:test_pvvalidation]
maxevents    = 1000000
dataset     = /PAMinimumBias10/PARun2016C-TkAlMinBias-PromptReco-v1/ALCARECO
trackcollection = ALCARECOTkAlMinBias
vertexcollection = offlinePrimaryVertices
isda      = True
ismc      = False
numberOfBins  = 48
runboundary   = 285975
lumilist    = /afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/datasetfiles/HI2016/json/Cert_285090-286520_HI5and8TeV_PromptReco_Pbp_Collisions16_JSON_noL1T_MERGED.txt
ptCut      = 3.
runControl   = True

A quick rundown of each of the fields in the previous section:

isda:             Boolean, stands for "is data". If this is True, ismc must be False.
ismc:             Boolean, stands for "is MC". If this is True, isda must be False.
runControl:       Boolean, tells the workflow whether to run over a single run or a series of runs. If True, it will be a single run; if False, it will run over all runs in the dataset file.
runboundary:      Integer; if runControl is True, this is the run you will run over.
dataset:          String, name of the python dataset file located in the OfflineValidation/python/ directory (omit the .py extension).
maxevents:        Integer, number of events to use in the validation.
vertexcollection: String, name of the reconstructed vertices used in the dataset.
trackcollection:  String, name of the reconstructed tracks used in the dataset.
lumilist:         String, points to the golden JSON file.
ptCut:            Float, minimum pT of the probe tracks selected in this workflow.

The current setup begins from the starting conditions of the last IOV; typically this is the latest and greatest available pp calibrations. Pulling starting conditions from the PCL only considers the alignment of the pixel tracker, as opposed to the entire detector; this will be the case for a full-scale alignment as well. For reference, the link to these conditions from the TkAlignment page is shown below. The important thing to check is that the last IOV in these tags is before the pPb run; this tells us that we will be picking up exactly the conditions we want.

https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideAlignmentConstants?redirectedfrom=CMS.SWGuideAlignmentConstants#2016_Legacy

Now make a copy of testPVValidation_all_ini_one.ini and name it according to the run you’re working on. Change only the runboundary line for each job. To submit the job to LSF (the batch system on lxplus), run the following command after cmsenv and voms-proxy-init:

validateAlignments.py -N PVVal_pPb_Run<runnumber> -c <your-config.ini>

Scary-looking output like the following is generated, but it is harmless:

bash: module: line 1: syntax error: unexpected end of file
bash: error importing function definition for `BASH_FUNC_module'
sh: module: line 1: syntax error: unexpected end of file
sh: error importing function definition for `BASH_FUNC_module'
[the sh lines repeat several times]
Validating TkAlPrimaryVertexValidation.test_pvvalidation.alignment_data

This error output has been around since James started; no one’s fixed it, and it’s harmless to the workflow. Apparently, people now know and love it, and get worried when it doesn’t show up! To check the status of the submitted batch jobs, do bjobs. The submitted batch jobs can take about an hour (or hours, depending on LSF traffic and job size). Feel free to take a break and come back later to check on the status of the jobs.

When the job’s finished, make your way over to the OfflineValidation/macros directory. There is a simple script that will run the final step for you: getPVRootFiles_AND_makePlots_pPb.sh. Open it up and have a look; it’s just a bash script. If edits can be made to make your life easier (for whatever reason), make a copy and use that (e.g. getPVRootFiles_AND_makePlots_pPb_ianEdit.sh is my version, meant to be used with runAll.sh). The output of the batch job will be stored here:

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/data/commonValidation/results/$USER/PVValidation_TEST

This script takes one argument, the run number; it will copy the corresponding output to your current working directory (OfflineValidation/macros) and run the FitPVResiduals.C macro. The macro will generate many plots, a ROOT file, and a text file, which are moved into the PPbRun2016C_BPIXMovements directory. I have a file placed there for Run 285975; try running over the same run and see if you get the same results.

That’s it! The procedure can be repeated for all the runs of interest. Plot major results as a function of run number to consider where to put IOV boundaries.
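As a sketch of that last step (illustrative code and numbers, not part of the official tooling), one simple way to turn per-run shifts into candidate IOV boundaries is to start a new IOV whenever the shift drifts beyond a tolerance:

```python
def split_iovs(runs, shifts, tol=5.0):
    """Group consecutive runs into IOVs; open a new IOV when the shift
    moves more than `tol` (same units as shifts) from the IOV's reference."""
    iovs = []
    start, ref = runs[0], shifts[0]
    prev = runs[0]
    for run, s in zip(runs[1:], shifts[1:]):
        if abs(s - ref) > tol:
            iovs.append((start, prev))  # close the current IOV
            start, ref = run, s         # open a new one at this run
        prev = run
    iovs.append((start, prev))
    return iovs

# Illustrative numbers only (not real validation output), shifts in um:
runs   = [285975, 285993, 285994, 286009, 286031]
shifts = [17.2, 18.8, 19.2, 18.1, 2.5]
print(split_iovs(runs, shifts))  # the jump at 286031 opens a second IOV
```

In practice the boundaries should also respect the fit errors and any known detector interventions, so treat the output as a first guess to inspect by eye.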

Using MPII Derived Conditions (WIP)

under construction

I run getPVRootFiles_AND_makePlots_pPb.sh without deletion of plots/ROOT files, producing plots from the output of validateAlignments.py (which validates/looks at the PV fits from mps_fire.py, which fits for large structure shifts/errors). In

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation/macros

we see the output from validateAlignments for each run. It’s split into separate folders with the corresponding run numbers.

In the directory

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation/test

I ran the above validateAlignments.py job, which used PVVal_pPb_MB16C_285993_JRC.ini (using the alignment_data and alignment_MP starting conditions); it puts its output logs (.stdout, .stderr) in the directory below:

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation/test/PVVal_pPb_MB16C_Run285993

The actual output from the job gets moved to:

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/data/commonValidation/results/ilaflott/PVValidation_TEST/PVVal_pPb_MB16C_Run285993/PrimaryVertexValidation/test_pvvalidation

Now there are two folders of output, alignment_MP and alignment_data, one for each set of starting conditions. I can now run getPVRootFiles_AND_makePlots_pPb_ianEdit.sh on each (and I have), and see how the shifts change with the starting conditions:

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation/macros/pPb_BPIXMvmnt_MB16C_Run285993_MP

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/CMSSW_8_0_24/src/Alignment/OfflineValidation/macros/pPb_BPIXMvmnt_MB16C_Run285993_data

———————

There are issues with the MP starting conditions (fits of poor quality, etc.), likely due to limited statistics. The MP alignment is being redone in a new work area:

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/mp2486

Distribution of Median Residuals (DMR) Validation (WIP)

under construction

Z Boson Mass Validation (WIP)

under construction

Cheat Sheet

Setup Alignment Workspace

cd /afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction
cd CMSSW_8_0_24/src
cmsenv
cd -
mps_setup_new_align.py -t data -d "${USER} 2016 PA alignment"
cd mp2302

The ‘2302’ at the end of mp2302 is a number assigned by the setup command; 2302 is what was given the first time these commands were used, and future workspaces will have a different number.

Current BPIX Longitudinal Shifts

MPII Alignment Conditions as of 6/20/2017

MB16C runs, Pixel Barrel Longitudinal shift+err in micrometers (delta_z + err) (~400k evt each), using muonic Z decays

RUN# , delta_z , err(delta_z)
285975 , -2.35683 , 0.171269
285993 , -0.848118 , 0.171833
285994 , 0.193193 , 0.749995
285995 , -0.419457 , 0.331266
286009 , 1.36084 , 0.167302
286010 , -0.951415 , 0.175787
286023 , -1.28577 , 0.169183
286031 , -3.04083 , 0.166624
286033 , -2.6205 , 0.170877
286034 , -2.10406 , 0.173596

note: all shifts are basically negative now; likely due to limited track statistics available in ZMuMu datasets

Prev. Run's Alignment Conditions as of 3/21/2017

MB16C runs, Pixel Barrel Longitudinal shift+err in micrometers (delta_z + err) (~400k evt each)

RUN# , delta_z , err(delta_z)
285975 , 17.1831 , 0.174338
285993 , 18.7716 , 0.175173
285994 , 19.2014 , 0.748329
285995 , 19.0057 , 0.332514
286009 , 18.1493 , 0.170881
286010 , 18.4135 , 0.178411
286023 , 18.1894 , 0.172626
286031 , 16.5078 , 0.170346
286033 , 16.8900 , 0.173879
286034 , 17.4108 , 0.176599

[weights used]
wght_doubleMuIso=0.002
wght_singleMuIso=0.002
wght_MB10=0.002
wght_doubleMuZ=1.0
wght_cosmics=1.0

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/mp2298/TracksUsedInAlignment.txt

PARun2016 starting run geometries

PARun2016A = 284715
PARun2016B = 284756
PARun2016C = 285419
PARun2016D = 286510

PARun2016 ALCARECO PDs

bolded signifies significant # events/tracks

EraA

/PADoubleMuon/PARun2016A-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/PADoubleMuon/PARun2016A-TkAlZMuMuPA-PromptReco-v1/ALCARECO
/PAMinimumBias1/PARun2016A-TkAlMinBias-PromptReco-v1/ALCARECO
/PAMinimumBias2/PARun2016A-TkAlMinBias-PromptReco-v1/ALCARECO
/PASingleMuon/PARun2016A-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/Cosmics/PARun2016A-TkAlCosmics0T-PromptReco-v1/ALCARECO

EraB

/PADoubleMuon/PARun2016B-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/PADoubleMuon/PARun2016B-TkAlZMuMuPA-PromptReco-v1/ALCARECO
/PAMinimumBias1/PARun2016B-TkAlMinBias-PromptReco-v1/ALCARECO
/PASingleMuon/PARun2016B-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/Cosmics/PARun2016B-TkAlCosmics0T-PromptReco-v1/ALCARECO

EraC

/PADoubleMuon/PARun2016C-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/PADoubleMuon/PARun2016C-TkAlZMuMuPA-PromptReco-v1/ALCARECO
/PAMinimumBias10/PARun2016C-TkAlMinBias-PromptReco-v1/ALCARECO
/PASingleMuon/PARun2016C-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/Cosmics/PARun2016C-TkAlCosmics0T-PromptReco-v1/ALCARECO

odd one out; not using for now:

/PADoubleMuon/PARun2016C-TkAlUpsilonMuMuPA-PromptReco-v1/ALCARECO

EraD

/PADoubleMuon/PARun2016D-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/PADoubleMuon/PARun2016D-TkAlZMuMuPA-PromptReco-v1/ALCARECO
/PAMinimumBias10/PARun2016D-TkAlMinBias-PromptReco-v1/ALCARECO
/PASingleMuon/PARun2016D-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/Cosmics/PARun2016D-TkAlCosmics0T-PromptReco-v1/ALCARECO

Filelists

/afs/cern.ch/cms/CAF/CMSALCA/ALCA_TRACKERALIGN/MP/MPproduction/datasetfiles/HI2016

Look at the HI2015 directory. Use das_client.py to make life easier. There should be one text file for each PD. MPII reads these text files to know which data to run over. Below are the golden-JSON run-range intersections of important PDs:

range: 285975 - 286034

/PAMinimumBias10/PARun2016C-TkAlMinBias-PromptReco-v1/ALCARECO

range: EMPTY

/Cosmics/PARun2016C-TkAlCosmics0T-PromptReco-v1/ALCARECO

range: 285479 - 285750 AND 286327 - 286034

/PADoubleMuon/PARun2016C-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO

range: 285479 - 286200

/PASingleMuon/PARun2016C-TkAlMuonIsolatedPA-PromptReco-v1/ALCARECO
/PADoubleMuon/PARun2016C-TkAlZMuMuPA-PromptReco-v1/ALCARECO
/PADoubleMuon/PARun2016C-TkAlUpsilonMuMuPA-PromptReco-v1/ALCARECO

Good Reading

Beamspot Fitting Methods

Important note: this paper is somewhat old, but section 5.1 describes the log-likelihood fitter used today.

Broad Overview of DQM at CMS

CMS Tracker Alignment with Cosmic Ray and LHC data

Important note: track selection at link below might be old by now (Nov. 2017).

Track Selection for Re-Fitting

PhD Thesis with Helpful Alignment Overview

Topic revision: r11 - 2017-11-24 - IanLaflotte
 