Difference: PIDSamplePrepare (1 vs. 23)

Revision 23 2016-11-08 - DonalHill

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 29 to 29
  makePIDCalibNtuples_Run2.py is for Run2 where the input is various trigger lines and this matching needs to be done too.
Changed:
<
<
makePIDCalibNtuples.ganga.py simply runs the jobs. Ganga v601r18 definitely works with this script, but subsequent versions appear not to work.
>
>
makePIDCalibNtuples.ganga.py simply runs the jobs.
  In dev/makePIDCalibNtuples.ganga.py, add %CODE{ lang="bash" num="off" }%

Revision 22 2016-11-07 - DonalHill

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 29 to 29
  makePIDCalibNtuples_Run2.py is for Run2 where the input is various trigger lines and this matching needs to be done too.
Changed:
<
<
makePIDCalibNtuples.ganga.py simply runs the jobs. Ganga v601r14 definitely works with this script, but v602 (with cmake functionality) appears not to work.
>
>
makePIDCalibNtuples.ganga.py simply runs the jobs. Ganga v601r18 definitely works with this script, but subsequent versions appear not to work.
  In dev/makePIDCalibNtuples.ganga.py, add %CODE{ lang="bash" num="off" }%

Revision 21 2016-11-07 - DonalHill

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 27 to 29
  makePIDCalibNtuples_Run2.py is for Run2 where the input is various trigger lines and this matching needs to be done too.
Changed:
<
<
makePIDCalibNtuples.ganga.py simply runs the jobs.
>
>
makePIDCalibNtuples.ganga.py simply runs the jobs. Ganga v601r14 definitely works with this script, but v602 (with cmake functionality) appears not to work.
  In dev/makePIDCalibNtuples.ganga.py, add %CODE{ lang="bash" num="off" }%
Line: 66 to 68
 After all jobs have finished, you may need to download them to a local directory. %CODE{ lang="bash" num="off" }% In [6]:for js in j.subjobs:
Changed:
<
<
...
if js.status == 'completed':
...
js.backend.getOutputData()
>
>
...
if js.status == 'completed':
...
js.backend.getOutputData()
 %ENDCODE%

CalibDataScripts: Produce tuples for each particle

Line: 75 to 79
  Hence this means that the workflow goes like this:

Changed:
<
<
Ntuples finish making --> Run ranges are defined -- > Data split into those specific run ranges -- > any additional selection applied -- >mass fit performed in each run range for each charge -- > data is sWeighted -->
spectator variables are added to the data set --> both charge datasets are merged . The same set of steps is repeated for each decay channel. For the protons, since there is more than one momentum range the fit is done separately in each range and then merged. For D*->D(Kpi)pi the data the definition of the run range and the data splitting is a common step for both K and pi, however the mass fits and are done twice (even though it is the same fit).
>
>
Ntuples finish making --> Run ranges are defined --> Data split into those specific run ranges --> any additional selection applied --> mass fit performed in each run range for each charge --> data is sWeighted --> spectator variables are added to the data set --> both charge datasets are merged. The same set of steps is repeated for each decay channel. For the protons, since there is more than one momentum range, the fit is done separately in each range and then merged. For D*->D(Kpi)pi, the definition of the run range and the data splitting is a common step for both K and pi; however, the mass fits are done twice (even though it is the same fit).
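The chain above can be sketched as a small driver loop. This is illustrative only; every function and field name here is a hypothetical placeholder, not part of CalibDataScripts:

```python
# Hypothetical sketch of the per-channel workflow above; all names are
# placeholders, not the real CalibDataScripts interfaces.

def define_run_ranges(candidates):
    # Stage 1: run ranges are defined (hard-coded here for illustration).
    return [(1, 100), (101, 200)]

def process_channel(candidates, charges=("pos", "neg")):
    """Split by run range, treat each charge separately, then merge."""
    merged = []
    for first_run, last_run in define_run_ranges(candidates):
        per_charge = []
        for charge in charges:
            subset = [c for c in candidates
                      if first_run <= c["run"] <= last_run
                      and c["charge"] == charge]
            # ... selection, mass fit, sWeighting and spectator variables
            # would be applied to `subset` here ...
            per_charge.append(subset)
        merged.append(per_charge[0] + per_charge[1])  # merge both charges
    return merged

candidates = [{"run": 5, "charge": "pos"}, {"run": 150, "charge": "neg"}]
out = process_channel(candidates)
```

For protons the outer loop would run once per momentum range; for D*->D(Kpi)pi the run-range definition would be shared between K and pi.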
  First, get the package: %CODE{ lang="bash" num="off" }%
Line: 181 to 186
 A simple macro to change pi to Pi: %CODE{ lang="bash" num="off" }% void ChangeName(){
Changed:
<
<
for(int i=0;i<36;i++){ std::cout<<"mv DSt_pi_MagDown_Strip21r1_"<<i<<".root DSt_Pi_MagDown_Strip21r1_"<<i<<".root"<<std::endl;
>
>
for(int i=0;i<36;i++){
    std::cout<<"mv DSt_pi_MagDown_Strip21r1_"<<i<<".root DSt_Pi_MagDown_Strip21r1_"<<i<<".root"<<std::endl;
  } } %ENDCODE%
Line: 234 to 237
 Another example: scripts/python/PubPlots/MakeIDvsMisIDCurveRunRange.py
Changed:
<
<
python MakeIDvsMisIDCurveRunRange.py “26” “MagUp” “K” “Pi” “DLLK>” -15 -13 -11 -9 -7 -5 -3 -1 1 3 5 7 9 11 13 15
>
>
python MakeIDvsMisIDCurveRunRange.py "26" "MagUp" "K" "Pi" "DLLK>" -15 -13 -11 -9 -7 -5 -3 -1 1 3 5 7 9 11 13 15
  (When running it, you will probably encounter a memory-leak problem; my solution is

Revision 20 2016-09-28 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 236 to 236
  python MakeIDvsMisIDCurveRunRange.py "26" "MagUp" "K" "Pi" "DLLK>" -15 -13 -11 -9 -7 -5 -3 -1 1 3 5 7 9 11 13 15
Added:
>
>
(When running it, you will probably encounter a memory-leak problem; my solution is

Urania_v5r0/PIDCalib/PIDPerfTools/PIDPerfTools/PerfCalculator.h: virtual ~PerfCalculator() { delete m_Data; }

You may also change other parts correspondingly (remove Dataset.Delete() etc).

I did not commit it to svn to avoid causing problems in other parts; it is up to you to decide how to proceed.)

 then in scripts/python/PubPlots/NicePlots/MakeGraphNice.C

Revision 19 2016-09-21 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 231 to 231
 If you run python PlotCalibrationEfficiency.py, it should be obvious what to type to make the raw curves
Added:
>
>
Another example: scripts/python/PubPlots/MakeIDvsMisIDCurveRunRange.py

python MakeIDvsMisIDCurveRunRange.py "26" "MagUp" "K" "Pi" "DLLK>" -15 -13 -11 -9 -7 -5 -3 -1 1 3 5 7 9 11 13 15

then in scripts/python/PubPlots/NicePlots/MakeGraphNice.C

 

Revision 18 2016-09-19 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 216 to 216
  are useful.
Added:
>
>
For example:

First, you could run Plots/PlotCalibEfficiency.py

Then PubPlots/NicePlots/MakeHistNice.C

You have to edit MakeHistNice to point to the correct root files and histograms that were made in step one.

MakeHistNice essentially calls one function (PlotFourTH1FwErrors)

and then adds a bunch of formatting. You just have to choose which four histograms you want drawn, what the labels should read, etc.

If you run python PlotCalibrationEfficiency.py, it should be obvious what to type to make the raw curves

 

Commit and release

The last step is to commit to LHCb software and make a release:

Revision 17 2016-08-26 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Revision 16 2016-06-14 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 222 to 222
 In practice you only need to tag PIDPerfScripts.

First make sure everything is committed (changes in Definitions.py and that the new pkl files are committed).

Added:
>
>
 Then in cmt/requirements change the locations back to the grid locations and change the version number to the new one.
Added:
>
>
 In doc/release.notes, make the version notes (open the file and you will see what I mean) and write in any changes you made.

Added:
>
>
Also in CMakeLists.txt, change the version to your tagged version.

 then from PIDPerfScripts

 svn commit -m "Making Version 10rX" cmt/requirements doc/release.notes

Revision 15 2016-06-03 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 178 to 178
 The location of accessed files can be changed in PIDCalib/PIDPerfScripts/cmt (remember to change back)

Upload files

Added:
>
>
A simple macro to change pi to Pi:
<!-- SyntaxHighlightingPlugin -->
void ChangeName(){
for(int i=0;i<36;i++){
    std::cout<<"mv DSt_pi_MagDown_Strip21r1_"<<i<<".root DSt_Pi_MagDown_Strip21r1_"<<i<<".root"<<std::endl;
  }
}
<!-- end SyntaxHighlightingPlugin -->
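The same renaming can also be sketched in Python. This is a hypothetical helper, assuming the files sit in the current directory; the default dry run only builds the mv commands, matching what the macro above prints:

```python
import os

# Rename DSt_pi_* files to DSt_Pi_* (the capitalisation expected later).
# With dry_run=True nothing is touched; the mv commands are just returned.
def rename_pi_files(indices, dry_run=True):
    commands = []
    for i in indices:
        src = "DSt_pi_MagDown_Strip21r1_{0}.root".format(i)
        dst = "DSt_Pi_MagDown_Strip21r1_{0}.root".format(i)
        commands.append("mv {0} {1}".format(src, dst))
        if not dry_run and os.path.exists(src):
            os.rename(src, dst)
    return commands

cmds = rename_pi_files(range(36))
```

Adjust the index range and the stripping tag to your own production before running with dry_run=False.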
 The file used is $CALIBDATASCRIPTSROOT/scripts/python/uploadData.py

 First copy these files into /data/lhcb/users/CalibData/ (Oxford cluster).

Revision 14 2016-06-03 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 175 to 175
  2. Copy pklfiles into PIDCalib/PIDPerfScripts/pklfiles
Changed:
<
<
The location of accessed files can be changed in PIDCalib/PIDPerfScripts/cmt
>
>
The location of accessed files can be changed in PIDCalib/PIDPerfScripts/cmt (remember to change back)
 

Upload files

The file used is $CALIBDATASCRIPTSROOT/scripts/python/uploadData.py

Revision 13 2016-06-03 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 162 to 162
 In CalibDataScripts/jobs/Stripping5TeV there are Dst, Lam0 and Jpsi directories, so run the corresponding code in each to get the calibration samples.
Added:
>
>
The package PhysFit/RooPhysFitter should also be obtained to benefit from the new functions inside.
 The produced file is in the location set in configureGangaJobs.sh and it looks like: CalibData_2015/MagDown/K, pi, Mu, p etc.

Revision 12 2016-05-30 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 94 to 94
 The first step is to choose the correct src directory to compile. This is just done by changing the src directory to the corresponding one (except that of SetSpectatorVars.cpp). Then the usual %CODE{ lang="bash" num="off" }%
Changed:
<
<
cmt br cat make
>
>
cmt br cmt make
 %ENDCODE%

Then go to jobs/Stripping23

Revision 11 2016-05-26 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 11 to 11
 Please follow the instructions on the PIDCalib Package webpage. Besides the above, you also need to %CODE{ lang="bash" num="off" }%
Changed:
<
<
getpack PIDCalib/CalibDataSel head
>
>
getpack PIDCalib/CalibDataSel head (do this inside a DaVinci environment, the same version as used by makePIDCalibNtuples.ganga.py)
 %ENDCODE%

CalibDataSel Package: Produce tuples from DST

Revision 10 2016-03-23 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 173 to 173
  2. Copy pklfiles into PIDCalib/PIDPerfScripts/pklfiles
Added:
>
>
The location of accessed files can be changed in PIDCalib/PIDPerfScripts/cmt
 

Upload files

The file used is $CALIBDATASCRIPTSROOT/scripts/python/uploadData.py
Line: 204 to 206
 are useful.

Commit and release

Changed:
<
<
The last step is to commit to LHCb software and make a release.
>
>
The last step is to commit to LHCb software and make a release:

In practice you only need to tag PIDPerfScripts.

First make sure everything is committed (changes in Definitions.py and that the new pkl files are committed). Then in cmt/requirements change the locations back to the grid locations and change the version number to the new one. In doc/release.notes, make the version notes (open the file and you will see what I mean) and write in any changes you made.

then from PIDPerfScripts

svn commit -m "Making Version 10rX" cmt/requirements doc/release.notes then

svn commit -m v10rX (in PIDCalib/PIDPerfScripts directory)

tag_package PIDCalib/PIDPerfScripts (in Urania directory)

then check that the tag has indeed been made with something like getpack PIDCalib/PIDPerfScripts --list-versions, and you should see your new version available.

  -- WenbinQian - 2016-03-17

Revision 9 2016-03-23 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 27 to 27
  makePIDCalibNtuples_Run2.py is for Run2 where the input is various trigger lines and this matching needs to be done too.
Changed:
<
<
makePIDCalibNtuples.ganga.py simply runs the jobs. (I think you will need to update the DV version listed).
>
>
makePIDCalibNtuples.ganga.py simply runs the jobs.
  In dev/makePIDCalibNtuples.ganga.py, add %CODE{ lang="bash" num="off" }%
Line: 63 to 63
 In [6]:PIDCalib.S5TeVdn.submit() %ENDCODE%
Changed:
<
<
After all jobs finished, you may need to download them to local directory (not sure if needed)
>
>
After all jobs have finished, you may need to download them to a local directory.
 %CODE{ lang="bash" num="off" }% In [6]:for js in j.subjobs:
...
if js.status == 'completed':
Line: 139 to 139
 The ganga version that works is v600r44, please check if there are other versions.
Added:
>
>
Don't forget to do it for all the files, like mu, MagUp etc.
 after that, go to ../ChopTrees and do %CODE{ lang="bash" num="off" }% ganga ganga_gp_chopTrees_Dst_MagDown.py
Line: 173 to 175
 

Upload files

The file used is $CALIBDATASCRIPTSROOT/scripts/python/uploadData.py
Deleted:
<
<
The following things are needed to be changed: prefix,
 
Changed:
<
<
test
>
>
First copy these files into /data/lhcb/users/CalibData/ (Oxford cluster).

After that, you do:

<!-- SyntaxHighlightingPlugin -->
SetupProject LHCbDirac
lhcb-proxy-init -g lhcb_calib
python uploadData.py 15a 5TeV
<!-- end SyntaxHighlightingPlugin -->

For lhcb-proxy-init -g lhcb_calib, you need to ask Joel for the lhcb_calib permission.

Test plots

Plots to compare with can be found in RICH performance paper: http://arxiv.org/pdf/1211.6759v2.pdf

To test the produced files: scripts in

/home/qian/cmtuser/Urania_v4r0/PIDCalib/PIDPerfScripts/scripts/python/Plots

and

/home/qian/cmtuser/Urania_v4r0/PIDCalib/PIDPerfScripts/scripts/python/PubPlots

are useful.

 
Changed:
<
<
change to user scripts
>
>

Commit and release

The last step is to commit to LHCb software and make a release.
  -- WenbinQian - 2016-03-17

Revision 8 2016-03-23 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 167 to 167
 

User scripts

The corresponding PID production should be added into user scripts so that they can be recognized.

Added:
>
>
1. Change python/PIDPerfScripts/Definitions.py (probably also in the install area)
 
Added:
>
>
2. Copy pklfiles into PIDCalib/PIDPerfScripts/pklfiles
 

Upload files

The file used is $CALIBDATASCRIPTSROOT/scripts/python/uploadData.py

Revision 7 2016-03-21 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 164 to 164
 and it looks like: CalibData_2015/MagDown/K, pi, Mu, p etc.
Added:
>
>

User scripts

The corresponding PID production should be added into user scripts so that they can be recognized.

 

Upload files

 The file used is $CALIBDATASCRIPTSROOT/scripts/python/uploadData.py. The following things need to be changed:

Revision 6 2016-03-21 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 169 to 169
 The following things need to be changed: prefix,
Added:
>
>
test
 
Added:
>
>
change to user scripts
  -- WenbinQian - 2016-03-17

Revision 5 2016-03-21 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 78 to 78
 
Ntuples finish making --> Run ranges are defined --> Data split into those specific run ranges --> any additional selection applied --> mass fit performed in each run range for each charge --> data is sWeighted -->
spectator variables are added to the data set --> both charge datasets are merged. The same set of steps is repeated for each decay channel. For the protons, since there is more than one momentum range, the fit is done separately in each range and then merged. For D*->D(Kpi)pi, the definition of the run range and the data splitting is a common step for both K and pi; however, the mass fits are done twice (even though it is the same fit).
Added:
>
>
First, get the package:
 
<!-- SyntaxHighlightingPlugin -->
getpack PIDCalib/CalibDataScripts head
<!-- end SyntaxHighlightingPlugin -->

Inside, there are 3 src directories

Changed:
<
<
Src – for S20/S0r1 data
>
>
Src for S20/S20r1 data
 Src_S21 for S21; Src_Run2 for S22/23

Line: 92 to 93
 In cmt/requirements, the first step is to choose the correct src directory to compile. This is just done by changing the src directory to the corresponding one (except that of SetSpectatorVars.cpp). Then the usual
Changed:
<
<
Cmt br cat make
>
>
<!-- SyntaxHighlightingPlugin -->
cmt br cmt make
<!-- end SyntaxHighlightingPlugin -->
  Then go to jobs/Stripping23 and modify configureGangaJobs.sh
Line: 130 to 133
 
<!-- SyntaxHighlightingPlugin -->
ganga ganga_gp_getRunRanges_Dst_MagDown.py 
<!-- end SyntaxHighlightingPlugin -->
Changed:
<
<
for example, please change Stripping version accordingly This is just a one-subjob ganga job The ganga version works is v600r44, Sneha said she also have other versions.
>
>
Please change Stripping version accordingly.

This is just a one-subjob ganga job.

The ganga version that works is v600r44, please check if there are other versions.

  after that, go to ../ChopTrees and do
<!-- SyntaxHighlightingPlugin -->
ganga ganga_gp_chopTrees_Dst_MagDown.py 
<!-- end SyntaxHighlightingPlugin -->
Changed:
<
<
for example, please also change stripping version if needed
>
>
Please also change stripping version if needed.
 This will create a list of jobs on the batch system, which can be viewed with qstat

All this script does is call

Line: 158 to 164
 and it looks like: CalibData_2015/MagDown/K, pi, Mu, p etc.
Added:
>
>

Upload files

The file used is $CALIBDATASCRIPTSROOT/scripts/python/uploadData.py. The following things need to be changed: prefix,
 

-- WenbinQian - 2016-03-17

Revision 4 2016-03-21 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 90 to 90
 The reason for the different directories is changes in the ntuple format/naming conventions and in the stripping cuts, which changed the selection cuts subsequently applied. The variables stored in the calibration datasets have also changed over time; e.g. for Run 2 we save online and offline variables.

In cmt/requirements

Changed:
<
<
The first step is to choose the correct src directory to compile. This is just done by changing src directory to your corresponding one.
>
>
The first step is to choose the correct src directory to compile. This is just done by changing the src directory to the corresponding one (except that of SetSpectatorVars.cpp).
 Then the usual cmt br cmt make
Line: 126 to 126
 You shouldn’t need to change anything; it will look inside your .gangarc file to find your gangadir location etc. The output of these jobs gets sent to the jobs/Stripping23/ChopTrees directory as a .pkl file. This file contains the run number ranges that the called script defines. The file that is actually run by the ganga job is $CALIBDATASCRIPTSROOT/scripts/sh/GetRunRanges.sh, which in turn calls $CALIBDATASCRIPTSROOT/scripts/python/getRunRanges.py for Dst and Jpsi. All this script does is look at your tuples, see how the candidates are distributed by run number, and then split them into a number of ranges such that each range contains about a million candidates while avoiding the last dataset having too few.
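The splitting logic just described can be sketched as follows. This is only an illustration; the real implementation is $CALIBDATASCRIPTSROOT/scripts/python/getRunRanges.py, and the exact target size and the rule for folding a small final range are assumptions here:

```python
# Sketch of the run-range definition: accumulate candidates run by run
# until roughly `target` have been collected, then start a new range;
# a too-small leftover range is folded into the previous one.

def split_into_run_ranges(counts_by_run, target=1000000):
    runs = sorted(counts_by_run)
    ranges, start, total = [], None, 0
    for run in runs:
        if start is None:
            start = run
        total += counts_by_run[run]
        if total >= target:
            ranges.append((start, run))
            start, total = None, 0
    if start is not None:                  # leftover runs at the end
        if ranges and total < target / 2:  # avoid a tiny last dataset
            ranges[-1] = (ranges[-1][0], runs[-1])
        else:
            ranges.append((start, runs[-1]))
    return ranges

counts = {100: 600000, 101: 600000, 102: 600000, 103: 100000}
ranges = split_into_run_ranges(counts)
```

The returned (first_run, last_run) pairs play the role of the run number ranges stored in the .pkl file.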
Added:
>
>
do
<!-- SyntaxHighlightingPlugin -->
ganga ganga_gp_getRunRanges_Dst_MagDown.py 
<!-- end SyntaxHighlightingPlugin -->
Please change the Stripping version accordingly. This is just a one-subjob ganga job. The ganga version that works is v600r44; Sneha said she also has other versions.

after that, go to ../ChopTrees and do

<!-- SyntaxHighlightingPlugin -->
ganga ganga_gp_chopTrees_Dst_MagDown.py 
<!-- end SyntaxHighlightingPlugin -->
Please also change the stripping version if needed. This will create a list of jobs on the batch system, which can be viewed with qstat.

All this script does is call $CALIBDATASCRIPTSROOT/scripts/sh/ChopTrees.sh, which in turn calls $CALIBDATASCRIPTSROOT/scripts/python/ChopTrees.py. All this does is look at the .pkl file in ChopTrees from the previous step. It then goes into your gangadir/jobdir and loops over all the subjobs. In each subjob it creates a different file for each run range. So, for example, before you do this the only root file in the directory would be PIDCalib.root. Once you’ve finished this stage it will look more like: PID_0_dst_k_and_pi.root etc.

You’ll notice that most of the files are empty, since those ganga subjobs didn’t contain any runs that fall into run range x, etc. That’s not a problem, but it is why you need to run the job at Oxford, where there is a lot of disk space.
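That chopping step can be sketched like this. The real code is $CALIBDATASCRIPTSROOT/scripts/python/ChopTrees.py and works on ROOT trees; here plain dicts stand in for tree entries, and the index-to-filename mapping is only indicative:

```python
# Split one subjob's candidates into one output per run range, keeping
# empty outputs for ranges the subjob does not touch (which is why most
# of the resulting files are empty, as noted above).

def chop_subjob(candidates, run_ranges):
    out = {i: [] for i in range(len(run_ranges))}
    for cand in candidates:
        for i, (lo, hi) in enumerate(run_ranges):
            if lo <= cand["run"] <= hi:
                out[i].append(cand)
                break
    return out  # e.g. index 0 -> contents of PID_0_dst_k_and_pi.root

ranges = [(100, 101), (102, 103)]
subjob = [{"run": 100}, {"run": 103}, {"run": 103}]
chopped = chop_subjob(subjob, ranges)
```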

Fit, sWeight and final tuples

In CalibDataScripts/jobs/Stripping5TeV there are Dst, Lam0 and Jpsi directories, so run the corresponding code in each to get the calibration samples.

The produced file is in the location set in configureGangaJobs.sh and it looks like: CalibData_2015/MagDown/K, pi, Mu, p etc.

  -- WenbinQian - 2016-03-17

Revision 3 2016-03-21 - WenbinQian

Line: 1 to 1
 
META TOPICPARENT name="RichSoftwareCalib"

Instructions on Preparing PIDCalib Samples

Line: 63 to 63
 In [6]:PIDCalib.S5TeVdn.submit() %ENDCODE%
Added:
>
>
After all jobs finished, you may need to download them to local directory (not sure if needed)
<!-- SyntaxHighlightingPlugin -->
In [6]:for js in j.subjobs:
   ...:     if js.status == 'completed':
   ...:         js.backend.getOutputData()
<!-- end SyntaxHighlightingPlugin -->
 

CalibDataScripts: Produce tuples for each particle

The RICH performance changes as a function of time (it depends on conditions and alignment changes). A RooDataSet can only hold so many events and variables before it becomes too large and won’t save correctly. Both of these facts lead us to have a) more than one file per decay channel and b) the numerical index of each file ascending with run number. This is useful so that if someone wants to run over a specific run period they can just select the few relevant files.
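The ascending-index convention makes that period selection simple. A hypothetical sketch (the file names and the index-to-run-range mapping here are assumptions for illustration):

```python
# Pick only the calibration files whose run range overlaps a user's
# period, given an ascending index -> (first_run, last_run) mapping.

def files_for_period(index_to_range, first_run, last_run):
    selected = []
    for idx in sorted(index_to_range):
        lo, hi = index_to_range[idx]
        if lo <= last_run and hi >= first_run:  # ranges overlap
            selected.append("DSt_K_MagDown_{0}.root".format(idx))
    return selected

mapping = {0: (100, 101), 1: (102, 103), 2: (104, 110)}
files = files_for_period(mapping, 103, 105)
```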

Line: 81 to 88
 Src_Run2 for S22/23

The reason for different directories is due to changes in the ntuple format/naming conventions and changes in stripping cuts, which changed the selection cuts subsequently applied. Also the variables stored in the calibration datasets has also changed as a function of time. E.g for Run 2 we save online and offline variables.

Added:
>
>
In cmt/requirements, the first step is to choose the correct src directory to compile. This is just done by changing the src directory to the corresponding one. Then the usual cmt br cmt make

Then go to jobs/Stripping23 and modify configureGangaJobs.sh

Before submitting jobs to PBS, you need to do the following to make it recognize you: add the following lines in ~/.gangarc

<!-- SyntaxHighlightingPlugin -->
preexecute =
import os
env = os.environ
jobid = env["PBS_JOBID"]
tmpdir = None
if "TMPDIR" in env: tmpdir = env["TMPDIR"].rstrip("/")
else: tmpdir = "/scratch/{0}".format(jobid)
os.chdir(tmpdir)
os.environ["PATH"]+=":{0}".format(tmpdir)
postexecute =
import os
env = os.environ
jobid = env["PBS_JOBID"]
tmpdir = None
if "TMPDIR" in env: tmpdir = env["TMPDIR"].rstrip("/")
else: tmpdir = "/scratch/{0}".format(jobid)
os.chdir(tmpdir)
<!-- end SyntaxHighlightingPlugin -->
Make sure you have the above lines every time you run jobs.

and then go to GetRunRanges/

Here you will see a set of scripts, one for each polarity and one for each particle species. You shouldn’t need to change anything; it will look inside your .gangarc file to find your gangadir location etc. The output of these jobs gets sent to the jobs/Stripping23/ChopTrees directory as a .pkl file. This file contains the run number ranges that the called script defines. The file that is actually run by the ganga job is $CALIBDATASCRIPTSROOT/scripts/sh/GetRunRanges.sh, which in turn calls $CALIBDATASCRIPTSROOT/scripts/python/getRunRanges.py for Dst and Jpsi. All this script does is look at your tuples, see how the candidates are distributed by run number, and then split them into a number of ranges such that each range contains about a million candidates while avoiding the last dataset having too few.

 -- WenbinQian - 2016-03-17

Revision 2 2016-03-18 - WenbinQian

Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="TWiki.WebPreferences"
>
>
META TOPICPARENT name="RichSoftwareCalib"
 

Instructions on Preparing PIDCalib Samples

This page provides information on how to create PIDCalib samples.

Changed:
<
<
And it is for experts only.
>
>
It is for experts only.
 

Latest PIDCalib setup instructions

Changed:
<
<
Please follow the instructions on PIDCalib Package webpage.
>
>
Please follow the instructions on the PIDCalib Package webpage. Besides the above, you also need to
<!-- SyntaxHighlightingPlugin -->
getpack PIDCalib/CalibDataSel head
<!-- end SyntaxHighlightingPlugin -->
 
Added:
>
>

CalibDataSel Package: Produce tuples from DST

The files you need now boil down to: Src/TupleToolPIDCalib.cpp TupleToolPIDCalib.h EvtTupleToolPIDCalib.cpp EvtTupleToolPIDCalib.h

These are places to put Tuple variables.

And dev/makePIDCalibNtuples.ganga.py, makePIDCalibNtuples_Run2.py, makePIDCalibNtuples.py

makePIDCalibNtuples.py is for Run 1 where the input is various stripping lines

makePIDCalibNtuples_Run2.py is for Run2 where the input is various trigger lines and this matching needs to be done too.

makePIDCalibNtuples.ganga.py simply runs the jobs. (I think you will need to update the DV version listed).

In dev/makePIDCalibNtuples.ganga.py, add

<!-- SyntaxHighlightingPlugin -->
S5TeVdn = PIDCalibJob(
                    year           = "2015"
                 ,  stripVersion   = "5TeV"
                 ,  magPol         = "MagDown"
                 ,  maxFiles       = -1
                 ,  filesPerJob    = 1
                 ,  simulation     = False
                 ,  EvtMax         = -1
                 ,  bkkQuery       ="LHCb/Collision15/Beam2510GeV-VeloClosed-MagDown/Real Data/Reco15a/Turbo01aEM/95100000/FULLTURBO.DST"
                 ,  bkkFlag        = "OK"
                 ,  stream         = "Turbo"
                 ,  backend        = Dirac()
  )
<!-- end SyntaxHighlightingPlugin -->
Then execute the file inside ganga:
<!-- SyntaxHighlightingPlugin -->
In [5]:execfile('makePIDCalibNtuples.ganga.py')
Preconfigured jobs you can just submit: 
. PIDCalib.up11.submit()
. PIDCalib.validation2015.submit()
. PIDCalib.up12.submit()
. PIDCalib.down11.submit()
. PIDCalib.S23r1Up.submit()
. PIDCalib.down12.submit()
. PIDCalib.S5TeVdn.submit()
. PIDCalib.test.submit()
. PIDCalib.S23r1Dn.submit()
---------------------

In [6]:PIDCalib.S5TeVdn.submit()
<!-- end SyntaxHighlightingPlugin -->

CalibDataScripts: Produce tuples for each particle

The RICH performance changes as a function of time (it depends on conditions and alignment changes). A RooDataSet can only hold so many events and variables before it becomes too large and won’t save correctly. Both of these facts lead us to have a) more than one file per decay channel and b) the numerical index of each file ascending with run number. This is useful so that if someone wants to run over a specific run period they can just select the few relevant files.

Hence this means that the workflow goes like this:

Ntuples finish making --> Run ranges are defined --> Data split into those specific run ranges --> any additional selection applied --> mass fit performed in each run range for each charge --> data is sWeighted -->

spectator variables are added to the data set --> both charge datasets are merged. The same set of steps is repeated for each decay channel. For the protons, since there is more than one momentum range, the fit is done separately in each range and then merged. For D*->D(Kpi)pi, the definition of the run range and the data splitting is a common step for both K and pi; however, the mass fits are done twice (even though it is the same fit).

<!-- SyntaxHighlightingPlugin -->
getpack PIDCalib/CalibDataScripts head
<!-- end SyntaxHighlightingPlugin -->

Inside, there are 3 src directories: Src for S20/S20r1 data, Src_S21 for S21, Src_Run2 for S22/23

The reason for the different directories is changes in the ntuple format/naming conventions and in the stripping cuts, which changed the selection cuts subsequently applied. The variables stored in the calibration datasets have also changed over time; e.g. for Run 2 we save online and offline variables.

 -- WenbinQian - 2016-03-17

Revision 1 2016-03-17 - WenbinQian

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="TWiki.WebPreferences"

Instructions on Preparing PIDCalib Samples

This page provides information on how to create PIDCalib samples. And it is for experts only.

Latest PIDCalib setup instructions

Please follow the instructions on PIDCalib Package webpage.

-- WenbinQian - 2016-03-17

 