Track Refitting

Refit Tracks in Stripping Location (TrackRefitting)

A short example is given below of how to use the DaVinciTrackRefitting package to refit tracks from a specific stripping location. The names of the stream and of the stripping line should be provided; the stream name is passed to the TrackRefitting configurable even in the case of a full DST. If you want to write the refitted tracks to a new location and keep the original tracks, set the ReplaceTracks, WriteToTES, and Output properties as shown below. In this case the refitted tracks are first cloned, keeping the same key, which can be used to match them to the original tracks.

from STTools import STOfflineConf
STOfflineConf.DefaultConf().configureTools()

MyStream='/Event/EW'
is2015=True
fitasin='2015'
inputs = ["/Event/EW/Phys/Z02MuMuLine/Particles"]
from DaVinciTrackRefitting import TrackRefitting
refitter = TrackRefitting.TrackRefitting("refittingalg",
                                          fitAsIn=fitasin,
                                          Stripping23=is2015,
                                          RootInTES = MyStream, # Also needed on FullDST!
                                          Inputs = inputs
                                          ).fitter()
# the next three lines are only necessary to write the refitted tracks to a new location
refitter.ReplaceTracks = False
refitter.WriteToTES    = True
refitter.Output        = "Rec/Track/Refit/"

The refitter object should be added to the sequence before any analysis (DecayTreeTuple, etc.) is performed; a minimal example is shown below.
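A sketch of doing this via the main sequence (the analysis algorithms themselves are assumed to be configured elsewhere in your options):

from Configurables import DaVinci
# run the refitter before any analysis algorithms
DaVinci().appendToMainSequence( [refitter] )
# ... then append your DecayTreeTuple / analysis algorithms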

Contributions on how to run it in the Selection framework are welcome.

PS: It should work out fine to use "Phys/StdAllNoPIDsPions/Particles" as the input location for the TrackRefitting in order to refit all tracks in an event (although the actual result depends heavily on the input file); see the sketch below.
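For example, reusing the configuration from above with only the input location changed (a sketch; the algorithm name "refitallalg" is arbitrary, and whether RootInTES is needed depends on your input file):

inputs = ["Phys/StdAllNoPIDsPions/Particles"]
refitter = TrackRefitting.TrackRefitting("refitallalg",
                                          fitAsIn=fitasin,
                                          Stripping23=is2015,
                                          Inputs = inputs
                                          ).fitter()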

How to refit tracks from DST

Basically, the Phys/DaVinciTrackRefitting package is your friend. There are several algorithms provided and an example script. The package is designed to work transparently on DST and MDST.

TracksFromParticles

An algorithm by Matt N. Provide one or several TES locations (as for any other DaVinciAlgorithm) and an output location; all tracks used by the candidates in these locations will be cloned to the output location. On this location you can run all the standard Brunel tools (making physics objects (LHCb::Particles) out of them is some work, however, so that falls into the scope of tracking & reconstruction studies). A sketch is given below.
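A minimal sketch of how this might be configured, assuming the algorithm is exposed as a Configurable of this name and that its input/output properties are called Inputs and Output (these names are assumptions, not checked against the actual code; consult the example script shipped with the package):

from Configurables import DaVinci, TracksFromParticles
trackCloner = TracksFromParticles("TrackCloner")       # instance name is arbitrary
trackCloner.Inputs = ["Phys/Z02MuMuLine/Particles"]    # candidate locations (assumed property name)
trackCloner.Output = "Rec/Track/FromParticles"         # destination of the cloned tracks (assumed property name)
DaVinci().appendToMainSequence( [trackCloner] )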

RefitParticleTracks

An algorithm by Paul Seyfert. It inherits some overhead from ghost-probability use cases and studies; some of the code is inspired by input from Matt N. and Patrick K.

If the properties are set accordingly, all tracks which are used for candidates are refitted. The particles are, however, not updated: the track quantities will change, but composite particles will not (e.g. J/Psi masses, etc). If you use DTF, don't worry about this caveat; if you are worried about the effect of track fit changes on your analysis (as a systematic or from the momentum scale calibration), you should be using DTF anyway. The ParticleRefitter is an attempt to update the "default" MM branch of your ntuples, but it is in no way a safe tool.
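For reference, a typical way to pick up the refitted track quantities through DTF in a DecayTreeTuple (a sketch; the input location, the branch name B and the decay descriptor are placeholders):

from Configurables import DecayTreeTuple
from DecayTreeTuple.Configuration import *

dtt = DecayTreeTuple("Tuple")
dtt.Inputs = ["Phys/MyLine/Particles"]                  # placeholder candidate location
dtt.Decay  = "[B+ -> ^J/psi(1S) ^K+]CC"                 # placeholder decay descriptor
dtt.addBranches( { "B" : "[B+ -> J/psi(1S) K+]CC" } )
dtf = dtt.B.addTupleTool("TupleToolDecayTreeFitter/DTF")
dtf.constrainToOriginVertex = True                      # refit the chain with a PV constraint
dtf.daughtersToConstrain = [ "J/psi(1S)" ]              # optional mass constraint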

RefitParticleTracks is encapsulated in the TrackRefitting described above. The Phys/DaVinciTrackRefitting/python/DaVinciTrackRefitting/TrackRefitting.py file is the only documentation for it.

ParticleRefitter

To pick up the momentum scale correction, momentum error estimate corrections, or refits with new alignments, you need to refit your candidates to get the right physics quantities out. DTF does this out of the box (see above). If you don't do so (and you should seriously ask yourself why you are not using DTF while being interested in sub-permille corrections), you need the ParticleRefitter. This will not be shipped in the successor of DV v33r1p1.

Due to the complexity of the problem, the abilities of the ParticleRefitter are limited. It can only correctly update particles which have been combined with a default vertex fitter; browsing through the stripping archive, there are several stripping lines which have differently configured vertex fitters (e.g. for converted photons, Kshort or Lambda). Usage will be something like:

with momentum scale correction
from DaVinciTrackRefitting.ParticleRefitter import ParticleRefitterSeq
updater, scaler = ParticleRefitterSeq(inputs = ["Phys/B2JpsiX_B2JpsiKKLine/Particles","Phys/B2JpsiX_B2JpsipipiLine/Particles"], rootInTES= "/Event/Bhadron", scale = True)
DaVinci().appendToMainSequence( [updater, BSeq.sequence() , Ntuple ]) # BSeq and Ntuple are defined elsewhere in your options

without momentum scale correction
from DaVinciTrackRefitting.ParticleRefitter import ParticleRefitterSeq
updater, scaler = ParticleRefitterSeq(inputs = Ntuple.Inputs, rootInTES= "/Event/Bhadron", scale = False)
DaVinci().appendToMainSequence( [updater, BSeq.sequence() , Ntuple ])

How to refit tracks from microDST

The above TrackRefitting should work fine on microDST. (In fact, only fullDST from Stripping21, for stripping lines which didn't request raw banks, is affected by a bug in the stripping and cannot be refitted. All microDST and all other fullDST I've seen so far are fine. Output from Turbo* hasn't been tested, though discussions with Thomas Ruf indicate that with a little bit of work it should be possible to perform track fits, to some extent, on Turbo as well.)

UPDATED Momentum Scale correction

The calibration constants are provided by Matt N. For details on the code, see here

The design foresees use together with DTF.

Momentum scaling is provided by Matt Needham for the 2011, 2012, 2015, 2016, 2017 and 2018 data-taking periods. The scaling parameters are accessible via the CondDB using the appropriate global tag. (It is possible to run with a set of local tags, but with such a configuration it is not easy to get a consistent, coherent setup.) Therefore the first action (which can be considered mandatory) is to activate the latest global tags:

the_year = '2016' ## specify the data type
from Configurables import CondDB
CondDB ( LatestGlobalTagByDataType = the_year )
To avoid possible problems and potential confusion, one can set the data type simultaneously for CondDB and DaVinci:
the_year = '2016' ## specify the data type
from Configurables import CondDB,  DaVinci
CondDB ( LatestGlobalTagByDataType = the_year )
DaVinci ( DataType = the_year ) 
The actual scaling is performed by the TrackScaleState algorithm; it requires the input location of the tracks to be scaled to be specified. This input location differs between data types and input types (DST, μDST, Turbo01, Turbo02, Turbo03, Turbo++PersistReco), e.g.
  • e.g. running on DST (and on μDST with the RootInTES parameter set properly) the location is a standard/global one: "Rec/Track/Best"
  • for Turbo/2015 the locations need to be specified for each selection separately, e.g.
my_particles = ... 
my_tracks = my_particles.replace('/Particles','/Tracks')
from Configurables import TrackScaleState
alg = TrackScaleState('SCALE', Input = my_tracks )
  • for Turbo/2016 the location is also standard "/Event/Turbo/Tracks"
  • When one needs information from PersistReco (Turbo02, Turbo03, ...), one also needs to scale the tracks from PersistReco; they are placed in "Hlt2/TrackFitted/Long" (see the sketch after the note below)
Note that for processing of Turbo one needs to configure DaVinci as:
DaVinci( InputType = 'MDST' ,  RootInTES = '/Event/Turbo' ) 
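Putting this together for Turbo/2016 with PersistReco, a minimal sketch (the instance names are arbitrary; whether the track locations must be given relative to RootInTES depends on your setup):

from Configurables import DaVinci, TrackScaleState
scale_turbo = TrackScaleState( 'SCALE_TURBO'      , Input = '/Event/Turbo/Tracks'   )
scale_pr    = TrackScaleState( 'SCALE_PERSISTRECO', Input = 'Hlt2/TrackFitted/Long' )
DaVinci( InputType = 'MDST' , RootInTES = '/Event/Turbo' )
DaVinci().appendToMainSequence( [ scale_turbo , scale_pr ] )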

Practical way to apply Momentum Scale corrections in a safe and robust way

The full configuration can be rather complicated. To avoid all these complications, it is recommended to apply momentum scaling via the MomentumScaling wrapper selection. It ensures the correct instantiation of the scaling algorithms and their correct and efficient execution, e.g. the scaling will be performed only for events where it is needed. See its documentation in https://gitlab.cern.ch/lhcb/Phys/-/blob/run2-patches/PhysSel/PhysSelPython/python/PhysSelPython/MomentumScaling.py.

The basic code flow is:

my_selection = ...
from PhysConf.Selections import MomentumScaling
my_selection = MomentumScaling ( my_selection , <configuration parameters, when needed> ) 
consumer = AnotherSelection ( ... , my_selection, ... )

e.g.

from PhysConf.Selections import AutomaticData, MomentumScaling, TupleSelection
my_data = AutomaticData( ... )
my_data = MomentumScaling( my_data )
my_tuple = TupleSelection ( 'Tuple' , my_data , Decay = ... , ... ) 

The configuration details depend a bit on the actual data type and input type:

regular DST
no special configuration is needed
my_data = MomentumScaling( my_data )
regular μDST
no special configuration is needed when the proper RootInTES is specified for DaVinci
my_data = MomentumScaling( my_data )
DaVinci ( InputType = 'MDST' , RootInTES = '/Event/Charm' ) 
Turbo
if one uses only "Turbo" particles, the configuration is very simple: one just needs to activate the Turbo flag and specify the data type. Do not forget to inform DaVinci that Turbo is MDST and to specify RootInTES
my_data = MomentumScaling( my_data , Turbo  = True , Year  = the_year )
DaVinci ( InputType = 'MDST' , RootInTES = '/Event/Turbo' ) 
Turbo++/PersistReco
when one uses particles from the PersistReco event (e.g. for spectroscopy studies) on Turbo files, one needs to use the following configuration:
my_data = MomentumScaling( my_data , Turbo  = 'PERSISTRECO' , Year  = the_year )
DaVinci ( InputType = 'MDST' , RootInTES = '/Event/Turbo' ) 

from Configurables import DstConf, TurboConf
DstConf   () .Turbo       = True
TurboConf () .PersistReco = True


## TEMPORARY (hopefully will be included into TurboConf().PersistReco ) 
## the owners of TurboConf have been asked to consider this inclusion.
## Rosen and  Alex will add these lines into TurboConf...
from Configurables import DataOnDemandSvc
dod = DataOnDemandSvc()
from Configurables import Gaudi__DataLink as Link
for  name , target , what  in [
    ( 'LinkProtoParticles', '/Event/Turbo/Rec/ProtoP'          , '/Event/Hlt2/Protos'   ) ,
    ( 'LinkPhys'          , '/Event/Turbo/Phys'                , '/Event/Phys'          ) ,
    ( 'LinkRec'           , '/Event/Turbo/Rec'                 , '/Event/Rec'           ) ,
    ( 'LinkPVs'           , '/Event/Turbo/Rec/Vertex/Primary'  , '/Event/Turbo/Primary' ) ,
    ( 'LinkDAQ'           , '/Event/Turbo/DAQ'                 , '/Event/DAQ'           ) ,
    ( 'LinkHlt2'          , '/Event/Turbo/Hlt2'                , '/Event/Hlt2'          ) ] :
    dod.AlgMap [ target ] = Link ( name , Target = target , What = what , RootInTES = '' ) 
This is a relatively transparent, safe and robust way to insert momentum scaling into your algorithm flow. *If/when* the last fragment is included into TurboConf, the actual user configuration will be practically identical for all types of input data:
  1. regular DST
  2. regular μDST
  3. "pure" Turbo (both for 2015-specific configuraion and later)
  4. Turbo++/PersistReco

At the end of the configuration step DaVinci prints the actual momentum scaling configuration, e.g.

# MomentumScaling           INFO    Scaler 'SCALER' scales the tracks from 'Tracks' location
# MomentumScaling           INFO    Scaler 'SCALER_PERSISTRECO' scales the tracks from 'Hlt2/TrackFitted/Long' location
Possible misconfiguration problems will be reported as WARNING and/or ERROR messages.

NEW How to check that momentum scaling is picking up the correct scaling constants?

One needs to inspect the log file.

  • First, one should see in the log file something like the following, with a reasonable interval of validity:
LHCBCOND             INFO Using TAG "cond-20170120-1"
SCALER               INFO  Condition: /MomentumScale     Valid;   Validity: Fri Jan  1 00:00:00 2016 -> Sun Jan  1 00:00:00 2017
The appearance of a weird validity interval means that the default scaling (all constants equal to 1) will be used, e.g.
SCALER               INFO  Condition: /MomentumScale     Valid;  Validity: Sun Jan  1 00:00:00 2012 -> Sat Apr 12 01:47:16 2262
  • At the end of the log file the scaling algorithms print their counters, e.g.
SCALER            SUCCESS Number of counters : 7
 |    Counter                                      |     #     |    sum     | mean/eff^* | rms/err^*  |     min     |     max     |
 | "#CONDB update"                                 |         2 |          2 |     1.0000 |     0.0000 |      1.0000 |      1.0000 |
 | "#DELTA update"                                 |         1 |   -0.00019 |-0.00019000 | 1.2824e-12 | -0.00019000 | -0.00019000 |
 | "#POLARITY change"                              |         1 |          0 |     0.0000 |     0.0000 |      0.0000 |      0.0000 |
 | "#RUN   change"                                 |         1 |          1 |     1.0000 |     0.0000 |      1.0000 |      1.0000 |
 | "#RUN   offset"                                 |         1 |8.77048e-05 | 8.7705e-05 |     0.0000 |  8.7705e-05 |  8.7705e-05 |
 | "SCALE"                                         |       496 |   496.0668 |     1.0001 |  0.0011440 |     0.99695 |      1.0023 |
The last line, "SCALE", summarizes all applied scaling coefficients. One should see some spread of values, with a typical rms of 0.001 (the 5th column). If the 4th column contains 1.0 and the three subsequent columns are empty, it is a signal that only trivial corrections have been applied.