DaVinci FAQ
Search in mails
You can search for problems discussed in DaVinci mails here:
https://groups.cern.ch/Pages/lhcb.aspx
See also the DaVinci Tutorial and the general Gaudi FAQ.
How do I run over microDST ?
Migrating from analysis on DSTs to microDSTs requires some small changes. Assuming you are reading from a stream called "StreamName"
and a stripping line called "MyStrippingLine", you should do the following (as of
DaVinci v33r1).
Add into your python options:
rootInTes = "/Event/StreamName"
from PhysConf.MicroDST import uDstConf
uDstConf ( rootInTes )
Assuming you have a GaudiSequencer, you need to set its RootInTES property to the stream location:
fullseq = GaudiSequencer("MyTupleSeq")
fullseq.RootInTES = rootInTes
Here, the RootInTES property will be passed down to all algorithms within the sequence.
You should access your Particles container using "AutomaticData" (notice, no "/Event/StreamName"):
myTES = "Phys/MyStrippingLine/Particles"
mySel = AutomaticData(Location = myTES)
If your options fail to produce an ntuple, first make sure you are not running algorithms (e.g. TupleTools) that try to access data not available on the uDST. If using DecayTreeTuple, try setting only a single tool that should work, such as:
myTuple.ToolList = [ "TupleToolKinematic" ]
If that works, then likely one of your tools is trying to access data that is not available. As of this time, tools known not to work on uDST are "TupleToolPrimaries" and "TupleToolTagging" (there may be others).
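Putting the steps above together, a minimal uDST options sketch might look like the following. This is a sketch, not a tested recipe: "StreamName" and "MyStrippingLine" are placeholders, and it assumes AutomaticData is imported from PhysSelPython.Wrappers (as in DaVinci versions of this era) and provides an outputLocation() method.

```python
# Sketch of uDST options; "StreamName" and "MyStrippingLine" are
# placeholders for your stream and stripping line.
from Gaudi.Configuration import *
from Configurables import DaVinci, DecayTreeTuple, GaudiSequencer
from PhysConf.MicroDST import uDstConf
from PhysSelPython.Wrappers import AutomaticData

rootInTes = "/Event/StreamName"
uDstConf(rootInTes)

# Location relative to RootInTES (no "/Event/StreamName" prefix):
mySel = AutomaticData(Location="Phys/MyStrippingLine/Particles")

myTuple = DecayTreeTuple("MyTuple")
myTuple.Inputs = [mySel.outputLocation()]
myTuple.ToolList = ["TupleToolKinematic"]  # known to work on uDST

fullseq = GaudiSequencer("MyTupleSeq")
fullseq.RootInTES = rootInTes  # propagated to all members
fullseq.Members += [myTuple]
DaVinci().UserAlgorithms = [fullseq]
```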
How do I find out which package is compatible with a given version of DaVinci?
Note there is no guarantee that the project DaVinci vXrYpZ will contain the package Phys/DaVinci vXrYpZ. You can do
SetupProject DaVinci <version>
cd $DAVINCISYSROOT/cmt
cmt sh uses | grep DaVinci
cmt sh uses | grep Hlt2Lines
similarly with any package...
I get a Database timeout, what's wrong?
ApplicationMgr INFO Application Manager Initialized successfully
ApplicationMgr INFO Application Manager Started successfully
SIMCOND.TimeOut... INFO Disconnect from database after being idle for
120s (will reconnect if needed)
DDDB.TimeOutChe... INFO Disconnect from database after being idle for
120s (will reconnect if needed)
Nothing is wrong. This is a normal info message informing you the database connection has been timed out, perfectly normally, since your job hasn't been using it.
There is nothing wrong with this message: it does not cause your job to fail or hang, it is just something printed by any normal job. If there is stderr or more stdout after this line it may give you a better clue. If the job has stopped, you may have run out of CPU time or memory. If the job is still running, it may be looping. If you have access to the machine where it is running, you can attach gdb to it with gdb -p <pid>, where <pid> is the process id of the hanging job, then from the gdb prompt call bt to get a backtrace.
This should help debugging the problem. Alternatively, rerun the job with the NameAuditor enabled:
from Gaudi.Configuration import ApplicationMgr
from Configurables import AuditorSvc
ApplicationMgr().ExtSvc += [ 'AuditorSvc' ]
AuditorSvc().Auditors += [ 'NameAuditor' ]
This will print a new line every time a new algorithm is entered - so you’ll get lots of output - but it will let you identify the algorithm where the job is getting stuck.
I get a Database not up-to-date error
ONLINE_201005 ERROR Database not up-to-date. Latest known update
is at 1273462317.0, event time is 1273473608.888256
XmlParserSvc ERROR DOM>> File , line 0, column 0: An exception occurred! Type:RuntimeException, Message\
:The primary document entity could not be opened. Id=/scratch/lhcb04925760.ccwl0445/tmp/https_3a_2f_2flcglb01.gridpp.rl.ac.uk_3a9000_2fE9EuUi9fAWRDih_5f7L10lAg/8294060/conddb:/Conditions/Online/LHCb/Magnet/Measured
XmlGenericCnv FATAL XmlParser failed, can't convert /Measured!
You try to access the conditions database for an event that is more recent than the last update. If you are accessing very recent data (run ended in the last few hours), please wait a few hours to allow the automatic update to propagate the conditions throughout the cvmfs file system. Otherwise, this points to a problem in the automatic update itself, please send a mail to
lhcb-distributed-analysis@cern.ch
I get a different integrated luminosity on the same data I ran on a few weeks ago
It may be that the DQ flags have changed. Some data is flagged as subsystem-BAD in the CondDB. It is still there in the bookkeeping, but DaVinci will skip the corresponding events unless you instruct it to do otherwise. To find out which runs are flagged and why:
- Run the CondDBBrowser and figure out what DQ flags apply to your run of interest. That may depend on the DQFlags tag you give to DaVinci. DaVinci always loads the last flags that were available at the time of release, so this may actually change (for the better) if you change DaVinci versions.
- Go to the Problems database
and figure out how serious the problem is. If you do not care, you can accept the events by configuring DaVinci appropriately. To turn off DQ status checking completely in DaVinci (so accept everything) just add
DaVinci().IgnoreDQFlags = True
But this will get you all bad events of any kind. To ignore specific flags:
from Configurables import BasicDQFilter, DQAcceptTool
accept = DQAcceptTool()
accept.addTool(BasicDQFilter, "Filter")
accept.Filter.IgnoredFlags = ["A", "B", "C"]
For example
accept.Filter.IgnoredFlags = ["LUMI"]
if you do not care about the integrated luminosity.
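The effect of IgnoredFlags can be pictured with a small pure-python sketch of the filter logic (an illustrative simplification, not the actual BasicDQFilter source):

```python
def dq_accept(bad_flags, ignored_flags):
    """Accept an event only if every DQ flag raised for it is ignored.

    bad_flags:     the flags raised for this run (e.g. from the CondDB)
    ignored_flags: the BasicDQFilter.IgnoredFlags setting
    """
    ignored = {f.upper() for f in ignored_flags}
    return all(f.upper() in ignored for f in bad_flags)

# A run flagged only LUMI-bad passes when LUMI is ignored:
print(dq_accept(["LUMI"], ["LUMI"]))           # True
# ...but a run that is also VELO-bad is still rejected:
print(dq_accept(["LUMI", "VELO"], ["LUMI"]))   # False
```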
My Decay Descriptor suddenly doesn't work any more
See LoKiNewDecayFinders. We had to move away from the unmaintainable old Decay Finder to the one in LoKi. See the v33r6p1 announcement by Chris, the savannah bug and Patrick Spradlin's talk at PPTS. Most problems are simply solved by replacing cc with CC.
DaVinci silently skips events
See above.
My job complains that HltDecReports are already there
StrippingDiMuon... ERROR HltDecReports already contains report
EventLoopMgr WARNING Execution of algorithm DaVinciMainSequence failed
EventLoopMgr ERROR Error processing event loop.
This will happen if you re-run the stripping or the HLT on real data already containing decisions. Do
sc = StrippingConf( HDRLocation = "SomeNonExistingLocation", Streams = ... )
I get an error about incomplete L0DUReport
PropertyConfigSvc INFO resolving alias TCK/0x00a30046
PropertyConfigSvc INFO resolved TCK/0x00a30046 to 636242ae28cb9ea78ad75d94d732582f
ToolSvc.L0DUConfig ERROR L0DUMultiConfigProvider:: The requested TCK = 0x0046 is not registered
L0DUFromRaw.L0D...WARNING L0DUFromRawTool:: Unable to load the configuration for tck = 0x0046 --> Incomplete L0DUReport
This happens if the data has been triggered with a L0 TCK which is more recent than the last update of the DBASE/TCK/L0TCK installed on the machine where the job is running or, in the case of jobs submitted via Ganga, on the machine where the job was prepared. The package needs to be updated in your local installation, or you can switch to cvmfs to pick up all new versions without needing to do your own installation. After that you'll need to unprepare and resubmit a different job in Ganga.
Why is the Particle momentum different from the Track momentum?
There is no such thing as a Track momentum. You get a momentum for each state on the track, and one of them (usually the first) is chosen to define the Particle momentum. Always remember: the momentum of anything charged in LHCb only makes sense at a given z. Hence the answer to your question is: when the z at which the momentum is measured is different.
Another case is electrons: the Particle contains the bremsstrahlung correction, the Track does not.
Why is the mass (MM) of downstream Ks after the vertex fit better than before?
This effect is particularly visible with downstream tracks, but affects in principle all tracks.
The mass of any combination of particles is calculated by simply taking the mass sqrt(E^2-p^2) of the sum momentum vector of all particles. The problem is that the momentum of a particle is not a well defined quantity. It changes with z because of the curvature in the magnetic field. Since we do not know where the particles were created we choose to return the momentum at a "reference point" (which is returned by the
referencePoint()
method). For long tracks this is usually the point closest to the beam while for downstream tracks it is the first measurement somewhere in TT.
The measured mass of the Ks is only correctly obtained by extrapolating the daughters through the magnetic field to the actual decay vertex of the Ks and taking the momentum at this point. This can only be done after the vertex fit, and hence the mass of the Ks before the fit is poorer than after. In CombineParticles it is advised to apply a loose cut before the fit (CombinationCut) and a tighter one after (MotherCut).
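The mass definition above is easy to mirror in a toy pure-python sketch (illustrative numbers, not LHCb code): the mass comes entirely from the daughter four-vectors, so evaluating a momentum at the wrong z (i.e. with slightly wrong components, as happens before extrapolation through the field) shifts the mass.

```python
import math

M_PI = 139.57  # charged pion mass, MeV

def inv_mass(p4s):
    """Invariant mass sqrt(E^2 - p^2) of summed four-vectors (E, px, py, pz), MeV."""
    E = sum(p[0] for p in p4s)
    px, py, pz = (sum(p[i] for p in p4s) for i in (1, 2, 3))
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

def pion_p4(px, py, pz):
    """Four-vector of a pion with the given three-momentum (MeV)."""
    E = math.sqrt(M_PI**2 + px**2 + py**2 + pz**2)
    return (E, px, py, pz)

# Two pions from a Ks decay (made-up momenta, MeV):
pip = pion_p4(300.0, 100.0, 5000.0)
pim = pion_p4(-250.0, -80.0, 4000.0)
m_true = inv_mass([pip, pim])

# The same pi+ with its momentum taken at the wrong z, so the
# components are slightly off: the reconstructed mass moves.
pip_wrong = pion_p4(320.0, 110.0, 5000.0)
m_biased = inv_mass([pip_wrong, pim])
print(m_true, m_biased)
```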
How is the bestPV() chosen?
The DaVinci::bestPV() method is used to apply many pointing and vertex separation constraints in the selection framework, and it is used as OWNPV in DecayTreeTuple.
It all depends on whether you are online or offline. The logic goes as follows:
- DVAlgorithm uses an IRelatedPVFinder tool to determine which PV is the best.
- The instance of this tool is determined by the OnOfflineTool, which does
const std::string& OnOfflineTool::relatedPVFinderType() const
{ return online() ? m_onlinePVRelatorName : m_offlinePVRelatorName ; }
with
, m_offlinePVRelatorName("GenericParticle2PVRelator__p2PVWithIPChi2_OfflineDistanceCalculatorName_/P2PVWithIPChi2")
, m_onlinePVRelatorName("GenericParticle2PVRelator__p2PVWithIP_OnlineDistanceCalculatorName_/OnlineP2PVWithIP")
- They are both defined in P2PVLogic.h:
struct _p2PVWithIP
{
static double weight(const LHCb::Particle* particle,
const LHCb::VertexBase* pv,
const IDistanceCalculator* distCalc)
{
double fom(0.);
const StatusCode sc = distCalc->distance(particle, pv, fom);
return ( (sc.isSuccess())
&& fom > std::numeric_limits<double>::epsilon() )
? 1./fom : 0. ;
}
};
struct _p2PVWithIPChi2
{
static double weight(const LHCb::Particle* particle,
const LHCb::VertexBase* pv,
const IDistanceCalculator* distCalc)
{
double fom(0.);
double chi2(0.);
const StatusCode sc = distCalc->distance(particle, pv, fom, chi2);
return ( (sc.isSuccess())
&& chi2 > std::numeric_limits<double>::epsilon() )
? 1./chi2 : 0 ;
}
};
So: Online the best PV is the one with the smallest IP, offline the one with the smallest IPchi2.
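The selection logic of these C++ weight functions can be mirrored in a few lines of plain python (a sketch with made-up IP chi2 values, not the actual relator tool):

```python
def best_pv(pvs, weight):
    """Return the PV with the largest weight, mirroring the relator logic."""
    return max(pvs, key=weight)

# Each "PV" is just a dict with a made-up IP chi2 of the candidate
# with respect to that vertex.
pvs = [{"name": "PV0", "ipchi2": 12.0},
       {"name": "PV1", "ipchi2": 2.5},
       {"name": "PV2", "ipchi2": 40.0}]

def offline_weight(pv):
    """Offline weight: 1/ipchi2, as in _p2PVWithIPChi2 (0 if not positive)."""
    return 1.0 / pv["ipchi2"] if pv["ipchi2"] > 0.0 else 0.0

# The best PV is the one with the smallest IP chi2:
print(best_pv(pvs, offline_weight)["name"])  # PV1
```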
How do I refit the PV ?
Set the ReFitPVs property of your algorithm to True.
What about failures from TupleToolPropertime about 'Can't get the origin vertex'?
This tool requires a PV to be present. You can require PVs to exist in the standard location by inserting an instance of the
CheckPV
algorithm into your sequence.
When refitting, a PV may be discarded as the removal of tracks belonging to the signal candidate reduces the number of tracks in the PV such that the PV cannot be fitted. One then needs an algorithm that checks PVs after refitting:
from Configurables import DaVinci, DecayTreeTuple, FilterDesktop, GaudiSequencer
dtt = DecayTreeTuple('SomeName')
dtt.Inputs = ['Phys/StrippingLine/Particles']
# ...
# Check that the candidate has a valid BPV *after* refitting
check_refitted_pv = FilterDesktop('CheckReFitPV',
                                  Code='BPVVALID()',
                                  Inputs=dtt.Inputs,
                                  ReFitPVs=True)
seq = GaudiSequencer('TupleSeq', Members=[check_refitted_pv, dtt])
DaVinci().UserAlgorithms = [seq]
How do I refit a Track from a Particle in DaVinci ?
How do I filter some Hlt decision in my sequencer ?
Create a configurable corresponding to an algorithm that filters on the contents of
HltDecReports
. See
HltEfficiency for instructions. And make sure you understand the caveats of using
Hlt2Global
and
Hlt1Global
(same page).
How do I add a monitoring algorithm to run only on my selected events?
Assuming you start from DaVinci tutorial 4 and have a selection called theSequence, get the sequencer generated by this selection and add your monitoring algorithms to its members:
MySelection = theSequence.sequence()
from Configurables import ReadHltReport
MySelection.Members += [ ReadHltReport() ]
DaVinci().UserAlgorithms = [MySelection]
I get an error about unpacking some location
For instance:
EventSelector SUCCESS Reading Event record 1. Record number within stream 1: 1
UnpackMuons FATAL UnpackTrack:: Exception throw: get():: No valid data at '/Event/pRec/Track/Muon' StatusCode=FAILURE
UnpackMuons.sys... FATAL Exception with tag= is caught
UnpackMuons.sys... ERROR UnpackMuons:: get():: No valid data at '/Event/pRec/Track/Muon' StatusCode=FAILURE
IncidentSvc ERROR Exception with tag= is caught
IncidentSvc ERROR UnpackMuons:: get():: No valid data at '/Event/pRec/Track/Muon' StatusCode=FAILURE
MuonPIDsFromProtosWARNING MuonPIDsFromProtoParticlesAlg:: Muon Tracks unavailable at Rec/Track/Muon
DaVinci assumes that the DST contains packed containers for all data types except DC06. If you run on DC06 data and do not set
DaVinci().DataType = "DC06"
the unpacking will fail as shown above.
How do I check that my LoKi functors do the right thing?
See https://twiki.cern.ch/twiki/bin/view/LHCb/FAQ/LoKiFAQ#How_to_monitor_various_LoKi_func.
How do I run from the nightlies?
See the LHCb nightlies twiki.
How do I make and use an XML catalog?
The ROOT/POOL catalogue stores links back to input files containing objects that objects in a given file point to via SmartRefs. Such a catalog is necessary to navigate back to those objects. Examples include accessing MCHits in .sim files from .digi files, MCParticles from .dst files, or certain Tracks and raw data from MicroDST files. First, a catalogue (inputCatalogue in the reading example below) has to be generated using genXMLCatalog. Then, it is used as follows:
from Gaudi.Configuration import FileCatalog
FileCatalog().Catalogs = ["xmlcatalog_file:MicroDSTCatalog.xml", "xmlcatalog_file:inputCatalogue"]
The first catalogue is used for read/write access, i.e. to create or update the information about the file that is being directly accessed; further catalogues are those obtained with genXMLCatalog and are used for reading only.
How do I find out what application version and database tags were used in a production?
From the bookkeeping GUI select Advanced Query at the top and navigate down to the production id number, for example:
LHCb - Collision09 - Beam450GeV-VeloOpen-MagDown - RealData-RecoToDST-07 - 90000000 - 5842
Right-click on the production id number and select More Information.
How do I print the content of the Transient Event Store (TES)
SetupProject Bender
dump_dst <name of file>
Another method: include the following in your python options:
from Configurables import StoreExplorerAlg
storeExp = StoreExplorerAlg()
ApplicationMgr().TopAlg += [storeExp]
storeExp.Load = 1
storeExp.PrintFreq = 1.0
storeExp.OutputLevel = 1
This prints the current structure of the TES for each event.
How Do I configure a local tool correctly ?
- If there is a list of tools to configure, or if you can name this tool independently, do that first.
MyThingy.SomeToolName = 'SomeTool/SomeToolName'
- Declare a configurable and assign it.
from Configurables import SomeTool
MyThingy.addTool(SomeTool, name='SomeToolName')
MyThingy.SomeToolName.AnAttribute='Something'
How Do I configure and add a TupleTool?
Please note that addTool only defines the configuration of a tool; it will not add it to the list of TupleTools to run (ToolList does that). There is a similar method, known only to DecayTreeTuple and EventTuple, that does both: addTupleTool. Example:
rs = tuple.addTupleTool("TupleToolRecoStats")
rs.Verbose = True
See DaVinciTutorial 7 and following for details.
How Do I configure TupleToolTrigger and TupleToolTISTOS ?
EXAMPLE:
from Configurables import TupleToolTrigger
TTT = BsTree.Bs.addTupleTool("TupleToolTrigger")
TTT.VerboseL0=True #if you want to fill info for each trigger line
TTT.VerboseHlt1=True
TTT.VerboseHlt2=True
TTT.TriggerList=['Hlt1...... ' ,'L0.....' ]
#the list of triggers you are interested in, if you want verbose stuff.
Note that all strings must end with "Decision". L0Global, Hlt1Global and Hlt2Global do not end with Decision and are already included in the list, and thus should not be given (if you do add Hlt2Global it will complain that it does not end in Decision; if you add Hlt2GlobalDecision it will fail to find such a line).
For TupleToolTISTOS, just replace "Trigger" with "TISTOS" in the above.
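For instance, mirroring the TupleToolTrigger example above (BsTree.Bs is the hypothetical branch from that example, and the TriggerList entries are only illustrative line names):

```python
# Sketch: same pattern as the TupleToolTrigger example, with
# "Trigger" replaced by "TISTOS". BsTree.Bs and the line names
# in TriggerList are placeholders for your own case.
TTT = BsTree.Bs.addTupleTool("TupleToolTISTOS")
TTT.VerboseL0 = True
TTT.VerboseHlt1 = True
TTT.VerboseHlt2 = True
TTT.TriggerList = ['Hlt1TrackAllL0Decision', 'L0HadronDecision']
```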
I don't seem to get any verbose output from TupleToolTrigger or TupleToolTISTOS ?
Yes, but I don't understand... how do I get the verbose output??
- Recommended: pass the list of trigger lines you are interested in. To find them out, you can ask other members of your group, or run ReadHltReport in your sequence to get statistics of which triggers were run and accepted your events.
- If you really really have to: set useAutomaticTriggerList to True to try and dynamically fill each entry, but this list may differ event-to-event on real data and therefore make your ntuple unmergeable or fail during creation.
- NB: the AutomaticList and Collate list options are removed from version v3r11p2 onwards, so you must find out your list of triggers a different way.
How do I get L0 verbose output from TupleToolTrigger and TupleToolTISTOS ?
- see the entry above
- For TupleToolTISTOS see the entry below!
New since DecayTreeTuple (v3r6) (in DaVinci v25r6, in nightlies since 2010-07-08):
tuple = DecayTreeTuple("tuple")
#.... your configuration of DecayTreeTuple
tuple.ToolList += ["TupleToolTrigger"]
tuple.addTool(TupleToolTrigger, name="TupleToolTrigger")
tuple.TupleToolTrigger.VerboseL0 = True
tuple.TupleToolTrigger.TriggerList = ["L0HadronDecision", "L0DiMuonDecision"]
In older versions:
tuple = DecayTreeTuple("tuple")
#.... your configuration of DecayTreeTuple
tuple.ToolList += ["TupleToolTrigger"]
tuple.addTool(TupleToolTrigger, name="TupleToolTrigger")
tuple.TupleToolTrigger.VerboseL0 = True
tuple.TupleToolTrigger.TriggerList = ["Hlt1L0HadronDecision", "Hlt1L0DiMuonDecision"]
(If you don't understand why there is a "Hlt" in these, see below!)
OR (only for TupleToolTrigger in older versions and only for MC)
tuple = DecayTreeTuple("tuple")
#.... your configuration of DecayTreeTuple
tuple.ToolList += ["TupleToolTrigger"]
tuple.addTool(TupleToolTrigger, name="TupleToolTrigger")
tuple.TupleToolTrigger.VerboseL0 = True
tuple.TupleToolTrigger.useAutomaticTriggerList = True
How can I TisTos L0?
- TupleToolTISTOS: in the new version (DecayTreeTuple v3r6) you simply add "L0XXXDecision" to TupleToolTISTOS.TriggerList, where XXX is the L0 channel name as it appears in the L0DUReport. This info comes directly from the L0 reports. Using useAutomaticTriggerList=True will also work.
In older versions you have to rely on emulation of L0 decisions in Hlt1 (unreliable especially for muons!) and add "Hlt1L0XXXDecision" to the TriggerList.
In the older versions the results will show up under "L0XXXDecision" instead of "Hlt1L0XXXDecision".
If you specify "Hlt1L0XXXDecision" to the newer version you will get results under "Hlt1L0XXXDecision".
If you want to understand Hlt1L0 stuff - read below (the best thing is to switch to the new version and not bother with Hlt1L0).
Most of the Hlt1L0 stuff is explained in the older presentation on the TisTos tool; see slides 21-22, linked from TriggerTisTos. However, these docs don't cover TupleToolTISTOS, which is maintained on top of the TisTos tool and was recently rewritten by Vava and Rob.
For every L0 trigger line based on selection of candidates (L0XXX), Hlt1 runs Hlt1L0XXX, which just converts the L0 candidates responsible for the positive decision to a format that can be saved in the Hlt Raw Data bank (HltSelReports) for every event. There is no prescaling! These lines are POSTSCALED at the 10^-6 level, which means that even if candidates were found (i.e. the L0XXX trigger decision was true) the decision of Hlt1L0XXX is set to false, except for one time in a million when the L0XXX decision is copied to the Hlt1L0XXX decision. Therefore, the decision of Hlt1L0XXX is not equal to the L0XXX decision most of the time. However, the candidates are always saved; this is how postscaling works.
With special calls to the TisTos tool you can get the decision of L0XXX and its Tis/Tos from the Hlt1L0XXX output (see the slides referenced above for details). These details are hidden from you in TupleToolTISTOS. The results are reported under L0XXX in the trigger line name (i.e. not under Hlt1L0XXX). If you try accessing the decision of Hlt1L0XXX in HltDecReports you will almost always get false because of the postscaling. This is not what is stored as the decision of L0XXX in TupleToolTISTOS.
If the L0XXX line is not based on selection of candidates in MU, HCAL or ECAL (e.g. a line which just requires the SPD multiplicity to be greater than something), then both the decision and the tis/tos info obtained via TupleToolTISTOS are wrong (the decision would always be false since there are no candidates). Because of this, the decision and tis/tos of L0Global via TupleToolTISTOS may also be incorrect. One more reason not to rely on Global decisions.
If L0XXX is candidate based, but has some global cuts (like SPD multiplicity) then the global cuts are neglected in Tis/Tos.
There are some serious shortcomings in using Hlt1L0 lines to Tis/Tos L0 muon triggers. You can read about this on the L0TriggerTisTos page.
How Do I configure TupleToolMCTruth correctly ?
MCDecayFinder doesn't find as many decays as I expect
See LHCb.FAQ.LoKiNewDecayFinders.
Obsolete: this refers to DaVinci before v33r6p1.
Most likely you have not added enough optional photons and electrons (from material). For instance the following was found to work for Λb→J/ψΛ(→pπ):
"[ Lambda_b0 -> (^Lambda0 -> ^p+ ^pi- {,gamma}) (^J/psi(1S) -> ^mu+ ^mu- {,gamma}{,gamma}{,gamma}){,gamma}{,e-}{,e+}]cc"
This is due to the optional low energy photons that are added by PHOTOS.
Also make sure you include oscillated B's. For instance
"{[[B0]nos -> (^K*(892)0 -> ^K+ ^pi- {,gamma}{,gamma}) (J/psi(1S) -> ^e+ ^e- {,gamma}{,gamma}{,gamma}{,gamma}{,gamma}){,gamma}]cc, [[B0]os -> (^K*(892)~0 -> ^K- ^pi+ {,gamma}{,gamma}) (J/psi(1S) -> ^e+ ^e- {,gamma}{,gamma}{,gamma}{,gamma}{,gamma}) {,gamma}]cc}"
How Do I configure MCTupleToolP2VV as part of TupleToolMCTruth ?
EXAMPLE:
from Configurables import TupleToolMCTruth, MCTupleToolP2VV
BsTree.Bs.addTool(TupleToolMCTruth, name="BsMCTruth")
BsTree.Bs.ToolList = [ "TupleToolMCTruth/BsMCTruth"]
BsTree.Bs.BsMCTruth.ToolList = [ "MCTupleToolP2VV/Bs2JPsiPhi"]
BsTree.Bs.BsMCTruth.addTool( MCTupleToolP2VV, name="Bs2JPsiPhi" )
BsTree.Bs.BsMCTruth.Bs2JPsiPhi.Calculator = 'MCBs2JpsiPhiAngleCalculator'
How Do I plot/tuple what I will cut on?
DecayTreeTuple with identical particles in decay descriptor
If using LoKi decay finders with DecayTreeTuple there is no way to pick out identical particles using the standard machinery. One needs to use, for example, CHILD(i, ) for the particle in position i.
In what unit are the numbers stored in DecayTreeTuple, or generally what are the default units?
Gaudi has a default set of units and all algorithms must use them unless it is made absolutely clear they use something else. All momenta and energies are in MeV, all distances in mm and all times in ns. Anything else is a bug and should be reported. See this tutorial.
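Since everything in an ntuple is therefore in Gaudi default units, converting on the way out is a plain division. A pure-python sketch of the convention (the variable names are illustrative, not a Gaudi API):

```python
# Gaudi default units, mirroring the convention described above:
# momenta/energies in MeV, distances in mm, times in ns.
MeV, mm, ns = 1.0, 1.0, 1.0
GeV = 1000.0 * MeV
um  = 1.0e-3 * mm   # micrometre

p_stored  = 25000.0   # a momentum as stored in an ntuple: MeV
ip_stored = 0.042     # an impact parameter as stored: mm

print(p_stored / GeV)   # 25.0
print(ip_stored / um)   # roughly 42 micron
```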
How Do I translate DecayTreeTuple variables into CombineParticles ?
I Don't know which functors are what in LoKi
- If you don't know which functors are what, there are lists:
Is there some way to plot functors from CombineParticles ?
How to "monitor" various LoKi functors, used in CombineParticles/FilterDesktop framework?
The monitoring is described in detail here
What does the chi2 distance mean in distanceCalculator ?
It is the increase of the chi2 of the PV vertex fit when one adds the track into the vertex. It behaves almost like (IP/IP_error)^2.
See Vanya's talk at 10/3/2008 T-rec
.
How to know the TES location of "standard" particles?
The standard particles are defined in the package Phys/CommonParticles. One can always list the available configuration files in the directory $COMMONPARTICLESROOT/python/CommonParticles/:
cd $COMMONPARTICLESROOT/python/CommonParticles/
ls -al
To get more details about the various categories, try executing the file from this directory with the most appropriate name, e.g.:
python $COMMONPARTICLESROOT/python/CommonParticles/StandardBasic.py
python $COMMONPARTICLESROOT/python/CommonParticles/StandardIntermediate.py
Also one can manually import & investigate these files in python:
from CommonParticles.Utils import locationsDoD
from CommonParticles.StandardBasic import locations
print locationsDoD ( locations )
How to change the default vertex fitter for CombineParticles?
Vertex fitting in CombineParticles is performed by implementations of IParticleCombiner. CombineParticles holds a map of these, and the key to the default one is the empty string, "". You can change the vertex fitter used by CombineParticles with code like this:
from Configurables import CombineParticles
# define the CombineParticles instance
myCombineParticles = CombineParticles("_loosePhi2KK",
                                      DecayDescriptor = "phi(1020) -> K+ K-",
                                      CombinationCut = "(AM < 1100.*MeV)",
                                      MotherCut = "ALL",
                                      OutputLevel = 1)
# change the default IParticleCombiner
myCombineParticles.ParticleCombiners.update( { "" : "MyNewVertexFitter"} )
Here, it is assumed that you have a new IParticleCombiner C++ class called MyNewVertexFitter installed. The default IParticleCombiner in CombineParticles is ParticleCombiners[""], so we change it to the name of our new vertex fitter. See the doxygen for IParticleCombiner to find the list of available tools which can be used for fitting; just replace "MyNewVertexFitter" with your IParticleCombiner of choice.
How to change the properties of the default vertex fitter for CombineParticles?
The following code relies on the fact that you know that the default offline IParticleCombiner, used for vertex fitting in CombineParticles, is OfflineVertexFitter. This is chosen by the OnOfflineTool and can be over-written (see above).
myCombineParticles = CombineParticles(.....)
# OfflineVertexFitter is the default. We have to know that.
from Configurables import OfflineVertexFitter
myCombineParticles.addTool(OfflineVertexFitter)
myCombineParticles.OfflineVertexFitter.useResonanceVertex = False
The default vertex fitter (currently OfflineVertexFitter) is chosen by OnOfflineTool. See the doxygen for IParticleCombiner to find out the list of tools which can be used for fitting.
How can I use relation tables for track <-> MC links ?
The procedure for python is described in detail here. The procedure in C++ is very similar.
// get the relation table (2D) from the Transient Event Store
const LHCb::Track2MC2D* table2d = get<LHCb::Track2MC2D> ( LHCb::Track2MCLocation::Default ) ;
For a given track one can get the list of related MC-particles:
const LHCb::Track* track = ... ;

// get all links for the given track:
LHCb::Track2MC2D::Range links = table2d -> relations ( track ) ;
For the obtained links one can e.g. make an explicit loop over all links and extract the related MC-particle as well as the weight associated with the given relation:
// Loop over all links
for ( LHCb::Track2MC2D::Range::iterator link = links.begin() ; links.end() != link ; ++link )
{
  // get the related MC-particle
  const LHCb::MCParticle* mcp = link->to() ;
  // get the weight associated to the given link:
  const double weight = link->weight() ;

  info() << " Related MC-particle key/weight " << mcp->key() << "/" << weight << endmsg ;
}
Note that by construction the container of links is ordered, and thus one can easily get the relation with the largest weight:
if ( !links.empty() )
{
  // get MC-particle with largest weight:
  const LHCb::MCParticle* largest = links.back().to() ;
  // get its weight:
  const double weight = links.back().weight() ;
}
The treatment of "inverse" relations ( MCParticle -> Track ) is very similar:
// get the relation table (2D) from the Transient Event Store
const LHCb::Track2MC2D* table2d = get<LHCb::Track2MC2D> ( LHCb::Track2MCLocation::Default ) ;

// get its "inverse" part:
const LHCb::MC2Track* itable = table2d->inverse() ;

// get all the links for the given MC-particle:
const LHCb::MCParticle* mcparticle = .... ;

LHCb::MC2Track::Range links = itable->relations ( mcparticle ) ;

// loop over all related tracks:
for ( LHCb::MC2Track::Range::iterator link = links.begin() ; links.end() != link ; ++link )
{
  // get the related Track:
  const LHCb::Track* track = link->to() ;
  // get the weight:
  const double weight = link->weight() ;

  ...
}

if ( !links.empty() )
{
  // get the track with largest weight:
  const LHCb::Track* largest = links.back().to() ;
  // get its weight:
  const double weight = links.back().weight() ;
}
Important: for job configuration one *must* add the following lines to the configuration python script:
## Configure the relation tables Track <--> MCParticles
import LoKiPhysMC.Track2MC_Configuration
The example is available in the Ex/LoKiExample package: C++ code, python configuration.
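The "largest weight" trick relies only on the range being weight-ordered. In python terms (a stub sketch with made-up link tuples, not the actual Track2MC relations API):

```python
# Stub links as (mc_particle_key, weight) pairs, sorted by increasing
# weight, as the relation table range is by construction.
links = [(101, 0.35), (57, 0.60), (42, 0.97)]

if links:                               # cf. "if ( !links.empty() )" in C++
    best_key, best_weight = links[-1]   # links.back(): the largest weight
    print(best_key, best_weight)
```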
How to use LoKi algorithm-filters ?
LoKi algorithm-filters allow one to filter events on the basis of some event properties, e.g. properties of LHCb::ODIN, LHCb::L0DUReport, LHCb::HltDecReports and many others.
The basic filters are:
- LoKi::ODINFilter
- LoKi::L0Filter
- LoKi::HDRFilter
- LoKi::VoidFilter
The details are described here.
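As a concrete illustration, a LoKi::HDRFilter selecting events that passed one stripping line might be configured as below; the line name and reports location are placeholders for your own case.

```python
# Sketch: filter events on a stripping decision with LoKi::HDRFilter.
# "MyLine" and the DecReports location are placeholders.
from Configurables import LoKi__HDRFilter
stripFilter = LoKi__HDRFilter(
    'StripPassFilter',
    Code = "HLT_PASS('StrippingMyLineDecision')",
    Location = '/Event/Strip/Phys/DecReports')
# Put stripFilter at the head of your GaudiSequencer so that
# downstream algorithms only see accepted events.
```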
How to select a certain HLT line ?
See here
How to select a certain stripping line ?
See here
How to select MC-events with no pileup?
See here
How to select MC-events with interesting particles/decays?
See here
How to select events with multiple reconstructed primary vertices?
See here
How to select events with certain multiplicity of long tracks?
See here
How to select events with at least one backward track?
See here
How to select events with at least two good high-pt muons?
See here
How to evaluate various polarization angles?
See here
How to add event-level information into an N-tuple?
The simplest way is through the LoKi::Hybrid::EvtTupleTool, which allows one to add to the n-tuple information from LHCb::ODIN, LHCb::L0DUReport, trigger & stripping decisions, and rather generic information:
from Configurables import LoKi__Hybrid__EvtTupleTool as MyTool

myTool = ...

myTool.ODIN_Variables = {
    'run'    : 'ODIN_RUN'   ,
    'bxtype' : 'ODIN_BXTYP'
    }

myTool.L0_Variables = { ... }

myTool.HLT_Variables = {
    'microBias' : " switch ( HLT_PASS_RE('Hlt1MBMicro.*Decision') , 1 , 0 ) "
    }

myTool.VOID_Variables = {
    "pileup" : " CONTAINS ( 'Gen/Collisions') " ,
    "nLong"  : " TrSOURCE('Rec/Track/Best', TrLONG) >> TrSIZE " ,
    "nMu"    : " SOURCE( 'Phys/StdLooseMuons') >> SIZE "
}
How to refit decay trees / How to use DecayTreeFitter ?
There exists a nice utility, DecayTreeFitter, coded by Wouter Hulsbergen. This utility is very useful for refitting the whole decay tree of a particle, taking into account all internal decay structure, energy-momentum balance in each vertex, internal pointing constraints, etc. Optionally one can add the primary vertex pointing constraint for the head of the decay and/or mass-constraints for any decay component.
The usage of this utility is described in detail in the presentation for the 40th LHCb Software Week, 11 June 2010, see slides #3-18.
How to use DecayTreeFitter ?
See slides #3-9.
Note that there's also a TupleToolDecayTreeFitter that will fit your candidates. It also has the ability to change PID hypotheses.
How to use the IDecayTreeFit tool ?
See slides #10-14.
How to use the specialized DTF_ LoKi functors to deal with DecayTreeFitter ?
See slides #15-18
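As a sketch, the DTF_ functors can be used from a LoKi::Hybrid::TupleTool; the variable names on the left are hypothetical, and the functor expressions follow the pattern used elsewhere in this FAQ (e.g. DTF_FUN):

```python
from Configurables import LoKi__Hybrid__TupleTool

lokiDTF = LoKi__Hybrid__TupleTool( 'LoKi_DTF' )
lokiDTF.Variables = {
    ## mass refitted with the PV-pointing constraint:
    'DTF_M'        : "DTF_FUN ( M , True )" ,
    ## mass refitted with, in addition, a J/psi(1S) mass constraint:
    'DTF_M_JpsiC'  : "DTF_FUN ( M , True , 'J/psi(1S)' )" ,
    ## chi2/ndf of the decay-tree fit:
    'DTF_CHI2NDOF' : "DTF_CHI2NDOF ( True )"
    }
```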
How to perform the refit of the whole decay tree ?
There is a new algorithm, FitDecayTrees, that allows one to refit decay trees using the DecayTreeFitter::Fitter utility by Wouter Hulsbergen.
The usage is fairly trivial & transparent:
from Configurables import FitDecayTrees

myAlg = FitDecayTrees (
    ... ,
    ## the decay tree to be refit:
    Code = "DECTREE('[ B_s0 -> ( J/psi(1S) -> mu+ mu- ) ( phi(1020) -> K+ K- ) ]CC')" ,
    ## chi2/DoF-cut:
    MaxChi2PerDoF = 10 ,
    ...
    )
Note that the algorithm stores at the output location the cloned refitted tree (and of course does not change its input). Thus in this case the output container will contain, in addition to the B_s0 candidates, also the cloned daughter J/psi(1S), phi(1020), muon and kaon particles.
One can optionally apply the primary vertex pointing constraint:
from Configurables import FitDecayTrees

myAlg = FitDecayTrees (
    ... ,
    ## the decay tree to be refit:
    Code = "DECTREE('[ B_s0 -> ( J/psi(1S) -> mu+ mu- ) ( phi(1020) -> K+ K- ) ]CC')" ,
    ## chi2/DoF-cut:
    MaxChi2PerDoF = 10 ,
    ## Use PV-pointing constraint? The default value is False
    UsePVConstraint = True ,
    ...
    )
Also one can optionally apply mass-constraints:
from Configurables import FitDecayTrees

myAlg = FitDecayTrees (
    ... ,
    ## the decay tree to be refit:
    Code = "DECTREE('[ B_s0 -> ( J/psi(1S) -> mu+ mu- ) ( phi(1020) -> K+ K- ) ]CC')" ,
    ## chi2/DoF-cut:
    MaxChi2PerDoF = 10 ,
    ## Use Mass-constraints for specified particles:
    MassConstraints = [ 'J/psi(1S)' ] ,
    ...
    )
Both the primary vertex pointing constraint and mass constraints can be combined:
from Configurables import FitDecayTrees

myAlg = FitDecayTrees (
    ... ,
    ## the decay tree to be refit:
    Code = "DECTREE('[ B_s0 -> ( J/psi(1S) -> mu+ mu- ) ( phi(1020) -> K+ K- ) ]CC')" ,
    ## chi2/DoF-cut:
    MaxChi2PerDoF = 10 ,
    ## Use PV-pointing constraint? The default value is False
    UsePVConstraint = True ,
    ## Use Mass-constraints for specified particles:
    MassConstraints = [ 'J/psi(1S)' ] ,
    ...
    )
How to process stripped DSTs in an EFFICIENT way ?
Typical selection lines take only a small part of the stripped stream, and thus they can be processed much faster if one
- uses the appropriate filter
- uses the appropriate ordering of algorithms
From DaVinci v25r8 onwards, the second step is handled trivially by adding the appropriate filter to the DaVinci().EventPreFilters list, as shown in the example below.
Assume one needs to access a few lines from the CHARM stream.
In this case one can start by creating a helper filter sequence:
from PhysConf.Filters import LoKi_Filters
fltrs = LoKi_Filters (
    HLT1_Code  = " HLT_PASS_RE ('Hlt1MBMicro.*Decision') | HLT_PASS_RE ('Hlt1.*Track.*Decision') " ,
    HLT2_Code  = " HLT_PASS_RE ('Hlt2Topo.*Decision') | HLT_PASS_RE ('Hlt2CharmHad.*Decision') " ,
    STRIP_Code = " HLT_PASS_RE ('StrippingStripD2K(K|P)P_(A|B)_LoosePID_Sig.*Decision') " ,
    VOID_Code  = " EXISTS ('/Event/Strip') & EXISTS ('/Event/Charm') "
    )

fltrSeq = fltrs.sequence ( 'MyFilters' )

myAlgorithms = [ ... ]

daVinci = DaVinci (
    ...,
    EventPreFilters = [ fltrSeq ] ,
    UserAlgorithms  = myAlgorithms ,
    ...
    )
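As an aside, HLT_PASS_RE and the STRIP_Code above select on regular expressions over decision names. This standalone Python sketch (no Gaudi needed; the decision names and the exact anchoring semantics are illustrative assumptions) shows how such patterns pick out decisions:

```python
import re

# Patterns in the style of the LoKi_Filters example above.
patterns = [
    r"Hlt1MBMicro.*Decision",
    r"Hlt2Topo.*Decision",
]

# Hypothetical decision names, for illustration only.
decisions = [
    "Hlt1MBMicroBiasDecision",
    "Hlt2Topo2BodyDecision",
    "Hlt2CharmHadD02HHDecision",
]

def passes(name, pats):
    """True if the name matches any pattern (anchored at the start here;
    check the LoKi documentation for the exact matching semantics)."""
    return any(re.match(p, name) for p in pats)

selected = [d for d in decisions if passes(d, patterns)]
print(selected)  # only the Hlt1MBMicro... and Hlt2Topo... names pass
```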
How to add the information for Global Event Cuts into an EventTuple?
The simplest way is to use LoKi::Hybrid::EvtTupleTool
, see here
The example:
from Configurables import LoKi__Hybrid__EvtTupleTool as LoKiTool

 ...
myTupleAlg.addTool ( LoKiTool , 'MyLoKiTool' )
myTupleAlg.ToolList += [ "LoKi::Hybrid::EvtTupleTool/MyLoKiTool" ]

myTupleAlg.MyLoKiTool.VOID_Variables = {
    ## SPD-multiplicity
    "nSpd"        : " CONTAINS('Raw/Spd/Digits') "  , ## multiplicity in Spd
    ## OT-multiplicity
    "nOT"         : " CONTAINS('Raw/OT/Times') "    , ## number of OT-times in OT
    ## IT-clusters:
    "nITClusters" : " CONTAINS('Raw/IT/Clusters') " , ## number of clusters in IT
    ## TT-clusters:
    "nTTClusters" : " CONTAINS('Raw/TT/Clusters') " , ## number of clusters in TT
    ## ... (in a similar way)
    ...
    ## total number of tracks in the "Rec/Track/Best" container:
    'nTracks'     : " CONTAINS ('Rec/Track/Best') " , ## total number of tracks
    ...
    }
In case one needs finer granularity, e.g. for different track categories:
myTupleAlg.MyLoKiTool.Preambulo = [
    "from LoKiTrigger.decorators import *",
    "from LoKiCore.functions import *"
    ]

myTupleAlg.MyLoKiTool.VOID_Variables[ 'nVelo' ] = " TrSOURCE('Rec/Track/Best' , TrVELO) >> TrSIZE " ## number of Velo tracks
How do I list all known Particles in a friendly format ?
SetupProject DaVinci
dump_particle_properties
How do I use Particles not listed in ParticleTable.txt ?
You can add any particle to the table via options:
from Configurables import LHCb__ParticlePropertySvc as PPS
PPS().Particles = [ '...' ]
where in quotes you give a line as you would put in the particle table.
How are x, y, z, PHI defined?
As you look along the detector from VELO to MUON, z is increasing, x is to your left, y is above, and phi=0 is defined along the x axis and phi=90 along the y axis. This means x points towards the Saleve. The definition for the LHC machine is opposite to this.
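A tiny worked example of this convention (plain Python, nothing LHCb-specific):

```python
import math

def phi_degrees(x, y):
    """Azimuthal angle in degrees: phi = 0 along +x, phi = 90 along +y."""
    return math.degrees(math.atan2(y, x))

print(phi_degrees(1.0, 0.0))  # 0.0  : a point on the +x axis (Saleve side)
print(phi_degrees(0.0, 1.0))  # 90.0 : a point on the +y axis (up)
```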
I get an error about failure to load libGFAL.so
For instance:
Error in <TCint::AutoLoad>: failure loading library libGFAL.so for class TGFALFile
Error in <TPluginHandler::SetupCallEnv>: class TGFALFile not found in plugin GFAL
Message: (file "/afs/cern.ch/sw/lcg/app/releases/ROOT/5.32.00/x86_64-slc5-gcc43-opt/root/lib/libGFAL.so", line -1) dlopen error: libgfal.so.1: cannot open shared object file: No such file or directory
IODataManager ERROR Error: connectDataIO> Cannot connect to database: PFN=gfal:guid:449A0CA8-F3F0-E011-86D2-E0CB4E19F971 FID=449A0CA8-F3F0-E011-86D2-E0CB4E19F971
IODataManager ERROR Error: connectDataIO> Failed to resolve FID:449A0CA8-F3F0-E011-86D2-E0CB4E19F971
RootCnvSvc ERROR Error: Cannot open data file:449A0CA8-F3F0-E011-86D2-E0CB4E19F971
RootCnvSvc ERROR Error: createObj> Cannot access the object:449A0CA8-F3F0-E011-86D2-E0CB4E19F971:/Event/Dimuon
The important line here is the last one. You are trying to access a TES location that does not exist in your input file but exists in some other file that was made persistent before yours. The example tries to access the Dimuon stream in a BHADRON.DST.
Another typical example is when you try to access a location that would be there on a DST but is not on the microDST, like /Event/DAQ/ in
EventSelector SUCCESS Reading Event record 926. Record number within stream 1: 926
SysError in <TGFALFile::TGFALFile>: file guid:D8A8BBD2-5A02-E211-9535-90E6BA0D09E2 can not be opened for reading
(Communication error on send)
IODataManager ERROR Error: connectDataIO> Cannot connect to database: PFN=gfal:guid:D8A8BBD2-5A02-E211-9535
-90E6BA0D09E2 FID=D8A8BBD2-5A02-E211-9535-90E6BA0D09E2
IODataManager ERROR Error: connectDataIO> Failed to resolve FID:D8A8BBD2-5A02-E211-9535-90E6BA0D09E2
RootCnvSvc ERROR Error: createObj> Cannot access the object:D8A8BBD2-5A02-E211-9535-90E6BA0D09E2:/Event/
DAQ
EventSelector SUCCESS Reading Event record 927. Record number within stream 1: 927
Make sure you understand what you need to run your job and that it's available on the microDST. If the error comes from DecayTreeTuple, you may be including a tool that does not work on microDST. Or you have found a bug, which you should report.
How do I copy events from an existing DST file to a new one?
This is what you should do to select events:
- Add a HLT_PASS filter for your stripping line.
- Append
IOHelper().outputAlgs("SomeFileName.dst","InputCopyStream")
to your UserAlgs
This is what you should do to merge all events with no selection at all:
- Append
IOHelper().outputAlgs("SomeFileName.dst","InputCopyStream")
to your UserAlgs
If you need to run a FilterDesktop, or another intermediate filtering step, then the process is slightly different:
- Add a HLT_PASS filter for your stripping line.
- Make a Selection sequence with your filters
- Pass the selection sequence to the appropriate DstWriter
- Append the DstWriter sequence to your UserAlgs
LoKi filters are described here. Filters work like:
from PhysConf.Filters import LoKi_Filters
fltrs = LoKi_Filters(STRIP_Code="""HLT_PASS('StrippingBetaSBu2JpsiKDetachedLineDecision') | HLT_PASS('StrippingB2D0PiD2HHBeauty2CharmLineDecision')""" )
and then:
DaVinci().EventPreFilters = fltrs.filters ('fltrs')
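Putting the two pieces together, a minimal sketch of a copy job with no intermediate filtering (the stripping line is the one from the example above; the output file name is a placeholder):

```python
from PhysConf.Filters import LoKi_Filters
from GaudiConf import IOHelper
from Configurables import DaVinci

## keep only events passing the stripping decision
fltrs = LoKi_Filters(
    STRIP_Code = "HLT_PASS('StrippingBetaSBu2JpsiKDetachedLineDecision')"
    )
DaVinci().EventPreFilters = fltrs.filters('fltrs')

## copy every surviving event to a new DST
DaVinci().UserAlgorithms += IOHelper().outputAlgs( 'SomeFileName.dst' ,
                                                   'InputCopyStream' )
```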
LoKi access to isolation variables in Stripping 21
This changed moving from Stripping 20 to Stripping 21. ExtraInfo locations are "P2ConeVar1", "P2ConeVar2" and "P2ConeVar3" corresponding to cone angles of 1.5, 1.7 and 1.0 radians. So use e.g.
lokiVars[ branch ]["ptasy_1.50"] = "RELINFO('/Event/Bhadron/Phys/<YourLine>/P2ConeVar1','CONEPTASYM',-1000.)"
How to run Momentum scaling in DaVinci ?
See here
How to copy RELINFO into EXTRAINFO, when particle containers are cloned
The information stored in RELINFO is not part of the Particle object. Therefore, if/when you clone a particle container, the RELINFO information is not transmitted.
The "trick" (DaVinci emails from Vanya, Jason Andrews) is to copy the RELINFO into "ExtraInfo", which IS part of the Particle object, and therefore is carried over
when cloning a particle object. Below is the recipe for a specific use case of copying over one of the Cone variables:
B2D0pi = AutomaticData(Location = locationB) # [ locationB = put your Stripping Line location ]
name = "B2D0pi"
filter = FilterDesktop( 'infoWriter_for%sCand' % name )
path = "/Event/Bhadron/Phys/B2D0PiD2HHmuDSTBeauty2CharmLine/P2ConeVar1"
preamble = ["x = SINFO( 9014, RELINFO('"+path+"', 'CONEANGLE' , -99.),True)",
"z = SINFO( 9025, RELINFO('"+path+"', 'CONEPTASYM', -99.),True)"]
filter.Preambulo = preamble
filter.Code = '(x>-1000) & (z>-1000)'
infoWriterSelForB2D0pi = Selection( 'infoWriterSelFor_%s' % name, Algorithm = filter, RequiredSelections = [B2D0pi] )
after this, you can use this Selection in further Selections(), SubstitutePID(), etc. to then extract that information into your DecayTreeTuple:
LoKi_B = LoKi__Hybrid__TupleTool("LoKi_B")
LoKi_B.Variables = {
"LOKI_MASS_D0Constr" : "DTF_FUN ( M , True , 'D0')",
"PTANG" : "INFO(9014, -100. )",
"PTASY" : "INFO(9025, -100. )"
}
Obsolete FAQs
We think these questions are not relevant any more. If you still run into these problems and use a modern version of DaVinci
, please report to the mailing list.
Obsolete: Something wrong with MC truth on stripped data
- My jobs failed strangely when running on bb-dimuon samples with a segmentation in
BackgroundCategory::doAllFinalStateParticlesHaveACommonMother ()
- All my events are classified as ghosts
- I find no/strange association on my background.
This is all due to a mismatch between the stored linker tables and the saved tracks. One needs to rerun the association to get it right. Include
#include "$DAVINCIROOT/options/DaVinciMainSeqFixes.opts"
This should now be done automatically with DaVinci().RedoMCLinks
on DC06.
Obsolete: Can I use python options?
Yes, but not if you run DaVinci.exe
. At the command line do
gaudirun.py $DAVINCIROOT/options/DaVinci.py
Try gaudirun.py --help
for more options. In particular the -v
mode is very useful and allows you to print all options to the logfile. You can also use old-style options with gaudirun.py
gaudirun.py $DAVINCIROOT/options/DaVinci.opts
You can mix python and old-style options in both cases. In an .opts
file you can do
#include "$DAVINCIROOT/options/DaVinci.py"
and in a python file
importOptions('$DAVINCIROOT/options/DaVinci.opts')
See DaVinci Tutorial 1 for more examples, and the LHCb FAQ for pointers to Python options documentation.
Obsolete: What's the difference between DaVinci
, DaVinci.exe
and gaudirun.py
?
-
DaVinci.exe
is the old-style Gaudi application. It is built from GaudiMain.cpp
. DaVinci.exe
can only read old-style .opts
files. There is no reason to use it from the command line any more.
- There is one caveat:
ganga
still uses it, which is why it is still present in the release. But that does not prevent you from using python options in ganga
. ganga
calls gaudirun.py
to translate all options to old-style format and then passes this to DaVinci.exe
.
-
gaudirun.py
is the main application to be used.
-
DaVinci
is a linux alias for gaudirun.py
. In older versions of DaVinci (before v19r13
) it was pointing to DaVinci.exe
.
Obsolete: How do I use units in text options with DaVinci v22r0?
If you need units in your text options do
importOptions("$DAVINCIROOT/options/PreloadUnits.opts")
before anything else in DaVinci v22r0 and v22r0p1.
From v22r1 do
importOptions("$STDOPTS/PreloadUnits.opts")
Obsolete: I get a JobOptionsSvc FATAL [some file] : ERROR #3 : Syntax error
where [some file]
ends in .py
.
You are trying to include python options (maybe from a text options file) while using DaVinci.exe
. Use gaudirun.py
and see below.
Obsolete: Should I use MakeResonances
or CombineParticles
?
CombineParticles
! See LHCb/DaVinciTutorial4.
Obsolete: How can I run the trigger on MC09 data (June 2010)?
I got it to work in DaVinci v25r5, June 2010, with:
if Truth:
DaVinci().Simulation = True
### Emulate L0 ###
DaVinci().ReplaceL0BanksWithEmulated = True
DaVinci().L0 = True
DaVinci().Hlt = True
DaVinci().Hlt2Requires = 'L0+Hlt1'
DaVinci().HltThresholdSettings = 'Physics_25Vis_25L0_2Hlt1_2Hlt2_Apr10'
Where the Threshold setting should correspond to a physics scenario you are interested in!
Obsolete: I get an error from MuonRec
For instance:
MuonRec.sysExecute() FATAL Exception with tag=KeyedContainer is caught
MuonRec.sysExecute() ERROR KeyedContainer Cannot insert element to Keyed Container! StatusCode=FAILURE
ProtoPRecalibration.sysExecute() FATAL Exception with tag=KeyedContainer is caught
ProtoPRecalibration.sysExecute() ERROR KeyedContainer Cannot insert element to Keyed Container! StatusCode=FAILURE
TestProtoPRecalibration.sysExecute() FATAL Exception with tag=KeyedContainer is caught
TestProtoPRecalibration.sysExecute() ERROR KeyedContainer Cannot insert element to Keyed Container! StatusCode=FAILURE
PhysInitSeq.sysExecute() FATAL Exception with tag=KeyedContainer is caught
PhysInitSeq.sysExecute() ERROR KeyedContainer Cannot insert element to Keyed Container! StatusCode=FAILURE
DaVinciInitSeq.sysExecute() FATAL Exception with tag=KeyedContainer is caught
DaVinciInitSeq.sysExecute() ERROR KeyedContainer Cannot insert element to Keyed Container! StatusCode=FAILURE
MinimalEventLoopMgr.executeEvent() FATAL Exception with tag=KeyedContainer thrown by DaVinciInitSeq
MinimalEventLoopMgr.executeEvent() ERROR KeyedContainer Cannot insert element to Keyed Container! StatusCode=FAILURE
EventLoopMgr WARNING Execution of algorithm DaVinciInitSeq failed
You are trying to run on 2008 or MC09 data with DC06 options. Do for instance
DaVinci().DataType = "MC09"
Yes, everybody does. Don't worry, it doesn't harm. This is being followed up at https://savannah.cern.ch/bugs/index.php?51794
.
Obsolete: How do I access L0 (or similar) information when running on rDST
?
The rDST was designed to contain all the information needed by the preselections to perform the stripping on MC. Since no preselection asks for the trigger info, it is not present. You could add a file catalogue to get access to the raw data, but this is probably not very effective, unless you only need it for a few events.
Obsolete: I get a runtime exception to do with LumiSettings
XmlParserSvc ERROR DOM>> File , line 0, column 0: An exception occurred! Type:RuntimeException, Message:The primary document entity could not be opened. Id=/afs/cern.ch/user/r/rkumar/conddb:/Conditions/Online/LHCb/Lumi/LumiSettings
XmlGenericCnv FATAL XmlParser failed, can't convert /LumiSettings!
ServiceManager ERROR Unable to initialize service "HltReferenceRateSvc"
LoKiSvc.REPORT ERROR LoKi::Scalers::RateLimitV *EXCEPTION* : Unable to locate IReferenceRate* "HltRefere
nceRateSvc" StatusCode=FAILURE
ToolSvc.CoreFac... ERROR PyError Type : <type 'exceptions.TypeError'>
ToolSvc.CoreFac... ERROR PyError Value : none of the 4 overloaded methods succeeded. Full details:
It's a problem with a backward incompatible change in the Lumi/Hlt code that requires you to use a new Simcond tag.
Unfortunately this breaks the rule given up to now that you should always use the same database tags in analysis as was used in production. This has to be improved in future.
You should be able to fix your problem by adding the following two lines:
from Configurables import HltReferenceRateSvc
HltReferenceRateSvc().UseCondDB = False
This is not needed any more with DaVinci v25r5. This version should pick up the right database tag.
Obsolete: How do I translate my options to DaVinci()
?
See DaVinciConfigurable.
I run MCDecayTreeTuple on signal MC but have less than one candidate per event
This should be solved with LoKiNewDecayFinders. If not, try PrintMCTree. See DaVinciTutorial5.
This is likely because you are missing radiative photons. Your decay descriptor should include at least one optional photon per charged particle, if not more. For instance
mcTupleKPi.Decay = "{[[B0]nos => (^D~0 -> ^K+ ^pi- {,gamma}{,gamma}) ^K+ ^pi- {,gamma}{,gamma}]cc, [[B~0]os => (^D~0 -> ^K+ ^pi- {,gamma}{,gamma}) ^K+ ^pi- {,gamma}{,gamma}]cc}"
You can check what's going on by using PrintMCTree
mct = PrintMCTree()
mct.ParticleNames = [ "B0", "B~0", "B_s0", "B_s~0", "B+", "B-", "Lambda_b0", "Lambda_b~0" ]
-- Vanya Belyaev - 06-May-2k+10
-- PatrickSKoppenburg - 09-Mar-2012