+ A collection of things to be done... just a brain-dump of ideas, wishes, etc. If you'd like to contribute, please contact me.

  • Software for the Drift Tubes
    • DT Digitization: DT digitization in ORCA is in quite good shape. It is based on the GARFIELD parametrization of the cell response by the CIEMAT Group. What is missing is a systematic comparison with real data, to validate the parametrization (even if a comparison was made by the authors with a dedicated GARFIELD study), but also to check other effects:
      • Cell inefficiency: determine how, and whether, it should be modelled
      • Secondaries: tune the OSCAR (GEANT) cuts. Determine if the contribution of soft delta rays is taken into account properly. Note that for part of the soft delta rays the parametrization cannot be used, so their hits are currently ignored. Several proposals to handle them exist and should be evaluated by comparisons with real data.
      • Noise, Afterpulses, Neutron Background: Neutron-originated SimHits will be added during digitization (contact: T. Cox). Noise and afterpulses are currently not modelled. It is not clear to me if this can be done in a realistic way. For afterpulses, a good selection at reconstruction level is possible, so modelling them in simulation may not be necessary.
      • Cell Parametrizations: Improved studies have been started at CIEMAT (finer granularity, simulation of the electronics, space(time) function rather than the inverse function). Contact: P. Garcia-Abia
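To illustrate the digitization idea discussed above, here is a minimal sketch, assuming a simple linear space-time relation with Gaussian time smearing. This is purely a stand-in: the actual ORCA digitization uses the detailed GARFIELD-based parametrization of the cell response, and all names and numbers below are illustrative.

```python
import random

# Hypothetical, simplified stand-in for the cell-response parametrization:
# a linear space-time relation plus Gaussian smearing of the drift time.
DRIFT_VELOCITY = 54.3e-4  # cm/ns, a typical DT drift velocity (~54 um/ns)
TIME_RESOLUTION = 4.0     # ns, assumed Gaussian smearing of the drift time

def digitize(distance_to_wire_cm, t0_ns=0.0, rng=random):
    """Convert a SimHit's distance to the anode wire into a smeared TDC time."""
    drift_time = distance_to_wire_cm / DRIFT_VELOCITY
    return t0_ns + drift_time + rng.gauss(0.0, TIME_RESOLUTION)

# Example: a hit 1 cm from the wire drifts for ~184 ns before smearing.
rng = random.Random(42)
t = digitize(1.0, t0_ns=500.0, rng=rng)
```

A real parametrization would also depend on the track angle and the local magnetic field, which is exactly what the CIEMAT studies address.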
    • DT RecHit reconstruction: In ORCA right now, DT RecHits can be reconstructed either using a linear drift-velocity assumption or using the inverse parametrization developed by the CIEMAT group. Things to be done:
      • Comparison with TB: Check if the parametrization behaves well. (Active people: C. Villanueva, G. Cerminara, K. Hoepfner)
      • Calibration: Prepare automatic algorithms for the determination of t0's and for the calibration of the drift velocity (for the linear drift-velocity reconstruction) and of the parametrization. Test that they work on real data. Study the effect of residual miscalibration on reconstruction.
        Note that neither the t0 nor the average drift velocity has the same meaning (and thus definition) for the two reconstruction algorithms. Note also that even if the algorithms are well known, they should be automated, engineered to HLT operation level, and released. (Active people: M. Giunta, G. Cerminara, etc.)
      • Afterpulses, noise: Even if these are not simulated, strategies (e.g. to identify and discard afterpulses) should be developed in ORCA based on experience with real data.
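The linear drift-velocity reconstruction and t0 calibration mentioned above can be sketched as follows. This is a toy illustration under an assumed constant drift velocity, not the ORCA implementation; the t0 estimator (a low quantile of the TDC spectrum, approximating its rising edge) is one simple choice among several.

```python
DRIFT_VELOCITY = 54.3e-4  # cm/ns, assumed constant drift velocity

def rechit_position(tdc_time_ns, t0_ns):
    """Drift distance from the wire under the linear drift-velocity model."""
    drift_time = max(tdc_time_ns - t0_ns, 0.0)  # clamp early hits to the wire
    return DRIFT_VELOCITY * drift_time

def estimate_t0(tdc_times, threshold=0.05):
    """Crude t0 estimate: the rising edge of the drift-time distribution,
    taken here as a low quantile of the observed TDC times."""
    ordered = sorted(tdc_times)
    return ordered[int(threshold * (len(ordered) - 1))]

# Example: a fake TDC spectrum with a rising edge near 500 ns.
times = [500.0 + i for i in range(0, 387, 3)]
t0 = estimate_t0(times)          # slightly above 500 ns by construction
x = rechit_position(600.0, t0)   # drift distance for a 600 ns TDC hit
```

A residual bias in t0 translates directly into a position bias of v_drift * delta_t0, which is why the miscalibration studies listed above matter for resolution.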
    • DT segment reconstruction: Segments are currently built with a straight-line chi2 fit of individual hits, based on combinatorics. This is fine for clean segments but does not behave well for superlayers with many secondaries.
      • For SLs with showers, this method is very slow (combinatorics) and often fails. However, even in the presence of a large shower the SL provides important information: for example, a simple centroid of all the hits, with a vertex-pointing direction and appropriate errors, will help increase efficiency even though it contributes very little to the pT assignment (because it carries limited spatial information).
      • Other segment-fit strategies not based on combinatorics?
      • Study the effect on track resolutions (pT, eta, phi) of using the individual hits composing the segments in the fit, rather than the segments themselves.
  • L2 Muon Reconstruction algorithms.
    • Seed refiner. Right now, the L1 seed refiner is just an inside-out Kalman filter with segment collection. The actual fit is done outside-in and performs segment collection again. This is overkill: it is extremely slow and not optimal in terms of algorithmic efficiency. Idea: take L1 candidates and re-reconstruct segments in the chambers where they were found at L1. Use these to fit a refined seed (no segment discovery). Perform the final fit as before.
    • Study of the L2 pT resolution: Try to understand the origin of the tails of the L2 pT resolution distribution, as a function of pT. A detailed study has never been carried out. If the tails can be correlated with other quantities (number of hits or chi2 in some combination of chambers, missing measurements...), try to suppress them. The loss in efficiency will be balanced by a large reduction of the rate, which will allow the L2 threshold to be lowered. Focus on the "higher-pT" tail and on low-pT muons.
    • Filtering algorithm: L2 muon reconstruction uses a plain Kalman filter, which is probably not optimal in the presence of large multiple scattering. Other algorithms are available and probably better suited (GSF? ...). Expertise and suggestions are available in the Tracker group. Algorithms reducing tails will be extremely useful (see above).
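For reference, the "plain Kalman filter" mentioned above can be sketched in one dimension, with a process-noise term standing in for multiple scattering. This is a toy: the real L2 fit propagates a 5-parameter track state with full material effects, and the limitation noted above is precisely that a single Gaussian process-noise term models heavy scattering tails poorly (hence the interest in GSF-like approaches).

```python
def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1e6):
    """Filter a sequence of 1D position measurements; returns the estimates."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + process_var          # predict: inflate covariance (scattering)
        k = p / (p + meas_var)       # Kalman gain
        x = x + k * (z - x)          # update state with the residual
        p = (1.0 - k) * p            # update covariance
        estimates.append(x)
    return estimates

# Example: noisy station-by-station measurements of a track at x = 1.0.
est = kalman_1d([1.2, 0.9, 1.1, 1.0], meas_var=0.04, process_var=0.001)
```

A GSF replaces the single Gaussian state with a weighted mixture of Gaussians, which is what lets it follow non-Gaussian scattering and energy-loss tails.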
    • Optimization of forward muon DetLayer navigation. Complete optimization has been done for the barrel; there is still room for improvement in the endcaps.
  • HLT studies
    • Detailed study of efficiency, resolutions, and timing: for L2, resolutions are critical, see above. Such studies should not only document the performance, but also spot problems and possible improvements (there are surely many).
    • Production of inclusive MB samples for background studies: contact D. Holmes, U. Gasparini.
    • Optimization of the HLT strategy: this requires the samples described above for the estimation of trigger rates.

-- NicolaAmapane - 27 Jan 2005

Topic revision: r5 - 2005-04-12 - NicolaAmapaneSecondary