Overview

  • TDR in Q1'17, Demonstration through Q2'16.
  • Need a coherent definition of objectives and expectations
  • Accelerate process of codifying metrics, definitions and scope

Baseline Performance Metrics

  • Primary measurements to be made:
    • Tracking Efficiency
    • Fake/duplicate rate
    • Track parameter resolutions
  • All evaluated in various samples
    • Scenarios
      • Muon/e/pi gun no PU
      • Muon/e/pi gun + PU
      • ttbar: muon & jets + PU
    • pT regimes (see the resolution sketch after this list)
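
For concreteness, a minimal Python sketch of how the resolution part of these metrics could be tabulated per pT regime. The input structure, the parameter list, and the pT bin edges are illustrative assumptions, not agreed definitions:

    # Sketch: tabulate track-parameter resolutions per pT regime.
    # "matched_pairs" is a hypothetical list of (tttrack, truth) pairs whose elements
    # expose pt, eta, phi and z0; the pT bin edges below are placeholders.
    from statistics import pstdev

    PT_REGIMES = [(2.0, 5.0), (5.0, 15.0), (15.0, 100.0)]   # GeV, assumed binning

    def resolutions(matched_pairs):
        """Return {pT regime: {parameter: RMS of (fit - truth) residuals}}."""
        table = {regime: {"pt": [], "eta": [], "phi": [], "z0": []} for regime in PT_REGIMES}
        for trk, tp in matched_pairs:
            for lo, hi in PT_REGIMES:
                if lo <= tp.pt < hi:
                    table[(lo, hi)]["pt"].append((trk.pt - tp.pt) / tp.pt)   # relative pT residual
                    table[(lo, hi)]["eta"].append(trk.eta - tp.eta)
                    table[(lo, hi)]["phi"].append(trk.phi - tp.phi)
                    table[(lo, hi)]["z0"].append(trk.z0 - tp.z0)
        return {regime: {par: (pstdev(vals) if len(vals) > 1 else float("nan"))
                         for par, vals in pars.items()}
                for regime, pars in table.items()}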

Additional Metrics

  • Track isolation efficiency
    • Single muon with pT > 20 GeV, plus 140/200 PU
    • Using tracks with pT > 2.0, 2.5, 3.0 GeV (see the isolation sketch after this list)
  • Jets
    • Focus on PU rejection from "vertexing"
    • Tracking in jets
      • Number of fit tracks and resolutions per jet
      • ttbar + PU. Maybe W+jets instead
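
A possible illustration of the track-isolation measurement in Python. The cone size (dR < 0.3), the relative-isolation form, and the object attributes are assumptions made only for this sketch; the track pT thresholds are the ones listed above:

    # Sketch: track-based isolation for a single muon, scanning the L1 track pT threshold.
    # The cone size and relative-isolation form are assumptions for this illustration;
    # "muon" and "l1_tracks" are hypothetical objects exposing pt/eta/phi.
    import math

    def delta_r(a, b):
        dphi = math.remainder(a.phi - b.phi, 2.0 * math.pi)   # wrap dphi into [-pi, pi]
        return math.hypot(a.eta - b.eta, dphi)

    def track_isolation(muon, l1_tracks, track_pt_min, cone=0.3):
        """Scalar sum of L1 track pT above threshold in a cone around the muon, over muon pT."""
        iso_sum = sum(t.pt for t in l1_tracks
                      if t.pt > track_pt_min and 0.0 < delta_r(t, muon) < cone)  # skip the muon's own track
        return iso_sum / muon.pt

    # Scan the thresholds listed above:
    # for threshold in (2.0, 2.5, 3.0):
    #     print(threshold, track_isolation(muon, l1_tracks, threshold))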

Efficiency Definitions

  • Numerator: final "good" output tracks (i.e., TTTracks)
  • Denominator: MC truth particles with stub requirements
  • Comments:
    • Must be able to directly compare between approaches
      • This can/should use common tools from TTI
    • Also need a detailed understanding of what contributes to net efficiency
      • Generally this will be approach-specific, although some factors will be common
    • "Good" output tracks implies qualification, need to settle on an operational definition
      • Fit tracks with at least 4 or 5 stubs matched to truth
      • Fit tracks with parameters "close" to truth
      • Logical AND of the above
      • ....
    • Efficiency targets: what defines success?
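
As a strawman only, a short Python sketch of the efficiency definition above. The >= 4 stub requirement and the 10% relative-pT window stand in for the operational "good track" definition that still has to be agreed; the object attributes and the match() helper are hypothetical:

    # Strawman efficiency per the definition above: numerator = "good" TTTracks,
    # denominator = truth particles passing the stub requirements.

    def is_good(tttrack, truth):
        """Logical AND of a stub-matching and a parameter-closeness requirement."""
        enough_stubs = tttrack.n_matched_stubs >= 4                   # or 5, to be decided
        close_params = abs(tttrack.pt - truth.pt) / truth.pt < 0.10   # assumed "closeness" window
        return enough_stubs and close_params

    def efficiency(truth_particles, match):
        """match(tp) returns the TTTrack associated to truth particle tp, or None."""
        denominator = [tp for tp in truth_particles if tp.passes_stub_requirements]
        numerator = 0
        for tp in denominator:
            trk = match(tp)
            if trk is not None and is_good(trk, tp):
                numerator += 1
        return numerator / len(denominator) if denominator else float("nan")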

Fake/Duplicate Rate

  • Requires consistent definitions; both must be measured (a bookkeeping sketch follows this list)
  • Another possibility: focus on overall rate instead
    • Obscures some aspects of system performance, but it is ultimately what the trigger cares about
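
A minimal Python sketch of one consistent bookkeeping for matched, duplicate, and fake tracks, assuming each fitted track carries the id of its associated truth particle (None if unmatched). Keeping the highest-pT match per truth particle is an assumption, not an agreed convention:

    def classify(tttracks):
        """Return (n_matched, n_duplicate, n_fake) for one event."""
        matched, duplicates, fakes = 0, 0, 0
        claimed_truth_ids = set()
        for trk in sorted(tttracks, key=lambda t: t.pt, reverse=True):
            if trk.truth_id is None:
                fakes += 1                          # no truth association -> fake
            elif trk.truth_id in claimed_truth_ids:
                duplicates += 1                     # truth particle already claimed -> duplicate
            else:
                claimed_truth_ids.add(trk.truth_id)
                matched += 1
        return matched, duplicates, fakes

The overall rate mentioned above would then just be (duplicates + fakes) divided by the total number of output tracks.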

Simulation

  • Full software simulation needed for the TDR
    • Input: common stub samples à la Gaelle and Seb
    • Output: TTTracks
    • At least a subset of input tracks should be findable by all approaches
  • Samples should be centrally produced/hosted
  • Simulation codebase from each approach should go public on git before TDR
  • Bitwise emulation of a tower/sector
    • Use the same simulated stub data as input to both HW and emulation (see the comparison sketch after this list)
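
A sketch, in Python, of what the bitwise hardware-vs-emulation check could look like if both sides dump one output word per line to text files in the same order; the file format and word alignment are assumptions:

    def compare_dumps(hw_path, emu_path):
        """Return a list of (line_number, hw_word, emu_word) mismatches."""
        mismatches = []
        with open(hw_path) as hw, open(emu_path) as emu:
            for i, (hw_word, emu_word) in enumerate(zip(hw, emu), start=1):
                if hw_word.strip().lower() != emu_word.strip().lower():
                    mismatches.append((i, hw_word.strip(), emu_word.strip()))
        return mismatches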

System Performance

  • Barrel, hybrid, endcap
  • All salient operations must be demonstrated
    • Seed/road finding
    • Pattern recognition
    • Track finding
    • Duplicate removal
    • I/O: all necessary inter-board communication, time MUX, etc.
  • Procedure
    • Run a set of common events (100 BX) through the systems
    • Compare with emulation, between approaches
  • Latency
    • From stubs arriving from the "DTC" to tracks out of L1Tk
    • DTC
      • Generically: "a board with FIFOs containing 8 BX packets"
      • All stub manipulation on the DTC must be accounted for in the latency measurement
    • 8 BX packets: measure latency w.r.t. the first word of the packet (see the sketch after this list)
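
A small Python sketch of the latency bookkeeping described above, counted from the clock tick on which the first word of the 8 BX packet leaves the DTC to the tick on which the corresponding track leaves L1Tk. The 240 MHz processing clock used for the conversion is an assumption for illustration, not a fixed choice:

    CLOCK_NS = 25.0 / 6.0   # one tick of an assumed 240 MHz clock (6 ticks per 25 ns BX)

    def latency(first_input_word_tick, track_output_tick):
        """Return the latency as (clock ticks, nanoseconds, BX)."""
        ticks = track_output_tick - first_input_word_tick
        ns = ticks * CLOCK_NS
        return ticks, ns, ns / 25.0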

System Constraints

  • Explore both trigger-friendly and trigger-unfriendly cabling scenarios

System Projections

  • TDR must sketch a path to a final system
  • Need an agreed-upon ansatz for future technology
  • Agree on realistic technology projections to 2025?
    • What capabilities will be available at what cost?
    • Will it be possible to realistically leverage them?

Documentation

  • Individual Technical Notes from each approach

-- KristianHahn - 2015-08-21
