
Approved plots for the L1Track Trigger project

Introduction


Approved plots that can be shown by ATLAS speakers at conferences and similar events.
Please do not add figures on your own. Contact the responsible project leader in case of questions and/or suggestions.

L1 Trigger object rates and rejection factors

ATL-COM-DAQ-2013-085 Tracking for the ATLAS Level 1 Trigger for the HL-LHC

The impact of different curvature resolutions on the RoI-to-truth matching for muons from L1_MU20 RoIs, showing the maximum pT of the matching true muon after smearing the curvature variable, q/pT, by the values shown in the figure.


png pdf

The fraction of |η| < 1.3 L1_MU20 RoIs that remain after matching to a truth muon with increasingly tight matching requirements. The “All RoIs” bin shows the fraction remaining (1.0) with no matching requirement, as a reference; the “TruthMatch” bin shows the fraction after a match to a true muon with any pT; the “TruthMatch pT > 15” bin shows the fraction after a match to a true muon with pT > 15 GeV; and the “SmTruMatch > 15” bin shows the fraction after a match to a true muon with pT > 15 GeV after smearing the muon curvature, q/pT, by the resolutions shown in the figure.


png pdf
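The curvature smearing described in the captions above can be sketched as follows. This is an illustrative toy, not the analysis code: the resolution value, seed, and event count are invented, and unit charge is assumed.

```python
import random

def smear_pt(pt_true, sigma_qoverpt):
    """Smear the curvature q/pT by a Gaussian resolution and return the smeared pT.

    pt_true in GeV, sigma_qoverpt in 1/GeV; assumes a unit-charge muon.
    """
    qoverpt = 1.0 / pt_true
    qoverpt_smeared = qoverpt + random.gauss(0.0, sigma_qoverpt)
    return abs(1.0 / qoverpt_smeared) if qoverpt_smeared != 0 else float("inf")

# Fraction of true 20 GeV muons still passing a 15 GeV cut after smearing
# (hypothetical q/pT resolution of 0.01 / GeV, purely for illustration)
random.seed(42)
sigma = 0.01
passing = sum(smear_pt(20.0, sigma) > 15.0 for _ in range(10000)) / 10000
```

The same smear-then-cut procedure, repeated for each resolution value, is what produces the different curves in the figure.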

Approved plots from the chip readout discrete event simulation

ATL-COM-DAQ-2013-085 Tracking for the ATLAS Level 1 Trigger for the HL-LHC

The arrival time at the end-of-stave of the final R3 packet following an R3 request, for barrel hybrid 0 in the innermost ITk Strip Tracker layer. The L0 and L1 accept rates are 500 kHz and 200 kHz respectively, with 10% R3 detector occupancy and 160 Mbps bandwidth from the HCC. In the simulation there are 10 ABC chips, arranged in 2 daisy chains of 5 chips each, attached to the HCC. R3 data is prioritised on the HCC. The separation between the peaks is determined by the time taken to transfer packets between adjacent chips in each daisy chain.


png pdf
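The peak structure described in the caption above can be reproduced with a toy model: a packet from the n-th chip in a daisy chain must hop through n links before reaching the HCC, so arrivals cluster into peaks separated by the chip-to-chip transfer time. This is a minimal sketch, not the actual discrete event simulation; the timing constants are invented.

```python
# Toy model of R3 packet arrival times on one hybrid: 10 ABC chips in
# 2 daisy chains of 5, each chip forwarding packets towards the HCC.
# All timing constants below are hypothetical, for illustration only.

T_HOP = 0.2   # us, assumed chip-to-chip transfer time per packet
T_HCC = 0.05  # us, assumed HCC serialisation time per packet

def arrival_times(n_chains=2, chips_per_chain=5, t_hop=T_HOP, t_hcc=T_HCC):
    """Arrival time at the end-of-stave of one R3 packet per chip.

    Chips are processed in order of distance from the HCC; packets from
    the same chip position in each chain contend for the HCC output link.
    """
    times = []
    hcc_busy_until = 0.0
    for chip in range(chips_per_chain):          # chips ordered by distance
        for chain in range(n_chains):
            ready = (chip + 1) * t_hop           # time to hop to the HCC
            start = max(ready, hcc_busy_until)   # HCC serialises packets
            hcc_busy_until = start + t_hcc
            times.append(hcc_busy_until)
    return times

times = arrival_times()
```

With t_hcc small compared to t_hop, the arrivals form groups separated by t_hop, matching the peak-separation behaviour the caption describes.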

The time required to read out all the R3 data packets for 95% of all R3 requests, as a function of the Level 1 accept rate, for the discrete event simulation of the Phase II ITk Strip Tracker. The different curves in each of the two groups correspond to all the hybrids in the innermost Strip Tracker barrel layer. The Level 0 accept rate is 500 kHz and the regional occupancy is 10%. The bandwidth from the HCC is 160 Mbps. In the simulation there are 10 chips per hybrid, in 2 daisy chains of 5 chips. Shown are the latencies both with and without prioritisation of the R3 data on the HCC.


png pdf

The time required to read out all the R3 data packets for 95% of all R3 requests, as a function of the Level 1 accept rate, for the discrete event simulation of the Phase II ITk Strip Tracker, for all the hybrids in the endcap petal furthest from the interaction point. The Level 0 accept rate is 500 kHz, the regional occupancy is 10%, and the bandwidth from the HCC is 160 Mbps. The number of chips per hybrid varies with the hybrid number; hybrid 6 has 12 chips. The latencies shown include prioritisation of the R3 data on the HCC: the solid lines correspond to the latencies using a FIFO with a maximum depth of 32 packets to receive from each daisy chain, while the dotted lines are for a FIFO with unlimited depth.


png pdf

The time required to read out all the R3 data packets for 95% of all R3 requests, as a function of both the Level 1 accept rate and the R3 rate (occupancy × L0 rate), for the discrete event simulation of the Phase II ITk Strip Tracker, for the highest-occupancy hybrid in barrel layer 0. In the simulation, the bandwidth from the HCC is 160 Mbps and the number of chips attached to the hybrid is 10, in 2 daisy chains of 5 chips. The latencies shown include prioritisation of the R3 data on the HCC. For reference, the dotted lines represent the baseline 200 kHz L1 rate and the 500 kHz × 10% occupancy = 50 kHz R3 rate.


png pdf
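Two small definitions recur in these captions: the R3 rate is just occupancy × L0 rate (500 kHz × 10% = 50 kHz for the baseline), and the quoted latency is the 95th percentile of the per-request readout times. A minimal sketch of both, with made-up latency samples purely for illustration:

```python
import math

# Baseline R3 rate as quoted in the captions: occupancy x L0 accept rate
l0_rate_khz = 500.0
occupancy = 0.10
r3_rate_khz = l0_rate_khz * occupancy  # 50 kHz baseline

def percentile(samples, q):
    """q-th percentile by the nearest-rank method, on a sorted copy."""
    s = sorted(samples)
    k = min(len(s) - 1, max(0, math.ceil(q / 100.0 * len(s)) - 1))
    return s[k]

# Hypothetical per-request readout latencies in microseconds
latencies_us = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0, 8.0, 12.0]
t95 = percentile(latencies_us, 95)  # the "95% of all R3 requests" quantity
```

Repeating the t95 extraction for each simulated (L1 rate, R3 rate) point is what builds up the curves in these figures.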

The time required to read out all the R3 data packets for 95% of all R3 requests, as a function of both the Level 1 accept rate and the R3 rate (occupancy × L0 rate), for the discrete event simulation of the Phase II ITk Strip Tracker, for hybrid 6 in the endcap petal furthest from the interaction point. In the simulation, the bandwidth from the HCC is 160 Mbps and the number of chips attached to the hybrid is 12, in 2 daisy chains of 6 chips. The latencies shown include prioritisation of the R3 data on the HCC. For reference, the dotted lines represent the baseline 200 kHz L1 rate and the 500 kHz × 10% occupancy = 50 kHz R3 rate.


png pdf

The time required to read out all the R3 data packets for 95% of all R3 requests, as a function of both the Level 1 accept rate and the R3 rate (occupancy × L0 rate), for the discrete event simulation of the Phase II ITk Strip Tracker, for hybrid 6 in the endcap petal furthest from the interaction point. In the simulation, the bandwidth from the HCC is 320 Mbps and the number of chips attached to the hybrid is 12, in 4 daisy chains of 3 chips. The latencies shown include prioritisation of the R3 data on the HCC. For reference, the dotted lines represent the baseline 200 kHz L1 rate and the 500 kHz × 10% occupancy = 50 kHz R3 rate.


png pdf

ATL-COM-DAQ-2015-067 L1 Track HCC latency maps for readout at 1 MHz

Detector map showing the latency within which 95% of L0-Priority requests for regional detector readout after an L0 request can be completed. The full system delivers a rate of 1 MHz of full detector data, of which 10% are L0-Priority requests corresponding to a Regional Readout Request (R3). The data from the R3 requests will be processed by the L1Track system. The latencies are estimated with the L1Track discrete event simulation. The readout bandwidth from the Hybrid Chip Controllers (HCC) for each hybrid is 320 Mbps. The data format for the cluster data within the system is the same for both L0 and L0-Priority requests. The chip hit occupancies correspond to a mean inclusive pileup interaction multiplicity, <μ_INCL>, of 196 interactions per bunch crossing, the upper limit expected with a bunch separation of 25 ns at an instantaneous luminosity of 7×10^34 cm^-2 s^-1.


png pdf

Detector map showing the latency within which 99% of L0-Priority requests for regional detector readout after an L0 request can be completed. The full system delivers a rate of 1 MHz of full detector data, of which 10% are L0-Priority requests corresponding to a Regional Readout Request (R3). The data from the R3 requests will be processed by the L1Track system. The latencies are estimated with the L1Track discrete event simulation. The readout bandwidth from the Hybrid Chip Controllers (HCC) for each hybrid is 320 Mbps. The data format for the cluster data within the system is the same for both L0 and L0-Priority requests. The chip hit occupancies correspond to a mean inclusive pileup interaction multiplicity, <μ_INCL>, of 196 interactions per bunch crossing, the upper limit expected with a bunch separation of 25 ns at an instantaneous luminosity of 7×10^34 cm^-2 s^-1.


png pdf

Detector map showing the latency within which 95% of non-prioritised requests for full detector readout, at either L0 or L1, can be completed. The full system delivers a rate of 1 MHz of full detector data, of which 10% are L0-Priority requests corresponding to a Regional Readout Request (R3). The data from the R3 requests will be processed by the L1Track system. The latencies are estimated with the L1Track discrete event simulation. The readout bandwidth from the Hybrid Chip Controllers (HCC) for each hybrid is 320 Mbps. The data format for the cluster data within the system is the same for both L0 and L0-Priority requests. The chip hit occupancies correspond to a mean inclusive pileup interaction multiplicity, <μ_INCL>, of 196 interactions per bunch crossing, the upper limit expected with a bunch separation of 25 ns at an instantaneous luminosity of 7×10^34 cm^-2 s^-1.


png pdf

Detector map showing the latency within which 99% of non-prioritised requests for full detector readout, at either L0 or L1, can be completed. The full system delivers a rate of 1 MHz of full detector data, of which 10% are L0-Priority requests corresponding to a Regional Readout Request (R3). The data from the R3 requests will be processed by the L1Track system. The latencies are estimated with the L1Track discrete event simulation. The readout bandwidth from the Hybrid Chip Controllers (HCC) for each hybrid is 320 Mbps. The data format for the cluster data within the system is the same for both L0 and L0-Priority requests. The chip hit occupancies correspond to a mean inclusive pileup interaction multiplicity, <μ_INCL>, of 196 interactions per bunch crossing, the upper limit expected with a bunch separation of 25 ns at an instantaneous luminosity of 7×10^34 cm^-2 s^-1.


png pdf

Simulated performance plots

ATL-COM-DAQ-2016-065 Single lepton efficiencies

Signal vs. background efficiencies for three track selection strategies, as functions of a track pT cut, in the region of interest 0.1 ≤ η ≤ 0.3, 0.3 ≤ φ ≤ 0.5. The efficiency is defined as the number of events passing the L1Track trigger divided by the number of L0 single lepton trigger accepts. The signal is composed of single electrons, and the background consists of semileptonically decaying jets, weighted to the expected pT spectra of events firing the L0 EM18 trigger (which could not be simulated directly) and overlaid with pileup of <μ> = 200. The number next to each marker signifies the pT cut applied to the track candidates resulting from the L1Track fit; the candidate is selected either as the one with the highest pT (light blue), the higher-pT of the two candidates with the best χ^2 (dark blue), or the candidate with the best χ^2 (black).


png pdf
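The efficiency definition and the three selection strategies in the caption above can be sketched directly. The track-candidate records and numbers here are mocked up for illustration; they are not the real L1Track fit output.

```python
def select_best_chi2(tracks):
    """Strategy "best chi2": the candidate with the smallest chi2 (black)."""
    return min(tracks, key=lambda t: t["chi2"])

def select_highest_pt(tracks):
    """Strategy "highest pT": the candidate with the largest pT (light blue)."""
    return max(tracks, key=lambda t: t["pt"])

def select_pt_of_best_two(tracks):
    """Strategy "higher pT of the two best-chi2 candidates" (dark blue)."""
    best_two = sorted(tracks, key=lambda t: t["chi2"])[:2]
    return max(best_two, key=lambda t: t["pt"])

def efficiency(events, select, pt_cut):
    """Fraction of L0-accepted events whose selected track passes the pT cut,
    i.e. N(L1Track pass) / N(L0 accepts) as defined in the caption."""
    passed = sum(1 for tracks in events
                 if tracks and select(tracks)["pt"] > pt_cut)
    return passed / len(events)

# Two mock L0-accepted events, each with its L1Track candidates (pt in GeV)
events = [
    [{"pt": 22.0, "chi2": 35.0}, {"pt": 18.0, "chi2": 5.0}],
    [{"pt": 9.0, "chi2": 12.0}],
]
eff = efficiency(events, select_best_chi2, pt_cut=15.0)
```

Scanning `pt_cut` over a range of values for signal and background samples separately is what traces out each curve in the figure.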
Signal vs. background efficiencies for three track selection strategies, as functions of a track pT cut, in the region of interest 0.1 ≤ η ≤ 0.3, 0.3 ≤ φ ≤ 0.5. The efficiency is defined as the number of events passing the L1Track trigger divided by the number of L0 single lepton trigger accepts. The signal is composed of single muons, and the background consists of semileptonically decaying b-jets, weighted to the expected pT spectra of events firing the L0 MU20 trigger (which could not be simulated directly) and overlaid with pileup of <μ> = 200. The number next to each marker signifies the pT cut applied to the track candidates resulting from the L1Track fit; the candidate is selected either as the one with the highest pT (light blue), the higher-pT of the two candidates with the best χ^2 (dark blue), or the candidate with the best χ^2 (black).


png pdf
Summary of the pattern recognition and track fitting performance on single muon and minimum bias events in the barrel region, 0.1 ≤ η ≤ 0.3, 0.3 ≤ φ ≤ 0.5, for two layer configurations: one with strip layers only, and one where the innermost strip layer has been replaced by a pixel layer, both using the Phase II upgrade Letter of Intent layout. The pattern matching efficiency, ε_pattern, is defined as the fraction of single muon events with a matched pattern; the track fitting efficiency, ε_fit, is defined as the fraction of those events where at least one track fit is successful and has χ^2 < 40; and <N fits> is the average number of fits in minimum bias events with <μ> = 200 pileup interactions.


png pdf
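The three quantities in the table above follow directly from their definitions in the caption. A minimal sketch with mock event counts (the numbers below are invented, not the values in the table):

```python
# Quantities as defined in the caption:
#   eps_pattern : fraction of single-muon events with a matched pattern
#   eps_fit     : fraction of matched events with >= 1 fit passing chi2 < 40
#   <N fits>    : mean number of fits per minimum-bias event
# All counts below are hypothetical, for illustration only.

n_events = 1000                       # single-muon events simulated
n_matched = 970                       # events with a matched pattern
n_good_fit = 950                      # matched events with a chi2 < 40 fit
fits_per_mb_event = [3, 5, 2, 4, 6]   # fit counts in minimum-bias events

eps_pattern = n_matched / n_events
eps_fit = n_good_fit / n_matched      # defined relative to matched events
mean_n_fits = sum(fits_per_mb_event) / len(fits_per_mb_event)
```

Note that ε_fit is conditional on pattern matching having succeeded, so the overall fraction of events with a good fit is the product ε_pattern × ε_fit.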
The resolutions of the track parameters from the fit for single muon events in the barrel region, 0.1≤η≤0.3, 0.3≤φ≤0.5, for two layer configurations: one with strip layers only and one where the innermost strip layer has been replaced by a pixel layer, both using the Phase II upgrade Letter of Intent layout.


png pdf

Major updates:
-- MarkSutton - 06-Oct-2013

Responsible: MarkSutton
Subject: public plots for L1 Track

Topic attachments
I Attachment History Action Size Date Who Comment
PDFpdf L0-latency-95.pdf r1 manage 396.5 K 2015-06-10 - 14:16 MarkSutton latency maps
PNGpng L0-latency-95.png r1 manage 352.0 K 2015-06-10 - 14:16 MarkSutton latency maps
PDFpdf L0-latency-99.pdf r1 manage 400.1 K 2015-06-10 - 14:16 MarkSutton latency maps
PNGpng L0-latency-99.png r1 manage 365.6 K 2015-06-10 - 14:16 MarkSutton latency maps
PDFpdf L0Priority-latency-95.pdf r1 manage 393.8 K 2015-06-10 - 14:16 MarkSutton latency maps
PNGpng L0Priority-latency-95.png r1 manage 345.9 K 2015-06-10 - 14:16 MarkSutton latency maps
PDFpdf L0Priority-latency-99.pdf r1 manage 395.3 K 2015-06-10 - 14:16 MarkSutton latency maps
PNGpng L0Priority-latency-99.png r1 manage 347.7 K 2015-06-10 - 14:16 MarkSutton latency maps
PDFpdf LoI_pixVSstrip_eff.pdf r1 manage 32.9 K 2016-09-19 - 14:12 PerOlovJoakimGradin Tables with pattern matching and fitting performance
PNGpng LoI_pixVSstrip_eff.png r1 manage 118.8 K 2016-09-19 - 14:12 PerOlovJoakimGradin Tables with pattern matching and fitting performance
PDFpdf LoI_pixVSstrip_res.pdf r1 manage 38.3 K 2016-09-19 - 14:12 PerOlovJoakimGradin Tables with pattern matching and fitting performance
PNGpng LoI_pixVSstrip_res.png r1 manage 93.9 K 2016-09-19 - 14:12 PerOlovJoakimGradin Tables with pattern matching and fitting performance
PDFpdf ROC_h_maxpt_event_wtOverFlow_3_electronsPU.pdf r1 manage 17.1 K 2016-08-17 - 16:37 PerOlovJoakimGradin Performance plots
PNGpng ROC_h_maxpt_event_wtOverFlow_3_electronsPU.png r1 manage 430.0 K 2016-08-17 - 16:38 PerOlovJoakimGradin Performance plots
PDFpdf ROC_h_maxpt_event_wtOverFlow_3_muonsPU.pdf r1 manage 16.5 K 2016-08-17 - 16:38 PerOlovJoakimGradin Performance plots
PNGpng ROC_h_maxpt_event_wtOverFlow_3_muonsPU.png r1 manage 421.1 K 2016-08-17 - 16:38 PerOlovJoakimGradin Performance plots
PNGpng l1track-1.png r1 manage 144.4 K 2013-10-06 - 10:20 MarkSutton  
PNGpng l1track-2.png r1 manage 99.2 K 2013-10-06 - 10:20 MarkSutton  
PNGpng l1track-3.png r1 manage 465.7 K 2013-10-06 - 10:20 MarkSutton  
PNGpng l1track-4.png r1 manage 299.2 K 2013-10-06 - 10:20 MarkSutton  
PNGpng l1track-5.png r1 manage 349.7 K 2013-10-06 - 10:20 MarkSutton  
PNGpng l1track-6.png r1 manage 691.1 K 2013-10-06 - 10:20 MarkSutton  
PNGpng l1track-7.png r1 manage 319.4 K 2013-10-06 - 10:20 MarkSutton  
Topic revision: r5 - 2016-09-19 - PerOlovJoakimGradin
 