
TriggerCoreSWPublicResults

Introduction

Approved plots that can be shown by ATLAS speakers at conferences and similar events. Please do not add figures on your own. Contact the responsible project leader in case of questions and/or suggestions. Follow the guidelines on the trigger public results page.

Plots not included here can be found on the TriggerOperationPublicResults, TriggerPublicResults or TdaqPublic pages.

Run3

Performance

Trigger software performance scaling Internal link - ATL-COM-DAQ-2023-001

(a) Application throughput in events / s, (b) CPU usage (CPU time divided by wall time) in percent and (c) memory usage in GB as a function of the number of events processed in parallel for the ATLAS Athena [1,2] application executing Trigger selection algorithms. The measurements were performed with a data sample containing a mix of events representative of the real ATLAS High-Level Trigger (HLT) input data and a Trigger selection configuration identical to the one used during data-taking. Four ways of achieving the parallelism are presented. Blue squares represent a multi-processing approach where the main process is forked after initialisation into a number of worker processes equal to the number of events requested to process in parallel. Each worker processes events independently using a single thread, sharing read-only memory with other workers via the Copy-On-Write mechanism. Pink circles represent a multi-threading approach with a single process using a number of threads equal to the number of events requested to process in parallel, sharing both read-only and writeable memory. Threads are not bound to events; instead, a pool of NT threads processes a pool of NE = NT events. Green diamonds and orange triangles represent a hybrid approach where NP = NE / NT processes, forked after initialisation, each use a fixed number of threads (NT = 4 and 8, respectively) to process NE events in parallel. During 2022 data-taking the ATLAS HLT used the multi-process configuration with 48 forks. The measurements were performed in a standalone local environment using a machine identical to those used in the ATLAS HLT computing farm during data-taking. It is a dual-processor machine with 128 GB RAM using a NUMA memory architecture and two AMD EPYC 7302 CPUs, where each CPU has 16 physical cores with two hyper-threads per core, giving a total of 64 threads.

[1] ATLAS Collaboration, 2019, Athena [software], Release 22.0.1, https://doi.org/10.5281/zenodo.2641997
[2] ATLAS Collaboration, 2022, Athena [software], Release 22.0.102, https://gitlab.cern.ch/atlas/athena/-/releases/release/22.0.102


(a) png (a) pdf
contact: RafalBielski

(b) png (b) pdf
contact: RafalBielski

(c) png (c) pdf
contact: RafalBielski
(a) Application throughput in events / s, (b) CPU usage (CPU time divided by wall time) in percent and (c) memory usage in GB as a function of the number of events processed in parallel for the ATLAS Athena [1,2] application executing Trigger selection algorithms. The measurements were performed with a tt̄ ATLAS Monte Carlo (MC) simulation sample with an average pileup of 52, executing a wide range of High-Level Trigger (HLT) selections as well as simulation of the Level-1 hardware Trigger. Four ways of achieving the parallelism are presented. Blue squares represent a multi-processing approach where the main process is forked after initialisation into a number of worker processes equal to the number of events requested to process in parallel. Each worker processes events independently using a single thread, sharing read-only memory with other workers via the Copy-On-Write mechanism. Pink circles represent a multi-threading approach with a single process using a number of threads equal to the number of events requested to process in parallel, sharing both read-only and writeable memory. Threads are not bound to events; instead, a pool of NT threads processes a pool of NE = NT events. Green diamonds and orange triangles represent a hybrid approach where NP = NE / NT processes, forked after initialisation, each use a fixed number of threads (NT = 4 and 8, respectively) to process NE events in parallel. In 2022, ATLAS MC production tasks were executed in the multi-threaded mode and submitted most commonly to 8-core task queues in the Worldwide LHC Computing Grid [3], running on various CPU architectures. The measurements were performed in a standalone local environment using a machine identical to those used in the ATLAS HLT computing farm during data-taking. It is a dual-processor machine with 128 GB RAM using a NUMA memory architecture and two AMD EPYC 7302 CPUs, where each CPU has 16 physical cores with two hyper-threads per core, giving a total of 64 threads.

[1] ATLAS Collaboration, 2019, Athena [software], Release 22.0.1, https://doi.org/10.5281/zenodo.2641997
[2] ATLAS Collaboration, 2022, Athena [software], Release 22.0.102, https://gitlab.cern.ch/atlas/athena/-/releases/release/22.0.102
[3] I. Bird et al., 2014, Update of the Computing Models of the WLCG and the LHC Experiments, CERN-LHCC-2014-014 / LCG-TDR-002


(a) png (a) pdf
contact: RafalBielski

(b) png (b) pdf
contact: RafalBielski

(c) png (c) pdf
contact: RafalBielski
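The relation between processes and threads in the four parallelism schemes described in the captions above can be sketched in a few lines (illustrative Python, not ATLAS code; the function name is invented):

```python
def parallel_config(n_events, scheme, n_threads=1):
    """Return (n_processes, threads_per_process) for one of the schemes:

    'multiprocess' - N_E forked single-threaded workers
    'multithread'  - one process with N_E threads
    'hybrid'       - N_P = N_E / N_T forked processes with N_T threads each
    """
    if scheme == "multiprocess":
        return (n_events, 1)
    if scheme == "multithread":
        return (1, n_events)
    if scheme == "hybrid":
        if n_events % n_threads != 0:
            raise ValueError("N_E must be a multiple of N_T")
        return (n_events // n_threads, n_threads)
    raise ValueError("unknown scheme: " + scheme)

# 2022 HLT data-taking configuration: multi-process with 48 forks
print(parallel_config(48, "multiprocess"))   # (48, 1)
# Hybrid configurations shown in the plots: N_T = 4 and N_T = 8
print(parallel_config(48, "hybrid", 4))      # (12, 4)
print(parallel_config(48, "hybrid", 8))      # (6, 8)
```

In all four schemes the product of processes and threads per process equals NE, so the plots compare memory sharing and scheduling behaviour at a fixed level of event parallelism.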

Diagrams

Software Screenshots

LS2

Performance

Trigger Cost Monitoring example results in Run 3 framework Internal link - ATL-COM-DAQ-2021-053

Example of measured framework time, defined as the time a thread spends outside of scheduled algorithms while waiting for an algorithm to be dispatched. It includes input/output and control flow operations. The framework time is stable across events, with a mean value of 20 ms. The plot was created with a small 2018 data sample that was preloaded into the HLT farm and processed repeatedly.
png pdf
contact: AleksandraPoreba
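The definition in the caption above amounts to a simple subtraction: framework time is whatever part of the monitored event time window is not spent inside scheduled algorithms. A minimal sketch (illustrative, not the ATLAS cost-monitoring code):

```python
def framework_time(event_window_ms, algorithm_times_ms):
    """Time (ms) the thread spent outside scheduled algorithms,
    i.e. the monitored event time window minus all algorithm time."""
    return event_window_ms - sum(algorithm_times_ms)

# An event where algorithms ran for 80 ms of a 100 ms window:
print(framework_time(100.0, [35.0, 30.0, 15.0]))  # 20.0
```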

Trigger Cost Monitoring example results in Run 3 framework Internal link - ATL-COM-DAQ-2021-045

Example of the measured time of all algorithms executed on the thread as a fraction of the monitored event time window. In the right plot the fractional time is shown as a function of the monitored event time window. Three peaks can be observed in the plots: one where the thread spends about 60% of the total time processing algorithms, which happens for short events (approximately 100 ms), when the event data does not fulfill the requirements to trigger the execution of time-consuming algorithms. The other two peaks represent long events (approximately 300 ms and 3000 ms), in which algorithm processing takes the majority of the total time. The peaks correlate with the recorded times of algorithm execution per event. For some of the events in the data sample, algorithm processing takes 0% of the time; this happens when the event does not fulfill the requirements to trigger any algorithm execution. The rest of the time is spent in the so-called "framework time", when a thread performs framework-related operations, including input/output and control flow handling. The plots were created with a small 2018 data sample that was preloaded into the HLT farm and processed repeatedly.
left png pdf
contact: AleksandraPoreba

right png pdf
contact: AleksandraPoreba
Example of the measured framework time on one thread as a fraction of the monitored event time window. In the right plot the fractional time is shown as a function of the monitored event time window. The framework time is defined as the time a thread spends outside of scheduled algorithms while waiting for an algorithm to be dispatched; it includes input/output and control flow operations. Three peaks can be observed: one where the thread performs framework operations about 40% of the time, during the processing of short events (approximately 100 ms). The other two correspond to long events (approximately 300 ms and 3000 ms), in which the framework time takes a minor fraction of the event time window. For some of the events in the data sample the framework time takes 100%, which happens when the event does not fulfill the requirements to trigger any algorithm execution. The plots were created with a small 2018 data sample that was preloaded into the HLT farm and processed repeatedly.
left png pdf
contact: AleksandraPoreba

right png pdf
contact: AleksandraPoreba
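The two fractional quantities plotted in the captions above are complementary: for each monitored event time window, the algorithm fraction and the framework fraction sum to one. A minimal sketch of that relation (illustrative, not ATLAS code):

```python
def time_fractions(event_window_ms, algorithm_times_ms):
    """Return (algorithm_fraction, framework_fraction) of the
    monitored event time window; the two always sum to 1."""
    alg = sum(algorithm_times_ms) / event_window_ms
    return alg, 1.0 - alg

# Short event (~100 ms): algorithms ~60% of the window, framework ~40%
print(time_fractions(100.0, [40.0, 20.0]))   # (0.6, 0.4)
# Event triggering no algorithm executions: framework time is 100%
print(time_fractions(100.0, []))             # (0.0, 1.0)
```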

Diagrams

Software Screenshots

Trigger Cost Monitoring example results in Run 3 framework Internal link - ATL-COM-DAQ-2021-045

Representation of the Algorithm Summary table available on the TriggerCostBrowser website, containing details about the algorithm executions: the number of events in which the algorithm was activated, the number of algorithm calls per event, the rate of algorithm calls, the rate of events in which the algorithm was executed, the algorithm call duration and the total time of algorithm execution. Created from a reprocessing of 2018 proton-proton collision data with the latest HLT software.
png pdf
contact: AleksandraPoreba
Representation of the Chain Summary table available on the TriggerCostBrowser website, containing details about the chain executions: the groups the chain belongs to, the number of events in which the chain was activated, the chain execution rate, the number of algorithm calls the chain made and the chain duration. Created from a reprocessing of 2018 proton-proton collision data with the latest HLT software.
png pdf
contact: AleksandraPoreba
Representation of the Chain Item Summary on the TriggerCostBrowser website, which lists all algorithms related to a particular chain. In this example a jet reconstruction chain is presented. For each algorithm, its class and the number of calls it made ("AllChains calls") are displayed. Created from a reprocessing of 2018 proton-proton collision data with the latest HLT software.
png pdf
contact: AleksandraPoreba
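The summary quantities in the tables above are aggregations over individual algorithm calls. A minimal sketch of that aggregation (illustrative Python with a hypothetical record format, not the real cost-monitoring data model):

```python
from collections import defaultdict

def algorithm_summary(calls):
    """Aggregate (event_id, algorithm_name, duration_ms) call records
    into per-algorithm summary quantities like those in the table."""
    events = defaultdict(set)       # algorithm -> events it ran in
    durations = defaultdict(list)   # algorithm -> call durations (ms)
    for event_id, alg, dur in calls:
        events[alg].add(event_id)
        durations[alg].append(dur)
    return {
        alg: {
            "events_active": len(events[alg]),
            "calls": len(durations[alg]),
            "calls_per_event": len(durations[alg]) / len(events[alg]),
            "mean_duration_ms": sum(durations[alg]) / len(durations[alg]),
            "total_time_ms": sum(durations[alg]),
        }
        for alg in durations
    }

# A toy algorithm called twice in event 1 and once in event 2:
calls = [(1, "jetReco", 2.0), (1, "jetReco", 4.0), (2, "jetReco", 3.0)]
summary = algorithm_summary(calls)
print(summary["jetReco"]["calls_per_event"])   # 1.5
print(summary["jetReco"]["total_time_ms"])     # 9.0
```

The call and event rates shown in the real table would additionally divide these counts by the sampled data-taking time.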

Run2

Diagrams

Operation of the ATLAS trigger system in Run 2 JINST 15 (2020) P10004

A simplified representation of the database structure for the trigger configuration highlighting the primary keys and the relationship between a selection of tables. The primary keys (black rectangles) are associated with an index of a table (rectangles). Most of the top-level table structure is shown (red rectangles) with each of these linking to further tables (for example a trigger chain). These linked tables contain various objects (for example a chain name) and subsequent links to still further tables (for example the algorithm for the given trigger chain) as demonstrated by the blue (L1) or green (HLT) rectangles.
png pdf
contact: ATLAS TRIG conveners
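The keyed, linked structure in the diagram above can be pictured as nested lookups: a primary key selects a top-level record, which links to further tables such as chains and their algorithms. A toy sketch (all table, key and trigger names are invented for illustration and do not reflect the real TriggerDB schema):

```python
# Hypothetical miniature of the linked-table structure: a Super Master
# Key (SMK) indexes a configuration, which links to L1 and HLT tables.
trigger_db = {
    2345: {  # primary key (SMK) -> top-level configuration record
        "l1_menu": {
            "items": {101: {"name": "L1_ITEM_A"}},   # L1 item table
        },
        "hlt_menu": {
            "chains": {                               # HLT chain table
                7: {
                    "name": "HLT_chain_A",
                    "algorithms": ["AlgX", "AlgY"],   # linked algorithm table
                },
            },
        },
    },
}

def chain_algorithms(db, smk, chain_id):
    """Follow the links: SMK -> HLT menu -> chain -> algorithms."""
    return db[smk]["hlt_menu"]["chains"][chain_id]["algorithms"]

print(chain_algorithms(trigger_db, 2345, 7))  # ['AlgX', 'AlgY']
```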

Software Screenshots

Operation of the ATLAS trigger system in Run 2 JINST 15 (2020) P10004

The main panel of the TriggerTool, displaying all available Super Master Keys (SMKs) which are currently contained in the connected TriggerDB with a minimal set of information such as its creator, time of creation and software release with which it was created. In the bottom left part, some of the available prescales keys are shown. In the bottom right, a visual display of the trigger chains and their corresponding L1 items is displayed.
png pdf
contact: ATLAS TRIG conveners
The HLT prescale tab of the TriggerTool GUI with the list of available Prescale Set Keys for use with the selected Super Master Key. When a prescale set is chosen from the list, the related settings for each chain are shown in the table. Similar information for L1 items is available under the L1 prescales tab.
png pdf
contact: ATLAS TRIG conveners
The Standard (left) and Alias (right) tabs in the Trigger Panel. The following settings are displayed: the current prescales, the bunch group set and the Super Master Key (under Trigger menu), as well as the Prescale Key Sets (PSKs) to be used for various data-taking cases (Standby, Emittance, Physics). In the alias panel, for the case of physics data-taking, the alias table is displayed with the relevant luminosity range, L1 and HLT PSK and additional comment field as a suggestion of which prescale keys to use.
left png left pdf
contact: ATLAS TRIG conveners

right png right pdf
contact: ATLAS TRIG conveners
The Trigger Rate Presenter is used to monitor the rates of various triggers and streams while taking data. Shown here is an overview which includes various L1 rates and total HLT output rates, as well as rates in important physics and calibration streams during a data-taking run starting at around 15:35 and ending around 17:40.
png pdf
contact: ATLAS TRIG conveners
The Data Quality Monitoring Display (DQMD) used for the HLT. The data (black) are compared with the reference (purple) using a Kolmogorov-Smirnov test to compare the shapes of the distributions. Based on the output of the comparison test, the histograms are flagged either green (good agreement with the reference), yellow (tolerable disagreement with the reference), or red (major disagreement with the reference).
png pdf
contact: ATLAS TRIG conveners
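The comparison described above rests on the two-sample Kolmogorov-Smirnov statistic: the maximum distance between the two empirical cumulative distributions, mapped onto a traffic-light flag. A self-contained sketch (the thresholds here are invented for illustration and are not the DQMD settings):

```python
from bisect import bisect_right

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical cumulative distribution functions."""
    a, b = sorted(sample_a), sorted(sample_b)
    return max(
        abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
        for x in a + b
    )

def dq_flag(statistic, yellow=0.1, red=0.3):
    """Map the KS distance onto a flag (thresholds illustrative only)."""
    if statistic < yellow:
        return "green"
    if statistic < red:
        return "yellow"
    return "red"

print(dq_flag(ks_statistic([1, 2, 3], [1, 2, 3])))  # identical -> green
print(dq_flag(ks_statistic([0, 1], [10, 11])))      # disjoint  -> red
```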


Major updates:
-- MarkStockton - 2023-01-26
-- SavannaShaw - 2023-01-26

Responsible: WernerWiedenmann, MarkStockton, SavannaShaw
Subject: public

Topic attachments
Attachment | Size | Date | Who | Comment
ATL-COM-DAQ-2023-001-Fig-1a.pdf | 25.5 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-1a.png | 103.8 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-1b.pdf | 26.4 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-1b.png | 97.2 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-1c.pdf | 26.1 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-1c.png | 91.6 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-2a.pdf | 25.8 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-2a.png | 92.0 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-2b.pdf | 26.7 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-2b.png | 92.5 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-2c.pdf | 26.6 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
ATL-COM-DAQ-2023-001-Fig-2c.png | 87.5 K | 2023-01-26 | MarkStockton | Trigger software performance scaling
Topic revision: r2 - 2023-01-26 - MarkStockton