(a) Application throughput in events/s, (b) CPU usage (CPU time divided by wall time) in percent, and (c) memory usage in GB as a function of the number of events processed in parallel for the ATLAS Athena software [1, 2].
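The CPU usage metric in panel (b) is simply CPU time divided by wall time. A minimal Python sketch of this measurement (illustrative only; the function name and the workload are hypothetical, not the Athena measurement code):

```python
import time

def measure_cpu_usage(work):
    """Run `work` and return CPU usage in percent:
    process CPU time divided by elapsed wall time.
    A multi-threaded process can exceed 100%."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()  # CPU time summed over all threads
    work()
    cpu_time = time.process_time() - cpu_start
    wall_time = time.perf_counter() - wall_start
    return 100.0 * cpu_time / wall_time

# A single-threaded CPU-bound loop should report close to 100%.
usage = measure_cpu_usage(lambda: sum(i * i for i in range(10**6)))
print(f"CPU usage: {usage:.1f}%")
```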
Panels (a), (b) and (c) available as png and pdf. Contact: RafalBielski
(a) Application throughput in events/s, (b) CPU usage (CPU time divided by wall time) in percent, and (c) memory usage in GB as a function of the number of events processed in parallel for the ATLAS Athena software [1, 2, 3].
Panels (a), (b) and (c) available as png and pdf. Contact: RafalBielski
Example of the measured framework time, defined as the time a thread spends outside of scheduled algorithms while waiting for an algorithm to be dispatched. It includes input/output and control flow operations. For all events the framework time is stable, with a mean value of 20 ms. The plot was created with a small 2018 data sample that was preloaded into the HLT farm and processed repeatedly.
Available as png and pdf. Contact: AleksandraPoreba
Example of the measured time of all algorithms executed on a thread as a fraction of the monitored event time window. In the right plot the fractional time is shown as a function of the monitored event time window. Three peaks can be observed in the plots: one where the thread spends 60% of the total time processing algorithms, which happens for short events (approximately 100 ms) in which the event data does not fulfill the requirements to trigger the execution of time-consuming algorithms. The other two peaks represent long events (approximately 300 ms and 3000 ms) in which algorithm processing takes the majority of the total time. The peaks correlate with the recorded times of algorithm execution per event. For some of the events included in the data sample the algorithm processing takes 0% of the time, which happens when the event does not fulfill the requirements to trigger any algorithm execution. The rest of the time is spent on the so-called "framework time", during which a thread performs framework-related operations, including input/output and control flow handling. The plots were created with a small 2018 data sample that was preloaded into the HLT farm and processed repeatedly.
Left and right plots available as png and pdf. Contact: AleksandraPoreba
Example of the measured framework time on one thread as a fraction of the monitored event time window. In the right plot the fractional time is shown as a function of the monitored event time window. The framework time is defined as the time a thread spends outside of scheduled algorithms while waiting for an algorithm to be dispatched; it includes input/output and control flow operations. Three peaks can be observed: one where the thread performs framework operations for 40% of the time during the processing of short events (approximately 100 ms). The other two correspond to long events (approximately 300 ms and 3000 ms), for which the framework time takes a minor fraction of the event time window. For some of the events included in the data sample the framework time takes 100%, which happens when the event does not fulfill the requirements to trigger any algorithm execution. The plots were created with a small 2018 data sample that was preloaded into the HLT farm and processed repeatedly.
Left and right plots available as png and pdf. Contact: AleksandraPoreba
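The fractional times shown in these plots follow directly from the definitions in the captions: the framework time is whatever remains of the event time window after subtracting the time spent in scheduled algorithms. A small sketch of that bookkeeping (the helper name is hypothetical; this is not the actual monitoring code):

```python
def framework_time_fraction(event_window_ms, alg_times_ms):
    """Fraction of the monitored event time window that a thread spends
    outside scheduled algorithms (the 'framework time')."""
    alg_total = sum(alg_times_ms)
    if alg_total > event_window_ms:
        raise ValueError("algorithm time exceeds event time window")
    return (event_window_ms - alg_total) / event_window_ms

# A short event that triggered no algorithm execution: framework time is 100%.
assert framework_time_fraction(100.0, []) == 1.0

# A long event dominated by algorithm processing: 180 ms of algorithms
# in a 300 ms window leaves 40% framework time.
print(framework_time_fraction(300.0, [120.0, 60.0]))  # → 0.4
```

The algorithm-time fraction of the previous figure is simply the complement, `1 - framework_time_fraction(...)`.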
Representation of the Algorithm Summary table available on the TriggerCostBrowser website, containing details about the algorithm executions. These include the number of events in which the algorithm was activated, the number of algorithm calls per event, the rate of algorithm calls, the rate of events in which the algorithm was executed, the algorithm call duration, and the total time of algorithm execution. Created from a reprocessing of 2018 proton-proton collision data with the latest HLT software.
Available as png and pdf. Contact: AleksandraPoreba
Representation of the Chain Summary table available on the TriggerCostBrowser website, containing details about the chain executions. These include the groups the chain belongs to, the number of events in which the chain was activated, the chain execution rate, the number of algorithm calls made by the chain, and the chain duration. Created from a reprocessing of 2018 proton-proton collision data with the latest HLT software.
Available as png and pdf. Contact: AleksandraPoreba
Representation of the Chain Item Summary on the TriggerCostBrowser website, which lists all algorithms related to a particular chain. In this example a jet reconstruction chain is presented. Each algorithm displays its class and the number of calls that it made ("AllChains calls"). Created from a reprocessing of 2018 proton-proton collision data with the latest HLT software.
Available as png and pdf. Contact: AleksandraPoreba
A simplified representation of the database structure for the trigger configuration highlighting the primary keys and the relationship between a selection of tables. The primary keys (black rectangles) are associated with an index of a table (rectangles). Most of the top-level table structure is shown (red rectangles) with each of these linking to further tables (for example a trigger chain). These linked tables contain various objects (for example a chain name) and subsequent links to still further tables (for example the algorithm for the given trigger chain) as demonstrated by the blue (L1) or green (HLT) rectangles. |
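The linking described above (a top-level table indexed by a primary key, pointing to further tables that hold objects and links of their own) is the standard relational pattern. A heavily simplified sqlite sketch with hypothetical table and column names — the real TriggerDB schema differs:

```python
import sqlite3

# Hypothetical, heavily simplified tables illustrating the key/link structure.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE super_master (
    smk  INTEGER PRIMARY KEY,                       -- Super Master Key
    name TEXT
);
CREATE TABLE chain (
    chain_id   INTEGER PRIMARY KEY,
    smk        INTEGER REFERENCES super_master(smk), -- link to top-level table
    chain_name TEXT
);
CREATE TABLE chain_algorithm (
    alg_id   INTEGER PRIMARY KEY,
    chain_id INTEGER REFERENCES chain(chain_id),     -- link to the chain
    alg_name TEXT
);
""")
conn.execute("INSERT INTO super_master VALUES (1, 'Physics_pp_menu')")
conn.execute("INSERT INTO chain VALUES (10, 1, 'HLT_j45')")
conn.execute("INSERT INTO chain_algorithm VALUES (100, 10, 'JetRecoAlg')")

# Follow the links: SMK -> chain -> algorithm for the given trigger chain.
row = conn.execute("""
    SELECT a.alg_name
    FROM super_master s
    JOIN chain c           ON c.smk = s.smk
    JOIN chain_algorithm a ON a.chain_id = c.chain_id
    WHERE s.smk = 1
""").fetchone()
print(row[0])  # JetRecoAlg
```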
Available as png. Contact: ATLAS TRIG conveners
The main panel of the TriggerTool, displaying all Super Master Keys (SMKs) currently contained in the connected TriggerDB, with a minimal set of information such as the creator, the time of creation and the software release with which each key was created. In the bottom left part, some of the available prescale keys are shown. In the bottom right, a visual display of the trigger chains and their corresponding L1 items is shown.
Available as png. Contact: ATLAS TRIG conveners
The HLT prescale tab of the TriggerTool GUI with the list of available Prescale Set Keys for use with the selected Super Master Key. When a prescale set is chosen from the list, the related settings for each chain are shown in the table. Similar information for L1 items is available under the L1 prescales tab. |
Available as png. Contact: ATLAS TRIG conveners
The Standard (left) and Alias (right) tabs in the Trigger Panel. The following settings are displayed: the current prescales, the bunch group set and the Super Master Key (under Trigger menu), as well as the Prescale Key Sets (PSKs) to be used for various data-taking cases (Standby, Emittance, Physics). In the alias panel, for the case of physics data-taking, the alias table is displayed with the relevant luminosity range, the L1 and HLT PSKs and an additional comment field, as a suggestion of which prescale keys to use.
Left and right panels available as png. Contact: ATLAS TRIG conveners
The Trigger Rate Presenter is used to monitor the rates of various triggers and streams while taking data. Shown here is an overview which includes various L1 rates and total HLT output rates, as well as rates in important physics and calibration streams during a data-taking run starting at around 15:35 and ending around 17:40. |
Available as png. Contact: ATLAS TRIG conveners
The Data Quality Monitoring Display (DQMD) used for the HLT. The data (black) are compared with the reference (purple) using a Kolmogorov–Smirnov test to compare the shapes of the distributions. Based on the output of the comparison test, the histograms are flagged either green (good agreement with the reference), yellow (tolerable disagreement with the reference), or red (major disagreement with the reference).
Available as png. Contact: ATLAS TRIG conveners
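The Kolmogorov–Smirnov statistic used for such shape comparisons is the largest absolute difference between the two empirical cumulative distributions. A self-contained sketch, with illustrative flag thresholds (the actual DQMD thresholds and implementation are not specified here):

```python
from bisect import bisect_right

def ks_statistic(sample, reference):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs, evaluated at all points."""
    a, b = sorted(sample), sorted(reference)
    return max(abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
               for x in a + b)

def flag(d, yellow=0.1, red=0.3):
    """Map a KS distance to a DQ flag. Thresholds here are illustrative,
    not the DQMD configuration."""
    return "red" if d > red else "yellow" if d > yellow else "green"

data = list(range(100))
shifted = [x + 50 for x in data]        # half the range no longer overlaps
print(flag(ks_statistic(data, data)))   # green (identical shapes, d = 0)
print(flag(ks_statistic(data, shifted)))  # red (d = 0.5)
```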
| Attachment | History | Action | Size | Date | Who | Comment |
|---|---|---|---|---|---|---|
| ATL-COM-DAQ-2023-001-Fig-1a.pdf | r1 | manage | 25.5 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-1a.png | r1 | manage | 103.8 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-1b.pdf | r1 | manage | 26.4 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-1b.png | r1 | manage | 97.2 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-1c.pdf | r1 | manage | 26.1 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-1c.png | r1 | manage | 91.6 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-2a.pdf | r1 | manage | 25.8 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-2a.png | r1 | manage | 92.0 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-2b.pdf | r1 | manage | 26.7 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-2b.png | r1 | manage | 92.5 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-2c.pdf | r1 | manage | 26.6 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |
| ATL-COM-DAQ-2023-001-Fig-2c.png | r1 | manage | 87.5 K | 2023-01-26 - 15:09 | MarkStockton | Trigger software performance scaling |