
Computing and Software - Public Results

Introduction

This page contains public plots and documents related to ATLAS computing and software.

More material concerning related activities is available elsewhere.

All ATLAS public results can be found here.

For referencing the ATLAS software, please cite this PUB note. The ATLAS code repository is also public, and can be found on gitlab.

ATLAS is grateful to the sites and groups that provide significant computing resources to the collaboration. Here are our Acknowledgements (last updated July 2021).

Computing TDR and related Documents

| *Report* | *Links* | *Year* |
| ATLAS Software and Computing HL-LHC Roadmap | Plots, CDS | 2022 |
| ATLAS HL-LHC Computing Conceptual Design Report | Plots, CDS | 2020 |
| Update of Computing Models for Run-2 | CDS | 2014 |
| Computing TDR | CDS | 2005 |
| Computing Model Document | CDS | 2004 |
| Technical Proposal | CDS | 1996 |

Public Notes, Conference Proceedings and Slides

| *Type of public documents* | *Links* |
| List of ATLAS-SOFT-PUB notes | CDS |
| List of conference proceedings | CDS |
| List of conference talks | CDS |

Recent Public Plots

Some plots are contained within public documents rather than attached directly to this page:

| *Report* | *Links* | *Year* |
| ATLAS HL-LHC Software and Computing Roadmap | Plots, CDS | 2022 |
| Performance of Multi-threaded Reconstruction in ATLAS | Plots, CDS | 2021 |
| ATLAS HL-LHC Computing Conceptual Design Report | Plots, CDS | 2020 |

The dependency of the reconstruction wall time per event on the average number of interactions per bunch crossing (<μ>) is shown for the current Inner Detector reconstruction with default tracking cuts. The plot contains a selection of reconstructed luminosity blocks of RAW data from 13 TeV pp LHC collisions in 2017. An ATLAS luminosity block typically corresponds to one minute of data-taking. Tier-0 reconstruction jobs were required to run in single-core mode on a selected sub-cluster of 16-core machines (Intel(R) Xeon(R) CPU E5-2630 v3 at 2.40 GHz, 4 GB of memory per core, 21 HS06 per core with hyper-threading off). The typical collision runs (blue scatter plot) can only be qualitatively compared with the performance in the high-μ run 335302 (red boxes), which had special data-taking conditions. Furthermore, the high-μ run jobs were configured to produce only AOD outputs, whereas standard jobs produced 12 additional output types (different DRAW, DESD, DAOD and HIST formats), which take extra processing time. The behaviour of the tracking reconstruction, which dominates the CPU use at high pileup, under High-Luminosity LHC conditions with an upgraded tracking detector has been studied in reference [1].

[1] ATLAS Collaboration, "Technical Design Report for the ATLAS ITk Pixel Detector", CERN-LHCC-2017-021, ATLAS-TDR-030, Geneva, 2018.
Figure: reco_WtPerEvent_NOfit_revised.png
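The quantity plotted is essentially a per-luminosity-block average of the per-event wall time. As a rough illustration only, the following Python sketch (with invented numbers, not ATLAS data) groups hypothetical per-event wall times by <μ> bin:

```python
# Minimal sketch (not ATLAS code) of the quantity plotted above: mean
# reconstruction wall time per event, grouped by the average number of
# interactions per bunch crossing <mu> of each luminosity block.
# The (mu, wall-time) records below are invented for illustration.
from collections import defaultdict

lumi_blocks = [  # (<mu>, wall time per event in seconds), hypothetical
    (30.2, 12.1), (30.4, 11.8), (50.1, 31.0), (50.3, 29.5), (80.0, 88.0),
]

by_mu_bin = defaultdict(list)
for mu, wall in lumi_blocks:
    by_mu_bin[round(mu)].append(wall)

for mu_bin in sorted(by_mu_bin):
    times = by_mu_bin[mu_bin]
    print(f"<mu> ~ {mu_bin}: mean wall time {sum(times)/len(times):.1f} s/event")
```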
Digitization time per event, in HepSpec06 (HS06) seconds, as a function of the average number of interactions per bunch crossing, with 25 ns bunch spacing. A linear fit to the times is overlaid. On a modern CPU, one second of wall-clock time corresponds to about 10 HepSpec06 seconds.
Figure: icpuVSmu_all.png
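The caption's bookkeeping can be made concrete with a short sketch. Only the factor of roughly 10 HS06 seconds per wall-clock second is taken from the caption; all other numbers below are invented:

```python
# Hedged sketch: express digitization wall time in HS06 seconds and
# fit it linearly against <mu>, as described in the caption.
import numpy as np

HS06_PER_WALL_SECOND = 10.0  # approximate factor quoted in the caption

mu = np.array([10.0, 20.0, 30.0, 40.0])   # average interactions per crossing
wall_s = np.array([1.1, 2.0, 2.9, 3.8])   # s/event, hypothetical

hs06_s = wall_s * HS06_PER_WALL_SECOND    # digitization time in HS06 seconds
slope, intercept = np.polyfit(mu, hs06_s, deg=1)
print(f"linear fit: {slope:.2f} HS06 s per unit <mu>, intercept {intercept:.2f}")
```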
The total reconstruction time per event is shown for a top-quark Monte Carlo simulation sample with 40 pileup interactions at 13 TeV and 25 ns bunch spacing. An overall improvement of a factor of 3 is visible when comparing the 2012 Tier-0 release (17.2.7.9), release 19.0.3.3, which is optimised for reconstruction of the Run-1 data, and release 19.1.1.1, which is optimised for reconstructing Run-2 data. The CPU time is also shown separately for the Inner Detector reconstruction, as tracking dominates the total resource needs. The simulation uses a Run-1 detector geometry without the IBL. The HS06 scaling factor for the machine used in this study is quoted as 11.95. This is the updated comparison of the CPU time for reconstructing top-pair events with Run-2 pileup in different releases, including the MC15 production release (candidate), showing a speedup factor exceeding 4.
Figure: id_evtloop_cpu_time-CHEP2015.png
Event-wise fractional overlaps between derivations built from the muon stream, run 203875 (2012), using 5000 input events. Each cell of the plot displays the fraction of events accepted in the first format that are also accepted in the second. A higher number indicates that more events are shared between the two formats. Since the different formats contain very different numbers of events, a cell indicating the overlap of format A with format B may not have the same value as its counterpart in the other half of the square representing the overlap of B with A. It should also be noted that these plots cover only event-wise overlaps: overlaps in variables are not displayed. Hence, it is possible that a pair of formats may be fully correlated in terms of the events selected, but may contain orthogonal sets of variables, in which case no information is shared. Finally, it can clearly be seen that overlaps vary strongly with the trigger stream producing the events.
Figure: muonOverlaps-CHEP2015.png
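Because each cell is normalised to the number of events accepted by the first (row) format, the matrix is asymmetric. A minimal Python sketch with invented event IDs and format names illustrates the computation:

```python
# Sketch of the event-wise overlap matrix described above.
# overlap[A][B] = fraction of events accepted by format A that are also
# accepted by format B. Format names and event IDs are invented.
formats = {
    "DAOD_A": {1, 2, 3, 4, 5, 6, 7, 8},
    "DAOD_B": {5, 6, 7, 8},
    "DAOD_C": {8, 9},
}

overlap = {
    a: {b: len(ev_a & ev_b) / len(ev_a) for b, ev_b in formats.items()}
    for a, ev_a in formats.items()
}

print(overlap["DAOD_A"]["DAOD_B"])  # 0.5 (half of A's events are also in B)
print(overlap["DAOD_B"]["DAOD_A"])  # 1.0 (all of B's events are in A)
```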
Event-wise fractional overlaps between derivations built from the e-gamma stream, produced from run 203875 (2012) using 5000 input events. Each cell of the plot displays the fraction of events accepted in the first format that are also accepted in the second. A higher number indicates that more events are shared between the two formats. Since the different formats contain very different numbers of events, a cell indicating the overlap of format A with format B may not have the same value as its counterpart in the other half of the square representing the overlap of B with A. It should also be noted that these plots cover only event-wise overlaps: overlaps in variables are not displayed. Hence, it is possible that a pair of formats may be fully correlated in terms of the events selected, but may contain orthogonal sets of variables, in which case no information is shared. Finally, it can clearly be seen that overlaps vary strongly with the trigger stream producing the events.
Figure: egammaOverlaps-CHEP2015.png
Event-wise fractional overlaps between derivations built from the jet stream, run 203875 (2012), using 5000 input events. Each cell of the plot displays the fraction of events accepted in the first format that are also accepted in the second. A higher number indicates that more events are shared between the two formats. Since the different formats contain very different numbers of events, a cell indicating the overlap of format A with format B may not have the same value as its counterpart in the other half of the square representing the overlap of B with A. It should also be noted that these plots cover only event-wise overlaps: overlaps in variables are not displayed. Hence, it is possible that a pair of formats may be fully correlated in terms of the events selected, but may contain orthogonal sets of variables, in which case no information is shared. Finally, it can clearly be seen that overlaps vary strongly with the trigger stream producing the events.
Figure: jetsOverlaps-CHEP2015.png
Memory profile of ATLAS MC digitization and reconstruction jobs, comparing the total RSS of 8 serial jobs to the RSS of one AthenaMP job with 8 worker processes. Memory savings at the reconstruction step of this particular job are ~45%.
Figure: AthenaMP-vs-Serial-19.1.1.5-pileup-CHEP2015.png
Schematic view of AthenaMP.
Figure: Atlas-AthenaMP-Schematic-CHEP2015.png
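AthenaMP obtains its memory savings by forking worker processes after initialization, so that large read-only structures are shared copy-on-write rather than duplicated. The following is a minimal illustration of that general pattern in plain Python; it is not ATLAS code, the names are invented, and the fork start method is POSIX-only:

```python
# Fork-after-initialization sketch: data allocated before the fork is
# shared copy-on-write between the workers, so 8 workers need far less
# memory than 8 independent serial jobs.
import multiprocessing as mp
import os

GEOMETRY = bytes(200 * 1024 * 1024)  # stand-in for large read-only conditions data

def worker(event_range):
    # Workers inherit GEOMETRY via copy-on-write; reading it does not
    # trigger page copies, only writes would.
    assert GEOMETRY[0] == 0
    return (os.getpid(), sum(event_range) % 97)  # toy "event processing"

if __name__ == "__main__":
    ctx = mp.get_context("fork")  # fork start method preserves COW sharing
    with ctx.Pool(processes=8) as pool:
        results = pool.map(worker, [range(i, i + 100) for i in range(0, 800, 100)])
    print(results)
```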
Yoda scaling with the number of parallel processors (cores). The plot shows how the event throughput of ATLAS Geant4 simulation scales with the number of parallel processors (cores) when running within the Yoda system on the Edison HPC at NERSC (Berkeley). The scalability is already quite good, although there is certainly room for improvement, which will be investigated in the coming months.
Figure: ATLAS-Yoda-Sim-Throughput-CHEP2015.png
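Scaling plots like this one are commonly summarised by the parallel efficiency, throughput(N) / (N × throughput(1)). A toy calculation with invented throughput numbers:

```python
# Parallel efficiency for throughput-vs-cores scaling. The throughput
# values below are hypothetical, chosen only to illustrate the formula.
throughput = {1: 1.0, 8: 7.6, 64: 58.0, 512: 430.0}  # events/s (invented)

for cores, rate in throughput.items():
    efficiency = rate / (cores * throughput[1])
    print(f"{cores:4d} cores: efficiency = {efficiency:.2f}")
```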
Size of DxAOD (derivation) datasets as a fraction of the size of the parent xAOD datasets, evaluated for all derivation types across all runs in period B, for the three physics streams. Each entry in the histogram represents a single derived dataset, with the value equal to the size of the dataset divided by the size of its parent (input) dataset. There are a total of 65 formats, three streams and more than 100 runs, leading to several thousand individual datasets.
Figure: size-CHEP2015.png
Fraction of total input events written into the DxAOD (derivation) datasets, evaluated for all derivation types across all runs in period B, for the three physics streams. Each entry in the histogram represents a single derived dataset, with the value equal to the number of selected events in the dataset divided by the number of input events. There are a total of 65 formats, three streams and more than 100 runs, leading to several thousand individual datasets.
Figure: skim-CHEP2015.png
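Both of these histograms are filled from simple per-dataset ratios. A sketch with invented dataset records shows the two quantities side by side:

```python
# Per-dataset quantities entering the two histograms above:
#   size fraction = size(DxAOD) / size(parent xAOD)
#   skim fraction = events(DxAOD) / events(parent xAOD)
# The dataset records below are invented for illustration.
datasets = [
    {"name": "DAOD_X.run1", "size_gb": 1.2, "events": 5.0e4,
     "parent_size_gb": 60.0, "parent_events": 1.0e6},
    {"name": "DAOD_Y.run1", "size_gb": 6.0, "events": 4.0e5,
     "parent_size_gb": 60.0, "parent_events": 1.0e6},
]

for d in datasets:
    size_frac = d["size_gb"] / d["parent_size_gb"]
    skim_frac = d["events"] / d["parent_events"]
    print(f'{d["name"]}: size fraction {size_frac:.3f}, skim fraction {skim_frac:.3f}')
```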
The rate of new data transformations added to the ATLAS production system.
Figure: transformations.png

Monthly rate of task requests submitted to the ATLAS production system.
Figure: tasks.png

Comparison of monthly rates of task requests in the ATLAS production systems ProdSys1 and ProdSys2.
Figure: comparison.png
Comparison of the energy loss distributions for 1 GeV single muon tracks in the ATLAS Pixel and SCT Detectors for full simulation (based on the Geant4 toolkit) and FATRAS simulation.
Figure: Muon_1GeV_DeltaP_2.png
Comparison of the energy loss as a function of η for 1 GeV single muon tracks in the ATLAS Pixel and SCT Detectors for full simulation (based on the Geant4 toolkit) and FATRAS simulation.
Figure: Muon_1GeV_Eta_DeltaEProfile_3.png
Comparison of the hit distributions of single muon tracks in the ATLAS Pixel and SCT Detectors, using the FATRAS tracking geometry built from GeoModel and from an XML configuration file.
Figure: myplotRZ.png
Energy fraction deposited in the third layer of the Hadronic Endcap calorimeter by charged pions. The black points show the Geant4 inputs, and the result of the longitudinal energy parametrisation is shown in light blue. Good agreement is observed. The results of a Kolmogorov-Smirnov (KS) and a χ² test are displayed as well.
Figure: /FCS_pions_layer10_prelim.png
Total cell energy deposited in the calorimeter by photons. The black points show the Geant4 inputs, and the result of the longitudinal energy parametrisation is shown in light blue. Good agreement is observed. The results of a Kolmogorov-Smirnov (KS) and a χ² test are displayed as well.
Figure: /FCS_photons_totalE_prelim.png
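The KS and χ² figures quoted on these plots compare the parametrised distribution against the Geant4 reference. A hedged sketch of that style of comparison follows, using toy samples and a standard two-histogram χ² definition; the exact ATLAS procedure is not specified here, so this is an assumption:

```python
# Toy comparison of a "Geant4" sample and a "parametrisation" sample
# with a two-sample KS test and a binned two-histogram chi-square,
# chi2 = sum (n1 - n2)^2 / (n1 + n2). All inputs are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
geant4 = rng.normal(50.0, 5.0, size=10_000)  # toy "Geant4" energies [GeV]
param = rng.normal(50.1, 5.1, size=10_000)   # toy "parametrisation" energies

# Unbinned two-sample Kolmogorov-Smirnov test
ks_stat, ks_p = stats.ks_2samp(geant4, param)

# Binned two-histogram chi-square
edges = np.linspace(30.0, 70.0, 41)
n1, _ = np.histogram(geant4, bins=edges)
n2, _ = np.histogram(param, bins=edges)
mask = (n1 + n2) > 0
chi2 = np.sum((n1[mask] - n2[mask]) ** 2 / (n1[mask] + n2[mask]))
ndf = mask.sum() - 1

print(f"KS: stat={ks_stat:.4f}, p={ks_p:.3f}")
print(f"chi2/ndf = {chi2:.1f}/{ndf}")
```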
The ratio of the FastCaloSim energy profile to the reconstructed cell energy profile, as a function of the distance between the cell centre and the pion entrance position in the calorimeter, Δη(π,cell) and Δφ(π,cell), for the original hit-cell assignment with the simplified geometry. The bias in φ due to the incorrect description of the accordion shape of the calorimeter in the simplified geometry is greatly reduced when using the hit displacement method.
Figure: Closure_noWiggle.png
The ratio of the FastCaloSim energy profile to the reconstructed cell energy profile, as a function of the distance between the cell centre and the pion entrance position in the calorimeter, Δη(π,cell) and Δφ(π,cell), for the modified hit-cell assignment using the wiggle hit displacement method (b). The bias in φ due to the incorrect description of the accordion shape of the calorimeter in the simplified geometry is greatly reduced when using the hit displacement method.
Figure: Closure_withWiggle.png
Illustration of the energy normalized per bin area used as input to the NN fit. This example is for 50 GeV central (0.20 < |η| < 0.25) pions in the EMB1 layer and corresponds to events included in the first bin of the PCA energy parameterisation.
Figure: NNeur4_Lay1_E50000_eta0.20_PID211_reference_polar.png
Illustration of the output of the NN parametrisation corresponding to the input of Fig. 9. This example is for 50 GeV central (0.20 < |η| < 0.25) pions in the EMB1 layer and corresponds to events included in the first bin of the PCA energy parameterisation.
Figure: NNeur4_Lay1_E50000_eta0.20_PID211_NNoutput_polar.png
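Normalising energy per bin area on a polar grid amounts to dividing each bin's energy by the area of its annular sector. A sketch with an invented binning (the real FastCaloSim binning is not reproduced here):

```python
# "Energy normalised per bin area" on a polar (r, alpha) grid.
# Bin edges and energies are invented for illustration.
import numpy as np

r_edges = np.array([0.0, 1.0, 4.0, 16.0, 60.0])  # radial edges, hypothetical units
alpha_edges = np.linspace(-np.pi, np.pi, 9)      # 8 angular bins

energy = np.random.default_rng(0).random((len(r_edges) - 1, len(alpha_edges) - 1))

# Area of an annular sector: 0.5 * (r2^2 - r1^2) * dalpha
r1, r2 = r_edges[:-1, None], r_edges[1:, None]
dalpha = np.diff(alpha_edges)[None, :]
area = 0.5 * (r2**2 - r1**2) * dalpha

energy_density = energy / area  # the quantity fed to the NN fit
print(energy_density.shape)     # (4, 8)
```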


