
Computing and Software - Public Results

Introduction

This page collects public results and material concerning ATLAS Computing and Software.

Computing TDR and related Documents

Report | Link | Year
Update of Computing Models for Run-2 | CDS | 2014
Computing TDR | CDS | 2005
Computing Model Document | CDS | 2004
Technical Proposal | CDS | 1996

Public Notes, Conference Proceedings and Slides

Type of public document | Link
List of ATLAS-SOFT-PUB notes | CDS
List of conference proceedings | CDS
List of conference talks | CDS

Recent Public Plots

Estimated total disk resources (in PBytes) needed for the years 2018 to 2032 for both data and simulation processing. The plot updates the projection made in 2017 (which was based on the Run-2 computing model) with updated expected LHC running conditions. The blue points show the improvements possible in two different scenarios, which require significant development work: (1) top curve, a reduction of AOD and DAOD sizes by 30% compared to the Run-2 trend; (2) bottom curve, a further reduction achieved by introducing a common DAOD format to be used by most analyses, removing previous-year AODs from disk, and storing only one DAOD version. The solid line shows the amount of resources expected to be available in a flat funding scenario, which implies an increase of 15% per year based on current technology trends. diskHLLHC_noold.png
pdf-file, png-file
Estimated total disk resources (in PBytes) needed for the years 2018 to 2032 for both data and simulation processing. The plot updates the projection made in 2017 (which was based on the Run-2 computing model) with updated expected LHC running conditions. The brown points are estimates made in 2017, based on the event sizes and ATLAS computing model parameters of that time. The blue points show the improvements possible in two different scenarios, which require significant development work: (1) top curve, a reduction of AOD and DAOD sizes by 30% compared to the Run-2 trend; (2) bottom curve, a further reduction achieved by introducing a common DAOD format to be used by most analyses, removing previous-year AODs from disk, and storing only one DAOD version. The solid line shows the amount of resources expected to be available in a flat funding scenario, which implies an increase of 15% per year based on current technology trends. diskHLLHC_18.png
pdf-file, png-file
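The flat-funding lines in the two plots above follow from simple compound growth: a constant budget buys about 15% more disk (and, for the CPU plots below, about 20% more CPU) each year at fixed cost. A minimal sketch of that extrapolation; the starting capacity is an invented placeholder, only the growth rates come from the captions:

```python
# Sketch of the flat-funding extrapolation used for the solid lines above.
# The 2018 starting capacity (400 PB) is a placeholder; only the growth
# rates (15%/year for disk, 20%/year for CPU) come from the captions.

def flat_funding_capacity(start_value, start_year, end_year, annual_growth):
    """Return {year: capacity}, assuming a constant budget buys
    `annual_growth` more capacity each year at fixed cost."""
    return {year: start_value * (1.0 + annual_growth) ** (year - start_year)
            for year in range(start_year, end_year + 1)}

# Example: disk capacity in PB from an assumed 400 PB in 2018, +15%/year.
disk = flat_funding_capacity(400.0, 2018, 2032, 0.15)
print(f"2028: {disk[2028]:.0f} PB, 2032: {disk[2032]:.0f} PB")
```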
Fraction of disk resources needed in 2028 at the end of Run-4 for different data types. This plot assumes a 30% reduction in the size of AOD and DAOD. disk2028_baseline.png
pdf-file, png-file
Fraction of disk resources needed in 2028 at the end of Run-4 for different data types. This plot assumes a further reduction achieved by introducing a common DAOD format to be used by most analyses, removing previous-year AODs from disk, and storing only one DAOD version. disk2028_reduced.png
pdf-file, png-file
Estimated CPU resources (in MHS06) needed for the years 2018 to 2032 for both data and simulation processing. The plot updates the projection made in 2017 (which was based on the Run-2 computing model) with updated LHC running conditions and revised scenarios for future computing models. The blue points show the improvements possible in three different scenarios, which require significant development work: (1) top curve, with fast calorimeter simulation used for 75% of the Monte Carlo simulation; (2) middle curve, using in addition a faster version of reconstruction, seeded by the event generator information; (3) bottom curve, where the time spent in event generation is halved, either by software improvements or by re-using some of the events. The solid line shows the amount of resources expected to be available in a flat funding scenario, which implies an increase of 20% per year based on current technology trends. cpuHLLHC_noold.png
pdf-file, png-file
Estimated CPU resources (in MHS06) needed for the years 2018 to 2032 for both data and simulation processing. The plot updates the projection made in 2017 (which was based on the Run-2 computing model) with updated LHC running conditions and revised scenarios for future computing models. The brown points are estimates made in 2017, based on the software performance estimates and ATLAS computing model parameters of that time. The blue points show the improvements possible in three different scenarios, which require significant development work: (1) top curve, with fast calorimeter simulation used for 75% of the Monte Carlo simulation; (2) middle curve, using in addition a faster version of reconstruction, seeded by the event generator information; (3) bottom curve, where the time spent in event generation is halved, either by software improvements or by re-using some of the events. The solid line shows the amount of resources expected to be available in a flat funding scenario, which implies an increase of 20% per year based on current technology trends. cpuHLLHC_18.png
pdf-file, png-file
Fraction of CPU resources needed in 2028 at the end of Run-4 for different processing workflows. The “MC-Full” section in green shows the fraction of time spent on the full AtlasG4 simulation, divided into a simulation part “(Sim)” for the Geant4 simulation and a reconstruction part “(Rec)” accounting for the time spent reconstructing the events. Similarly, the “MC-Fast” section in red shows this breakdown for the time spent running the fast calorimeter simulation. This plot assumes fast calorimeter simulation for 75% of the Monte Carlo simulation and standard reconstruction. cpu2028.png
pdf-file, png-file
Fraction of CPU resources needed in 2028 at the end of Run-4 for different processing workflows. The “MC-Full” section in green shows the fraction of time spent on the full AtlasG4 simulation, divided into a simulation part “(Sim)” for the Geant4 simulation and a reconstruction part “(Rec)” accounting for the time spent reconstructing the events. Similarly, the “MC-Fast” section in red shows this breakdown for the time spent running the fast calorimeter simulation. This plot assumes, in addition, a faster version of reconstruction seeded by the event generator information, and event generation sped up by a factor of two. cpu2028_fast.png
pdf-file, png-file
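The three CPU scenarios above amount to a weighted average of per-event costs over workflows. The sketch below illustrates the bookkeeping; all absolute HS06·s values are invented placeholders, and the factor-two speed-up of the seeded reconstruction is an assumption for illustration (the captions only say "faster"). Only the 75% fast-simulation fraction and the halved event generation come from the captions:

```python
# Illustrative per-event CPU costs in HS06*s; all absolute numbers are
# placeholders. Only the scenario parameters (75% fast simulation, event
# generation halved) follow the captions above.
t_full_sim, t_fast_sim = 400.0, 40.0   # Geant4 vs fast calorimeter simulation
t_reco, t_evgen = 150.0, 100.0         # reconstruction and event generation

def cpu_per_mc_event(fast_fraction, fast_reco=False, evgen_speedup=1.0):
    """Weighted per-event CPU cost for a given computing-model scenario."""
    sim = fast_fraction * t_fast_sim + (1.0 - fast_fraction) * t_full_sim
    reco = t_reco * (0.5 if fast_reco else 1.0)  # assumed 2x faster seeded reco
    return sim + reco + t_evgen / evgen_speedup

baseline = cpu_per_mc_event(0.0)
for label, cost in [
    ("(1) 75% fast sim", cpu_per_mc_event(0.75)),
    ("(2) + seeded reconstruction", cpu_per_mc_event(0.75, fast_reco=True)),
    ("(3) + evgen halved", cpu_per_mc_event(0.75, fast_reco=True, evgen_speedup=2.0)),
]:
    print(f"{label}: {cost:.0f} HS06*s ({cost / baseline:.0%} of baseline)")
```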
OBSOLETE: Estimated total disk resources (in PBytes) needed for the years 2018 to 2028 for both data and simulation processing. The blue points are estimates based on the event-size estimates and ATLAS computing model parameters from 2017. The solid line shows the amount of resources expected to be available in a flat funding scenario, which implies an increase of 15% per year based on current technology trends. diskHLLHC.png
pdf-file, png-file
OBSOLETE: Estimated CPU resources (in kHS06) needed for the years 2018 to 2028 for both data and simulation processing. The blue points are estimates based on the software performance estimates and ATLAS computing model parameters from 2017. The solid line shows the amount of resources expected to be available in a flat funding scenario, which implies an increase of 20% per year based on current technology trends. cpuHLLHC.png
pdf-file, png-file
The dependence of the reconstruction wall time per event on the average number of interactions per bunch crossing (<μ>) is shown for the current Inner Detector reconstruction with default tracking cuts. The plot contains a selection of reconstructed luminosity blocks of RAW data from 13 TeV pp LHC collisions in 2017. An ATLAS luminosity block typically corresponds to one minute of data-taking. Tier-0 reconstruction jobs were required to run in single-core mode on a selected sub-cluster of 16-core machines (Intel Xeon E5-2630 v3 CPUs at 2.40 GHz clock speed, 4 GB of memory per core, 21 HS06/core with hyper-threading off). The typical collision runs (blue scatter plot) can only be qualitatively compared with the performance in the high-μ run 335302 (red boxes), which had special data-taking conditions. Furthermore, the high-μ run jobs were configured to produce only AOD outputs, whereas standard jobs produced 12 additional output types (various DRAW, DESD, DAOD and HIST), which takes extra processing time. The behaviour of the tracking reconstruction, which dominates the CPU use at high pileup, under High-Luminosity LHC conditions with an upgraded tracking detector has been studied in reference [1].

[1] "Technical Design Report for the ATLAS ITk Pixel Detector", ATLAS Collaboration, CERN-LHCC-2017-021, ATLAS-TDR-030, Geneva 2018.
reco_WtPerEvent_NOfit_revised.png
pdf-file, png-file
Digitization time per event, in HepSpec06 (HS06) seconds, as a function of the average number of interactions per bunch crossing, with 25 ns bunch spacing. A linear fit to the times is overlaid. On a modern CPU, one second of wall clock time corresponds to about 10 HS06 seconds. cpuVSmu_all.png
pdf-file, png-file
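The linear fit overlaid on this plot, and the quoted conversion of roughly 10 HS06 seconds per wall-clock second, can be reproduced in a few lines; the (μ, time) points below are invented for illustration:

```python
import numpy as np

# Invented (mu, digitization time in HS06*s) points, for illustration only.
mu = np.array([10, 20, 30, 40, 50, 60], dtype=float)
t_hs06 = np.array([110, 205, 310, 400, 510, 600], dtype=float)

# Linear fit, as overlaid on the plot.
slope, intercept = np.polyfit(mu, t_hs06, 1)
print(f"fit: t = {slope:.1f} * mu + {intercept:.1f} HS06*s")

# Rough wall-clock estimate using the ~10 HS06*s per wall-second quoted above.
print(f"at mu=40: ~{(slope * 40 + intercept) / 10:.0f} s wall clock")
```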
This figure shows the total reconstruction time per event for a top-quark Monte Carlo sample with a pileup of 40, at 13 TeV with 25 ns bunch spacing. An overall improvement of a factor of 3 is visible when comparing the 2012 Tier-0 release (17.2.7.9), release 19.0.3.3 (optimised for reconstructing Run-1 data) and release 19.1.1.1 (optimised for reconstructing Run-2 data). The CPU time is also shown separately for the Inner Detector reconstruction, as tracking dominates the total resource needs. The simulation uses a Run-1 detector geometry without the IBL. The HS06 scaling factor for the machine used in this study is quoted as 11.95. This is the updated comparison of the CPU time for reconstructing top-pair events with Run-2 pileup in different releases, including the MC15 production release (candidate), and shows a speedup factor exceeding 4. id_evtloop_cpu_time-CHEP2015.png
pdf-file, png-file
Event-wise fractional overlaps between derivations built from the muon stream, run 203875 (2012) using 5000 input events. Each cell of the plot displays the fraction of events accepted in the first format that are also accepted in the second. A higher number indicates that more events are shared between the two formats. Since the different formats contain very different numbers of events, a cell indicating the overlap of format A with format B may not have the same value as its counterpart in the other half of the square representing the overlap of B with A. Also it should be noted that these plots cover only event-wise overlaps: overlaps in variables are not displayed. Hence, it is possible that a pair of formats may be fully correlated in terms of the events selected, but may contain orthogonal sets of variables - in which case no information is shared. Finally, it can clearly be seen that overlaps vary strongly with the trigger stream producing the events. muonOverlaps-CHEP2015.png
pdf-file, png-file
Event-wise fractional overlaps between derivations built from the e-gamma stream, produced from run 203875 (2012) using 5000 input events. Each cell of the plot displays the fraction of events accepted in the first format that are also accepted in the second. A higher number indicates that more events are shared between the two formats. Since the different formats contain very different numbers of events, a cell indicating the overlap of format A with format B may not have the same value as its counterpart in the other half of the square representing the overlap of B with A. Also it should be noted that these plots cover only event-wise overlaps: overlaps in variables are not displayed. Hence, it is possible that a pair of formats may be fully correlated in terms of the events selected, but may contain orthogonal sets of variables - in which case no information is shared. Finally, it can clearly be seen that overlaps vary strongly with the trigger stream producing the events. egammaOverlaps-CHEP2015.png
pdf-file, png-file
Event-wise fractional overlaps between derivations built from the jet stream, run 203875 (2012) using 5000 input events. Each cell of the plot displays the fraction of events accepted in the first format that are also accepted in the second. A higher number indicates that more events are shared between the two formats. Since the different formats contain very different numbers of events, a cell indicating the overlap of format A with format B may not have the same value as its counterpart in the other half of the square representing the overlap of B with A. Also it should be noted that these plots cover only event-wise overlaps: overlaps in variables are not displayed. Hence, it is possible that a pair of formats may be fully correlated in terms of the events selected, but may contain orthogonal sets of variables - in which case no information is shared. Finally, it can clearly be seen that overlaps vary strongly with the trigger stream producing the events. jetsOverlaps-CHEP2015.png
pdf-file, png-file
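Each cell of the three overlap matrices above reduces to a set computation per ordered pair of formats: the fraction of events accepted by the first format that are also accepted by the second. A minimal sketch, assuming each format is represented by the set of event numbers it accepts (the format names and event sets are placeholders):

```python
# Each derivation format is represented by the set of event numbers it
# accepts; these sets are placeholders standing in for the real selections.
accepted = {
    "DAOD_A": {1, 2, 3, 5, 8},
    "DAOD_B": {2, 3, 5, 7},
    "DAOD_C": {8, 9},
}

def overlap_fraction(a, b):
    """Fraction of events accepted by format a that are also in format b.
    Note the asymmetry: overlap(a, b) != overlap(b, a) in general, because
    the formats contain different numbers of events."""
    return len(accepted[a] & accepted[b]) / len(accepted[a])

for a in accepted:
    for b in accepted:
        print(f"{a} vs {b}: {overlap_fraction(a, b):.2f}")
```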
Memory profile of ATLAS MC digitization and reconstruction jobs comparing total RSS of 8 serial jobs to RSS of one AthenaMP job with 8 worker processes. Memory savings at the reconstruction step of this particular job are ~45%. AthenaMP-vs-Serial-19.1.1.5-pileup-CHEP2015.png
pdf-file, png-file
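The quoted ~45% saving is one minus the ratio of the AthenaMP job's RSS to the summed RSS of the equivalent serial jobs. A toy check with placeholder numbers:

```python
# Placeholder RSS values in GB; only the comparison method reflects the plot.
rss_serial_job = 2.0            # one serial reconstruction job
rss_athenamp_8_workers = 8.8    # one AthenaMP job with 8 worker processes

saving = 1.0 - rss_athenamp_8_workers / (8 * rss_serial_job)
print(f"memory saving: {saving:.0%}")   # ~45% with these placeholder numbers
```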
AthenaMP schematic view Atlas-AthenaMP-Schematic-CHEP2015.png
pdf-file, png-file
Yoda scaling with the number of parallel processors (cores). The plot shows how the event throughput of ATLAS G4 simulation scales with the number of parallel processors (cores) when running within the Yoda system on the Edison HPC at NERSC (Berkeley). The scalability is already quite good, although there is certainly room for improvement, which will be investigated in the coming months. ATLAS-Yoda-Sim-Throughput-CHEP2015.png
pdf-file, png-file
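Scaling quality in plots like this one is commonly summarised as parallel efficiency: the measured throughput divided by a linear extrapolation from the smallest core count. A sketch with invented throughput numbers:

```python
# Invented (cores, events/s) points; only the efficiency definition matters.
throughput = {24: 10.0, 240: 92.0, 2400: 820.0, 12000: 3600.0}

base_cores = min(throughput)
per_core = throughput[base_cores] / base_cores  # ideal per-core rate
for cores, rate in sorted(throughput.items()):
    efficiency = rate / (per_core * cores)
    print(f"{cores:6d} cores: {efficiency:.0%} of ideal linear scaling")
```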
Size of DxAOD (derivation) datasets as a fraction of the size of the parent xAOD datasets, evaluated for all derivation types across all runs in period B, for the three physics streams. Each entry in the histogram represents a single derived dataset, with the value being equal to the size of the dataset divided by the size of its parent (input) dataset. There are a total of 65 formats, three streams and more than 100 runs, leading to several thousand individual datasets. size-CHEP2015.png
pdf-file, png-file
Fraction of total input events written into the DxAOD (derivation) datasets, evaluated for all derivation types across all runs in period B, for the three physics streams. Each entry in the histogram represents a single derived dataset, with the value being equal to the number of selected events in the dataset divided by the number of input events. There are a total of 65 formats, three streams and more than 100 runs, leading to several thousand individual datasets. skim-CHEP2015.png
pdf-file, png-file
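Both histograms (size fraction and skim fraction) are filled with one entry per derived dataset, each entry being a simple ratio to the parent dataset: bytes for the first plot, selected events for the second. A sketch with placeholder records:

```python
# Placeholder records, one per derived dataset, with its parent's numbers
# (sizes in GB, event counts as integers).
datasets = [
    {"size": 1.2, "parent_size": 60.0, "events": 900,  "parent_events": 5000},
    {"size": 4.5, "parent_size": 60.0, "events": 4200, "parent_events": 5000},
]

# One histogram entry per dataset: size ratio and event (skim) ratio.
size_fractions = [d["size"] / d["parent_size"] for d in datasets]
skim_fractions = [d["events"] / d["parent_events"] for d in datasets]
print(size_fractions, skim_fractions)
```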
The rate of new data transformations added to the ATLAS production system. transformations.png
pdf-file, png-file
Monthly rate of task requests submitted to the ATLAS production system. tasks.png
pdf-file, png-file
Comparison of monthly rates of task requests in the ATLAS production systems ProdSys1 and ProdSys2. comparison.png
pdf-file, png-file
Comparison of the energy loss distributions for 1 GeV single muon tracks in the ATLAS Pixel and SCT Detectors for full simulation (based on the Geant4 toolkit) and FATRAS simulation. Muon_1GeV_DeltaP_2.png
pdf-file, png-file
Comparison of the energy loss as a function of eta for 1 GeV single muon tracks in the ATLAS Pixel and SCT Detectors, for full simulation (based on the Geant4 toolkit) and FATRAS simulation. Muon_1GeV_Eta_DeltaEProfile_3.png
pdf-file, png-file
Comparison of the hit distributions of single muon tracks in the ATLAS Pixel and SCT Detectors, using the FATRAS tracking geometry built from GeoModel and from an XML configuration file. myplotRZ.png
pdf-file, png-file

FCS_pions_layer10.png
Fig. 3: Energy fraction deposited in the third layer of the Hadronic Endcap calorimeter by charged pions. The black points show the Geant4 inputs, and the result of the longitudinal energy parametrisation is shown in light blue. Good agreement is observed. The results of Kolmogorov (KS) and chi2 tests are displayed as well.

FCS_photons_totalE.png
Fig. 4: Total cell energy deposited in the calorimeter by photons. The black points show the Geant4 inputs, and the result of the longitudinal energy parametrisation is shown in light blue. Good agreement is observed. The results of Kolmogorov (KS) and chi2 tests are displayed as well.
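
The KS and chi2 figures of merit quoted in Figs. 3 and 4 compare the parametrised and Geant4 distributions. A sketch using SciPy on toy data (the Gaussian samples and binning are invented; only the tests themselves reflect the captions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
geant4 = rng.normal(50.0, 5.0, 10_000)   # stand-in for Geant4 energies
param = rng.normal(50.2, 5.1, 10_000)    # stand-in for the parametrisation

# Two-sample Kolmogorov-Smirnov test on the unbinned toy samples.
ks = stats.ks_2samp(geant4, param)

# Binned chi2 comparison on matching histograms (same bin edges).
edges = np.linspace(40, 60, 21)
h_g4, _ = np.histogram(geant4, bins=edges)
h_par, _ = np.histogram(param, bins=edges)
h_par = h_par * h_g4.sum() / h_par.sum()  # match normalisation for chisquare
chi2 = stats.chisquare(h_par, f_exp=h_g4)

print(f"KS p-value: {ks.pvalue:.3f}, "
      f"chi2/ndf: {chi2.statistic / (len(h_g4) - 1):.2f}")
```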

(a)

Closure_noWiggle.png

(b)

Closure_withWiggle.png
Fig. 4: The ratio of the FastCaloSim energy profile to the reconstructed cell energy profile, as a function of the distance between the centre of the cell and the pion entrance position into the calorimeter, deta(pi,cell) and dphi(pi,cell), for the original hit-cell assignment with the simplified geometry (a) and for the modified hit-cell assignment using the wiggle hit displacement method (b). The bias in phi, due to the approximate description of the accordion shape of the calorimeter in the simplified geometry, is greatly reduced when using the hit displacement method.

NNeur4_Lay1_E50000_eta0.20_PID211_reference_polar.png
Fig. 5: Illustration of the energy, normalised per bin area, used as input to the NN fit. This example is for 50 GeV central (0.20<|eta|<0.25) pions in the EMB1 layer and corresponds to events included in the first bin of the PCA energy parametrisation.

NNeur4_Lay1_E50000_eta0.20_PID211_NNoutput_polar.png
Fig. 6: Illustration of the output of the NN parametrisation of the input shown in Fig. 5. This example is for 50 GeV central (0.20<|eta|<0.25) pions in the EMB1 layer and corresponds to events included in the first bin of the PCA energy parametrisation.

Furthermore

More material concerning related activities is available elsewhere.

All ATLAS public results can be found here.


Major updates:
-- EricLancon - 2015-05-11

Responsible: MarkusElsing
Subject: public
