
Approved DAQ Plots

Introduction

The DAQ/HLT collision periods, single-beam, commissioning and performance plots below are approved to be shown by ATLAS speakers at conferences and similar events.

Please do not add figures on your own. Contact the DAQ project leader in case of questions and/or suggestions.

Figures

SDX_2nd_floor.png

ATLAS HLT farms and Servers. CLICK HERE TO DOWNLOAD THE LARGE FILE
The overview of ATLAS DAQ system https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/daq_view.pdf
Cosmic data since Sept 13, 2008. 216 M events. 400,000 files in 21 inclusive streams. CLICK HERE TO DOWNLOAD THE LARGE FILE
The cosmic data in 2008, distributed across streams https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/Streams.pdf
The cosmic data in 2008, distributed across streams - Pie https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/Streams_pie.pdf
The cosmic data in 2008, details of debug stream https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/streams_d.pdf
The cosmic data in 2008, details of calibration stream https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/streams_c.pdf
The cosmic data in 2008, details of physics stream https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/streams_p.pdf
Event Building rate in 2008. 0.8 Mbyte event size. Overnight run. Dips are due to automatic cron jobs. CLICK HERE TO DOWNLOAD THE LARGE FILE
ROS request rates for EB and LVL2 in an ATLAS combined cosmic run (run number 91900, triggered by RPC and L1Calo). The EB request rate is 100 Hz; the ROSs of the detectors participating in the LVL2 algorithms see a higher request rate. The high rate of ID requests is due to the various full-scan tracking algorithms; the rate on TILE is due to an algorithm performing a full scan to find the muon MIP signal. CLICK HERE TO DOWNLOAD THE LARGE FILE
The Data Taking Efficiency, defined as the ratio of the running time during beam time to the beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during LHC fills. Each green bar corresponds to an average efficiency calculated during a fill period. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stops of the run during beam time to work on a subsystem and trigger holds due to a sub-system issuing busy for a brief period of time. The average efficiency calculated over the whole period is 96.5%. eff_fill.pdf
The Data Taking Efficiency, defined as the ratio of the running time during beam time to the beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during 24 hours. Each green bar corresponds to an average efficiency calculated for a period of 24 hours. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stops of the run during beam time to work on a subsystem and trigger holds due to a sub-system issuing busy for a brief period of time. The average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%. eff_24h.pdf
The Data Taking Efficiency, defined as the ratio of the running time during beam time to the beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during 24 hours. Each green bar corresponds to an average efficiency calculated for a period of 24 hours. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stops of the run during beam time to work on a subsystem and trigger holds due to a sub-system issuing busy for a brief period of time. The average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%. filled_graphs-1.pdf
The Data Taking Efficiency, defined as the ratio of the running time during beam time to the beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. Each point corresponds to an average efficiency calculated for a period of 24 hours. The size of the horizontal error bars on each data point is a measure of the stable beam availability during 24 hours; the longest bar corresponds to 24 hours. The absence of data points indicates a period of no stable beams. Reasons for lower efficiency are stops of the run during beam time to work on a subsystem and trigger holds due to a sub-system issuing busy for a brief period of time. The average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%. eff_error.pdf
In the online display screenshot, the beam time, defined by the presence of two circulating stable beams, the run status and the data taking efficiency are shown in blue, green and red, respectively. The Data Taking Efficiency is defined as the ratio of the running time during beam time to the beam time. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The blue and green curves have a binary scale (on/off) indicating the presence of stable beams and an ongoing ATLAS run. Reasons for lower efficiency are stops of the run during beam time to work on a subsystem and trigger holds due to a sub-system issuing busy for a brief period of time. 02May_eff-1.jpg
For the next 2 plots:
  • Run Efficiency = (RunningTimeDuringBeam - DeadTime)/(BeamTime)
  • RunningTimeDuringBeam = ATLAS partition in RUNNING state
  • DeadTime is the sum of dead times for all lumi blocks (LB) during beam
  • Dead Time for each LB is calculated from Central Trigger Processor data using values after prescaling:
  • DeadTime = ((L1_before_veto - L1_after_veto)/L1_before_veto) * (LB duration)
  • L1 data source: L1_MBTS_2 Trigger
  • BeamTime = period during which both the 'circulating beams' and 'stable beams' flags are set
  • ATLAS Run efficiency is calculated for 2 independent conditions: “ATLAS running” and “ATLAS running while Ready for Physics”.
Prior to the declaration of 'stable beams', some detector systems are in a standby state (e.g. reduced low and/or high voltages). The "ATLAS Ready for Physics" condition is defined by 'stable beams', detector systems at their operational settings, and a High Level Trigger menu corresponding to these settings.
  • Covered time interval: 30 Mar 08h00 - 11 October 08h00
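The definitions above can be sketched as a short calculation. This is an illustrative example only, not ATLAS software: the function names and the counter tuples are hypothetical, standing in for the per-lumi-block CTP counters after prescaling described in the bullets.

```python
# Illustrative sketch (not ATLAS code) of the run-efficiency calculation
# defined above. Inputs are hypothetical per-lumi-block CTP counters.

def lb_dead_time(l1_before_veto, l1_after_veto, lb_duration):
    """Dead time of one lumi block (LB), from L1 counters after prescaling:
    DeadTime = ((L1_before_veto - L1_after_veto)/L1_before_veto) * (LB duration)."""
    if l1_before_veto == 0:
        return 0.0
    return (l1_before_veto - l1_after_veto) / l1_before_veto * lb_duration

def run_efficiency(lumi_blocks, beam_time):
    """Run Efficiency = (RunningTimeDuringBeam - DeadTime) / BeamTime.

    `lumi_blocks`: list of (l1_before_veto, l1_after_veto, duration_s) tuples
    for lumi blocks recorded while stable beams were present.
    `beam_time`: seconds with both beams circulating and stable-beams flag set.
    """
    running_time = sum(lb[2] for lb in lumi_blocks)          # RunningTimeDuringBeam
    dead_time = sum(lb_dead_time(*lb) for lb in lumi_blocks)  # summed over all LBs
    return (running_time - dead_time) / beam_time

# Example: three 60 s lumi blocks during 200 s of stable beams
lbs = [(1000, 990, 60.0), (1000, 1000, 60.0), (1000, 950, 60.0)]
print(round(run_efficiency(lbs, 200.0), 3))  # prints 0.882
```

The example reflects the two efficiency losses named above: time with stable beams but no run (200 s of beam vs. 180 s of running) and dead time within runs (trigger vetoes reducing L1_after_veto relative to L1_before_veto).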
The weekly averages of three independent quantities, ATLAS Run Efficiency, ATLAS Run Efficiency while Ready for Physics, and the stable beam availability, are shown. Note that the efficiencies are not luminosity weighted. The Run Efficiency is defined as the ratio of the running time during beam time to the beam time. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar represents a week, in which the availability of stable beams is shown with the red histogram with the scale on the right-hand side. The average run efficiency calculated during each week is shown by the filled green (ATLAS is running) and grey (ATLAS is running while Ready for Physics) histograms. The absence of a bar in the plot indicates a week with no stable beams. Reasons for lower efficiency are stops of the run during beam time to work on a subsystem and trigger holds due to a sub-system issuing busy for a brief period of time. The average efficiency calculated over the whole period is 96.5%, and 93.0% for the "ATLAS Ready for Physics" condition.

eff-30_03-12_10.pdf
Three independent quantities, the accumulated times of LHC stable beam availability (yellow), ATLAS running (green) and ATLAS running while Ready for Physics (grey), are shown. Note that the durations are not luminosity weighted. The beam availability is defined by the presence of two circulating stable beams. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The flat sections of the curves indicate periods of no stable beams. The lower values of accumulated run times with respect to stable beam time indicate efficiency losses during runs. Reasons for lower efficiency are stops of the run during beam time to work on a subsystem and trigger holds due to a sub-system issuing busy for a brief period of time. The average efficiency calculated over the whole period is 96.5%, and 93.0% for the "ATLAS Ready for Physics" condition. runtime_30-03_30_10.pdf
Resource allocation curves for the HLT according to the TDAQ paper model, using as constraints the Level-2 latency and the global output rate. Fixing the measured latency of the EF, it is possible to choose the operating point (i.e. the number of XPU racks dedicated to each trigger level) by fixing the event building rate. paper_model.jpg
Distribution of processing time per event in the ATLAS Event Filter farm in early 2011; each color corresponds to a family of processors. The average processing time for each family is shown in the box. CPUgen.png CPUgen2.png

Links


Responsible: DAQ Project Leader
Last reviewed by: Never reviewed

  • LumiCPU.pdf: CPU usage in the Event Filter farm as a function of the luminosity. Each point corresponds to the peak of the CPU usage averaged over the entire EF farm for a specific run; the peak coincides in time with the highest luminosity at the beginning of the fill. The points are indicatively grouped by trigger menu.

  • ROS_TDAQWeek_pg12.pdf: Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links.
Topic attachments
Attachment History Action Size Date Who Comment
PDFpdf daq_view.pdf r1 manage 40.9 K 2009-02-26 - 23:02 GokhanUnel 3_DAQ_Levels
PNGpng image13.png r1 manage 34.6 K 2009-02-26 - 23:03 GokhanUnel cosmic_data_days
PNGpng image14.png r1 manage 32.7 K 2009-02-26 - 23:04 GokhanUnel cosmic_data_runs
PNGpng image7.png r1 manage 243.1 K 2009-02-27 - 15:23 GokhanUnel EventBuilding Rate , overnight run with 800Kb events.
PNGpng RequestRateL2EB.png r1 manage 8.9 K 2009-03-05 - 18:15 GokhanUnel ROS request EB and LVL2 Rates in an ATLAS combined cosmic run.
PNGpng SDX_2nd_floor.png r1 manage 1585.6 K 2009-03-05 - 16:35 GokhanUnel SDX 2nd floor, CFS, LFS and XPU nodes as of November 2008
PDFpdf Streams.pdf r2 r1 manage 15.0 K 2009-03-10 - 15:11 GokhanUnel Cosmic_data_streams 2008
PDFpdf Streams_pie.pdf r1 manage 17.2 K 2009-03-10 - 15:17 GokhanUnel The cosmic data in 2008, distributed across streams
PDFpdf streams_c.pdf r2 r1 manage 50.5 K 2009-03-10 - 15:14 GokhanUnel Cosmic_data_streams 2008 Details of Calibration stream
PDFpdf streams_d.pdf r2 r1 manage 14.5 K 2009-03-10 - 15:15 GokhanUnel Cosmic_data_streams 2008 Details of Debug stream
PDFpdf streams_p.pdf r2 r1 manage 16.6 K 2009-03-10 - 15:15 GokhanUnel Cosmic_data_streams 2008 Details of Physics stream
JPEGjpg 02May_eff-1.jpg r1 manage 151.0 K 2010-05-25 - 17:31 GokhanUnel In the online display screenshot, the beam time, defined by the presence of two circulating stable beams, the run status and the data taking efficiency are shown in blue, green and red respectively. The Data Taking Efficiency is defined as the ratio of the running time during beam time to beam time. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time.
JPEGjpg 02May_rt-1.jpg r1 manage 144.3 K 2010-05-25 - 17:32 GokhanUnel In the online display screenshot, the Level1 rate, the beam time, defined by the presence of two circulating stable beams and the run status are shown in red, blue and green, respectively. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for no or low LVL1 rate are stop of the run during beam time to work on a subsystem (e.g. at 19:00 in this plot) and possible trigger holds due to a sub-system issuing busy for a brief period of time (e.g. at 17:30 in this plot).
PDFpdf eff_24h.pdf r1 manage 13.7 K 2010-05-25 - 17:28 GokhanUnel The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during 24 hours. Each green bar corresponds to an average efficiency calculated for a period of 24 hours. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours , over the whole period is 96.5%.
PDFpdf eff_error.pdf r1 manage 13.9 K 2010-05-25 - 17:30 GokhanUnel The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. Each point corresponds to an average efficiency calculated for a period of 24 hours. The size of the horizontal error bars on each data point is a measure of the stable beam availability during 24 hours. The longest bar corresponds to 24 hours. The absence of data points indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%.
PDFpdf eff_fill.pdf r1 manage 13.6 K 2010-05-25 - 17:22 GokhanUnel The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during LHC fills. Each green bar corresponds to an average efficiency calculated during a fill period. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency calculated over the whole period is 96.5%.
PDFpdf filled_graphs-1.pdf r1 manage 12.9 K 2010-05-25 - 17:30 GokhanUnel The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during 24 hours. Each green bar corresponds to an average efficiency calculated for a period of 24 hours. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%.
PDFpdf eff-30_03-12_10.pdf r1 manage 47.0 K 2010-11-01 - 17:19 GokhanUnel Efficiency between 30 March and 12 October
PDFpdf runtime_30-03_30_10.pdf r1 manage 34.5 K 2010-11-01 - 17:20 GokhanUnel Runtime between 30 March and 12 October
PNGpng CPUgen.png r1 manage 18.2 K 2011-09-15 - 14:50 SergioBallestrero EF event processing time for different CPU generations
PDFpdf LumiCPU.pdf r1 manage 26.6 K 2011-09-15 - 14:49 SergioBallestrero CPU Usage in the Event Filter farm, as a function of the luminosity. Each point corresponds to the peak of the CPU usage averaged over the entire EF farm, for a specific run; the peak corresponds in time to the highest luminosity at beginning of the fill. The points are indicatively grouped per different trigger menus.
PDFpdf ROS_TDAQWeek_pg12.pdf r1 manage 27.7 K 2011-09-15 - 15:03 SergioBallestrero Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links.
PNGpng CPUgen2.png r1 manage 48.6 K 2011-11-16 - 19:57 EnricoPasquqlucci EF event processing time for different CPU generations
JPEGjpg paper_model.jpg r1 manage 44.1 K 2011-11-16 - 19:58 EnricoPasquqlucci Determination of TDAQ operating point according to the TDAQ paper model
PDFpdf Run206971_LB130_EFprocTime.pdf r1 manage 17.5 K 2012-12-19 - 18:08 NicolettaGarelli Distribution of processing time per event in the ATLAS Event Filter farm in July 2012; each color corresponds to a family of processors (black: older CPU model; red: older CPU model, SMP on; green: newer CPU model). The average processing time for each family is shown in the box.
Topic revision: r21 - 2013-01-09 - NicolettaGarelli