Difference: ApprovedPlotsDAQ20082010 (1 vs. 22)

Revision 22 - 2013-01-09 - NicolettaGarelli

Line: 1 to 1
 
META TOPICPARENT name="Atlas.ApprovedDetectorPlots"
AtlasPublicTopicHeader.png

Revision 21 - 2013-01-09 - NicolettaGarelli

Line: 1 to 1
 
META TOPICPARENT name="Atlas.ApprovedDetectorPlots"
AtlasPublicTopicHeader.png
Line: 168 to 168
 
Deleted:
<
<
TO BE APPROVED! Distribution of processing time per event in the ATLAS Event Filter farm in July 2012; each color corresponds to a family of processors (black: older CPU model; red: older CPU model, SMP on; green: newer CPU model). The average processing time for each family is shown in the box. Run206971_LB130_EFprocTime.pdf
 
Line: 209 to 201
 
  • ROS_TDAQWeek_pg12.pdf: Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links.
Deleted:
<
<
 
META FILEATTACHMENT attachment="daq_view.pdf" attr="" comment="3_DAQ_Levels" date="1235685778" name="daq_view.pdf" path="daq_view.pdf" size="41893" stream="daq_view.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image14.png" attr="" comment="cosmic_data_runs" date="1235685853" name="image14.png" path="image14.png" size="33481" stream="image14.png" user="Main.GokhanUnel" version="1"
Line: 236 to 226
 
META FILEATTACHMENT attachment="CPUgen2.png" attr="" comment="EF event processing time for different CPU generations" date="1321469829" name="CPUgen2.png" path="CPUgen2.png" size="49813" stream="CPUgen2.png" tmpFilename="/usr/tmp/CGItemp41798" user="epasqual" version="1"
META FILEATTACHMENT attachment="paper_model.jpg" attr="" comment="Determination of TDAQ operating point according to the TDAQ paper model" date="1321469913" name="paper_model.jpg" path="paper_model.jpg" size="45144" stream="paper_model.jpg" tmpFilename="/usr/tmp/CGItemp41802" user="epasqual" version="1"
META FILEATTACHMENT attachment="Run206971_LB130_EFprocTime.pdf" attr="" comment="Distribution of processing time per event in the ATLAS Event Filter farm in July 2012; each color corresponds to a family of processors (black: older CPU model; red: older CPU model, SMP on; green: newer CPU model). The average processing time for each family is shown in the box." date="1355936896" name="Run206971_LB130_EFprocTime.pdf" path="Run206971_LB130_EFprocTime.pdf" size="17945" user="ngarelli" version="1"
Added:
>
>
META TOPICMOVED by="ngarelli" date="1357727269" from="AtlasPublic.ApprovedPlotsDAQ" to="AtlasPublic.ApprovedPlotsDAQ20082010"

Revision 20 - 2013-01-08 - NicolettaGarelli

Line: 1 to 1
 
META TOPICPARENT name="Atlas.ApprovedDetectorPlots"
AtlasPublicTopicHeader.png
Line: 219 to 219
 
META FILEATTACHMENT attachment="streams_c.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Calibration stream" date="1236694474" name="streams_c.pdf" path="streams_c.pdf" size="51746" stream="streams_c.pdf" user="Main.GokhanUnel" version="2"
META FILEATTACHMENT attachment="streams_d.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Debug stream" date="1236694511" name="streams_d.pdf" path="streams_d.pdf" size="14843" stream="streams_d.pdf" user="Main.GokhanUnel" version="2"
META FILEATTACHMENT attachment="streams_p.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Physics stream" date="1236694549" name="streams_p.pdf" path="streams_p.pdf" size="17027" stream="streams_p.pdf" user="Main.GokhanUnel" version="2"
Changed:
<
<
META FILEATTACHMENT attachment="SDX_2nd_floor.png" attr="" comment="SDX 2nd floor, CFS, LFS and XPU nodes as of November 2008" date="1236267320" name="SDX_2nd_floor.png" path="SDX_2nd_floor.png" size="1623680" stream="SDX_2nd_floor.png" user="Main.GokhanUnel" version="1"
>
>
META FILEATTACHMENT attachment="SDX_2nd_floor.png" attr="" comment="SDX 2nd floor, CFS, LFS and XPU nodes as of November 2008" date="1236267320" name="SDX_2nd_floor.png" path="SDX_2nd_floor.png" size="1623680" user="Main.GokhanUnel" version="1"
 
META FILEATTACHMENT attachment="RequestRateL2EB.png" attr="" comment="ROS request EB and LVL2 Rates in an ATLAS combined cosmic run." date="1236273359" name="RequestRateL2EB.png" path="RequestRateL2EB.png" size="9162" stream="RequestRateL2EB.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="Streams_pie.pdf" attr="" comment="The cosmic data in 2008, distributed across streams" date="1236694674" name="Streams_pie.pdf" path="Streams_pie.pdf" size="17568" stream="Streams_pie.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="eff_fill.pdf" attr="" comment="The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during LHC fills. Each green bar corresponds to an average efficiency calculated during a fill period. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency calculated over the whole period is 96.5%25." date="1274800932" name="eff_fill.pdf" path="eff_fill.pdf" size="13916" stream="eff_fill.pdf" tmpFilename="/usr/tmp/CGItemp46674" user="unel" version="1"

Revision 19 - 2012-12-19 - NicolettaGarelli

Line: 1 to 1
 
META TOPICPARENT name="Atlas.ApprovedDetectorPlots"
AtlasPublicTopicHeader.png
Line: 160 to 160
 
Changed:
<
<
Distribution of processing time per event in the ATLAS Event Filter farm;
>
>
Distribution of processing time per event in the ATLAS Event Filter farm in early 2011;
 each color corresponds to a family of processors. The average processing time for each family is shown in the box.
CPUgen.png
Line: 168 to 168
 
Added:
>
>
TO BE APPROVED! Distribution of processing time per event in the ATLAS Event Filter farm in July 2012; each color corresponds to a family of processors (black: older CPU model; red: older CPU model, SMP on; green: newer CPU model). The average processing time for each family is shown in the box. Run206971_LB130_EFprocTime.pdf
 
Line: 201 to 209
 
  • ROS_TDAQWeek_pg12.pdf: Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links.

Added:
>
>
 
META FILEATTACHMENT attachment="daq_view.pdf" attr="" comment="3_DAQ_Levels" date="1235685778" name="daq_view.pdf" path="daq_view.pdf" size="41893" stream="daq_view.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
Line: 226 to 235
 
META FILEATTACHMENT attachment="ROS_TDAQWeek_pg12.pdf" attr="" comment="Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links." date="1316091785" name="ROS_TDAQWeek_pg12.pdf" path="ROS_TDAQWeek_pg12.pdf" size="28358" stream="ROS_TDAQWeek_pg12.pdf" tmpFilename="/usr/tmp/CGItemp18245" user="sash" version="1"
META FILEATTACHMENT attachment="CPUgen2.png" attr="" comment="EF event processing time for different CPU generations" date="1321469829" name="CPUgen2.png" path="CPUgen2.png" size="49813" stream="CPUgen2.png" tmpFilename="/usr/tmp/CGItemp41798" user="epasqual" version="1"
META FILEATTACHMENT attachment="paper_model.jpg" attr="" comment="Determination of TDAQ operating point according to the TDAQ paper model" date="1321469913" name="paper_model.jpg" path="paper_model.jpg" size="45144" stream="paper_model.jpg" tmpFilename="/usr/tmp/CGItemp41802" user="epasqual" version="1"
Added:
>
>
META FILEATTACHMENT attachment="Run206971_LB130_EFprocTime.pdf" attr="" comment="Distribution of processing time per event in the ATLAS Event Filter farm in July 2012; each color corresponds to a family of processors (black: older CPU model; red: older CPU model, SMP on; green: newer CPU model). The average processing time for each family is shown in the box." date="1355936896" name="Run206971_LB130_EFprocTime.pdf" path="Run206971_LB130_EFprocTime.pdf" size="17945" user="ngarelli" version="1"

Revision 18 - 2011-11-17 - EnricoPasquqlucci

Line: 1 to 1
 
META TOPICPARENT name="Atlas.ApprovedDetectorPlots"
AtlasPublicTopicHeader.png
Line: 148 to 148
 
Changed:
<
<
>
>
Resource allocation curves for the HLT according to the TDAQ paper model, using as constraints the Level-2 latency and the global output rate. Fixing the measured latency of the EF, it is possible to choose the operating point (i.e. the number of XPU racks dedicated to each trigger level) by fixing the event building rate. paper_model.jpg
 

Added:
>
>
Distribution of processing time per event in the ATLAS Event Filter farm; each color corresponds to a family of processors. The average processing time for each family is shown in the box.
 
Added:
>
>
CPUgen.png CPUgen2.png
 
Line: 187 to 198
 
  • LumiCPU.pdf: CPU Usage in the Event Filter farm, as a function of the luminosity. Each point corresponds to the peak of the CPU usage averaged over the entire EF farm, for a specific run; the peak corresponds in time to the highest luminosity at beginning of the fill. The points are indicatively grouped per different trigger menus.
Deleted:
<
<
  • EF event processing time for different CPU generations:
    CPUgen.png CPUgen2.png
 
  • ROS_TDAQWeek_pg12.pdf: Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links.
Changed:
<
<
* paper_model.jpg: Determination of TDAQ operating point according to the TDAQ paper model.
>
>
 
META FILEATTACHMENT attachment="daq_view.pdf" attr="" comment="3_DAQ_Levels" date="1235685778" name="daq_view.pdf" path="daq_view.pdf" size="41893" stream="daq_view.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"

Revision 17 - 2011-11-16 - EnricoPasquqlucci

Line: 1 to 1
 
META TOPICPARENT name="Atlas.ApprovedDetectorPlots"
AtlasPublicTopicHeader.png
Line: 189 to 189
 
  • EF event processing time for different CPU generations:
    CPUgen.png
Added:
>
>
CPUgen2.png
 
  • ROS_TDAQWeek_pg12.pdf: Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links.
Added:
>
>
* paper_model.jpg: Determination of TDAQ operating point according to the TDAQ paper model.
 
META FILEATTACHMENT attachment="daq_view.pdf" attr="" comment="3_DAQ_Levels" date="1235685778" name="daq_view.pdf" path="daq_view.pdf" size="41893" stream="daq_view.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image14.png" attr="" comment="cosmic_data_runs" date="1235685853" name="image14.png" path="image14.png" size="33481" stream="image14.png" user="Main.GokhanUnel" version="1"
Line: 214 to 217
 
META FILEATTACHMENT attachment="LumiCPU.pdf" attr="" comment="CPU Usage in the Event Filter farm, as a function of the luminosity. Each point corresponds to the peak of the CPU usage averaged over the entire EF farm, for a specific run; the peak corresponds in time to the highest luminosity at beginning of the fill. The points are indicatively grouped per different trigger menus." date="1316090973" name="LumiCPU.pdf" path="LumiCPU.pdf" size="27273" stream="LumiCPU.pdf" tmpFilename="/usr/tmp/CGItemp18240" user="sash" version="1"
META FILEATTACHMENT attachment="CPUgen.png" attr="" comment="EF event processing time for different CPU generations" date="1316091058" name="CPUgen.png" path="CPUgen.png" size="18607" stream="CPUgen.png" tmpFilename="/usr/tmp/CGItemp18267" user="sash" version="1"
META FILEATTACHMENT attachment="ROS_TDAQWeek_pg12.pdf" attr="" comment="Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links." date="1316091785" name="ROS_TDAQWeek_pg12.pdf" path="ROS_TDAQWeek_pg12.pdf" size="28358" stream="ROS_TDAQWeek_pg12.pdf" tmpFilename="/usr/tmp/CGItemp18245" user="sash" version="1"
Added:
>
>
META FILEATTACHMENT attachment="CPUgen2.png" attr="" comment="EF event processing time for different CPU generations" date="1321469829" name="CPUgen2.png" path="CPUgen2.png" size="49813" stream="CPUgen2.png" tmpFilename="/usr/tmp/CGItemp41798" user="epasqual" version="1"
META FILEATTACHMENT attachment="paper_model.jpg" attr="" comment="Determination of TDAQ operating point according to the TDAQ paper model" date="1321469913" name="paper_model.jpg" path="paper_model.jpg" size="45144" stream="paper_model.jpg" tmpFilename="/usr/tmp/CGItemp41802" user="epasqual" version="1"

Revision 16 - 2011-09-15 - SergioBallestrero

Line: 1 to 1
 
META TOPICPARENT name="Atlas.ApprovedDetectorPlots"
AtlasPublicTopicHeader.png
Line: 185 to 185
 
Added:
>
>
  • LumiCPU.pdf: CPU Usage in the Event Filter farm, as a function of the luminosity. Each point corresponds to the peak of the CPU usage averaged over the entire EF farm, for a specific run; the peak corresponds in time to the highest luminosity at beginning of the fill. The points are indicatively grouped per different trigger menus.

  • EF event processing time for different CPU generations:
    CPUgen.png

  • ROS_TDAQWeek_pg12.pdf: Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links.
 
META FILEATTACHMENT attachment="daq_view.pdf" attr="" comment="3_DAQ_Levels" date="1235685778" name="daq_view.pdf" path="daq_view.pdf" size="41893" stream="daq_view.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image14.png" attr="" comment="cosmic_data_runs" date="1235685853" name="image14.png" path="image14.png" size="33481" stream="image14.png" user="Main.GokhanUnel" version="1"
Line: 204 to 211
 
META FILEATTACHMENT attachment="02May_rt-1.jpg" attr="" comment="In the online display screenshot, the Level1 rate, the beam time, defined by the presence of two circulating stable beams and the run status are shown in red, blue and green, respectively. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for no or low LVL1 rate are stop of the run during beam time to work on a subsystem (e.g. at 19:00 in this plot) and possible trigger holds due to a sub-system issuing busy for a brief period of time (e.g. at 17:30 in this plot)." date="1274801548" name="02May_rt-1.jpg" path="02May_rt-1.jpg" size="147773" stream="02May_rt-1.jpg" tmpFilename="/usr/tmp/CGItemp46733" user="unel" version="1"
META FILEATTACHMENT attachment="eff-30_03-12_10.pdf" attr="" comment="Efficiency between 30 March and 12 October" date="1288628394" name="eff-30_03-12_10.pdf" path="eff-30_03-12_10.pdf" size="48136" stream="eff-30_03-12_10.pdf" tmpFilename="/usr/tmp/CGItemp21756" user="unel" version="1"
META FILEATTACHMENT attachment="runtime_30-03_30_10.pdf" attr="" comment="Runtime between 30 March and 12 October" date="1288628446" name="runtime_30-03_30_10.pdf" path="runtime_30-03_30_10.pdf" size="35338" stream="runtime_30-03_30_10.pdf" tmpFilename="/usr/tmp/CGItemp21746" user="unel" version="1"
Added:
>
>
META FILEATTACHMENT attachment="LumiCPU.pdf" attr="" comment="CPU Usage in the Event Filter farm, as a function of the luminosity. Each point corresponds to the peak of the CPU usage averaged over the entire EF farm, for a specific run; the peak corresponds in time to the highest luminosity at beginning of the fill. The points are indicatively grouped per different trigger menus." date="1316090973" name="LumiCPU.pdf" path="LumiCPU.pdf" size="27273" stream="LumiCPU.pdf" tmpFilename="/usr/tmp/CGItemp18240" user="sash" version="1"
META FILEATTACHMENT attachment="CPUgen.png" attr="" comment="EF event processing time for different CPU generations" date="1316091058" name="CPUgen.png" path="CPUgen.png" size="18607" stream="CPUgen.png" tmpFilename="/usr/tmp/CGItemp18267" user="sash" version="1"
META FILEATTACHMENT attachment="ROS_TDAQWeek_pg12.pdf" attr="" comment="Rate limits on ROS and their improvement with CPU update, as measured for 2 ROLs per RoI and two Gbit Ethernet links." date="1316091785" name="ROS_TDAQWeek_pg12.pdf" path="ROS_TDAQWeek_pg12.pdf" size="28358" stream="ROS_TDAQWeek_pg12.pdf" tmpFilename="/usr/tmp/CGItemp18245" user="sash" version="1"

Revision 15 - 2010-12-06 - ElmarRitsch

Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="ApprovedDetectorPlots"
No permission to view Atlas.AtlasPublicTopic
<!--  
  • Set DENYTOPICVIEW =
  • Set ALLOWTOPICCHANGE = atlas-readaccess-current-physicists
-->
>
>
META TOPICPARENT name="Atlas.ApprovedDetectorPlots"
AtlasPublicTopicHeader.png
 

Approved DAQ Plots

Revision 14 - 2010-11-01 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"
Warning
Can't INCLUDE Atlas.AtlasPublicTopic repeatedly, topic is already included.
Line: 110 to 110
 
Added:
>
>
 
Changed:
<
<
In the online display screenshot, the Level1 rate, the beam time, defined by the presence of two circulating stable beams and the run status are shown in red, blue and green, respectively. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for no or low LVL1 rate are stop of the run during beam time to work on a subsystem (e.g. at 19:00 in this plot) and possible trigger holds due to a sub-system issuing busy for a brief period of time (e.g. at 17:30 in this plot).
>
>
*For the next 2 plots:*
  • Run Efficiency = (RunningTimeDuringBeam – DeadTime)/(BeamTime)
  • RunningTimeDuringBeam = ATLAS partition in RUNNING state
  • DeadTime is the sum of dead times for all lumi blocks (LB) during beam
  • Dead Time for each LB is calculated from Central Trigger Processor data using values after prescaling:
  • DeadTime = ((L1_before_veto – L1_after_veto)/L1_before_veto)*(LB duration)
  • L1 data source: L1_MBTS_2 Trigger
  • BeamTime = Both beams and stable beams flags are set.
  • ATLAS Run efficiency is calculated for 2 independent conditions: “ATLAS running” and “ATLAS running while Ready for Physics”.
Prior to the declaration of 'stable beams', some detector systems are in a standby state (e.g. reduced low and/or high voltages). The "ATLAS Ready for Physics" condition is defined by 'stable beams' and detector systems at their operational settings, together with a High Level Trigger menu corresponding to these settings.
  • Covered time interval: 30 Mar 08h00 - 11 October 08h00
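The definitions above can be sketched as a short calculation (a minimal illustration of the formulas, not the actual ATLAS tooling; the luminosity-block values below are hypothetical):

```python
# Minimal sketch of the run-efficiency definition above.
# Each hypothetical luminosity-block (LB) record carries the L1 counts
# before/after the dead-time veto and the LB duration in seconds.

def lb_dead_time(l1_before_veto, l1_after_veto, lb_duration):
    """DeadTime = ((L1_before_veto - L1_after_veto) / L1_before_veto) * (LB duration)."""
    return (l1_before_veto - l1_after_veto) / l1_before_veto * lb_duration

def run_efficiency(lumi_blocks, beam_time):
    """Run Efficiency = (RunningTimeDuringBeam - DeadTime) / BeamTime.

    RunningTimeDuringBeam is approximated here as the summed LB durations
    while the ATLAS partition is in the RUNNING state during stable beams.
    """
    running_time = sum(lb["duration"] for lb in lumi_blocks)
    dead_time = sum(
        lb_dead_time(lb["l1_before_veto"], lb["l1_after_veto"], lb["duration"])
        for lb in lumi_blocks
    )
    return (running_time - dead_time) / beam_time

# Hypothetical example: two 60 s LBs (2% dead time each) during 125 s of stable beams.
lbs = [
    {"l1_before_veto": 100000, "l1_after_veto": 98000, "duration": 60.0},
    {"l1_before_veto": 120000, "l1_after_veto": 117600, "duration": 60.0},
]
print(round(run_efficiency(lbs, beam_time=125.0), 4))  # → 0.9408
```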
 
Deleted:
<
<
02May_rt-1.jpg
 
Line: 118 to 128
 
Deleted:
<
<
 
Added:
>
>
The weekly averages of three independent quantities, ATLAS Run Efficiency, ATLAS Run Efficiency while Ready for Physics and the stable beam availability, are shown. Note that the efficiencies are not luminosity weighted. The Run Efficiency is defined as the ratio of the running time during beam time to beam time. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar represents a week; the availability of stable beams within each week is shown by the red histogram, with the scale on the right-hand side. The average run efficiency calculated during each week is shown by the filled green (ATLAS is running) and grey (ATLAS is running while Ready for Physics) histograms. The absence of a bar in the plot indicates a week with no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency calculated over the whole period is 96.5% overall and 93.0% for the “ATLAS Ready for Physics” condition.
 
Added:
>
>
eff-30_03-12_10.pdf

The accumulated times of three independent quantities are shown: LHC stable beam availability (yellow), ATLAS running (green) and ATLAS running while Ready for Physics (grey). Note that the durations are not luminosity weighted. The beam availability is defined by the presence of two circulating stable beams. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The flat sections of the curves indicate periods of no stable beams. The lower values of accumulated run times with respect to stable beam time indicate efficiency losses during runs. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency calculated over the whole period is 96.5% overall and 93.0% for the “ATLAS Ready for Physics” condition. runtime_30-03_30_10.pdf
 
Added:
>
>

 
Line: 151 to 186
 

Added:
>
>

 
META FILEATTACHMENT attachment="daq_view.pdf" attr="" comment="3_DAQ_Levels" date="1235685778" name="daq_view.pdf" path="daq_view.pdf" size="41893" stream="daq_view.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image14.png" attr="" comment="cosmic_data_runs" date="1235685853" name="image14.png" path="image14.png" size="33481" stream="image14.png" user="Main.GokhanUnel" version="1"
Line: 168 to 207
 
META FILEATTACHMENT attachment="eff_error.pdf" attr="" comment="The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. Each point corresponds to an average efficiency calculated for a period of 24 hours. The size of the horizontal error bars on each data point is a measure of the stable beam availability during 24 hours. The longest bar corresponds to 24 hours. The absence of data points indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%25." date="1274801458" name="eff_error.pdf" path="eff_error.pdf" size="14280" stream="eff_error.pdf" tmpFilename="/usr/tmp/CGItemp46430" user="unel" version="1"
META FILEATTACHMENT attachment="02May_eff-1.jpg" attr="" comment="In the online display screenshot, the beam time, defined by the presence of two circulating stable beams, the run status and the data taking efficiency are shown in blue, green and red respectively. The Data Taking Efficiency is defined as the ratio of the running time during beam time to beam time. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time." date="1274801512" name="02May_eff-1.jpg" path="02May_eff-1.jpg" size="154632" stream="02May_eff-1.jpg" tmpFilename="/usr/tmp/CGItemp46812" user="unel" version="1"
META FILEATTACHMENT attachment="02May_rt-1.jpg" attr="" comment="In the online display screenshot, the Level1 rate, the beam time, defined by the presence of two circulating stable beams and the run status are shown in red, blue and green, respectively. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for no or low LVL1 rate are stop of the run during beam time to work on a subsystem (e.g. at 19:00 in this plot) and possible trigger holds due to a sub-system issuing busy for a brief period of time (e.g. at 17:30 in this plot)." date="1274801548" name="02May_rt-1.jpg" path="02May_rt-1.jpg" size="147773" stream="02May_rt-1.jpg" tmpFilename="/usr/tmp/CGItemp46733" user="unel" version="1"
Added:
>
>
META FILEATTACHMENT attachment="eff-30_03-12_10.pdf" attr="" comment="Efficiency between 30 March and 12 October" date="1288628394" name="eff-30_03-12_10.pdf" path="eff-30_03-12_10.pdf" size="48136" stream="eff-30_03-12_10.pdf" tmpFilename="/usr/tmp/CGItemp21756" user="unel" version="1"
META FILEATTACHMENT attachment="runtime_30-03_30_10.pdf" attr="" comment="runtime Between 30 March and 12 OCtober" date="1288628446" name="runtime_30-03_30_10.pdf" path="runtime_30-03_30_10.pdf" size="35338" stream="runtime_30-03_30_10.pdf" tmpFilename="/usr/tmp/CGItemp21746" user="unel" version="1"

Revision 13 - 2010-05-25 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"
Warning
Can't INCLUDE Atlas.AtlasPublicTopic repeatedly, topic is already included.
Line: 71 to 71
 CLICK HERE TO DOWNLOAD THE LARGE FILE
Added:
>
>

The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during LHC fills. Each green bar corresponds to an average efficiency calculated during a fill period. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency calculated over the whole period is 96.5%. eff_fill.pdf
The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during 24 hours. Each green bar corresponds to an average efficiency calculated for a period of 24 hours. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%. eff_24h.pdf:
The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during 24 hours. Each green bar corresponds to an average efficiency calculated for a period of 24 hours. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%. filled_graphs-1.pdf
The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. Each point corresponds to an average efficiency calculated for a period of 24 hours. The size of the horizontal error bars on each data point is a measure of the stable beam availability during 24 hours. The longest bar corresponds to 24 hours. The absence of data points indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%. eff_error.pdf
In the online display screenshot, the beam time, defined by the presence of two circulating stable beams, the run status and the data taking efficiency are shown in blue, green and red respectively. The Data Taking Efficiency is defined as the ratio of the running time during beam time to beam time. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. 02May_eff-1.jpg
In the online display screenshot, the Level1 rate, the beam time, defined by the presence of two circulating stable beams and the run status are shown in red, blue and green, respectively. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for no or low LVL1 rate are stop of the run during beam time to work on a subsystem (e.g. at 19:00 in this plot) and possible trigger holds due to a sub-system issuing busy for a brief period of time (e.g. at 17:30 in this plot). 02May_rt-1.jpg

 

Links

Line: 104 to 162
 
META FILEATTACHMENT attachment="SDX_2nd_floor.png" attr="" comment="SDX 2nd floor, CFS, LFS and XPU nodes as of November 2008" date="1236267320" name="SDX_2nd_floor.png" path="SDX_2nd_floor.png" size="1623680" stream="SDX_2nd_floor.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="RequestRateL2EB.png" attr="" comment="ROS request EB and LVL2 Rates in an ATLAS combined cosmic run." date="1236273359" name="RequestRateL2EB.png" path="RequestRateL2EB.png" size="9162" stream="RequestRateL2EB.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="Streams_pie.pdf" attr="" comment="The cosmic data in 2008, distributed across streams" date="1236694674" name="Streams_pie.pdf" path="Streams_pie.pdf" size="17568" stream="Streams_pie.pdf" user="Main.GokhanUnel" version="1"
Added:
>
>
META FILEATTACHMENT attachment="eff_fill.pdf" attr="" comment="The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during LHC fills. Each green bar corresponds to an average efficiency calculated during a fill period. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency calculated over the whole period is 96.5%25." date="1274800932" name="eff_fill.pdf" path="eff_fill.pdf" size="13916" stream="eff_fill.pdf" tmpFilename="/usr/tmp/CGItemp46674" user="unel" version="1"
META FILEATTACHMENT attachment="eff_24h.pdf" attr="" comment="The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during 24 hours. Each green bar corresponds to an average efficiency calculated for a period of 24 hours. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%25." date="1274801328" name="eff_24h.pdf" path="eff_24h.pdf" size="14009" stream="eff_24h.pdf" tmpFilename="/usr/tmp/CGItemp46776" user="unel" version="1"
META FILEATTACHMENT attachment="filled_graphs-1.pdf" attr="" comment="The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. The width of each bar is a measure of the stable beam (shown in gray) availability during 24 hours. Each green bar corresponds to an average efficiency calculated for a period of 24 hours. The absence of filled bars indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%25." date="1274801419" name="filled_graphs-1.pdf" path="filled_graphs-1.pdf" size="13187" stream="filled_graphs-1.pdf" tmpFilename="/usr/tmp/CGItemp46675" user="unel" version="1"
META FILEATTACHMENT attachment="eff_error.pdf" attr="" comment="The Data Taking Efficiency, defined as the ratio of the running time during beam time to beam time, is shown. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The beam time is defined by the presence of two circulating stable beams. Each point corresponds to an average efficiency calculated for a period of 24 hours. The size of the horizontal error bars on each data point is a measure of the stable beam availability during 24 hours. The longest bar corresponds to 24 hours. The absence of data points indicates a period of no stable beams. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time. Average efficiency, calculated every 24 hours for the last 24 hours, over the whole period is 96.5%25." date="1274801458" name="eff_error.pdf" path="eff_error.pdf" size="14280" stream="eff_error.pdf" tmpFilename="/usr/tmp/CGItemp46430" user="unel" version="1"
META FILEATTACHMENT attachment="02May_eff-1.jpg" attr="" comment="In the online display screenshot, the beam time, defined by the presence of two circulating stable beams, the run status and the data taking efficiency are shown in blue, green and red respectively. The Data Taking Efficiency is defined as the ratio of the running time during beam time to beam time. The running time incorporates the dead time fraction during each Luminosity Block reported by the Central Trigger Processor. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for lower efficiency are stop of the run during beam time to work on a subsystem and possible trigger holds due to a sub-system issuing busy for a brief period of time." date="1274801512" name="02May_eff-1.jpg" path="02May_eff-1.jpg" size="154632" stream="02May_eff-1.jpg" tmpFilename="/usr/tmp/CGItemp46812" user="unel" version="1"
META FILEATTACHMENT attachment="02May_rt-1.jpg" attr="" comment="In the online display screenshot, the Level1 rate, the beam time, defined by the presence of two circulating stable beams and the run status are shown in red, blue and green, respectively. The blue and green curves have binary scale (on/off) indicating the presence of stable beams and ongoing ATLAS run. Reasons for no or low LVL1 rate are stop of the run during beam time to work on a subsystem (e.g. at 19:00 in this plot) and possible trigger holds due to a sub-system issuing busy for a brief period of time (e.g. at 17:30 in this plot)." date="1274801548" name="02May_rt-1.jpg" path="02May_rt-1.jpg" size="147773" stream="02May_rt-1.jpg" tmpFilename="/usr/tmp/CGItemp46733" user="unel" version="1"

Revision 122010-05-25 - DjF

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"
Line: 14 to 14
 

Introduction

Changed:
<
<
The DAQ commissioning and performance plots below are approved to be shown by ATLAS speakers at conferences and similar events.
>
>
The DAQ/HLT collision periods, single-beam, commissioning and performance plots below are approved to be shown by ATLAS speakers at conferences and similar events.
  Please do not add figures on your own. Contact the DAQ project leader in case of questions and/or suggestions.

Revision 112010-05-10 - PatrickJussel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"
<!--  
  • Set DENYTOPICVIEW =
Added:
>
>
  • Set ALLOWTOPICCHANGE = atlas-readaccess-current-physicists
 
-->
Added:
>
>
 

Approved DAQ Plots

<!--optional-->

Revision 102009-11-24 - PatrickJussel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"
Added:
>
>
<!--  
  • Set DENYTOPICVIEW =
-->
 

Approved DAQ Plots

<!--optional-->

Revision 92009-03-10 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

Line: 37 to 37
 
The cosmic data in 2008, distributed across streams https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/Streams.pdf
Added:
>
>
The cosmic data in 2008, distributed across streams - Pie https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/Streams_pie.pdf
 
The cosmic data in 2008, details of debug stream https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/streams_d.pdf
Line: 87 to 90
 
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image14.png" attr="" comment="cosmic_data_runs" date="1235685853" name="image14.png" path="image14.png" size="33481" stream="image14.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image7.png" attr="" comment="EventBuilding Rate , overnight run with 800Kb events." date="1235744609" name="image7.png" path="image7.png" size="248955" stream="image7.png" user="Main.GokhanUnel" version="1"
Changed:
<
<
META FILEATTACHMENT attachment="Streams.pdf" attr="" comment="Cosmic_data_streams 2008" date="1236261822" name="Streams.pdf" path="Streams.pdf" size="15401" stream="Streams.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="streams_c.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Calibration stream" date="1236262006" name="streams_c.pdf" path="streams_c.pdf" size="52016" stream="streams_c.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="streams_d.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Debug stream" date="1236262035" name="streams_d.pdf" path="streams_d.pdf" size="14952" stream="streams_d.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="streams_p.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Physics stream" date="1236262060" name="streams_p.pdf" path="streams_p.pdf" size="17280" stream="streams_p.pdf" user="Main.GokhanUnel" version="1"
>
>
META FILEATTACHMENT attachment="Streams.pdf" attr="" comment="Cosmic_data_streams 2008" date="1236694282" name="Streams.pdf" path="Streams.pdf" size="15404" stream="Streams.pdf" user="Main.GokhanUnel" version="2"
META FILEATTACHMENT attachment="streams_c.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Calibration stream" date="1236694474" name="streams_c.pdf" path="streams_c.pdf" size="51746" stream="streams_c.pdf" user="Main.GokhanUnel" version="2"
META FILEATTACHMENT attachment="streams_d.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Debug stream" date="1236694511" name="streams_d.pdf" path="streams_d.pdf" size="14843" stream="streams_d.pdf" user="Main.GokhanUnel" version="2"
META FILEATTACHMENT attachment="streams_p.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Physics stream" date="1236694549" name="streams_p.pdf" path="streams_p.pdf" size="17027" stream="streams_p.pdf" user="Main.GokhanUnel" version="2"
 
META FILEATTACHMENT attachment="SDX_2nd_floor.png" attr="" comment="SDX 2nd floor, CFS, LFS and XPU nodes as of November 2008" date="1236267320" name="SDX_2nd_floor.png" path="SDX_2nd_floor.png" size="1623680" stream="SDX_2nd_floor.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="RequestRateL2EB.png" attr="" comment="ROS request EB and LVL2 Rates in an ATLAS combined cosmic run." date="1236273359" name="RequestRateL2EB.png" path="RequestRateL2EB.png" size="9162" stream="RequestRateL2EB.png" user="Main.GokhanUnel" version="1"
Added:
>
>
META FILEATTACHMENT attachment="Streams_pie.pdf" attr="" comment="The cosmic data in 2008, distributed across streams" date="1236694674" name="Streams_pie.pdf" path="Streams_pie.pdf" size="17568" stream="Streams_pie.pdf" user="Main.GokhanUnel" version="1"

Revision 82009-03-05 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

Line: 12 to 12
 Please do not add figures on your own. Contact the DAQ project leader in case of questions and/or suggestions.

Figures

Changed:
<
<
>
>
SDX_2nd_floor.png
 
Added:
>
>
ATLAS HLT farms and Servers. CLICK HERE TO DOWNLOAD THE LARGE FILE
 
The overview of ATLAS DAQ system https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/daq_view.pdf
Cosmic data since Sept 13, 2008. 216 M events.

Revision 72009-03-05 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

Line: 46 to 46
 CLICK HERE TO DOWNLOAD THE LARGE FILE
Changed:
<
<
Event Building rate in 2008. 0.8 Mbyte event size. Overnight Run. Dips are because of the automatic cron jobs. CLICK HERE TO DOWNLOAD THE LARGE FILE
>
>
ROS request EB and LVL2 rates in an ATLAS combined cosmic run. The run number is 91900, triggered by RPC and L1Calo. The EB request rate is 100 Hz; the ROSs of the detectors participating in LVL2 algorithms see a higher request rate. The high rate of ID requests is due to the various full-scan tracking algorithms. The rate on TILE is due to an algorithm performing a full scan to find the muon MIP signal. CLICK HERE TO DOWNLOAD THE LARGE FILE
 
Line: 82 to 87
 
META FILEATTACHMENT attachment="streams_d.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Debug stream" date="1236262035" name="streams_d.pdf" path="streams_d.pdf" size="14952" stream="streams_d.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="streams_p.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Physics stream" date="1236262060" name="streams_p.pdf" path="streams_p.pdf" size="17280" stream="streams_p.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="SDX_2nd_floor.png" attr="" comment="SDX 2nd floor, CFS, LFS and XPU nodes as of November 2008" date="1236267320" name="SDX_2nd_floor.png" path="SDX_2nd_floor.png" size="1623680" stream="SDX_2nd_floor.png" user="Main.GokhanUnel" version="1"
Added:
>
>
META FILEATTACHMENT attachment="RequestRateL2EB.png" attr="" comment="ROS request EB and LVL2 Rates in an ATLAS combined cosmic run." date="1236273359" name="RequestRateL2EB.png" path="RequestRateL2EB.png" size="9162" stream="RequestRateL2EB.png" user="Main.GokhanUnel" version="1"

Revision 62009-03-05 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

Line: 29 to 29
 CLICK HERE TO DOWNLOAD THE LARGE FILE
Added:
>
>
The cosmic data in 2008, distributed across streams https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/Streams.pdf
The cosmic data in 2008, details of debug stream https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/streams_d.pdf
The cosmic data in 2008, details of calibration stream https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/streams_c.pdf
The cosmic data in 2008, details of physics stream https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/streams_p.pdf
Event Building rate in 2008. 0.8 Mbyte event size. Overnight Run. Dips are because of the automatic cron jobs. CLICK HERE TO DOWNLOAD THE LARGE FILE
 
Event Building rate in 2008. 0.8 Mbyte event size. Overnight Run. Dips are because of the automatic cron jobs. CLICK HERE TO DOWNLOAD THE LARGE FILE
Line: 59 to 77
 
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image14.png" attr="" comment="cosmic_data_runs" date="1235685853" name="image14.png" path="image14.png" size="33481" stream="image14.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image7.png" attr="" comment="EventBuilding Rate , overnight run with 800Kb events." date="1235744609" name="image7.png" path="image7.png" size="248955" stream="image7.png" user="Main.GokhanUnel" version="1"
Added:
>
>
META FILEATTACHMENT attachment="Streams.pdf" attr="" comment="Cosmic_data_streams 2008" date="1236261822" name="Streams.pdf" path="Streams.pdf" size="15401" stream="Streams.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="streams_c.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Calibration stream" date="1236262006" name="streams_c.pdf" path="streams_c.pdf" size="52016" stream="streams_c.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="streams_d.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Debug stream" date="1236262035" name="streams_d.pdf" path="streams_d.pdf" size="14952" stream="streams_d.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="streams_p.pdf" attr="" comment="Cosmic_data_streams 2008 Details of Physics stream" date="1236262060" name="streams_p.pdf" path="streams_p.pdf" size="17280" stream="streams_p.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="SDX_2nd_floor.png" attr="" comment="SDX 2nd floor, CFS, LFS and XPU nodes as of November 2008" date="1236267320" name="SDX_2nd_floor.png" path="SDX_2nd_floor.png" size="1623680" stream="SDX_2nd_floor.png" user="Main.GokhanUnel" version="1"

Revision 52009-02-27 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

Line: 16 to 16
 
The overview of ATLAS DAQ system
Changed:
<
<
https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/daq_view.pdf
Cosmic data since Sept 13, 2008. 216 M events.
>
>
https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/daq_view.pdf
Cosmic data since Sept 13, 2008. 216 M events.
 400,000 files in 21 inclusive streams.
Changed:
<
<
image13.png
Cosmic data since Sept 13, 2008. 216 M events.
>
>
CLICK HERE TO DOWNLOAD THE LARGE FILE
Cosmic data since Sept 13, 2008. 216 M events collected.
 400,000 files in 21 inclusive streams.
Changed:
<
<
image14.png
>
>
CLICK HERE TO DOWNLOAD THE LARGE FILE
Event Building rate in 2008. 0.8 Mbyte event size. Overnight Run. Dips are because of the automatic cron jobs. CLICK HERE TO DOWNLOAD THE LARGE FILE
 

Links

Line: 57 to 58
 
META FILEATTACHMENT attachment="daq_view.pdf" attr="" comment="3_DAQ_Levels" date="1235685778" name="daq_view.pdf" path="daq_view.pdf" size="41893" stream="daq_view.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image14.png" attr="" comment="cosmic_data_runs" date="1235685853" name="image14.png" path="image14.png" size="33481" stream="image14.png" user="Main.GokhanUnel" version="1"
Added:
>
>
META FILEATTACHMENT attachment="image7.png" attr="" comment="EventBuilding Rate , overnight run with 800Kb events." date="1235744609" name="image7.png" path="image7.png" size="248955" stream="image7.png" user="Main.GokhanUnel" version="1"

Revision 42009-02-26 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

Line: 15 to 15
 
Changed:
<
<
Figure description ...
>
>
The overview of ATLAS DAQ system
 
Changed:
<
<
[the figure]
>
>
https://twiki.cern.ch/twiki/pub/AtlasPublic/ApprovedPlotsDAQ20082010/daq_view.pdf
 
Changed:
<
<
Figure description ...
>
>
Cosmic data since Sept 13, 2008. 216 M events. 400,000 files in 21 inclusive streams.
 
Changed:
<
<
[the figure]
>
>
image13.png
 
Changed:
<
<
Figure description ...
>
>
Cosmic data since Sept 13, 2008. 216 M events. 400,000 files in 21 inclusive streams.
 
Changed:
<
<
[the figure]
>
>
image14.png
 
Line: 49 to 51
 Last reviewed by: Never reviewed

\ No newline at end of file

Added:
>
>

META FILEATTACHMENT attachment="daq_view.pdf" attr="" comment="3_DAQ_Levels" date="1235685778" name="daq_view.pdf" path="daq_view.pdf" size="41893" stream="daq_view.pdf" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image13.png" attr="" comment="cosmic_data_days" date="1235685817" name="image13.png" path="image13.png" size="35402" stream="image13.png" user="Main.GokhanUnel" version="1"
META FILEATTACHMENT attachment="image14.png" attr="" comment="cosmic_data_runs" date="1235685853" name="image14.png" path="image14.png" size="33481" stream="image14.png" user="Main.GokhanUnel" version="1"

Revision 32009-02-26 - GokhanUnel

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

Line: 9 to 9
  The DAQ commissioning and performance plots below are approved to be shown by ATLAS speakers at conferences and similar events.
Changed:
<
<
Please do not add figures on your own. Contact the DAQ project leader in case of questions and/or suggestions.
>
>
Please do not add figures on your own. Contact the DAQ project leader in case of questions and/or suggestions.
 

Figures

Revision 22008-10-08 - StephenHaywood

Line: 1 to 1
 
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

Line: 44 to 44
 
<!--Please add the name of someone who is responsible for this page so that he/she can be contacted if changes are needed.
The creator's name will be added by default, but this can be replaced if appropriate.
Put the name first, without dashes.-->
Changed:
<
<
Responsible: %REVINFO{Main.ProjectLeader}%
>
>
Responsible: DAQ Project Leader
 
<!--Once this page has been reviewed, please add the name and the date e.g. StephenHaywood - 31 Oct 2006 -->
Last reviewed by: Never reviewed

Revision 12008-10-01 - AndreasHoecker

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="ApprovedDetectorPlots"

Approved DAQ Plots

<!--optional-->

Introduction

The DAQ commissioning and performance plots below are approved to be shown by ATLAS speakers at conferences and similar events.

Please do not add figures on your own. Contact the DAQ project leader in case of questions and/or suggestions.

Figures

Figure description ... [the figure]
Figure description ... [the figure]
Figure description ... [the figure]

Links

<!--***********************************************************-->
<!--Do NOT remove the remaining lines, but add requested info as appropriate-->
<!--***********************************************************-->


<!--Please add the name of someone who is responsible for this page so that he/she can be contacted if changes are needed.
The creator's name will be added by default, but this can be replaced if appropriate.
Put the name first, without dashes.-->
Responsible: ProjectLeader
<!--Once this page has been reviewed, please add the name and the date e.g. StephenHaywood - 31 Oct 2006 -->
Last reviewed by: Never reviewed
 