-- HarryRenshall - 06 Mar 2006

ATLAS Tier 1 Resource Requirements Timetable for 2006/2007/2008

Last Updated 1.11.2007: Start to add 2008 plans and clarify site offers compared to experiment requirements.

Updated 21.08.2007: Add placeholders for the M5 and M6 Cosmics runs for end October and end December.

Updated 26.07.2007: Add details on the M4 Cosmics run scheduled for 23 August to 2 September.

Updated 25.06.2007: Split off 2006 plans into a separate linked page and remove LHC engineering run.

Updated 14.05.2007: Drop RAL from ESD pairing with ASGC and TRIUMF mirroring to SARA.

Updated 23.02.2007: Add plans for data distribution tests in Feb/March and May, full scale dress rehearsal from July to October and MC event generation for the rest of 2007.

Updated 15.01.2007: Move the Tier0 export tests from 15 Jan to new preliminary date of end Feb.

Updated 17.11.2006: Revise (downwards esp. disk space) MC requirements for first half of 2007.

Updated 31.10.2006: add Monte-Carlo plans up to mid-2007 and January 2007 Tier-0 and export exercise.

Updated 4.8.2006 to add repeat of June T0 to T1 export for last half of September.

Updated 27.07.2006 to extend Atlas T0-T1 data export through August.

Updated 29 June to extend Atlas T0-T1 data export to the end of July.

Updated 12 June to add request for sites to report data rates to tape.

Updated 2 June to add 3 week rampup plans leading to the 19 June tests.

Updated 8 May to reflect date change of T0-T1 exercise from last 3 weeks of June to 19 June to 7 July. This has not yet been reflected to monthly site plans.

Following the Mumbai meeting in January 2006 Atlas Tier-1 shares were defined as follows and used for the 2006 site resource estimates:

ASGC to provide 7.7% of resources
BNL to provide 24% of resources
CNAF to provide 7.5% of resources
FZK to provide 10.5% of resources
IN2P3 to provide 13.5% of resources
NDGF to provide 5.5% of resources
NIKHEF to provide 13% of resources
PIC to provide 5.5% of resources
RAL to provide 7.5% of resources
TRIUMF to provide 5.3% of resources

These were refined after the December 2006 GDB meeting, together with new site pledges, for the 2007/8 site resource estimates; they add up to 100% of the requirements. These percentages are used to calculate the per-site resource requirement spreadsheets:

ASGC to provide 6.2% of resources
BNL to provide 23% of resources
CNAF to provide 10.0% of resources
FZK to provide 10.0% of resources
IN2P3 to provide 13.0% of resources
NDGF to provide 4.5% of resources
NIKHEF to provide 12.5% of resources
PIC to provide 4.5% of resources
RAL to provide 12% of resources
TRIUMF to provide 4.3% of resources

For 2008/9 the Tier-1 site offers in cpu+disk (we take the average) exceed the requirements by about 11%:

ASGC offers 9% of cpu+disk resources
BNL offers 28% of cpu+disk resources
CNAF offers 4.5% of cpu+disk resources
FZK offers 10.0% of cpu+disk resources
IN2P3 offers 11.0% of cpu+disk resources
NDGF offers 5.5% of cpu+disk resources
NIKHEF offers 17% of cpu+disk resources
PIC offers 5.0% of cpu+disk resources
RAL offers 16.5% of cpu+disk resources
TRIUMF offers 5% of cpu+disk resources
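The percentages above can be cross-checked quickly. A minimal sketch (figures copied from the two lists above) confirming that the 2007/8 shares sum to 100% of the requirements and that the 2008/9 cpu+disk offers exceed 100% by about 11%:

```python
# 2007/8 shares used for the per-site requirement spreadsheets.
shares_2007 = {"ASGC": 6.2, "BNL": 23.0, "CNAF": 10.0, "FZK": 10.0,
               "IN2P3": 13.0, "NDGF": 4.5, "NIKHEF": 12.5, "PIC": 4.5,
               "RAL": 12.0, "TRIUMF": 4.3}
# 2008/9 cpu+disk offers (the average of cpu and disk, as stated above).
offers_2008 = {"ASGC": 9.0, "BNL": 28.0, "CNAF": 4.5, "FZK": 10.0,
               "IN2P3": 11.0, "NDGF": 5.5, "NIKHEF": 17.0, "PIC": 5.0,
               "RAL": 16.5, "TRIUMF": 5.0}

print(round(sum(shares_2007.values()), 1))   # 100.0 -- shares meet requirements exactly
print(round(sum(offers_2008.values()), 1))   # 111.5 -- offers exceed requirements by ~11%
```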

Relationships between ATLAS Tier1 Sites

Given the different amounts of resources that each Tier-1 site will provide to ATLAS, the following 'pairing' has been defined for the mirroring of (primarily) ESD data produced by the reprocessing at the different sites, showing the percentages of the total ESD to be exchanged between the mirrored groups.

FZK (10%) + CCIN2P3 (13%)  <->  BNL (23%)
CNAF (10%)  <->  RAL (10%)
NIKHEF/SARA (12.5%)  <->  TRIUMF (5.1%) + ASGC (7.4%)
NDGF (4.5%)  <->  PIC (4.5%)

BNL will in addition take a complete copy of the ESD.
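The pairing is balanced by construction: each group holds the same share of the total ESD as its mirror partner. A short sanity-check sketch, with the percentages taken from the list above:

```python
# Each mirrored group in the ESD pairing should hold the same share of
# the total ESD as its partner group, so the exchanged volumes match.
pairs = [
    ({"FZK": 10.0, "CCIN2P3": 13.0}, {"BNL": 23.0}),
    ({"CNAF": 10.0},                 {"RAL": 10.0}),
    ({"NIKHEF/SARA": 12.5},          {"TRIUMF": 5.1, "ASGC": 7.4}),
    ({"NDGF": 4.5},                  {"PIC": 4.5}),
]
for left, right in pairs:
    # round() guards against floating-point residue in the sums
    assert round(sum(left.values()), 3) == round(sum(right.values()), 3)
print("all mirrored groups balanced")
```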

Distribution of activities over 2006

AtlasTimeTable2006

Distribution of activities over 2007/8

ATLAS Distribution of activities over 2007

The assumptions behind the 2007/8 data distribution tests are a 200 Hz trigger rate, a raw event size of 1.6 MB, an ESD event size of 1 MB and an AOD event size of 0.1 MB. ATLAS wants to demonstrate that sites can run stably at the resulting instantaneous rates, but also assumes an accelerator efficiency of 50000 seconds per 24-hour day, so the success metric of these tests is defined as reaching a daily average of 5/8 of the instantaneous rates.

Tier-1 sites share raw data and first-pass ESD data from the Tier 0 according to their pledges, but each site receives another share of ESD data according to the relationships above, and in addition BNL will get a complete copy of the ESD. The model is that raw data is stored on tape at the Tier 1 and the ESD on permanent disk until replaced by later reconstruction. In addition each Tier 1 will receive a complete copy of the Tier 0 AOD, to be redistributed to its associated Tier 2 sites. For the tests, raw and ESD data can be recycled as the sites wish, but sites should have a disk buffer of 3 days' worth of AOD data and use it to redistribute AODs to their Tier 2 sites before deleting the AOD.

The target data rates out of Tier 0 will hence be raw at 320 MB/s (average of 185 MB/s), ESD at 508 MB/s (average of 294 MB/s) and AOD at 200 MB/s (average of 116 MB/s). The ATLAS planning Twiki for this activity is at https://twiki.cern.ch/twiki/bin/view/Atlas/TierZero20071
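The target rates follow directly from these assumptions. A minimal sketch reproducing them, where the ESD multiplier (2.54) is inferred from the distribution model above: one pledge copy (100%) plus one mirror copy (100%) plus the extra 54% needed to complete BNL's full copy (100% minus its 23% pledge and 23% mirror shares):

```python
# Sketch of the Tier-0 export rate targets, derived from the stated
# assumptions: 200 Hz trigger, event sizes in MB, 50000 live s/day.
TRIGGER_HZ = 200
RAW_MB, ESD_MB, AOD_MB = 1.6, 1.0, 0.1
LIVE_S_PER_DAY = 50_000          # assumed accelerator efficiency
DAY_S = 86_400

raw = TRIGGER_HZ * RAW_MB                          # one raw copy: 320 MB/s
# ESD: pledge copy + mirror copy + extra 54% completing BNL's full copy
esd = TRIGGER_HZ * ESD_MB * (1.0 + 1.0 + 0.54)     # 508 MB/s
aod = TRIGGER_HZ * AOD_MB * 10                     # full AOD copy to each of 10 Tier-1s

avg = LIVE_S_PER_DAY / DAY_S                       # daily-average factor (~0.58)
print(f"raw {raw:.0f} MB/s (avg {raw * avg:.0f} MB/s)")    # 320 (avg 185)
print(f"ESD {esd:.0f} MB/s (avg {esd * avg:.0f} MB/s)")    # 508 (avg 294)
print(f"AOD {aod:.0f} MB/s (avg {aod * avg:.0f} MB/s)")    # 200 (avg 116)

# 3-day AOD buffer per site at the full instantaneous per-copy rate,
# matching the 5.2 TB figure quoted in the March entry below.
buffer_tb = TRIGGER_HZ * AOD_MB * 3 * DAY_S / 1e6
print(f"AOD buffer per site: {buffer_tb:.1f} TB")          # 5.2 TB
```

Note that the quoted daily averages correspond to the 50000/86400 efficiency factor rather than exactly 5/8.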

Month Atlas Requirements
January 2007 Continuous distributed production of 15M MC events/month requiring 4328 KSi2K of which 40% to be at T1 sites. An additional 29 TB of permanent disk plus an additional 43 TB of tape storage are required over the T1 sites.
February Continuous distributed production of 15M MC events/month requiring 4328 KSi2K of which 40% to be at T1 sites. An additional 29 TB of permanent disk plus an additional 43 TB of tape storage are required over the T1 sites. From 26 Feb begin 4 week data distribution tests. Ramp up to 2008 rates from Tier 0 to IN2P3, BNL and FZK in first week.
March Continuous distributed production of 15M MC events/month requiring 4328 KSi2K of which 40% to be at T1 sites. An additional 29 TB of permanent disk plus an additional 43 TB of tape storage are required over the T1 sites. From 5 March Tier 0 export ramp up to RAL, SARA and CNAF in second week and PIC, TRIUMF and ASGC in week 3 (NDGF to be negotiated). Raw from Tier 0 to reach 320 MB/s, ESD to reach 508 MB/s and AOD to reach 200 MB/s. Raw data to go to tape then can be recycled. ESD and AOD to go to disk and can be recycled but during the two weeks from 12 March AOD should be distributed to associated Tier 2 before being recycled. Sites should have available a disk buffer of 3 days worth of AOD (i.e. 5.2 TB for the full rate at each site) to ensure this.
April Continuous distributed production of 30M MC events/month requiring 8656 KSi2K of which 40% to be at T1 sites. An additional 59 TB of permanent disk plus an additional 85 TB of tape storage are required over the T1 sites.
May Continuous distributed production of 30M MC events/month requiring 8656 KSi2K of which 40% to be at T1 sites. An additional 59 TB of permanent disk plus an additional 85 TB of tape storage are required over the T1 sites. Repeat the Feb/March data distribution exercise.
June Continuous distributed production of 30M MC events/month requiring 8656 KSi2K of which 40% to be at T1 sites. An additional 59 TB of permanent disk plus an additional 85 TB of tape storage are required over the T1 sites.
July Continuous distributed production of 45M MC events/month requiring 12984 KSi2K of which 40% to be at T1 sites. An additional 88.5 TB of permanent disk plus an additional 127.5 TB of tape storage are required over the T1 sites. Start preparations and testing for full scale (2008 running) dress rehearsal in October.
August Continuous distributed production of 45M MC events/month requiring 12984 KSi2K of which 40% to be at T1 sites. An additional 88.5 TB of permanent disk plus an additional 127.5 TB of tape storage are required over the T1 sites. Continue preparation and testing for full scale dress rehearsal. From 23 August to 2 September perform the M4 Cosmic ray detector run at a raw data rate of 140 MB/s for 50% of this time. Peak rates of raw data at 250 MB/s out of Tier0, ESD at 20 MB/s out of Tier0 and AOD at 40 MB/s out of Tier0 to be distributed to Tier 1 sites. Raw data to go to tape for recall in September reprocessing at Tier 1. ESD+AOD to go to permanent disk and AOD to be redistributed to requesting Tier2. All data to be kept until M6 Cosmic run scheduled for end Dec 2007. See PlanningM4
September Continuous distributed production of 45M MC events/month requiring 12984 KSi2K of which 40% to be at T1 sites. An additional 88.5 TB of permanent disk plus an additional 127.5 TB of tape storage are required over the T1 sites. Finalise preparations and testing of full scale dress rehearsal.
October Continuous distributed production of 60M MC events/month requiring 17312 KSi2K of which 40% to be at T1 sites. An additional 118 TB of permanent disk plus an additional 170 TB of tape storage are required over the T1 sites. Stable running of full scale dress rehearsal. End of month perform M5 cosmic ray detector run - scheduled from 22 October till 5 November.
November Continuous distributed production of 60M MC events/month requiring 17312 KSi2K of which 40% to be at T1 sites. An additional 118 TB of permanent disk plus an additional 170 TB of tape storage are required over the T1 sites.
December Continuous distributed production of 60M MC events/month requiring 17312 KSi2K of which 40% to be at T1 sites. An additional 118 TB of permanent disk plus an additional 170 TB of tape storage are required over the T1 sites. End of month perform M6 cosmic ray detector run.
January 2008  
February Participate in the CCRC'08 functional tests (2 weeks planned).
March  
April For 2008 running require 18120 KSi2K cpu, 10730 TB disk and 8070 TB tape over the 10 Tier-1 sites.
May Participate in the CCRC'08 full nominal p-p rates running (4 weeks planned)
June  
July Start of Pilot Physics Run
Topic revision: r40 - 2007-11-01 - HarryRenshall