-- HarryRenshall - 06 Mar 2006

Last Updated 05.06.2007: Add 3D database disk and server requirements, and quantitative ATLAS requirements for 3Q.

Updated 25.05.2007: Change the date of CMS CSA07 from July to September and specify the expected data rates.

Updated 6.3.2007: Add plans for CMS 5-week cycles and CSA07.

Updated 27.02.2007: Make precise the plans for the Atlas February/March Data Distribution tests (see https://twiki.cern.ch/twiki/bin/view/Atlas/TierZero20071). Change the Atlas share from 7.7% to 6.2%.

Updated 15.01.2007: Move the ATLAS Tier0 export tests from 15 Jan to a new preliminary date of end Feb.

Updated 28.11.2006: For CMS, request backup of the CSA06 data to tape by the end of the year, and add activity plans for December plus preliminary plans for the first 6 months of 2007. CMS expect to use up to the MoU-pledged resources per site in 2007.

Updated 17.11.2006: For ATLAS, revise the MC requirements for the first half of 2007 downwards (especially in disk).

Updated 2.11.2006: For ATLAS, revise the 4Q2006 MC requirements, add MC plans up to mid-2007 and add the January 2007 Tier-0 and export exercise.

Updated 18 August: Continue ATLAS data export until the end of September, move CMS raw data export to the second half of August, and clarify the resource requirements and the mid-November end date for CMS CSA06.

Updated 12 June: Revise the Atlas June and CMS July plans.

ASGC-Taiwan Site Resource Requirements Timetable for 2006/2007

Tier 1 ASGC-Taiwan: to provide 6.2% of ATLAS resources and 12% of CMS resources. For each month the ATLAS, CMS and Tier 0 requirements are listed.

April 2006
  ATLAS: Provide 83 KSi2K of CPU for MC event generation, plus 4 TB of disk and 10 TB of tape for these data, for this quarter.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Data to tape from Tier 0 at 10 MB/s (may be part of SC4).
  Tier 0: 3rd to 16th, CERN disk-disk at 100 MB/s; 18th to 24th, CERN disk-tape at 75 MB/s.

May
  ATLAS: Provide 83 KSi2K of CPU for MC event generation.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

June
  ATLAS: Provide 83 KSi2K of CPU for MC event generation. From 19 June to 7 July, T0 to T1 tests: take "raw" at 24.6 MB/s to tape (rate to be reported), ESD at 15.4 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 60 MB/s); these data can be deleted after 24 hours. (A rough rate-to-volume estimate for these export streams is sketched after the timetable.)
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. SC3 functionality rerun. Run 2500 jobs/day at the end of June.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

July
  ATLAS: Provide 89 KSi2K of CPU for MC event generation, plus 5 TB of disk and 13 TB of tape for these data, for this quarter. "Raw" reconstruction setting up: stage-in from tape using 1-2 drives. T0 to T1 export: take "raw" at 24.6 MB/s to tape (rate to be reported), ESD at 15.4 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 60 MB/s); these data can be deleted after 24 hours.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Monte Carlo incoming from Tier 2 is sent on to CERN. Test Tier 2 to Tier 1 transfers at 10 MB/s per Tier 2. For the last 2 weeks, take 'raw' data from CERN to tape at 10 MB/s.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

August
  ATLAS: Provide 89 KSi2K of CPU for MC event generation. Two slots of 3 days of "raw" reconstruction: stage-in from tape using 1-2 drives. Analysis tests (20 MB/s incoming) will include scalability tests; ATLAS would prefer this to be the only ATLAS grid activity at the time. T0 to T1 export: take "raw" at 24.6 MB/s to tape (rate to be reported), ESD at 15.4 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 60 MB/s); these data can be deleted after 24 hours.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Monte Carlo incoming from Tier 2 is sent on to CERN. Test Tier 2 to Tier 1 transfers at 10 MB/s per Tier 2. For the last 2 weeks (after the high-rate T0-T1 disk-disk tests), take 'raw' data from CERN to tape at 10 MB/s (data can be deleted after 24 hours).
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

September
  ATLAS: Provide 89 KSi2K of CPU for MC event generation. T0 to T1 export: take "raw" at 24.6 MB/s to tape (rate to be reported), ESD at 15.4 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 60 MB/s); these data can be deleted after 24 hours.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Until mid-September, take 'raw' data from CERN to tape at 10 MB/s (data can be deleted after 24 hours). From mid-September, ramp up to the 1 October start of CSA06 at 750 jobs/day (requiring 180 KSi2K of CPU and a total of 70 TB of disk storage).
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

October
  ATLAS: Reprocessing tests: 20 MB/s incoming.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. CSA06 at 750 jobs/day (requiring 180 KSi2K of CPU and a total of 70 TB of disk storage).
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

November
  ATLAS: Provide 100 KSi2K of CPU, an additional 1.6 TB of permanent disk and 0.9 TB of temporary disk (until reconstruction is run), plus an additional 1.5 TB of permanent tape storage for MC event generation. Analysis tests: 20 MB/s incoming at the same time as reprocessing continues.
  CMS: 20 MB/s aggregate PhEDEx (FTS) traffic to/from temporary disk. Demonstrate 20 MB/s from Tier 0 to tape; CMS would like this to be an SC4 activity. CSA06 at 750 jobs/day (requiring 180 KSi2K of CPU and a total of 70 TB of disk storage) until mid-November.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

December
  ATLAS: Provide 100 KSi2K of CPU, an additional 1.6 TB of permanent disk and 0.9 TB of temporary disk (until reconstruction is run), plus an additional 1.5 TB of permanent tape storage for MC event generation.
  CMS: Back up the October CSA06 disk files of 70 TB to new permanent tape storage. Provide 32 KSi2K of CPU and an additional 2.5 TB of permanent tape storage for MC event generation.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

January 2007
  ATLAS: Provide 133 KSi2K of CPU, and an additional 6.2 TB of permanent disk plus an additional 5.9 TB of permanent tape storage for this quarter, for MC event generation.
  CMS: Provide 96 KSi2K of CPU per month, and an additional 23 TB of permanent tape storage for this quarter, for MC event generation.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

February
  ATLAS: Provide 133 KSi2K of CPU for MC event generation.
  CMS: Provide 96 KSi2K of CPU for MC event generation. On 12 February, begin the first LoadTest07 5-week cycle (see CMS plans).
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

March
  ATLAS: Provide 133 KSi2K of CPU for MC event generation. From 12 March, begin 2-week data distribution tests, ramping up to the full 2008 rate from Tier 0 during the first week: raw from Tier 0 to reach 20 MB/s, ESD to reach 25 MB/s and AOD to reach 20 MB/s. Raw data go to tape and can then be recycled. ESD and AOD go to disk and can be recycled, but during the last two weeks AOD should be distributed to the associated Tier 2 sites, requiring up to 5.2 TB of disk buffer, before being recycled. From 26 March, participate for 7 days in the all-experiment service challenge milestone, taking 65% of the average 2008 rate as above but without AOD redistribution. (A cross-check of the 65% figures is sketched after the timetable.)
  CMS: Provide 96 KSi2K of CPU for MC event generation. On 19 March, begin the second LoadTest07 5-week cycle (see CMS plans). From 26 March, participate in the WLCG multi-VO 65% milestone, importing at 17 MB/s from CERN.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

April
  ATLAS: Provide 267 KSi2K of CPU, and an additional 12.4 TB of permanent disk plus an additional 11.7 TB of permanent tape storage for this quarter, for MC event generation. Provide a permanent 300 GB of disk space and 3 DB servers for the ATLAS conditions and event tag databases.
  CMS: Provide 115 KSi2K of CPU per month and an additional 9 TB of permanent tape storage for MC event generation. Provide a permanent 300 GB of disk space for the CMS conditions databases.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

May
  ATLAS: Provide 267 KSi2K of CPU for MC event generation. Repeat the February/March data distribution tests.
  CMS: Provide 154 KSi2K of CPU and an additional 12 TB of permanent tape storage for MC event generation.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

June
  ATLAS: Provide 267 KSi2K of CPU for MC event generation.
  CMS: Provide 192 KSi2K of CPU and an additional 15 TB of permanent tape storage for MC event generation.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

July
  ATLAS: Start the full-scale (2008 running) dress rehearsal.
  CMS: Provide 192 KSi2K of CPU and an additional 15 TB of permanent tape storage for MC event generation.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

August
  ATLAS: Continue the ramp-up of the full-scale dress rehearsal.
  CMS: Provide 192 KSi2K of CPU and an additional 15 TB of permanent tape storage for MC event generation.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

September
  ATLAS: Reach the rates of the full-scale dress rehearsal. Take raw data from CERN (raw goes to tape) at 19.8 MB/s, ESD at 24.8 MB/s and AOD at 20 MB/s. Send and receive data from Tier-1 and Tier-2 sites according to the Megatable spreadsheet values (see link on the first page of this Twiki).
  CMS: Starting 10 September, perform a 30-day run of CSA07 at twice the rate of CSA06, adding Tier-1 to Tier-1 and Tier-1 to Tier-2 transfers. Import prompt reco events from Tier-0 at 26 MB/s, to go to tape and be deleted when the site requires. Run 2500 jobs/day including re-reconstruction, and store these data on disk until they have been exported to other Tier-1 sites at 24 MB/s. Import similar data from other Tier-1 sites at 40 MB/s. Export samples to Tier-2 sites at 60 MB/s and import Monte Carlo from Tier-2 sites into Tape1Disk0 class storage at 30 MB/s. (The aggregate bandwidth implied by these rates is summed in a sketch after the timetable.)
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

October
  ATLAS: Stable running of the full-scale dress rehearsal.
  CMS: Continue and finish CSA07.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

November
  ATLAS: Engineering run. Provide a permanent 1000 GB of disk space, and add DB servers if needed, for the ATLAS conditions and event tag databases.
  CMS: none stated.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.

December
  ATLAS: Engineering run.
  CMS: none stated.
  Tier 0: CERN background disk-disk top-up to 100 MB/s.
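
To gauge the tape and disk buffers behind the sustained ATLAS T0-to-T1 export rates quoted for June through September 2006 (24.6 MB/s raw, 15.4 MB/s ESD, 20 MB/s AOD), the following Python sketch converts each stream into a daily volume. It is a back-of-envelope illustration only, assuming decimal units (1 TB = 10^6 MB) and fully sustained rates; the helper name daily_volume_tb is ours, not part of any WLCG tooling.

    SECONDS_PER_DAY = 86_400

    def daily_volume_tb(rate_mb_s: float) -> float:
        """Terabytes accumulated per day at a sustained rate in MB/s (1 TB = 1e6 MB assumed)."""
        return rate_mb_s * SECONDS_PER_DAY / 1e6

    # Sustained ATLAS T0->T1 export rates quoted in the timetable (MB/s).
    rates = {"raw -> tape": 24.6, "ESD -> disk": 15.4, "AOD -> disk": 20.0}

    for stream, rate in rates.items():
        print(f"{stream:12s} {rate:5.1f} MB/s  ~{daily_volume_tb(rate):.1f} TB/day")

    total = sum(rates.values())  # 60 MB/s, matching the quoted aggregate
    print(f"{'total':12s} {total:5.1f} MB/s  ~{daily_volume_tb(total):.1f} TB/day")

At the quoted 60 MB/s aggregate this is roughly 5.2 TB per day, of which about 2.1 TB per day goes to tape, which puts the 24-hour deletion window quoted above into context.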
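
The WLCG multi-VO 65% milestone in the March 2007 entries can be cross-checked against the full 2008 rates quoted elsewhere in the timetable. The minimal sketch below does that arithmetic; note that treating the CMS import figure of 17 MB/s as 65% of the 26 MB/s prompt-reco rate quoted for CSA07 is our inference, not a derivation the page states explicitly.

    # Full 2008 ATLAS rates from Tier 0, as quoted in the March 2007 entry (MB/s).
    atlas_2008 = {"raw": 20.0, "ESD": 25.0, "AOD": 20.0}
    # CMS prompt-reco import rate quoted for CSA07 in the September 2007 entry (MB/s).
    cms_prompt_reco_2008 = 26.0

    FRACTION = 0.65  # the multi-VO milestone target

    atlas_65 = FRACTION * sum(atlas_2008.values())  # 0.65 * 65 MB/s = 42.25 MB/s
    cms_65 = FRACTION * cms_prompt_reco_2008        # 0.65 * 26 MB/s = 16.9 MB/s (~17, as quoted)

    print(f"ATLAS at 65% of the 2008 rate: {atlas_65:.2f} MB/s")
    print(f"CMS at 65% of the 2008 rate:   {cms_65:.1f} MB/s")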
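
The September 2007 CSA07 entry for CMS quotes five simultaneous transfer streams; summing them shows the aggregate bandwidth the site would have to sustain in each direction. Again a rough sketch, assuming decimal units and fully sustained rates:

    SECONDS_PER_DAY = 86_400

    # CSA07 transfer streams for this site, from the September 2007 CMS entry (MB/s).
    imports = {"prompt reco from Tier-0": 26,
               "re-reco data from other Tier-1": 40,
               "Monte Carlo from Tier-2": 30}
    exports = {"re-reco data to other Tier-1": 24,
               "samples to Tier-2": 60}

    for direction, streams in (("inbound", imports), ("outbound", exports)):
        total = sum(streams.values())
        print(f"{direction}: {total} MB/s (~{total * SECONDS_PER_DAY / 1e6:.1f} TB/day)")

This comes to 96 MB/s inbound and 84 MB/s outbound, about 8.3 TB and 7.3 TB per day respectively.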