-- HarryRenshall - 06 Mar 2006

Last Updated 04.06.2007: Extend LHCb requirements to the end of 2007.

Updated 31.05.2007: Add in 3D database disk and server requirements and LHCb and ATLAS quantitative requirements for 3Q.

Updated 25.05.2007: Change date of CMS CSA07 from July to September and specify the expected data rates.

Updated 06.03.2007: Add plans for CMS 5-week cycles and CSA07, and indicators of ALICE p-p and LHCb dress rehearsals.

Updated 27.02.2007: Make precise the plans for the ATLAS February/March data distribution tests (see https://twiki.cern.ch/twiki/bin/view/Atlas/TierZero20071). Change the ATLAS share from 7.5% to 10%.

Updated 15.01.2007: Move the ATLAS Tier 0 export tests from 15 Jan to a new preliminary date of end Feb.

Updated 28.11.2006: For CMS, request backup of the CSA06 data to tape by end of year, and add activity plans for December and preliminary plans for the first 6 months of 2007. CMS expect to use up to the MoU pledged resources per site in 2007.

Updated 17.11.2006: For ATLAS revise (downwards, especially in disk) MC requirements for first half of 2007.

Updated 2.11.2006: For ATLAS revise 4Q2006 MC requirements, add MC plans up to mid-2007 and add January 2007 Tier-0 and export exercise.

Updated 27.10.2006: For ALICE, continue the data export tests till end 2006 and add resource requirements for all of 2007.

Updated 23.10.2006: Add/change LHCb requirements for Oct 2006 to April 2007 from the spreadsheet of 26 Sep 2006.

Updated 01.09.2006: Add LHCb requirements for Oct/Nov/Dec from the July spreadsheet.

Updated 18.08.2006: Extend ALICE data export to August, continue ATLAS data export till end September, move CMS raw data export to the second half of August, and clarify the resource requirements and the mid-November end date for CMS CSA06.

Updated 10.07.2006: Replace the LHCb spreadsheet with the version of 7 July 2006.

Updated 12.06.2006: Update ATLAS June plans and CMS and ALICE July plans.

Updated 22.05.2006: Replace the LHCb spreadsheet with the version of 11 May 2006.

Updated 08.05.2006: Add a link to the LHCb detailed planning spreadsheet to the header of the site LHCb Requirements column.

CNAF-Bologna Site Resource Requirements Timetable for 2006/2007

Tier 1 CNAF-Bologna is to provide 9% of ALICE resources, 10% of ATLAS resources, 16% of CMS resources and 11% of LHCb resources.

Each month below lists the ALICE, ATLAS, CMS and LHCb requirements (for LHCb see LHCb070529.xls), followed by the Tier 0 requirements.
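Where not stated otherwise, the per-site figures below are pro-rata shares of experiment-wide rates and volumes. A minimal sketch of that scaling, for illustration only (the 320 MB/s ATLAS-wide total is inferred here from the 32 MB/s CNAF figure in the March 2007 row; it is not stated on this page):

<verbatim>
# CNAF shares of each experiment's Tier-1 resources (from the header above).
SHARES = {"ALICE": 0.09, "ATLAS": 0.10, "CMS": 0.16, "LHCb": 0.11}

def site_rate(total_rate_mb_s: float, experiment: str) -> float:
    """Scale an experiment-wide rate (MB/s) down to the CNAF share."""
    return total_rate_mb_s * SHARES[experiment]

# Example: an ATLAS-wide Tier-0 raw export of 320 MB/s gives CNAF's 10%
# share as the 32 MB/s raw rate quoted in the March 2007 ATLAS entry.
print(site_rate(320.0, "ATLAS"))  # -> 32.0
</verbatim>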
March 2006
  (no requirements this month)
April
  ALICE: Run Monte Carlo jobs on 400 KSi2K of cpu with an average rate of 12 MB/sec, sending these data back to CERN. Network/reconstruction stress test: run 4000 jobs/day on 400 KSi2K of cpu with a 12 MB/sec rate from Tier 0.
  ATLAS: Provide 77 KSi2K of cpu for MC event generation and 4 TB of disk and 9 TB of tape for these data for this quarter.
  CMS: 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk at each Tier 1. Data to tape from Tier 0 at 25 MB/sec (may be part of SC4).
  LHCb: Provide 130 KSi2K of cpu for MC event generation.
  Tier 0: 3rd to 16th: CERN disk-disk at 200 MB/sec. 18th to 24th: CERN disk-tape at 75 MB/sec.
May
  ALICE: (none)
  ATLAS: Provide 77 KSi2K of cpu for MC event generation.
  CMS: 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk at each Tier 1.
  LHCb: Provide 130 KSi2K of cpu for MC event generation.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
June
  ALICE: (none)
  ATLAS: Provide 77 KSi2K of cpu for MC event generation. From 19 June to 7 July T0 to T1 tests: take 24.0 MB/sec of "raw" data to tape (rate to be reported), ESD at 15.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 59 MB/s). These data can be deleted after 24 hours.
  CMS: 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk at each Tier 1. SC3 functionality rerun. Run 3250 jobs/day at end June.
  LHCb: Get 2.5 MB/sec of "raw" data from CERN and store 5 TB on tape. Reconstruct and strip these data on 21.5 KSi2K of cpu. Provide 108.5 KSi2K of cpu for MC event generation with 4 TB to tape.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
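As a rough consistency check on such figures, a sustained transfer rate maps to an accumulated volume as rate times time. A small sketch, assuming decimal units (1 TB = 10^6 MB):

<verbatim>
# How many days does a sustained rate need to accumulate a given volume?
def days_to_fill(volume_tb: float, rate_mb_s: float) -> float:
    return volume_tb * 1e6 / rate_mb_s / 86_400  # 86400 seconds per day

# LHCb in June: 2.5 MB/s of "raw" data reaches the quoted 5 TB on tape
# after roughly 23 days, i.e. most of the month.
print(f"{days_to_fill(5.0, 2.5):.1f} days")  # -> 23.1 days
</verbatim>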
July
  ALICE: From 24 July to 6 August take 60 MB/s of raw and ESD data (20% of the total) from CERN. These data can be deleted immediately. Tier 1 to Tier 1 and Tier 2 tests. Repeat the April network/reconstruction stress test.
  ATLAS: Provide 83 KSi2K of cpu for MC event generation and 5 TB of disk and 12 TB of tape for these data for this quarter. "Raw" reconstruction setting up: stage-in from tape using 1-2 drives.
  CMS: 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk at each Tier 1. Monte Carlo incoming from Tier 2 sent on to CERN. Test Tier 2 to Tier 1 transfers at 10 MB/sec per Tier 2. Last 2 weeks take 'raw' data from CERN to tape at 25 MB/s.
  LHCb: Get 2.5 MB/sec of "raw" data from CERN and store 5 TB on tape. Reconstruct and strip these data on 21.5 KSi2K of cpu. Provide 108.5 KSi2K of cpu for MC event generation with 4 TB to tape.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
August
  ALICE: Continue the July export tests until the 60 MB/s rate has been reached for a sufficient period.
  ATLAS: Provide 83 KSi2K of cpu for MC event generation. Two slots of 3 days of "raw" reconstruction: stage-in from tape using 1-2 drives. Analysis tests (20 MB/sec incoming) will include scalability tests; ATLAS prefers these to be the only ATLAS grid activity. Take 24.0 MB/sec of "raw" data to tape (rate to be reported), ESD at 15.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 59 MB/s). These data can be deleted after 24 hours.
  CMS: 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk at each Tier 1. Monte Carlo incoming from Tier 2 sent on to CERN. Test Tier 2 to Tier 1 transfers at 10 MB/sec per Tier 2. Last 2 weeks (after the high-rate T0-T1 disk-disk tests) take 'raw' data from CERN to tape at 25 MB/s (data can be deleted after 24 hours).
  LHCb: Analysis of reconstructed data. Provide 130 KSi2K of cpu for MC event generation with 4 TB to tape.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
September
  ALICE: Scheduled analysis tests.
  ATLAS: Provide 83 KSi2K of cpu for MC event generation. Take 24.0 MB/sec of "raw" data to tape (rate to be reported), ESD at 15.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 59 MB/s).
  CMS: 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk at each Tier 1. Till mid-September take 'raw' data from CERN to tape at 25 MB/s (data can be deleted after 24 hours). From mid-September ramp up to the 1 October start of CSA06 at 1750 jobs/day (requiring 420 KSi2K of cpu and a total of 160 TB of disk storage).
  LHCb: Provide 130 KSi2K of cpu for analysis of reconstructed data and MC event generation with an additional 4 TB to tape.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
October
  ALICE: Continue the data export tests until the 60 MB/s rate has been reached for a sufficient period. Scheduled analysis tests.
  ATLAS: Reprocessing tests: 20 MB/sec incoming.
  CMS: 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk at each Tier 1. Continue CSA06 at 1750 jobs/day (requiring 420 KSi2K of cpu and a total of 160 TB of disk storage).
  LHCb: Provide 133 KSi2K of cpu for reconstruction, analysis and MC event generation with an additional 1.4 TB of tape and 0.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
November
  ALICE: Continue the data export tests until the 60 MB/s rate has been reached for a sufficient period. Scheduled analysis tests.
  ATLAS: Provide 97 KSi2K of cpu and an additional 2.0 TB of permanent disk and 1.6 TB of temporary (till reconstruction is run) disk, plus an additional 2.6 TB of permanent tape storage, for MC event generation. Analysis tests: 20 MB/sec incoming at the same time as reprocessing continues.
  CMS: 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk at each Tier 1. Continue CSA06 at 1750 jobs/day (requiring 420 KSi2K of cpu and a total of 160 TB of disk storage) till mid-November. Demonstrate 50 MB/sec from Tier 0 to tape; CMS would like this to be an SC4 activity.
  LHCb: Provide 136 KSi2K of cpu for analysis of reconstructed data and MC event generation with an additional 2.7 TB of tape and 0.9 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
December
  ALICE: Continue the data export tests until the 60 MB/s rate has been reached for a sufficient period. Scheduled analysis tests.
  ATLAS: Provide 97 KSi2K of cpu and an additional 2.0 TB of permanent disk and 1.6 TB of temporary (till reconstruction is run) disk, plus an additional 2.6 TB of permanent tape storage, for MC event generation.
  CMS: Back up the 160 TB of October CSA06 disk files to new permanent tape storage. Provide 42 KSi2K of cpu and an additional 3.3 TB of permanent tape storage for MC event generation.
  LHCb: Provide 218 KSi2K of cpu for reconstruction, analysis and MC event generation with an additional 3.1 TB of tape and 10.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
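For scale: if the CSA06 backup ran at the 50 MB/sec Tier 0 to tape rate demonstrated in November (an assumption; this page does not give the backup rate), copying 160 TB would occupy most of the month:

<verbatim>
# Duration of the 160 TB CSA06 backup at an assumed 50 MB/s tape rate.
volume_tb, tape_rate_mb_s = 160.0, 50.0
days = volume_tb * 1e6 / tape_rate_mb_s / 86_400
print(f"{days:.0f} days")  # -> 37 days
</verbatim>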
January 2007
  ALICE: During the first quarter build up to a data challenge of 75% of the last-quarter (data taking) capacity, using new site capacity as and when available. Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. The export rate from CERN to CNAF will be 38 MB/s.
  ATLAS: Provide 130 KSi2K of cpu each month and an additional 7.8 TB of permanent disk plus an additional 10.1 TB of permanent tape storage for this quarter for MC event generation.
  CMS: Provide 125 KSi2K of cpu per month and an additional 29 TB of permanent tape storage for this quarter for MC event generation.
  LHCb: Provide 219 KSi2K of cpu for reconstruction, analysis and MC event generation with an additional 3.1 TB of tape and 12.1 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
February
  ALICE: During the first quarter build up to a data challenge of 75% of the last-quarter (data taking) capacity, using new site capacity as and when available. Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. The export rate from CERN to CNAF will be 38 MB/s.
  ATLAS: Provide 130 KSi2K of cpu for MC event generation.
  CMS: Provide 125 KSi2K of cpu for MC event generation. On 12 Feb begin the first LoadTest07 5-week cycle (see CMS plans).
  LHCb: Provide 219 KSi2K of cpu for reconstruction, analysis and MC event generation with an additional 3.1 TB of tape and 12.1 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
March
  ALICE: During the first quarter build up to a data challenge of 75% of the last-quarter (data taking) capacity, using new site capacity as and when available. Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. From 26 March participate in the WLCG multi-VO 65% milestone, importing at 6.5 MB/s from CERN.
  ATLAS: Provide 130 KSi2K of cpu for MC event generation. From 5 March begin 3 weeks of data distribution tests. Ramp up to the full 2008 rate from Tier 0 during the first week: raw from Tier 0 to reach 32 MB/s, ESD to reach 40 MB/s and AOD to reach 20 MB/s. Raw data go to tape and can then be recycled. ESD and AOD go to disk and can be recycled, but during the last two weeks AOD should be distributed to the associated Tier 2s, requiring up to 5.2 TB of disk buffer, before being recycled. From 26 March participate in the all-experiment service challenge milestone, taking 65% of the average 2008 rate as above but without AOD redistribution, for the next 7 days.
  CMS: Provide 125 KSi2K of cpu for MC event generation. On 19 March begin the second LoadTest07 5-week cycle (see CMS plans). From 26 March participate in the WLCG multi-VO 65% milestone, importing at 24 MB/s from CERN.
  LHCb: Provide 211 KSi2K of cpu for reconstruction, analysis and MC event generation with an additional 1.4 TB of tape and 10.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
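The milestone import rates are 65% of the nominal 2008 rates quoted elsewhere on this page. The sketch below reproduces the ALICE and CMS figures in this row; the ATLAS value is derived the same way but is not quoted explicitly here, so treat it as an inference:

<verbatim>
# Nominal 2008 import rates into CNAF, as quoted on this page (MB/s).
NOMINAL_2008_MB_S = {
    "ALICE": 10,            # raw p-p import (April 2007 row)
    "ATLAS": 32 + 40 + 20,  # raw + ESD + AOD (this row)
    "CMS": 37,              # prompt reco import (September 2007 row)
}

for experiment, rate in NOMINAL_2008_MB_S.items():
    print(f"{experiment}: {0.65 * rate:.1f} MB/s")
# ALICE: 6.5 MB/s matches the milestone figure above; CMS: 24.1 MB/s
# matches the 24 MB/s quoted (rounded); ATLAS: 59.8 MB/s follows from
# the same 65% scaling.
</verbatim>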
April
  ALICE: Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. Starting in April and continuing throughout the year, build up to a full-scale dress rehearsal of p-p running, with raw data (at 10 MB/s) and ESD (an additional 10% of the raw) imported from CERN, reconstruction at Tier-1 and user analysis and simulation at Tier-2. The data are to be stored in Tape1Disk1-class storage, with ALICE managing the disk space.
  ATLAS: Provide 260 KSi2K of cpu each month and an additional 15.6 TB of permanent disk plus an additional 20.3 TB of permanent tape storage for this quarter for MC event generation. Provide a permanent 300 GB of disk space and 3 DB servers for the ATLAS conditions and event tag databases.
  CMS: Provide 125 KSi2K of cpu and an additional 12 TB of permanent tape storage for MC event generation. Provide a permanent 300 GB of disk space and 2 squid server nodes for the CMS conditions databases.
  LHCb: Provide 211 KSi2K of cpu for reconstruction, analysis and MC event generation with an additional 1.4 TB of tape and 10.3 TB of disk. Provide a permanent 100 GB of disk space and 2 DB servers for the LHCb conditions and LFC replica databases.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
May
  ALICE: Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. The export rate from CERN to CNAF will be 38 MB/s.
  ATLAS: Provide 260 KSi2K of cpu for MC event generation. Repeat the February/March data distribution tests.
  CMS: Provide 205 KSi2K of cpu and an additional 16 TB of permanent tape storage for MC event generation.
  LHCb: Provide 25 KSi2K of cpu for stripping, reconstruction and analysis with an additional 0.1 TB of tape and 5.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
June
  ALICE: Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. The export rate from CERN to CNAF will be 38 MB/s.
  ATLAS: Provide 260 KSi2K of cpu for MC event generation.
  CMS: Provide 256 KSi2K of cpu and an additional 20 TB of permanent tape storage for MC event generation.
  LHCb: Start import of simulated raw data from CERN at 6 MB/s. Provide 25 KSi2K of cpu for stripping, reconstruction and analysis with an additional 0.1 TB of tape and 5.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
July
  ALICE: Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. The export rate from CERN to CNAF will be 38 MB/s.
  ATLAS: Start the full-scale (2008 running) dress rehearsal.
  CMS: Provide 256 KSi2K of cpu and an additional 20 TB of permanent tape storage for MC event generation.
  LHCb: Continue import of simulated raw data from CERN at 6 MB/s. Provide 33 KSi2K of cpu for stripping, reconstruction and analysis with an additional 0.1 TB of tape and 0.3 TB of disk plus 3.1 TB of temporary disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
August
  ALICE: Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. The export rate from CERN to CNAF will be 38 MB/s.
  ATLAS: Continue ramp-up of the full-scale dress rehearsal.
  CMS: Provide 256 KSi2K of cpu and an additional 20 TB of permanent tape storage for MC event generation.
  LHCb: Provide 17 KSi2K of cpu for stripping, reconstruction and analysis with an additional 0.1 TB of tape and 0.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
September
  ALICE: Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. The export rate from CERN to CNAF will be 38 MB/s.
  ATLAS: Reach the rates of the full-scale dress rehearsal. Take raw data from CERN (raw is to go to tape) at 32 MB/sec, ESD at 40 MB/sec and AOD at 20 MB/sec. Send and receive data from Tier-1 and Tier-2 according to the Megatable spreadsheet values (see the link on the first page of this Twiki).
  CMS: Starting 10 September, perform a 30-day run of CSA07 at twice the rate of CSA06, adding Tier-1 to Tier-1 and Tier-1 to Tier-2 transfers. Import prompt reco events from Tier-0 at 37 MB/s to go to tape, to be deleted when the site requires. Run 3750 jobs/day including re-reconstruction and store these data on disk until they have been exported to other Tier-1s at 36 MB/s. Import similar data from other Tier-1s at 38 MB/s. Export samples to Tier-2s at 80 MB/s and import Monte Carlo from Tier-2s to Tape1Disk0-class storage at 40 MB/s.
  LHCb: Provide 41 KSi2K of cpu for stripping, reconstruction and analysis with an additional 0.7 TB of tape and 4 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
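Summing the CSA07 flows gives the aggregate CMS network load CNAF must sustain during the challenge; a quick tally of the rates in this row:

<verbatim>
# Aggregate CSA07 transfer rates at CNAF implied by the CMS entry (MB/s).
inbound = {"Tier-0 prompt reco": 37, "other Tier-1": 38, "Tier-2 Monte Carlo": 40}
outbound = {"other Tier-1": 36, "Tier-2 samples": 80}

print(f"inbound:  {sum(inbound.values())} MB/s")   # -> 115 MB/s
print(f"outbound: {sum(outbound.values())} MB/s")  # -> 116 MB/s
</verbatim>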
October
  ALICE: Require up to 433 KSi2K cpu, 161 TB disk and 213 TB tape at CNAF. The export rate from CERN to CNAF will be 38 MB/s.
  ATLAS: Stable running of the full-scale dress rehearsal.
  CMS: Continue and finish CSA07.
  LHCb: Provide 25 KSi2K of cpu for stripping, reconstruction and analysis with an additional 0.5 TB of tape and 5.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
November
  ALICE: For data-taking startup require 578 KSi2K cpu, 215 TB disk and 284 TB tape at CNAF. The export rate from CERN to CNAF will be 50 MB/s.
  ATLAS: Engineering run. Provide a permanent 1000 GB of disk space and add DB servers if needed for the ATLAS conditions and event tag databases.
  CMS: (none)
  LHCb: Provide a permanent 300 GB of disk space and add DB servers if needed for the LHCb conditions and LFC replica databases. Provide 17 KSi2K of cpu for stripping, reconstruction and analysis with an additional 0.1 TB of tape and 0.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.
December
  ALICE: For data-taking startup require 578 KSi2K cpu, 215 TB disk and 284 TB tape at CNAF. The export rate from CERN to CNAF will be 50 MB/s.
  ATLAS: Engineering run.
  CMS: (none)
  LHCb: Provide 17 KSi2K of cpu for stripping, reconstruction and analysis with an additional 0.1 TB of tape and 0.3 TB of disk.
  Tier 0: CERN background disk-disk top up to 200 MB/sec.