-- HarryRenshall - 06 Mar 2006

Last Updated 25.06.2007: Split off the 2006 plans into a separate linked page and remove the LHC engineering run.

Updated 04.06.2007: Extend LHCb requirements to the end of 2007.

Updated 31.05.2007: Add in 3D database disk and server requirements and LHCb and ATLAS quantitative requirements for 3Q.

Updated 25.05.2007: Change the date of CMS CSA07 from July to September and specify the expected data rates.

Updated 6.3.2007: Add plans for CMS 5-week cycles and CSA07 and indicators of ALICE p-p and LHCb dress-rehearsals.

Updated 27.02.2007: Refine the plans for the ATLAS February/March data distribution tests (see https://twiki.cern.ch/twiki/bin/view/Atlas/TierZero20071). Change the ATLAS share from 10.5% to 10%.

Updated 15.01.2007: Move the ATLAS Tier0 export tests from 15 Jan to new preliminary date of end Feb.

Updated 28.11.2006: For CMS, request backup of the CSA06 data to tape by the end of the year, and add activity plans for December and preliminary plans for the first 6 months of 2007. CMS expect to use up to the MoU pledged resources per site in 2007.

Updated 17.11.2006: For ATLAS, revise the MC requirements for the first half of 2007 downwards (especially in disk).

Updated 2.11.2006: For ATLAS, revise the 4Q2006 MC requirements, add MC plans up to mid-2007 and add the January 2007 Tier-0 and export exercise.

Updated 27.10.2006: For ALICE, continue the data export tests until the end of 2006 and add resource requirements for all of 2007.

Updated 23.10.2006: Add/change the LHCb requirements for October 2006 to April 2007, taken from the spreadsheet of 26 Sep 2006.

Updated 01.09.2006: Add the LHCb requirements for Oct/Nov/Dec from the July spreadsheet.

Updated 18.08.2006: Extend ALICE data export until August, continue ATLAS data export until the end of September, move CMS raw data export to the second half of August, and clarify the resource requirements and the mid-November end date for CMS CSA06.

Updated 10.07.2006: Replace the LHCb spreadsheet with the version of 7 July 2006.

Updated 12.06.2006: Update the ATLAS June plans and the CMS and ALICE July plans.

Updated 22.05.2006: Replace the LHCb spreadsheet with the version of 11 May 2006.

Updated 08.05.2006: Add a link to the LHCb detailed planning spreadsheet to the header of the site LHCb Requirements column.

FZK-Karlsruhe Site Resource Requirements Timetable for 2006

FZKTimeTable2006

FZK-Karlsruhe Site Resource Requirements Timetable for 2007

Tier 1 FZK-Karlsruhe provides 20% of ALICE resources, 10% of ATLAS resources, 12% of CMS resources and 10% of LHCb resources.

Requirements are listed month by month for ALICE, ATLAS, CMS and LHCb (see LHCb070529.xls). The Tier 0 requirement is the same every month: CERN background disk-disk transfers, topped up to 200 MB/s.

January 2007
  ALICE: During the first quarter build up to a data challenge at 75% of the last-quarter (data-taking) capacity, using new site capacity as and when available. Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. The export rate from CERN to FZK will be 60 MB/s (see the rate-to-volume sketch after this table).
  ATLAS: Provide 182 KSi2K of cpu each month, plus an additional 11.6 TB of permanent disk and 20.3 TB of permanent tape storage for this quarter, for MC event generation.
  CMS: Provide 77 KSi2K of cpu per month and an additional 18 TB of permanent tape storage for this quarter for MC event generation.
  LHCb: Provide 235 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 3.2 TB of tape and 12.1 TB of disk.

February
  ALICE: Continue the first-quarter build-up to a data challenge at 75% of the last-quarter (data-taking) capacity, using new site capacity as and when available. Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. The export rate from CERN to FZK will be 60 MB/s.
  ATLAS: Provide 182 KSi2K of cpu for MC event generation. From 26 February begin 4-week data distribution tests, ramping up to the full 2008 rate from the Tier 0 during the first week: raw from the Tier 0 to reach 32 MB/s, ESD to reach 40 MB/s and AOD to reach 20 MB/s. Raw data go to tape and can then be recycled. ESD and AOD go to disk and can be recycled, but during the last two weeks the AOD should be distributed to the associated Tier 2s, requiring up to 5.2 TB of disk buffer, before being recycled.
  CMS: Provide 77 KSi2K of cpu for MC event generation. On 12 February begin the first LoadTest07 5-week cycle (see CMS plans).
  LHCb: Provide 235 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 3.2 TB of tape and 12.1 TB of disk.

March
  ALICE: Continue the first-quarter build-up to a data challenge at 75% of the last-quarter (data-taking) capacity, using new site capacity as and when available. Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. From 26 March, for 7 days, participate in the WLCG multi-VO 65% milestone, importing at 10 MB/s from CERN.
  ATLAS: Provide 182 KSi2K of cpu for MC event generation. Continue the 4-week data distribution tests until 26 March, then for the next 7 days participate in the all-experiment service challenge milestone, taking 65% of the average 2008 rate as above but without AOD redistribution.
  CMS: Provide 77 KSi2K of cpu for MC event generation. On 19 March begin the second LoadTest07 5-week cycle (see CMS plans). From 26 March, for 7 days, participate in the WLCG multi-VO 65% milestone, importing at 17 MB/s from CERN.
  LHCb: Provide 226 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 1.5 TB of tape and 10.3 TB of disk.

April
  ALICE: Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. Starting in April, and continuing throughout the year, build up to a full-scale dress rehearsal of p-p running, with raw data (at 15 MB/s) and ESD (an additional 10% of the raw) imported from CERN, reconstruction at the Tier 1, and user analysis and simulation at the Tier 2s. The data are to be stored in Tape1Disk1 class storage, with ALICE managing the disk space.
  ATLAS: Provide 364 KSi2K of cpu each month, plus an additional 23.2 TB of permanent disk and 40.5 TB of permanent tape storage for this quarter, for MC event generation. Provide a permanent 300 GB of disk space and 3 DB servers for the ATLAS conditions and event tag databases.
  CMS: Provide 77 KSi2K of cpu and an additional 6 TB of permanent tape storage for MC event generation. Provide a permanent 300 GB of disk space and 2 squid server nodes for the CMS conditions databases.
  LHCb: Provide 226 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 1.5 TB of tape and 10.3 TB of disk. Provide a permanent 100 GB of disk space and 2 DB servers for the LHCb conditions and LFC replica databases.

May
  ALICE: Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. The export rate from CERN to FZK will be 60 MB/s.
  ATLAS: Provide 364 KSi2K of cpu for MC event generation. Repeat the February/March data distribution tests.
  CMS: Provide 154 KSi2K of cpu and an additional 12 TB of permanent tape storage for MC event generation.
  LHCb: Provide 26 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 5.3 TB of disk.

June
  ALICE: Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. The export rate from CERN to FZK will be 60 MB/s.
  ATLAS: Provide 364 KSi2K of cpu for MC event generation.
  CMS: Provide 192 KSi2K of cpu and an additional 15 TB of permanent tape storage for MC event generation. Start the import of simulated raw data from CERN at 6.3 MB/s.
  LHCb: Provide 26 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 5.3 TB of disk.

July
  ALICE: Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. The export rate from CERN to FZK will be 60 MB/s.
  ATLAS: Start the full-scale (2008 running) dress rehearsal.
  CMS: Provide 192 KSi2K of cpu and an additional 15 TB of permanent tape storage for MC event generation. Continue the import of simulated raw data from CERN at 6.3 MB/s.
  LHCb: Provide 35 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 0.3 TB of disk, plus 3.3 TB of temporary disk.

August
  ALICE: Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. The export rate from CERN to FZK will be 60 MB/s.
  ATLAS: Continue the ramp-up of the full-scale dress rehearsal.
  CMS: Provide 192 KSi2K of cpu and an additional 15 TB of permanent tape storage for MC event generation.
  LHCb: Provide 18 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 0.3 TB of disk.

September
  ALICE: Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. The export rate from CERN to FZK will be 60 MB/s.
  ATLAS: Reach the rates of the full-scale dress rehearsal. Take raw data from CERN at 32 MB/s (the raw goes to tape), ESD at 40 MB/s and AOD at 20 MB/s. Send and receive data from Tier 1s and Tier 2s according to the Megatable spreadsheet values (see link on the first page of this Twiki).
  CMS: Starting 10 September, perform a 30-day run of CSA07 at twice the rate of CSA06, adding Tier-1 to Tier-1 and Tier-1 to Tier-2 transfers. Import prompt-reco events from the Tier 0 at 26 MB/s; these go to tape and can be deleted when the site requires. Run 2500 jobs/day, including re-reconstruction, and store these data on disk until they have been exported to other Tier 1s at 24 MB/s. Import similar data from other Tier 1s at 40 MB/s. Export samples to Tier 2s at 60 MB/s and import Monte Carlo from Tier 2s into Tape1Disk0 class storage at 30 MB/s (see the CSA07 bandwidth sketch after this table).
  LHCb: Provide 43 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.8 TB of tape and 4 TB of disk.

October
  ALICE: Require up to 670 KSi2K of cpu, 167 TB of disk and 532 TB of tape at FZK. The export rate from CERN to FZK will be 60 MB/s.
  ATLAS: Stable running of the full-scale dress rehearsal.
  CMS: Continue and finish CSA07.
  LHCb: Provide 26 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.5 TB of tape and 5.3 TB of disk.

November
  ALICE: For eventual data-taking startup require 893 KSi2K of cpu, 223 TB of disk and 709 TB of tape at FZK. The export rate from CERN to FZK will be 80 MB/s.
  ATLAS: Provide a permanent 1000 GB of disk space, and add DB servers if needed, for the ATLAS conditions and event tag databases.
  LHCb: Provide a permanent 300 GB of disk space, and add DB servers if needed, for the LHCb conditions and LFC replica databases. Provide 18 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 0.3 TB of disk.

December
  ALICE: For eventual data-taking startup require 893 KSi2K of cpu, 223 TB of disk and 709 TB of tape at FZK. The export rate from CERN to FZK will be 80 MB/s.
  LHCb: Provide 18 KSi2K of cpu for stripping, reconstruction and analysis, with an additional 0.1 TB of tape and 0.3 TB of disk.
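As a cross-check of the figures quoted above, the sustained transfer rates map directly to monthly data volumes, and the first-quarter ALICE data-challenge capacities are 75% of the November data-taking-startup capacities. The short Python sketch below works through this arithmetic using only numbers quoted in the table; the helper function, the 30-day month and the 1 TB = 10^6 MB convention are illustrative assumptions, not part of the original plan.

```python
# Rough rate-to-volume arithmetic for the figures quoted in the table above.
# Assumptions (not from the original page): 30-day months, 1 TB = 10**6 MB.

SECONDS_PER_DAY = 86_400

def tb_per_period(rate_mb_s: float, days: float) -> float:
    """Convert a sustained rate in MB/s into the volume in TB moved over `days` days."""
    return rate_mb_s * SECONDS_PER_DAY * days / 1e6

# ALICE export rate from CERN to FZK: 60 MB/s for most of 2007, 80 MB/s from November.
print(f"60 MB/s for 30 days ~ {tb_per_period(60, 30):.0f} TB")    # ~156 TB
print(f"80 MB/s for 30 days ~ {tb_per_period(80, 30):.0f} TB")    # ~207 TB

# ALICE dress rehearsal import: raw at 15 MB/s plus ESD at an additional 10% of the raw.
print(f"ALICE rehearsal import ~ {15 * 1.10:.1f} MB/s")           # 16.5 MB/s

# ATLAS February distribution test from the Tier 0: raw 32 + ESD 40 + AOD 20 MB/s.
print(f"ATLAS test aggregate ~ {32 + 40 + 20} MB/s")              # 92 MB/s

# First-quarter ALICE data challenge = 75% of the November data-taking-startup capacity.
for label, startup, challenge in [("cpu (KSi2K)", 893, 670),
                                  ("disk (TB)", 223, 167),
                                  ("tape (TB)", 709, 532)]:
    print(f"{label}: 0.75 * {startup} = {0.75 * startup:.0f}  (quoted: {challenge})")
```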
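The September CSA07 entry quotes five separate CMS transfer flows into and out of FZK. The minimal sketch below, again only an illustration built from the rates quoted above (the flow labels are descriptive, not official), sums them into the aggregate inbound and outbound bandwidth the site would sustain during the 30-day run.

```python
# Aggregate CSA07 bandwidth at FZK, from the per-flow rates quoted above (all in MB/s).
inbound = {
    "prompt reco from the Tier 0": 26,
    "re-reconstructed data from other Tier 1s": 40,
    "Monte Carlo from Tier 2s": 30,
}
outbound = {
    "re-reconstructed data to other Tier 1s": 24,
    "samples to Tier 2s": 60,
}

in_rate = sum(inbound.values())    # 96 MB/s
out_rate = sum(outbound.values())  # 84 MB/s
print(f"inbound ~ {in_rate} MB/s, outbound ~ {out_rate} MB/s")

# Assuming the inbound rates are sustained for the full 30-day run (an illustration only):
print(f"inbound volume ~ {in_rate * 86_400 * 30 / 1e6:.0f} TB over 30 days")  # ~249 TB
```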