-- HarryRenshall - 06 Mar 2006

Last Updated 1.11.2007: Start to add 2008 plans and clarify experiment site resource requirements compared to offers.

Updated 25.06.2007: Split off 2006 plans into a separate linked page and remove LHC engineering run.

Updated 16.05.2007: Change date of CSA07 from July to September, update intervening months and point to new planning talk. Change site shares to be those of the 2008 disk resource pledges.

Updated 5.03.2007: Add plans and links for 5-weekly load generator cycles and July CSA07 half full LHC-scale challenge.

Updated 24.11.2006: Request backup to tape of the CSA06 data by the end of the year; add activity plans for December and the first 6 months of 2007.

Updated 18.08.2006: Reflect the delayed start of the August raw data export and its extension until the mid-September start of CSA06.

Updated 4.08.2006: Update the cpu and disk resources needed for CSA06.

Updated 27.07.2006: Add new 500 MB/sec Tier 0 disk to Tier 1 disk test, first week of August.

Updated 20.06.2006: Add retention statement for July T0 to T1 data.

Updated 02.06.2006: Add July Tier 0 to Tier 1 tape tests.

CMS Computing, Software, and Analysis challenge 2006 (CSA06)

CMS Computing, Software, and Analysis challenge 2007 (CSA07)

Starting September 10 CMS plan to run a 30-day half-scale simulation of their full LHC computing model.

  • See CMS CRB meeting of 15 May 2007 'CSA07 Planning' talk CSA07 and the detailed planning at the twiki CSA07Plan

CMS LoadTest07 transfer cycles

Starting 12 Feb CMS plan to run contiguous, repeated 5-week cycles of inter-site transfers, exercising CMS-driven transfers to meet CMS goals and providing the infrastructure to meet their own and WLCG milestones. Cycles should successively ramp up to the rates of CSA07. Tier-0 to Tier-1 data will go to tape; sites can delete the data once it is on tape. Tier-1 to Tier-1 and Tier-2 data will go to disk; sites can delete the physical files when they wish. For both tape and disk files, CMS will delete the entries from its catalogues each Sunday.

  • See CMS Week 1 Feb for the draft planning with descriptions of the cycles at DraftLoadTest07, and the objectives and other links in the CMS Week 1 March 'Facility Operations: Planning for data taking' talk LoadTest07
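The cycle schedule above is regular enough to compute: a minimal sketch, assuming the first cycle starts on 12 February 2007 and each cycle runs exactly 5 weeks (35 days). The function name `cycle_starts` is illustrative, not from the page; the resulting dates match the second and third cycle starts quoted in the monthly table below (19 March and 23 April).

```python
from datetime import date, timedelta

def cycle_starts(first=date(2007, 2, 12), n=3, length_days=35):
    """Return the start dates of the first n LoadTest07 5-week cycles."""
    return [first + timedelta(days=i * length_days) for i in range(n)]

for start in cycle_starts():
    print(start.isoformat())  # 2007-02-12, 2007-03-19, 2007-04-23
```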

CMS Tier 1 Resource Requirements Timetable for 2006

TimeTable2006

CMS Tier 1 Resource Requirements Timetable for 2007/2008

  • In any one year CMS expect to use up to the MoU pledged resources per site. For 2007/1Q2008 we use the Tier 1 average cpu+disk resource pledges (which add up to 100% of the CMS requirements) to give the site shares:

  • ASGC to provide 12% of resources
  • CNAF to provide 16% of resources
  • FNAL to provide 36% of resources
  • FZK to provide 12% of resources
  • IN2P3 to provide 10% of resources
  • PIC to provide 6% of resources
  • RAL to provide 8% of resources

The site pledges for 2Q2008/1Q2009 exceed the CMS cpu requirements by 8% but lag the disk requirements by 28%:

  • ASGC offers 12.5% of cpu+disk resources
  • CNAF offers 8.5% of cpu+disk resources
  • FNAL offers 35.5% of cpu+disk resources
  • FZK offers 10.5% of cpu+disk resources
  • IN2P3 offers 10% of cpu+disk resources
  • PIC offers 5% of cpu+disk resources
  • RAL offers 7.5% of cpu+disk resources

Renormalising this to 100% gives the per-site shares of the total CMS requirements. These percentages are used to calculate the per-site resource requirement spreadsheets:

  • ASGC to provide 14% of cpu+disk resources
  • CNAF to provide 9.5% of cpu+disk resources
  • FNAL to provide 40% of cpu+disk resources
  • FZK to provide 12% of cpu+disk resources
  • IN2P3 to provide 11% of cpu+disk resources
  • PIC to provide 5.5% of cpu+disk resources
  • RAL to provide 8% of cpu+disk resources
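The renormalisation above can be sketched in a few lines. The offer percentages are those quoted on this page (summing to 89.5% of the CMS requirement); the exact quotients are shown here, whereas the per-site shares quoted above are rounded by the page authors (e.g. FNAL 39.7% is quoted as 40%).

```python
# 2Q2008/1Q2009 cpu+disk offers as % of total CMS requirements (from the text).
offers = {
    "ASGC": 12.5, "CNAF": 8.5, "FNAL": 35.5, "FZK": 10.5,
    "IN2P3": 10.0, "PIC": 5.0, "RAL": 7.5,
}

total = sum(offers.values())  # 89.5: the offers cover 89.5% of the requirement
# Renormalise so the seven shares sum to exactly 100%.
shares = {site: 100.0 * pct / total for site, pct in offers.items()}

for site, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{site}: {share:.1f}%")
```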

CMS Distribution of activities over 2007

Month by month CMS requirements:

  • January 2007: Production of 30M events/month requiring 4800 KSi2K. 20% of the CPU resources (960 KSi2K) required at Tier-1 centers. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 75TB. Tier-0 to Tier-1 transfer exercises will continue, with mostly moderate rates at <100 MB/s and short bursts up to 600 MB/s. Resource expectations at Tier-1 centers are within the limit of pledged computing resources.
  • February: Production of 30M events/month requiring 4800 KSi2K. 20% of the CPU resources (960 KSi2K) required at Tier-1 centers. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 75TB. On 12 Feb begin first LoadTest07 5-week cycle (see above). Resource expectations at Tier-1 centers are within the limit of pledged computing resources.
  • March: Production of 30M events/month requiring 4800 KSi2K. 20% of the CPU resources (960 KSi2K) required at Tier-1 centers. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 75TB. On 19 March begin second LoadTest07 5-week cycle (see above). From 26 March participate in the WLCG multi-VO milestone to reach 65% of the Tier-0 to Tier-1 2008 data rates, i.e. 171 MB/s out of CERN. Data to go to tape; sites may delete it when they wish. Resource expectations at Tier-1 centers are within the limit of pledged computing resources.
  • April: Production of 30M events/month requiring 4800 KSi2K. 20% of the CPU resources (960 KSi2K) required at Tier-1 centers. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 75TB. On 23 April begin third LoadTest07 5-week cycle (see above). Resource expectations at Tier-1 centers are within the limit of pledged computing resources.
  • May: Production of 40M events/month requiring 6400 KSi2K. 20% of the CPU resources (1280 KSi2K) required at Tier-1 centers. Reconstruct previous Monte-Carlo sample at Tier-1s. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 100TB. Continue third LoadTest07 5-week cycle (see above). Resource expectations at Tier-1 centers are within the limit of pledged computing resources.
  • June: Production of 50M events/month requiring 8000 KSi2K. 20% of the CPU resources (1600 KSi2K) required at Tier-1 centers. Reconstruct previous Monte-Carlo sample at Tier-1s. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 125TB. Resource expectations at Tier-1 centers are within the limit of pledged computing resources.
  • July: Production of 50M events/month requiring 8000 KSi2K. 20% of the CPU resources (1600 KSi2K) required at Tier-1 centers. Reconstruct previous Monte-Carlo sample at Tier-1s to create samples for 2007 physics analyses. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 125TB. In the last two weeks perform large-scale simulated raw data reconstruction at Tier 0 with resulting data transfer between all Tiers at the CSA07 rates. Resource expectations at Tier-1 centers are within the limit of pledged computing resources for the 2007/8 period.
  • August: Production of 50M events/month requiring 8000 KSi2K. 20% of the CPU resources (1600 KSi2K) required at Tier-1 centers. Reconstruct previous Monte-Carlo sample at Tier-1s to create samples for 2007 physics analyses. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 125TB. Exercise the data link from the pit to Tier-0. Resource expectations at Tier-1 centers are within the limit of pledged computing resources for the 2007/8 period.
  • September: Starting on 10 September run CSA07 (see links above) for 30 days. CSA07 scales to twice the rate of CSA06 and adds Tier-1 to Tier-1 and Tier-1 to Tier-2 transfers. Tier-0 to export prompt reco events at 300 MB/s, to go to tape at Tier-1 sites and be deleted when sites require. Transfer rates between Tier-1 and associated Tier-2 to be between 20 and 200 MB/s, and Tier-1 to Tier-1 to be 50 MB/s. Job submission to reach 25000 jobs/day to Tier-1s and 75000 jobs/day to Tier-2s.
  • October: Continue CSA07.
  • November: Continue and finish CSA07; the end is now scheduled for 6 November.
  • December: (no requirements listed)
  • January 2008: (no requirements listed)
  • February: Participate in the CCRC'08 functional tests. Over 2 weeks build up to as close to the 2008 p-p running conditions as resources allow, to reach rates of 600 MB/s Tier0-Tier1, 50-500 MB/s Tier1-Tier2 and 100 MB/s Tier1-Tier1. Reconstruct raw data at Tier0 at between 150 and 300 events per second. Run 50000 jobs/day at Tier1 and 150000 jobs/day over the Tier2s. The detector cosmics component of the raw data is to be kept permanently, while the rest can be scratched after the run.
  • March: (no requirements listed)
  • April: For 2008 running require 9600 KSi2K cpu, 7200 TB disk and 9800 TB tape over the 7 Tier-1 sites.
  • May: Repeat the February exercise as the CMS participation in the CCRC'08 May run at full nominal p-p rates (4 weeks planned).
  • June: (no requirements listed)
  • July: Start of Pilot Physics Run.
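The CPU figures in the table scale linearly with the monthly production target: 4800 KSi2K for 30M events/month implies 160 KSi2K per million events/month, of which 20% is requested at Tier-1 centers. A minimal sketch of that arithmetic (the 160 KSi2K/M-events figure is inferred from the table, not stated explicitly on this page):

```python
KSI2K_PER_MEVENTS = 4800 / 30   # 160 KSi2K per million events/month (inferred)
TIER1_FRACTION = 0.20           # share of CPU requested at Tier-1 centers

def cpu_needed(mevents_per_month):
    """Return (total KSi2K, Tier-1 KSi2K) for a monthly production target."""
    total = KSI2K_PER_MEVENTS * mevents_per_month
    return total, TIER1_FRACTION * total

for m in (30, 40, 50):  # the production targets used across 2007
    total, tier1 = cpu_needed(m)
    print(f"{m}M events/month -> {total:.0f} KSi2K total, {tier1:.0f} KSi2K at Tier-1")
```

The three targets reproduce the table's pairs: 4800/960, 6400/1280 and 8000/1600 KSi2K.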
Topic revision: r26 - 2007-11-02 - HarryRenshall