CmsPlans
-- Main.HarryRenshall - 06 Mar 2006

Update history:

   * Last updated 26.10.2007: start to add 2008 plans and clarify experiment site resource requirements compared to offers.
   * Updated 25.06.2007: split off 2006 plans into a separate linked page and remove the LHC engineering run.
   * Updated 16.05.2007: change the date of CSA07 from July to September, update the intervening months and point to the new planning talk. Change site shares to those of the 2008 disk resource pledges.
   * Updated 5.03.2007: add plans and links for the 5-weekly load generator cycles and the July CSA07 half-full LHC-scale challenge.
   * Updated 24.11.2006: request backup to tape of CSA06 data by the end of the year and add activity plans for December and the first six months of 2007.
   * Updated 18.08.2006: reflect the delayed start of the August raw data export and its extension until the mid-September start of CSA06.
   * Updated 4.08.2006: update the cpu and disk resources needed for CSA06.
   * Updated 27.07.2006: add the new 500 MB/sec Tier 0 disk to Tier 1 disk test, first week of August.
   * Updated 20.06.2006: add retention statement for July T0 to T1 data.
   * Updated 02.06.2006: add July Tier 0 to Tier 1 tape tests.

---+++ CMS Computing, Software, and Analysis challenge 2006 (CSA06)

   * See [[https://twiki.cern.ch/twiki/bin/view/CMS/CSA06 CSA06]]

---+++ CMS Computing, Software, and Analysis challenge 2007 (CSA07)

Starting September 10, CMS plan to run a 30-day half-scale simulation of their full LHC computing model.

   * See the CMS CRB meeting of 15 May 2007 'CSA07 Planning' talk [[http://indico.cern.ch/materialDisplay.py?contribId=3&materialId=slides&confId=15904 CSA07]] and the detailed planning at the twiki [[https://twiki.cern.ch/twiki/bin/view/CMS/CSA07Plan CSA07Plan]]

---+++ CMS LoadTest07 transfer cycles

Starting 12 February, CMS plan to run contiguous, repeated 5-weekly cycles of inter-site transfers to exercise CMS-driven transfers, meet CMS goals and provide the infrastructure to meet their own and the WLCG milestones.
Cycles should successively ramp up to the rates of CSA07. Tier-0 to Tier-1 data go to tape; sites can delete the data once it is on tape. Tier-1 to Tier-1 and Tier-2 data go to disk; sites can delete the physical files when they wish. For both tape and disk files, CMS will delete them from its catalogues each Sunday.

   * See CMS Week 1 Feb for the draft planning with descriptions of the cycles at [[http://indico.cern.ch/materialDisplay.py?contribId=1&sessionId=0&materialId=slides&confId=11667 DraftLoadTest07]] and the objectives and other links in the CMS Week 1 March 'Facility Operations: Planning for data taking' talk [[http://indico.cern.ch/materialDisplay.py?contribId=12&materialId=slides&confId=13052 LoadTest07]]

---+++ CMS Tier 1 Resource Requirements Timetable for 2006

See TimeTable2006.

---+++ CMS Tier 1 Resource Requirements Timetable for 2007/2008

In any one year CMS expect to use up to the MoU pledged resources per site. For 2007/1Q2008 we use the Tier 1 average cpu+disk resource pledges (which add up to 100% of the CMS requirements) to give the site shares:

| *Site* | *Share of resources, 2007/1Q2008* |
| ASGC | 12% |
| CNAF | 16% |
| FNAL | 36% |
| FZK | 12% |
| IN2P3 | 10% |
| PIC | 6% |
| RAL | 8% |

The site pledges for 2Q2008/1Q2009 exceed the CMS cpu requirements by 8% but lag the disk requirements by 28%:

| *Site* | *Offered share of cpu+disk resources* |
| ASGC | 12.5% |
| CNAF | 8.5% |
| FNAL | 35.5% |
| FZK | 10.5% |
| IN2P3 | 10% |
| PIC | 5% |
| RAL | 7.5% |

Renormalising this to 100% gives the per-site shares of the total CMS requirements.
These percentages are used to calculate the per-site resource requirement spreadsheets:

| *Site* | *Share of cpu+disk resources* |
| ASGC | 14% |
| CNAF | 9.5% |
| FNAL | 40% |
| FZK | 12% |
| IN2P3 | 11% |
| PIC | 5.5% |
| RAL | 8% |

CMS distribution of activities over 2007:

| *Month* | *CMS Requirements* |
| January 2007 | Production of 30M events/month requiring 4800 KSi2K. 20% of the CPU resources (960 KSi2K) required at Tier-1 centers. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 75 TB. Tier-0 to Tier-1 transfer exercises will continue, mostly at moderate rates below 100 MB/s with short bursts up to 600 MB/s. Resource expectations at Tier-1 centers are within the limit of pledged computing resources. |
| February | Production of 30M events/month requiring 4800 KSi2K. 20% of the CPU resources (960 KSi2K) required at Tier-1 centers. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 75 TB. On 12 Feb begin the first LoadTest07 5-week cycle (see above). Resource expectations at Tier-1 centers are within the limit of pledged computing resources. |
| March | Production of 30M events/month requiring 4800 KSi2K. 20% of the CPU resources (960 KSi2K) required at Tier-1 centers. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 75 TB. On 19 March begin the second LoadTest07 5-week cycle (see above). From 26 March participate in the WLCG multi-VO milestone to reach 65% of the Tier-0 to Tier-1 2008 data rates, i.e. 171 MB/s out of CERN. Data go to tape and may be deleted when sites wish. Resource expectations at Tier-1 centers are within the limit of pledged computing resources. |
| April | Production of 30M events/month requiring 4800 KSi2K. 20% of the CPU resources (960 KSi2K) required at Tier-1 centers. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 75 TB. On 23 April begin the third LoadTest07 5-week cycle (see above). Resource expectations at Tier-1 centers are within the limit of pledged computing resources. |
| May | Production of 40M events/month requiring 6400 KSi2K. 20% of the CPU resources (1280 KSi2K) required at Tier-1 centers. Reconstruct the previous Monte-Carlo sample at Tier-1s. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 100 TB. Continue the third LoadTest07 5-week cycle (see above). Resource expectations at Tier-1 centers are within the limit of pledged computing resources. |
| June | Production of 50M events/month requiring 8000 KSi2K. 20% of the CPU resources (1600 KSi2K) required at Tier-1 centers. Reconstruct the previous Monte-Carlo sample at Tier-1s. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 125 TB. Resource expectations at Tier-1 centers are within the limit of pledged computing resources. |
| July | Production of 50M events/month requiring 8000 KSi2K. 20% of the CPU resources (1600 KSi2K) required at Tier-1 centers. Reconstruct the previous Monte-Carlo sample at Tier-1s to create samples for 2007 physics analyses. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 125 TB. In the last two weeks, perform large-scale simulated raw data reconstruction at Tier 0 with the resulting data transferred between all Tiers at the CSA07 rates. Resource expectations at Tier-1 centers are within the limit of pledged computing resources for the 2007/8 period. |
| August | Production of 50M events/month requiring 8000 KSi2K. 20% of the CPU resources (1600 KSi2K) required at Tier-1 centers. Reconstruct the previous Monte-Carlo sample at Tier-1s to create samples for 2007 physics analyses. Raw and reconstructed events will be stored on tape at Tier-1. Expected volume in addition to the existing capacity is 125 TB. Exercise the data link from the pit to Tier-0. Resource expectations at Tier-1 centers are within the limit of pledged computing resources for the 2007/8 period. |
| September | Starting on 10 September, run CSA07 (see links above) for 30 days. CSA07 scales to twice the rate of CSA06 and adds Tier-1 to Tier-1 and Tier-1 to Tier-2 transfers. Tier-0 to export prompt-reconstruction events at 300 MB/s, to go to tape at Tier-1 sites and be deleted when sites require. Transfer rates between a Tier-1 and its associated Tier-2s to be between 20 and 200 MB/s, and Tier-1 to Tier-1 rates to be 50 MB/s. Job submission to reach 25000 jobs/day to Tier-1s and 75000 jobs/day to Tier-2s. |
| October | Continue and finish CSA07. |
| November | |
| December | |
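The renormalisation behind the spreadsheet percentages is simple arithmetic; as an illustrative sketch (site names and pledge percentages taken from the tables above), the 2Q2008/1Q2009 offered shares can be rescaled to sum to 100% as follows:

```python
# Renormalise the 2Q2008/1Q2009 cpu+disk pledge shares so they sum to 100%.
# Offered shares (percent of the total CMS requirement) from the table above.
offered = {
    "ASGC": 12.5, "CNAF": 8.5, "FNAL": 35.5, "FZK": 10.5,
    "IN2P3": 10.0, "PIC": 5.0, "RAL": 7.5,
}

total = sum(offered.values())  # 89.5% -- pledges cover ~90% of the requirement

# Rescale each share so the seven sites together account for 100%.
renormalised = {site: 100.0 * share / total for site, share in offered.items()}

for site, share in renormalised.items():
    print(f"{site}: {share:.1f}%")
```

Running this reproduces, to rounding, the per-site figures used in the spreadsheets, e.g. ASGC 14.0% and FNAL 39.7% (quoted as 40%).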
Topic revision: r23 - 2007-10-29 - Main.HarryRenshall