-- HarryRenshall - 26 Jun 2007

Tier 1 RAL - Rutherford Appleton Laboratory. Commitments:
   * To provide 1% of ALICE resources from 2007
   * To provide 12% of ATLAS resources
   * To provide 8% of CMS resources
   * To provide 15% of LHCb resources
| *Month* | *ALICE Requirements* | *ATLAS Requirements* | *CMS Requirements* | *LHCb Requirements (see LHCb070529.xls)* | *Tier 0 Requirements* |
| March 2006 | | | | | |
| April | | Provide 69 KSi2K of cpu for MC event generation and 4 TB of disk and 10 TB of tape for MC data this quarter | 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk. Data to tape from Tier 0 at 10 MB/sec (may be part of SC4) | Provide 150 KSi2K of cpu for MC event generation | 3rd to 16th: CERN disk-disk at 150 MB/sec. 18th to 24th: CERN disk-tape at 75 MB/sec |
| May | | Provide 69 KSi2K of cpu for MC event generation | 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk | Provide 150 KSi2K of cpu for MC event generation | CERN background disk-disk top up to 150 MB/sec |
| June | | Provide 69 KSi2K of cpu for MC event generation. From 19 June to 7 July, T0-to-T1 tests: take 24.0 MB/sec "raw" to tape (rate to be reported), ESD at 15.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 59.0 MB/s). These data can be deleted after 24 hours | 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk. SC3 functionality rerun. Run 2500 jobs/day at end of June | Get 3.5 MB/sec of "raw" data from CERN and store 5 TB on tape. Reconstruct and strip these data on 21.5 KSi2K of cpu. Provide 78.5 KSi2K of cpu for MC event generation with 3 TB to tape | CERN background disk-disk top up to 150 MB/sec |
| July | | Provide 74 KSi2K of cpu for MC event generation and 6 TB of disk and 14 TB of tape for MC data this quarter. "Raw" reconstruction setting up - stage-in from tape using 1-2 drives | 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk. Monte Carlo incoming from Tier 2 sent on to CERN. Test Tier 2 to Tier 1 transfers at 10 MB/sec per Tier 2. Last 2 weeks: take "raw" data from CERN to tape at 10 MB/s | Get 3.5 MB/sec of "raw" data from CERN and store 5 TB on tape. Reconstruct and strip these data on 21.5 KSi2K of cpu. Provide 78.5 KSi2K of cpu for MC event generation with 3 TB to tape | CERN background disk-disk top up to 150 MB/sec |
| August | | Provide 74 KSi2K of cpu for MC event generation. Two slots of 3 days of "raw" reconstruction - stage-in from tape using 1-2 drives. Analysis tests - 20 MB/sec incoming - will include scalability tests and prefer to be the only ATLAS grid activity. Take 24.0 MB/sec "raw" to tape (rate to be reported), ESD at 15.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 59.0 MB/s). These data can be deleted after 24 hours | 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk. Monte Carlo incoming from Tier 2 sent on to CERN. Test Tier 2 to Tier 1 transfers at 10 MB/sec per Tier 2. Last 2 weeks (after high-rate T0-T1 disk-disk tests): take "raw" data from CERN to tape at 10 MB/s (data can be deleted after 24 hours) | Analysis of reconstructed data. Provide 55 KSi2K of cpu for MC event generation with 2 TB to tape | CERN background disk-disk top up to 150 MB/sec |
| September | | Provide 74 KSi2K of cpu for MC event generation. Take 24.0 MB/sec "raw" to tape (rate to be reported), ESD at 15.0 MB/s to disk and AOD at 20 MB/s to disk from Tier 0 (total rate 59.0 MB/s). These data can be deleted after 24 hours | 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk. Till mid-September, take "raw" data from CERN to tape at 10 MB/s (data can be deleted after 24 hours). From mid-September, ramp up to the 1 October start of CSA06 at 750 jobs/day (requiring 180 KSi2K of cpu and a total of 70 TB of disk storage) | Provide 55 KSi2K of cpu for analysis of reconstructed data and MC event generation, with an additional 1.5 TB to tape | CERN background disk-disk top up to 150 MB/sec |
| October | | Reprocessing tests - 20 MB/sec incoming | 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk. Continue CSA06 at 750 jobs/day (requiring 180 KSi2K of cpu and a total of 70 TB of disk storage over CSA06) | Provide 142 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 1.4 TB of tape and 0.3 TB of disk | CERN background disk-disk top up to 150 MB/sec |
| November | | Provide 97 KSi2K of cpu and an additional 2.5 TB of permanent disk and 2.5 TB of temporary (till reconstruction is run) disk, plus an additional 3.9 TB of permanent tape storage, for MC event generation. Analysis tests - 20 MB/sec incoming at the same time as reprocessing continues | 20 MB/sec aggregate PhEDEx (FTS) traffic to/from temporary disk. Demonstrate 20 MB/sec from Tier 0 to tape (would like this to be an SC4 activity). Continue CSA06 at 750 jobs/day (requiring 180 KSi2K of cpu and a total of 70 TB of disk storage over CSA06) till mid-November | Provide 146 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 2.7 TB of tape and 0.9 TB of disk | CERN background disk-disk top up to 150 MB/sec |
| December | | Provide 97 KSi2K of cpu and an additional 2.5 TB of permanent disk and 2.5 TB of temporary (till reconstruction is run) disk, plus an additional 3.9 TB of permanent tape storage, for MC event generation | Backup the October CSA06 disk files of 70 TB to new permanent tape storage. Provide 10 KSi2K of cpu and an additional 1 TB of permanent tape storage for MC event generation | Provide 233 KSi2K of cpu for reconstruction, analysis and MC event generation, with an additional 3.2 TB of tape and 10.3 TB of disk | CERN background disk-disk top up to 150 MB/sec |
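The rate and volume figures in the table can be cross-checked with simple arithmetic. The sketch below is illustrative only and not part of the plan: the helper name =volume_tb=, the 1 TB = 10^6 MB convention, and the roughly 17-day transfer window assumed for the LHCb figure are my assumptions. It verifies that the three ATLAS T0-to-T1 streams sum to the quoted 59.0 MB/s, and that 3.5 MB/s sustained for about 17 days yields roughly the 5 TB LHCb expects to write to tape.

```python
# Illustrative sanity check of the rates quoted in the table.
# Assumptions (mine, not from the plan): 1 TB = 10^6 MB, and a
# ~17-day window for the LHCb 3.5 MB/s raw-data transfer.

SECONDS_PER_DAY = 86_400

def volume_tb(rate_mb_per_s: float, days: float) -> float:
    """Approximate volume in TB for a rate sustained over `days` days."""
    return rate_mb_per_s * SECONDS_PER_DAY * days / 1_000_000

# ATLAS T0->T1 streams (June/August/September rows):
raw, esd, aod = 24.0, 15.0, 20.0
assert raw + esd + aod == 59.0  # matches the quoted total rate

# LHCb (June/July rows): 3.5 MB/s of "raw" data from CERN sustains
# about the quoted 5 TB to tape over roughly 17 days.
print(f"LHCb raw over 17 days: {volume_tb(3.5, 17):.1f} TB")
```

The same helper applies to the other entries, e.g. the CMS 10 MB/s raw-to-tape stream over a two-week slot comes to roughly 12 TB.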


Topic revision: r1 - 2007-06-26 - HarryRenshall
 