Week of 061002

Open Actions from last week: Several CASTOR upgrades this week. Monday: stop the castorgrid SRM endpoint, upgrade the CASTOR client on desktops, change the grid mapping of CMS to point to the t0export pool, and upgrade castorlhcb. Wednesday 09:30: upgrade castorpublic to the next version, which supports repack (and xrootd).
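
A minimal sketch of how the desktop client upgrade could be verified afterwards; the package name (castor-client) and the host names are assumptions, not taken from this log:

<verbatim>
#!/usr/bin/env python
# Sketch only: report the CASTOR client version each desktop has after the
# upgrade. "castor-client" as the RPM name and the host list are
# assumptions; substitute the real package and machines.
import subprocess

HOSTS = ["pcexample01", "pcexample02"]   # hypothetical desktop names

for host in HOSTS:
    # ssh to the host and query the installed RPM version
    cmd = ["ssh", host, "rpm", "-q", "castor-client"]
    try:
        version = subprocess.check_output(cmd).decode().strip()
    except subprocess.CalledProcessError:
        version = "package not installed (or ssh/rpm failed)"
    print("%s: %s" % (host, version))
</verbatim>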

Chair: H.Renshall

Gmod: J.Novak

Smod: V.Lefebure


Log: Nothing

New Actions: Jan to describe the current GSSDATLAS CASTOR disk pool setup for circulation to GSSDATLAS.

Discussion: The castorgrid SRM endpoint, CMS mapping, and castorlhcb upgrades are all done; desktop clients are next. CMS CSA06 is due to start at 10:00 with the Tier-0 component, leading to the first 'processed' file exports to the Tier-1s around midday. GSSDATLAS are also expected to start today but, as of Friday's meeting, are still confused over their CASTOR disk pools.



New Actions:

Discussion: The CASTORPUBLIC upgrade is postponed until next week. CMS report that CSA06 has started well, with all sites participating except CNAF. Data export is from the t0export pool. Today/tomorrow we will change the disk configuration of rb101 and rb103 from RAID-5 to three RAID-1 pairs.
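
The log does not record the reasoning, but a likely motivation is that RAID-1 avoids the RAID-5 parity read-modify-write penalty on small random writes, which suits database-heavy RB nodes. A back-of-envelope sketch of the capacity side of the tradeoff, assuming six drives (three pairs) of a hypothetical size:

<verbatim>
# Rough sketch only: usable capacity and failure tolerance of one RAID-5
# array versus three RAID-1 pairs over the same six disks. Disk count and
# size are assumptions, not taken from the log.
DISKS = 6
DISK_GB = 73                              # hypothetical per-drive capacity

raid5_usable = (DISKS - 1) * DISK_GB      # RAID-5: one disk's worth of parity
raid1_usable = (DISKS // 2) * DISK_GB     # 3x RAID-1: each pair stores one copy

print("RAID-5 over %d disks: %d GB usable, survives any single failure"
      % (DISKS, raid5_usable))
print("3x RAID-1 over %d disks: %d GB usable, survives one failure per pair"
      % (DISKS, raid1_usable))
</verbatim>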



New Actions: rb101 disk reconfiguration today. Also, rb102 and rb109 will need more inodes on data01 and data02. Move lxtsm from building 613.
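
A quick way to quantify the inode pressure before resizing; the mount points are assumptions based on the filesystem names in the log:

<verbatim>
#!/usr/bin/env python
# Sketch: print inode usage for the data filesystems on rb102/rb109.
import os

for path in ("/data01", "/data02"):       # hypothetical mount points
    st = os.statvfs(path)
    used = st.f_files - st.f_ffree        # total inodes minus free inodes
    pct = 100.0 * used / st.f_files if st.f_files else 0.0
    print("%s: %d of %d inodes used (%.1f%%)" % (path, used, st.f_files, pct))
</verbatim>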

Discussion: CMS report that CSA06 is going well. The export rate from the t0export pool is below 150 MB/s, since only minimum-bias events are going through prompt reconstruction. Signal events follow next week.
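
For scale, some back-of-envelope arithmetic on the 150 MB/s target (the example day's volume below is invented for illustration):

<verbatim>
# Rough arithmetic only: what 150 MB/s means per day, and the rate implied
# by a hypothetical daily export volume.
TARGET_MB_S = 150.0

tb_per_day = TARGET_MB_S * 86400 / 1e6     # MB over 86400 s, expressed in TB
print("Target rate = %.1f TB/day" % tb_per_day)    # about 13 TB/day

exported_tb = 5.0                          # hypothetical day's export volume
achieved = exported_tb * 1e6 / 86400       # TB/day back to MB/s
print("%.1f TB in 24 h = %.0f MB/s" % (exported_tb, achieved))
</verbatim>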



New Actions: Finish the gLite RB reconfigurations. Make sure the VOs tag all available CEs at CERN (e.g. ATLAS is only tagging ce101, which is hence overloaded; LHCb is OK; CMS to be checked).
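
One way to check the tags is to query the information system for what each CE's subcluster publishes. A sketch, assuming the standard GLUE 1.x attribute for software tags and a hypothetical BDII endpoint:

<verbatim>
#!/usr/bin/env python
# Sketch: list the software run-time tags published for subclusters.
# The BDII host is an assumption; GlueHostApplicationSoftwareRunTimeEnvironment
# is the GLUE 1.x attribute under which VOs publish software tags.
import subprocess

cmd = ["ldapsearch", "-x", "-LLL",
       "-H", "ldap://lcg-bdii.cern.ch:2170",   # hypothetical BDII endpoint
       "-b", "o=grid",
       "(objectClass=GlueSubCluster)",
       "GlueChunkKey", "GlueHostApplicationSoftwareRunTimeEnvironment"]
out = subprocess.check_output(cmd).decode("utf-8", "replace")

for line in out.splitlines():
    if line.startswith(("GlueChunkKey",
                        "GlueHostApplicationSoftwareRunTimeEnvironment")):
        print(line)
</verbatim>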



Log: More high loads on ce101 and ce102, but lower than yesterday, and they are responding again (ATLAS started publishing good software tags on the other CEs yesterday).

New Actions: After the report from FNAL on VOMRS, the Oracle bug-fix upgrades can go ahead on Monday; the schedule is to start at 10:00. Convert rb103 from RAID-5 to RAID-1 and move users back to it (to be done by 14:00).

Discussion: GRIDVIEW monitoring shows no traffic after 24:00. This turns out to be a Python problem in the R-GMA archivers, introduced a few weeks ago with a Python upgrade but only exposed when the services restart. Lemon monitoring shows the expected traffic for GSSDATLAS and CMS. CMS reported good progress with CSA06, though there were problems today transferring to CNAF, RAL, and ASGC. They are already processing signal events, so the export rate should approach the target of 150 MB/s.
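
The log does not say exactly what the archiver bug was; purely as a hypothetical illustration, "no traffic after 24:00" is the classic symptom of a "24:00" end-of-day timestamp that Python's strptime refuses to parse, since %H only accepts 00-23:

<verbatim>
# Hypothetical illustration only, not the actual R-GMA archiver fix:
# handle "24:00" by rolling it over to midnight of the next day, which a
# bare datetime.strptime would reject with ValueError.
from datetime import datetime, timedelta

def parse_hhmm(day, hhmm):
    """Parse 'HH:MM' on a given day, treating '24:00' as next midnight."""
    if hhmm.startswith("24"):
        base = datetime.strptime(day + " 00" + hhmm[2:], "%Y-%m-%d %H:%M")
        return base + timedelta(days=1)
    return datetime.strptime(day + " " + hhmm, "%Y-%m-%d %H:%M")

print(parse_hhmm("2006-10-05", "23:59"))   # parses normally
print(parse_hhmm("2006-10-05", "24:00"))   # would raise ValueError if unhandled
</verbatim>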
