Week of 081020

Open Actions from last week:

Daily WLCG Operations Call details

To join the call at 15.00 CE(S)T, Monday to Friday inclusive (in CERN 513 R-068), do one of the following:

  1. Dial +41227676000 (Main) and enter access code 0119168, or
  2. To have the system call you, click here

General Information

See the weekly joint operations meeting minutes

Additional Material:

Monday:

Attendance: local(Gavin, Jean-Philippe, Harry, Jamie, Sophie, Steve, Roberto, Simone, Andrea, Markus, Jan);remote(Gareth, Michael).

elog review:

Experiments round table:

  • ATLAS (Simone) - last Friday several activities were (re-)started following Thursday's decisions. Replication of MC AODs restarted; consolidation of some SRM v1 data to SRM v2 endpoints (data no longer needed will be deleted); on last week's action about the huge backlog of cosmic data, subscriptions were cancelled, some files were recalled from tape, and on Friday distribution (mainly to BNL & Lyon) restarted. Activity continues now. Several hiccups over the weekend: a few instabilities due to very high load from the above plus normal data export (e.g. SRM + SE overload at FZK, IN2P3, Taiwan Fri/Sat) - relatively minor. A bigger issue on Saturday with RAL: ATLAS started observing a very high failure rate; the service was not completely dead but failures (Oracle error) were > 90%. Fixed this morning - comments later from RAL(?). INFN cloud: problem with the CNAF tape service and with one of the calibration sites (Napoli). The Napoli problem is still there; CNAF was solved this morning. JP: what was the problem? A: srm preparetoput failed. Issue with transfers to Lyon: not clear whether the error is from FTS or SRM - will follow up offline. The above activities continue. Other news: cosmics data taking continues for one more week - probably until Monday next week. The detector will likely be opened after that, though data might still be taken with sub-detectors.

  • RAL (Gareth) - 2 front-end machines for ATLAS; slow DB queries and the load was unbalanced. Various interventions over the weekend to remove one of the machines from load balancing went horribly wrong - now back with 2 and running OK for now. Degradation from ~04:00 Saturday morning until ~10:00 this morning - an unscheduled outage has been entered in the GOCDB for this period.

  • ATLAS (SC) - since yesterday, a problem with unavailability of files from CASTOR at CERN - srmget failures, possibly pool overload? A ticket exists.

  • RAL (Brian) - with transfers MCDISK -> MCTAPE at RAL, ATLAS may not get files where it wants them, depending on the order in which CASTOR puts files on tape. ATLAS explicitly makes 2 copies, 1 on disk and 1 on tape. The issue may therefore affect all CASTOR sites.

  • NL-T1 (JT) - no jobs at NL-T1 since last Tuesday. Simone will check...

  • CMS (Andrea) - ran the global run over the weekend. Problems with reconstruction jobs up to Saturday - solved with a software fix. Transfers are working fine, averaging 600 MB/s overall. Quality pretty good apart from FNAL at 50% - investigating. ASGC is now OK; the problems were due to very high load on the SRM v2 server. CNAF is in downtime tomorrow. The Lemon secondary DB is down - this doesn't seem to affect anything.

  • Sophie - a new node is in production for the WMS service. CMS was informed but jobs are still going to wms107. The new server is wms202(?) - it is being used, but many jobs are still on wms107. Andrea will check (possibly one is assigned to MC and the other to analysis, for example).

  • LHCb (Roberto) - apart from chaotic user activity, scheduled activities are ~zero. The activities announced last week are still waiting for SAM to upgrade to the latest version of GAUSS at all sites, so new MC activities cannot start. The problem is on the LHCb side. How to cope with sites where it is missing is being discussed; if the application is missing in the shared area it could be installed on the WN.

  • NL-T1 (JT) - also no jobs from ALICE! Q (Steve): who do you have jobs from? A: nobody - it has been that way since Tuesday.

Sites round table:

Services round table:

  • AFS UI - Sophie - will reconfigure. A new WMS node will be added for LHCb and another for ALICE - brand new machines.
  • VOMS - Steve - a pilot service is now available, running what will be the next version of the software. It is built against the production DB with read-only access. Simone: does this allow querying of e-mail addresses? From the VOMS pilot twiki page (see also the sketch below):
    • The VOMS core pilot is now running the next version of the software, to be deployed to production in due course.
    • It runs against the production database as a read-only service (enforced by Oracle as well).
    • It is clearly important that this gets as much production-like testing as possible.
    • Connection details: https://twiki.cern.ch/twiki/bin/view/LCG/VomsPilot
    • I'm in touch with US colleagues so that they can test GUMS and other things, which is where it all went horribly wrong last time.
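
To make the production-like testing mentioned above more concrete, below is a minimal sketch (Python, wrapping the standard voms-proxy-init client) of a smoke test that requests a proxy from the pilot instead of the production servers. The host, port and DN in the vomses line are placeholders only - the real connection details are on the VomsPilot twiki linked above - and nothing in this sketch is part of the official pilot setup.

    #!/usr/bin/env python3
    # Illustrative only: request a VOMS proxy from the pilot service by pointing
    # the standard client at a private vomses file, so the production servers
    # are not contacted. Host/port/DN below are PLACEHOLDERS - take the real
    # values from https://twiki.cern.ch/twiki/bin/view/LCG/VomsPilot
    import os
    import subprocess
    import tempfile

    PILOT_VOMSES_LINE = (
        '"atlas" "voms-pilot.example.cern.ch" "15001" '
        '"/DC=ch/DC=cern/OU=computers/CN=voms-pilot.example.cern.ch" "atlas"\n'
    )

    def request_pilot_proxy(vo="atlas"):
        """Run voms-proxy-init against the pilot and report success or failure."""
        with tempfile.NamedTemporaryFile("w", suffix=".vomses", delete=False) as f:
            f.write(PILOT_VOMSES_LINE)
            vomses_path = f.name
        try:
            # -vomses points the client at our private server list instead of
            # the system-wide vomses configuration (e.g. /opt/glite/etc/vomses).
            proc = subprocess.run(
                ["voms-proxy-init", "-voms", vo, "-vomses", vomses_path],
                capture_output=True, text=True,
            )
            return proc.returncode == 0, proc.stdout + proc.stderr
        finally:
            os.unlink(vomses_path)

    if __name__ == "__main__":
        ok, output = request_pilot_proxy()
        print("pilot proxy request %s" % ("succeeded" if ok else "FAILED"))
        print(output)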

AOB:

Tuesday:

Attendance: local(Harry, Gavin, Roberto, Sophie);remote(Gareth, Jeff).

elog review:

Experiments round table:

LHCb (RS): Still running fake MC production. Alignment production tests yesterday failed with a bookkeeping problem. Now for the longer-term plans during the LHC shutdown, as decided at a meeting this morning: LHCb will run an activity called FEST2009 (Full Experiment System Test 2009). All components of the offline and online systems will be tested using realistic simulated events coming out of the HLT at 2 kHz. Data quality shifters will validate these as if they were real data. Generation of the required 100M events will start in about a week from now and will require several weeks at the T1 and T2 sites. Starting next February there will be weekly functional tests, leading to the full exercise in March 2009.

CMS (AS): Nothing special to report.

Sites round table: Jeff (who joined late) asked when NIKHEF would get some new work and was told about one week, when the LHCb MC jobs arrive.

Services round table: The Post-Mortem of the recent degradation in the LSF shares seen by LHCb has now been published by FIO group at https://twiki.cern.ch/twiki/bin/view/FIOgroup/PostMortem20081013

AOB: Roberto said he has put in a Remedy ticket to ask what caused the burst of high LHCb export traffic from CASTOR disk servers overnight. Sophie will look into this.

Wednesday

Attendance: local(Simone, Maria, Harry, Jamie, Jean-Philippe, Patricia, Roberto, Nick, Gav);remote(Gareth, JT, Gonzalo).

elog review:

Experiments round table:

  • ATLAS (Simone) - the situation is relatively smooth. Yesterday and part of the day before there was a problem with ASGC - fixed this morning, and transfers now run with high efficiency (destination error from SRM - no explanation, nor indeed confirmation that the problem was fixed, had been received - see the ASGC entry below!). A few things are being tested or will start soon: FTS on SLC4, modified to fix a bug - Gav: now deployed at CERN on the SLC4 production service so experiments can check; the new CASTOR-SRM interface in the PPS that Jan configured for testing - tests will start this week, with the endpoint appearing in ATLAS DM as 'any other destination site'; also the new "CASTOR for T3" with xrootd enabled - 100 TB (50 for now?) for local users at CERN. Will report on these tests next week. Last thing: the deadline for closure of the default pool is being discussed - no deadline yet.

  • LHCb (Roberto) - glexec problem testing helped to understand a bug in the DIRAC framework - see http://lblogbook.cern.ch/Operations/844. The fix is on the DIRAC side and is to be rolled out. Yesterday, during the data cleanup activity, we had some issues at CNAF: the site was in scheduled downtime until 12:00 but the problems were still there later in the afternoon, so a ticket was raised asking for the downtime to be extended. Ongoing MC activity for the T2s - no jobs for the T1s, including NIKHEF. A few pending DC06 MC productions remain to be completed. Particle gun and alignment productions are starting now. VOM(R)S meeting this morning: a request was made to provide a read-only VOMS instance - most likely at CNAF - using 3D?

  • ALICE (Patricia) - ALICE wants to move to the WMS at the majority of sites and migrate away from the RBs. NIKHEF/SARA and IN2P3 don't have these services yet; no news from IN2P3. A WMS is being installed at NIKHEF on the integration testbed - the sysadmin team was surprised, as this was the first time they had heard of it. Q: where/when was this announced? A: in this meeting a couple of weeks ago (there are certain security issues with the RBs), and also during the weekly ops meeting ~2 weeks ago. Nick: any response from sites with a CREAM CE? A: just some small T2s who wish to provide a CREAM CE for ALICE; ALICE would prefer some larger sites -> to the weekly OPS agenda.

Sites round table:

  • NL-T1: might fail a SAM test or two due to a rogue biomed user; the load on the back-end fileserver is ~150. Jobs are going through - being worked on right now. Simone - looked at the status of production: MC production is very low except in a couple of clouds with a backlog, hence no jobs.

  • NDGF announced an intervention from 07:00 to 14:00 UTC on 27th October.

  • ASGC - Sebastien - I can confirm (from ASGC) that the CASTOR instance was degraded this morning (your morning). This was the consequence of an accumulation of requests in the stager due to extremely high SRM activity. It was solved by Hungche and Giuseppe around 11:00 Geneva time.

Services round table:

  • DB (Maria) - a Streams problem occurred on Sunday. The DB at NDGF was in a strange state and the ATLAS propagation to the T1s started to pile up on Saturday: LCRs were not propagated and the propagation paused on Sunday morning. It resumed operation on Monday morning when the DB at NDGF was restarted. One T1 can effectively block Streams to all sites (when LCRs pile up to the extent that the queue memory for them is exhausted); the source is not affected, thanks to the downstream capture box. Raised with Oracle as an enhancement request: the site with the problem would be detected automatically, split off and re-merged when back. Not available even in Oracle 11g - the split is done by hand, e.g. when a site is in maintenance mode. The buffer(s) can cope for about 5 days. Automatic detection of pileup is being added so that an alarm can be sent (see the illustrative sketch below). Simone: is there a channel to alarm the T1s? A: no - an alarm will be implemented in the monitoring, but T1s cannot be expected to intervene at weekends; Streams support currently covers working hours. The splitting operation - and the merging - requires a lot of manual work and is done only when the window is known to be >> 8 hours, e.g. 1 day or so. A power cut in the CERN computer centre on 18 Nov will affect the integration RACs (they are in the vault, hence also the tape robots). Simone - another intervention for the robots in November - is this the same? To be checked. Maria - work has started on the Oracle critical patch updates for October: integration is being done, others are being scheduled, then production. Interventions on the integration clusters also have to be scheduled, as some run production applications, e.g. LHCb bookkeeping on validation and ATLAS integration used for online applications(!)
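
As an illustration of the automatic pileup detection mentioned in the previous item, the sketch below polls a per-destination LCR backlog figure and raises an alarm before the propagation queue memory is exhausted. The functions get_lcr_backlog() and send_alarm() and the threshold value are hypothetical placeholders, not part of the actual Streams monitoring; the real numbers would come from the Streams monitoring views and the existing alerting chain.

    #!/usr/bin/env python3
    # Hypothetical sketch of an LCR-pileup alarm. get_lcr_backlog() and
    # send_alarm() are placeholders for the real Streams monitoring views
    # and the existing alarm/notification chain.
    import time

    BACKLOG_THRESHOLD = 500000  # assumed limit; tune to the available queue memory
    POLL_INTERVAL_S = 300       # check every 5 minutes

    def get_lcr_backlog(destination):
        """Placeholder: return the number of LCRs queued for one T1 destination."""
        raise NotImplementedError("hook this up to the Streams monitoring")

    def send_alarm(message):
        """Placeholder: hand the message to the monitoring/alarm system."""
        raise NotImplementedError("hook this up to the alerting chain")

    def watch(destinations):
        """Alarm early: one blocked T1 can eventually stall propagation to all
        sites once the queue memory for its LCRs is exhausted."""
        while True:
            for site in destinations:
                backlog = get_lcr_backlog(site)
                if backlog > BACKLOG_THRESHOLD:
                    send_alarm("Streams LCR backlog for %s: %d" % (site, backlog))
            time.sleep(POLL_INTERVAL_S)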

AOB:

Thursday

Attendance: local();remote().

elog review:

Experiments round table:

Sites round table:

  • ASGC - see addendum above regarding degradation seen by ATLAS at ASGC

Services round table:

  • DB (Maria) - Post mortem on ATLAS Streams problems over last weekend.

AOB:

Friday

Attendance: local();remote().

elog review:

Experiments round table:

Sites round table:

Services round table:

AOB:
