Week of 190923

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information or invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: Kate (WLCG, chair), Julia (WLCG), Maarten (WLCG, ALICE), Borja (monitoring), Roberto (storage), Alberto (monitoring)
  • remote: Andrzej (ATLAS), Elena (CNAF), Zoltan (LHCb), Di (TRIUMF), David B (IN2P3), Dave M (FNAL), Xin (BNL), Pepe (PIC)

Experiments round table:

  • ATLAS reports -
    • Status: several instabilities in production during the last week
      • An LHCONE routing issue affected site connectivity to CERN.
        OTG:0052301
        Root cause: AS2697 (TIFR) leaked all the CERN prefixes into their LHCONE peering with GEANT; a toy prefix-filter sketch follows this report.
      • This caused 3 hours (CERN) + 5 hours (Rucio) of downtime for the ATLAS central production services.
        After CERN connectivity returned, the wave of returning production jobs started to overload the Rucio file-access services.
      • Rucio will soon be better protected by additional limits on the number of concurrent request connections; a minimal sketch of that technique also follows this report.
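
The root cause above is a classic BGP route leak: prefixes re-announced into a peering where they do not belong, which per-peering prefix filters normally stop. As an illustration only (not the actual GEANT or TIFR configuration, and with hypothetical prefixes), a minimal Python sketch of such a filter:

    # Toy illustration of per-peering prefix filtering; the prefixes are
    # hypothetical, not the real CERN or TIFR ranges.
    from ipaddress import ip_network

    # Hypothetical set of prefixes this peer is allowed to announce.
    ALLOWED = [ip_network("203.0.113.0/24")]

    def accept(announced: str) -> bool:
        """Accept an announcement only if an allowed prefix covers it."""
        net = ip_network(announced)
        return any(net.subnet_of(allowed) for allowed in ALLOWED)

    print(accept("203.0.113.128/25"))  # True: within the agreed range
    print(accept("192.0.2.0/24"))      # False: a leaked prefix is rejected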
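
How Rucio will implement its limits is not detailed above; as a hedged sketch of the general technique (capping concurrent requests with a semaphore, all names and limits hypothetical):

    # Minimal sketch, not Rucio code: a semaphore caps how many requests
    # are served concurrently, so a returning wave of jobs queues up
    # instead of overloading the service.
    import asyncio

    MAX_CONCURRENT = 5  # hypothetical limit

    sem = asyncio.Semaphore(MAX_CONCURRENT)

    async def handle_request(request_id: int) -> None:
        async with sem:                  # waits once the limit is reached
            await asyncio.sleep(0.1)     # stand-in for real file-access work
            print(f"served request {request_id}")

    async def main() -> None:
        # A burst of 50 requests; only 5 are in flight at any time.
        await asyncio.gather(*(handle_request(i) for i in range(50)))

    asyncio.run(main())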

  • ALICE -
    • NTR

  • LHCb reports -
    • Activity:
      • MC, user jobs and data restripping.
      • Continuing staging (tape recall) at all T1s
    • Issues:
      • CERN:
      • RRCKI:
      • NIKHEF:
      • RAL:
        • GGUS:142350 (old ticket): under investigation. User jobs have increased and there is no queue, but the issue seems to continue.

Sites / Services round table:

  • ASGC: nc
  • BNL: NTR
  • CNAF:
    • Between 18:30 and 19:30 tonight there will be an intervention on the second link of the LHCOPN-LHCONE network, lasting a few minutes. Connections over this link will drop.
  • EGI: nc
  • FNAL: NTR
  • IN2P3: NTR
  • JINR: NTR
  • KISTI: nc
  • KIT: nc
  • NDGF: Nothing to report
  • NL-T1:
    • No-one available to dial in.
    • The tape backend was unavailable from the weekend until ~12:00 today because of a broken network switch.
    • A dCache pool node was offline for ~15 minutes today for hardware maintenance.
    • Last week's upgrade of dCache (4.2 to 5.2) and Java (8 to 11) left us with password verification problems (https://github.com/dCache/dcache/issues/5077). We will have to schedule a short downtime on short notice to roll back Java for the gPlazma authentication component; a quick version check is sketched after this list. Apologies for the inconvenience.
  • NRC-KI: nc
  • OSG: nc
  • PIC: NTR
  • RAL: Network issues started on 13 September. The initial symptoms impacted FTS transfers to the whole of the UK from BNL, Fermilab and FZK. It was initially assumed to be an FTS problem, but after sites switched to the CERN FTS, problems persisted at several sites (the non-OPN parts of the Tier-1, including FTS and GOCDB). After extensive investigation by many people, Duncan Rand found a problem in the JANET core where traceroute6 packets are being dropped. The investigation is ongoing; a minimal probe is sketched after this list.
  • TRIUMF: NTR
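
For the NL-T1 Java rollback above, a quick check (an illustration only, not dCache tooling) that the JVM on the PATH is the intended major version, e.g. 8 rather than 11:

    # Sketch only: parse `java -version` (which prints to stderr) and
    # report the major version, so a rollback can be verified.
    import re
    import subprocess

    def java_major_version() -> int:
        out = subprocess.run(["java", "-version"],
                             capture_output=True, text=True).stderr
        match = re.search(r'version "(\d+)(?:\.(\d+))?', out)
        if not match:
            raise RuntimeError("could not parse `java -version` output")
        major = int(match.group(1))
        # Java 8 reports itself as 1.8; newer JDKs as 9, 10, 11, ...
        return int(match.group(2)) if major == 1 else major

    print(java_major_version())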
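
For the RAL/JANET item above, a minimal sketch of probing an IPv6 path (the host name is a placeholder, not the actual endpoint under test), wrapping the system traceroute6 whose packets were found to be dropped:

    # Sketch only: run traceroute6 and return its output; hops shown
    # as '*' never replied, which is the symptom being investigated.
    import subprocess

    def probe_ipv6_path(host: str) -> str:
        result = subprocess.run(
            ["traceroute6", "-n", host],  # -n: numeric output, no DNS lookups
            capture_output=True, text=True, timeout=120,
        )
        return result.stdout

    print(probe_ipv6_path("example.org"))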

  • CERN computing services: nc
  • CERN storage services: GridFTP connection problems on Sat 21st are being followed up internally (GGUS:143329)
  • CERN databases: nc
  • GGUS: NTR
  • Monitoring: NTR
  • MW Officer: nc
  • Networks: Last week (18th morning only) - LHCONE IPv4 routing issue affecting CERN (OTG:0052301)
  • Security: NTR

AOB:
