Week of 200720

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • Whenever a topic to be discussed at the operations meeting requires information from sites or experiments, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information, or can invite the right people to the meeting.

Best practices for scheduled downtimes



  • local:
  • remote: Onno (NL-T1), Elena (CNAF), Xavier (KIT), Maarten (ALICE), Alberto (Monitoring), Ivan (ATLAS), Brian (RAL), Christoph (CMS), Dave (FNAL), Gavin (Compute), Julia (WLCG), Andrew (TRIUMF), Xin (BNL)

Experiments round table:

  • ATLAS reports -
    • Problems:
      • DNS outage at CERN, felt mostly through the loss of monitoring (Monit) and HammerCloud
      • FTS problem (filesize out-of-range value from gfal) (ticket)

  • CMS reports -
    • DNS outage at CERN during the weekend: OTG:0057924
      • CMS users reported problems
      • Impact on central production being investigated, but likely less severe
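Outages like the weekend's DNS incident are typically triaged with a quick resolution probe against the system resolver. A minimal sketch (the helper name and the probed hostname are illustrative, not taken from the minutes):

```shell
# check_dns: returns 0 if the given name resolves via the system
# resolver (NSS), non-zero otherwise. Hypothetical helper for illustration.
check_dns() {
    getent hosts "$1" > /dev/null
}

# Example probe; lxplus.cern.ch is just a sample hostname.
if check_dns "lxplus.cern.ch"; then
    echo "resolves"
else
    echo "DNS lookup failed"
fi
```

Using `getent hosts` rather than `dig` exercises the same NSS lookup path that applications use, so it reflects what jobs and services actually experienced during the outage.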

  • ALICE -
    • Mostly business as usual, no major problems
      • Only minor fallout from Sunday afternoon's DNS incident at CERN

  • LHCb reports -
    • Activity:
      • Usual MC, user and WG production.
      • Tomorrow, dCache namespace re-ordering at GridKa
      • Exceptionally it was agreed to have two Tier-1 sites (CNAF and GridKa) down at the same time

Sites / Services round table:

  • ASGC: NC
  • BNL: 3840 new job slots added to the ATLAS Tier1 computing farm last week.
  • CNAF: this week there are two scheduled downtimes (GOCDB 29099 and GOCDB 29100) to upgrade the CNAF storage systems; all the filesystems will be unmounted. The batch system for all experiments will be closed on Monday (20/07) at 04:00 PM CEST to drain the farm. The storage systems for the non-LHC experiments will be down from Tuesday (21/07) at 09:00 AM CEST. The storage systems and the farm will be available again from Wednesday (22/07) at 12:00 PM CEST. The downtime will also affect all the User Interfaces. The following interventions will be performed during the downtime:
    • Putting the new storage systems (5 PB) for the non-LHC experiments into production
    • Change of the BlockSize global parameter (4 MB ==> 16 MB) in the farm cluster to accommodate the new filesystem
    • Final rsync between the old and the new filesystem
    • Update of tsm-hsm-10 (non-LHC) to CentOS 7
    • Firmware update of the new storage system before it enters production (bugfix)
    • Update of the GPFS farm and UI clusters to GPFS 5
  • EGI: NC
  • IN2P3: NC
  • KIT:
    • Brief downtime this morning (~ 3 min) for relocating one of CMS' dCache pools between two servers.
    • Downtime tomorrow for LHCb from 09:00 to 17:00 CEST, during which we will attempt to remodel the LHCb dCache namespace via a database update.
    • Last week there were some internet connectivity issues for GridKa, explained by an overloaded KIT firewall. The overload was caused by network traffic from the WNs to the SE going through the firewall, although a bypass should have been configured for it. That is now fixed.
  • NDGF: NC
  • NL-T1: Reminder: two downtimes this week for Sara-matrix - see last week's minutes.
  • NRC-KI: NC
  • OSG: NC
  • PIC: NC
  • RAL: Campus firewall slow on 14/07/2020 due to an extraordinary workflow from another community (led to very large RTT between hosts). Fix in place. Further OPN fibre interventions on the UK-CERN link have taken place.

  • CERN computing services:
  • CERN storage services: NC
  • CERN databases: NTR
  • Monitoring:
    • Infrastructure heavily affected by the DNS issue on Sunday (OTG:0057926)
    • All the fresh flows have been re-established; we are working on the recovery of the missing data
  • MW Officer: NC
  • Networks: NTR
  • Security: NTR


Topic revision: r19 - 2020-07-20 - MaartenLitmaath