Week of 180917

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information or can invite the right people to the meeting.

Best practices for scheduled downtimes



  • local:
  • remote:

Experiments round table:

  • CMS reports ( raw view) -
    • Lower CPU utilization this week: ~194k cores
      • Production low at ~153k cores
      • Analysis rather typical with ~41k cores
      • A central production service might still be suffering from EOS Fuse mount issue at CERN - INC:1784940
    • EOS file loss issue from Sep 4th INC:1785566
      • Several files appear to be lost - including RAW data

  • ALICE -
    • NTR

  • LHCb reports ( raw view) -
    • Activity
      • Data reconstruction for 2018 data
      • User and MC jobs
    • Site Issues

Sites / Services round table:

  • ASGC:
  • BNL:
  • CNAF:
  • EGI:
  • FNAL:
  • IN2P3:
  • JINR: The disk dCache SRM and gsiftp doors were overloaded; some tuning improved the transfer quality.
  • KISTI:
  • KIT:
    • Still some CRL issues for ATLAS, which took a large toll on the availability over the last week. Should be resolved now.
    • Apparently since yesterday at 10 p.m. CEST, one storage controller was broken and its redundancy mechanism failed as well. As a consequence, access to a specific block of disks was no longer possible, and a single dCache pool for CMS experienced critical errors, causing it to disable itself. The pool was brought back online this morning at 10 a.m. The broken controller needs to be replaced tomorrow, which should be transparent for all user applications.
  • NDGF:
  • NL-T1: NTR
  • NRC-KI:
  • OSG:
  • PIC:
  • RAL: NTR
  • TRIUMF: Migrated the dCache head nodes to the new data centre last Tuesday. Data transfers were interrupted for about two hours and 45 minutes due to the migration. IPv6 was also enabled on the new dCache head nodes (dual stack).

  • CERN computing services:
    • Interventions on the remaining OpenStack service availability zones are planned for Monday, Tuesday and Wednesday, as per OTG:0045522. A performance issue noted last week on one of the cells affected some services; it was fixed after a reconfiguration.
  • CERN storage services:
  • CERN databases:
  • Monitoring:
    • Final reports for the Aug 2018 availability sent around
  • MW Officer:
  • Networks: NTR
  • Security: NTR


Topic revision: r12 - 2018-09-17 - XavierMol