Week of 171218

WLCG Operations Call details

  • At CERN the meeting room is 513 R-068.

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Web
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the daily meeting, it is highly recommended to announce it in advance by email to wlcg-operations@cern.ch, to make sure that the relevant parties have time to collect the required information or to invite the right people to the meeting.

Tier-1 downtimes

Experiments may experience problems if two or more of their Tier-1 sites are inaccessible at the same time. Therefore Tier-1 sites should do their best to avoid scheduling a downtime classified as "outage" in a time slot overlapping with an "outage" downtime already declared by another Tier-1 site supporting the same VO(s). The following procedure is recommended:

  1. A Tier-1 should check the downtimes calendar to see if another Tier-1 has already an "outage" downtime in the desired time slot.
  2. If there is a conflict, another time slot should be chosen.
  3. If stronger constraints do not allow choosing another time slot, the Tier-1 should point out the conflict on the SCOD mailing list and at the next WLCG operations call, to discuss it with the representatives of the experiments involved and with the other Tier-1.

As an additional precaution, the SCOD will check the downtimes calendar for Tier-1 "outage" downtime conflicts at least once during his/her shift, covering the current and the following two weeks; if a conflict is found, it will be discussed at the next operations call, or offline if a relevant experiment or site contact is absent.
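As an illustration of the conflict check in step 1, below is a minimal sketch of the kind of overlap test the SCOD or a Tier-1 could run against a list of already-declared downtimes. The Outage structure, site names, VO names and dates are illustrative assumptions; this is not the actual downtimes-calendar (GOCDB) interface.

    # Minimal sketch, not the actual GOCDB / downtimes-calendar API: given the
    # already-declared Tier-1 "outage" downtimes, find those that overlap in time
    # with a proposed outage and support at least one VO in common.
    from datetime import datetime
    from typing import List, NamedTuple

    class Outage(NamedTuple):
        site: str            # Tier-1 site name
        vos: frozenset       # VOs supported by the site, e.g. {"atlas", "lhcb"}
        start: datetime      # downtime start
        end: datetime        # downtime end

    def conflicts(proposed: Outage, declared: List[Outage]) -> List[Outage]:
        """Declared outages at other Tier-1s that overlap in time and share a VO."""
        return [
            d for d in declared
            if d.site != proposed.site
            and d.vos & proposed.vos                                # common VO(s)
            and proposed.start < d.end and d.start < proposed.end   # time overlap
        ]

    # Illustrative example: KIT already has an outage covering ATLAS, CMS and LHCb.
    declared = [Outage("KIT", frozenset({"atlas", "cms", "lhcb"}),
                       datetime(2017, 12, 19, 8), datetime(2017, 12, 19, 18))]
    proposed = Outage("RAL", frozenset({"atlas", "cms", "lhcb"}),
                      datetime(2017, 12, 19, 12), datetime(2017, 12, 20, 12))
    for d in conflicts(proposed, declared):
        print("Conflict with", d.site, "shared VOs:", sorted(d.vos & proposed.vos))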

Links to Tier-1 downtimes

Attendance:

  • local: Julia (WLCG), Kate (WLCG, DB), Cristi (storage), Vincent (sec), Alberto (monitoring), Ivan (ATLAS), Raja (LHCb), Andrea M (MW), Alexander (LHCb, NRC-KI)
  • remote: Andrew (NL-T1), Darren (RAL), Gavin (comp), Sang Un (KISTI), Xavier (KIT), Di Qing (TRIUMF), Dave M (FNAL), Dmytro (NDGF), Victor (JINR), Xin (BNL), David B (IN2P3), Elizabeth (OSG), Vincenzo (EGI)

Experiments round table:

  • ATLAS reports -
    • Production:
      • 320k jobs (370k with HPCs)
      • Using T0 (19k slots) and HLT (55k slots) resources for GRID production.
      • Running derivation and reprocessing campaigns in parallel
    • The BNL network outage blocked all transfers depending on the BNL FTS. All have since recovered.
    • CNAF data replication - ongoing.

  • CMS reports -
    • Very quiet week operationally, with high activity: ~200k cores in use (~75% production activities)
      • Tier-0 CPU is in use for processing
    • Last week we asked several dCache sites to restart their GSI components (GridFTP, SRM, ...)
    • Production activities expected during the Xmas break:
      • CMS thanks all sites and experts for another great and successful year... Have a nice break and best wishes for 2018!

  • ALICE -
    • High to very high activity on average
    • Expectations for the end-of-year break:
      • steady MC production
      • raw data reconstruction
      • analysis probably at a lower level than usual
    • Thanks to all sites and experts for another successful year!
    • Season's greetings and best wishes for 2018!

  • LHCb reports -
    • Activity
      • Stripping validation, user analysis, MC
    • Site Issues
      • T1
        • RAL: problems with file upload (GGUS:132540) - possibly solved. Internal ticket opened about pilots killed at RAL (not by LHCb).
        • SARA: Waiting for the end of the downtime.
        • Missing files: RAW files found missing at RRC-KI (recovered), PIC (recovered) and IN2P3 (under investigation).
      • CERN :
        • Brief downtime of multiple database services yesterday; possibly a similar issue last week as well.
        • Staging failures (GGUS:132516) - we hope that the 3-day timeout request is not a long-term measure.
        • Missing files on tape (GGUS:132525) - solved?

Sites / Services round table:

  • ASGC: NC
  • BNL:
    • BNL experienced a site-wide external connectivity outage for 15 hours on Saturday, Dec 16th, from 6:30 am to 9:30 pm (local time).
    • This was an unprecedented incident, as both of our redundant 100 Gbps network circuits were interrupted: the primary one went down at around 00:30 (hardware failure), and the second one went down at 6:30 am (power/telco pole fire).
    • All BNL services have been back online since 9:30 pm (EST), Dec 16th.
    • ATLAS ELOG and OSG/GGUS tickets were kept updated during the outage.
  • CNAF: NC
  • EGI: NTR
  • FNAL: The FTS migration was moved to next year (most probably the first week of January)
  • IN2P3: NTR
  • JINR: A long downtime has been announced for the Tier-1 and Tier-2 resources on 26 and 27 December.
  • KIT:
    • The firewall maintenance on Tuesday went as expected, with little to no impact on production activity.
    • On Thursday we made another attempt to activate network redundancy for our new storage, which failed. As a side effect, several servers - most notably all dCache "doors" - were not reachable on their public interfaces for about an hour.
  • NDGF:
    • The tape robot at UCPH has been rewired to work around the UPS intervention: a downtime won't be needed.
    • The repmgr update to v4 needs configuration changes and a schema update; beware if you are running a replicated PostgreSQL/dCache setup. The update broke our DB replication; Friday's downtime fixed the issue. (A minimal replication-health check is sketched at the end of this round table.)
    • Fibre work in Sweden caused a handful of ~5-minute interruptions to the NDGF-OPN network during the night between Thursday and Friday last week. A downtime was added as soon as it was noticed.
    • The HPC2N cooling maintenance was cancelled; the downtime was shortened to only an afternoon of pool updates, with short interruptions during reboots. ATLAS and ALICE data were affected.
  • NL-T1: SARA has a scheduled outage for a dCache update.
    • Raja mentioned that the CEs are also down (possibly as part of the same intervention).
    • Julia requested more frequent participation from the site.
  • OSG: NTR
  • PIC: NTR
  • RAL: Nothing to report.

  • CERN computing services:
  • CERN storage services:
  • CERN databases: The ALICE Stager, CMSARC and LHCBR databases were unresponsive for a few minutes yesterday morning due to a network glitch (OTG:0041497)
  • GGUS:
    • For the end-of-year break: GGUS is monitored by a system connected to the on-call service. In case of total GGUS unavailability the on-call engineer (OCE) at KIT will be informed and will take appropriate action. If GGUS is available but there is a problem with the workflow (e.g. ALARM to CERN doesn't generate email notification to the operators), then WLCG should submit an ALARM ticket, notifying site FZK-LCG2 (DE-KIT), which triggers a phone call to the OCE.
Raja requested clarification on what needs to be done in case the GGUS system is completely unavailable. Raja also reported issues with a GGUS server this morning: it was reporting a full cache.
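
Related to the NDGF repmgr/PostgreSQL note above, here is a minimal sketch of how a site might verify, on the primary, that streaming replication is healthy again after such an upgrade. The host name, credentials and lag threshold are illustrative assumptions, and the query uses the PostgreSQL 10 names (pg_stat_replication.replay_lsn, pg_current_wal_lsn); older releases use the xlog-based equivalents.

    # Minimal sketch with assumed connection parameters (not NDGF's actual setup).
    # Run against the primary; lists connected standbys and their replay lag.
    import psycopg2

    # Hypothetical DSN - replace with the primary's real connection string.
    conn = psycopg2.connect("host=db-primary.example.org dbname=postgres user=monitor")
    with conn, conn.cursor() as cur:
        # PostgreSQL 10 names; on 9.x use pg_current_xlog_location / replay_location.
        cur.execute("""
            SELECT application_name, state, sync_state,
                   pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
            FROM pg_stat_replication
        """)
        rows = cur.fetchall()
        if not rows:
            print("WARNING: no standbys connected - replication may be broken")
        for name, state, sync_state, lag in rows:
            # 64 MB is an arbitrary example threshold for "acceptable" lag.
            ok = state == "streaming" and (lag or 0) < 64 * 1024 * 1024
            print("OK   " if ok else "CHECK", name, state, sync_state, lag, "bytes")
    conn.close()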


Season's Greetings!

  • THANKS for your help in making 2017 a very successful year for WLCG!
    • Further challenges and opportunities await us in 2018... :-)

  • Next meeting: Mon Jan 8