Week of 171127

WLCG Operations Call details

  • At CERN the meeting room is 513 R-068.

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Web
  • Whenever a particular topic needs to be discussed at the daily meeting and requires information from sites or experiments, it is highly recommended to announce it by email to wlcg-operations@cern.ch to make sure that the relevant parties have time to collect the required information or to invite the right people to the meeting.

Tier-1 downtimes

Experiments may experience problems if two or more of their Tier-1 sites are inaccessible at the same time. Therefore Tier-1 sites should do their best to avoid scheduling a downtime classified as "outage" in a time slot overlapping with an "outage" downtime already declared by another Tier-1 site supporting the same VO(s). The following procedure is recommended:

  1. A Tier-1 should check the downtimes calendar to see whether another Tier-1 already has an "outage" downtime in the desired time slot.
  2. If there is a conflict, another time slot should be chosen.
  3. If stronger constraints do not allow choosing another time slot, the Tier-1 should point out the conflict to the SCOD mailing list and at the next WLCG operations call, to discuss it with the representatives of the experiments involved and the other Tier-1.

As an additional precaution, the SCOD will check the downtimes calendar for Tier-1 "outage" downtime conflicts at least once during his/her shift, for the current and the following two weeks; in case a conflict is found, it will be discussed at the next operations call, or offline if at least one relevant experiment or site contact is absent.
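The conflict check described above is essentially an interval-overlap test restricted to pairs of different Tier-1 sites supporting a common VO. A minimal sketch in Python, assuming a hypothetical in-memory data model of downtimes as (site, start, end, VOs) tuples (the real calendar lives in GOCDB, which this does not query):

```python
from datetime import datetime

def outage_conflicts(proposed, existing):
    """Return the sites whose existing 'outage' downtimes overlap the
    proposed slot and support at least one VO in common.
    Data model is hypothetical: (site, start, end, list_of_VOs)."""
    site, start, end, vos = proposed
    conflicts = []
    for o_site, o_start, o_end, o_vos in existing:
        if o_site == site:
            continue  # only overlaps between *different* Tier-1 sites matter
        # half-open interval overlap: [start, end) intersects [o_start, o_end)
        overlaps = start < o_end and o_start < end
        if overlaps and set(vos) & set(o_vos):
            conflicts.append(o_site)
    return conflicts

# Example: RAL plans an ATLAS outage while CNAF already has one covering ATLAS.
existing = [("CNAF", datetime(2017, 11, 28, 8), datetime(2017, 11, 28, 18),
             ["ATLAS", "CMS", "LHCb"])]
proposed = ("RAL", datetime(2017, 11, 28, 12), datetime(2017, 11, 28, 16),
            ["ATLAS"])
print(outage_conflicts(proposed, existing))  # → ['CNAF']
```

If the function returns a non-empty list, step 2 of the procedure applies: pick another slot, or escalate to the SCOD mailing list as in step 3.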

Links to Tier-1 downtimes

ALICE  ATLAS  CMS    LHCb
       BNL    FNAL

Monday

Attendance:

  • local: Luca (SCOD), Yolanda (Storage), Alexey (LHCb), Ivan (ATLAS), Maarten (ALICE), Vincent (Security), Gavin (Batch), Marian (Network), Julia (WLCG)
  • remote: Giuseppe B. (CMS), Chi-Hsun (ASGC), David M (FNAL), David B (IN2P3), Marcelo (CNAF), Sang-Un (KISTI), Xavier (KIT), Ulf (NDGF), Onno (NL-T1), Kyle (OSG), Gareth (RAL)

Experiments round table:

  • ATLAS reports -
    • No major problems.
    • RAL FTS problems
      • IPv6 networking problem on 21.11
      • Lots of errors - fixed by upgrading to FTS version 3.7.5 (27.11)
    • CNAF
      • Ongoing copying of RAW from CASTOR (finished data17).
      • Under discussion - replication of AOD from disks (i.e. second copy)

  • CMS reports -
    • The pp low-luminosity run ended on Sunday; the LHC machine development (MD) period has started
    • No relevant (new) problems in the past week
    • High CPU utilization
    • Transfer system still under pressure
      • US region still using old FTS service version (due to problems experienced by some US sites with latest version), GGUS:131836
    • CNAF outage mitigations
      • Received lists of the affected files
      • Identified RAW data files will get another tape copy at CCIN2P3
      • Urgent GEN-SIM samples are being reproduced (we don't have any other copy)

  • ALICE -
    • High to very high activity on average

  • LHCb reports -
    • General
      • almost no free disk space left; still waiting for full deployment of the 2017 disk pledges
    • Activity
      • Stripping validation, user analysis, MC
    • Site Issues
      • T1
        • SARA: problems with transfers today (GGUS:132067), no longer observed
        • RRC-KI: problems with file access, reported as fixed
        • FZK: one WN without CVMFS (GGUS:132064), solved close to instantly

Sites / Services round table:

  • ASGC: NTR
  • BNL: NTR
  • CNAF: The technical company contracted to re-establish power has arrived and is working on it, though most likely we still won't have power before January.
    • Tests on the hard drives have started, but it is still too early to estimate the damage.
  • EGI:
  • FNAL: NTR
  • IN2P3: Scheduled maintenance tomorrow, Tuesday 28 Nov. The CEs and SEs will be in downtime for the whole day.
  • JINR: NTR
  • KISTI: NTR
  • KIT:
    • Announced at-risk downtimes for rolling dCache updates the next three days for LHCb, CMS and ATLAS SEs (in this order).
    • Discovered several damaged tapes that are no longer readable. Unfortunately, files were lost (at GridKa/KIT):
      • Alice: 14
      • ATLAS: 1708
      • CMS: 643
      • LHCb: 1857
  • NDGF: NTR
  • NL-T1: dCache crashed twice last week; Friday and last night. Cause: communication issue between dCacheDomain and Zookeeper. Each time the dCacheDomain fails over to another Zookeeper node, dCache hangs and has to be restarted. Problem is known to the dCache developers but not understood yet. We're planning to switch back to the dCache internal Zookeeper as a workaround; this will require a downtime.
  • NRC-KI:
  • OSG:
  • PIC:
  • RAL: We were dealing with a short (ten-minute) power outage shortly before the meeting last week. We had recovered from that by early evening that day (Monday 20th).
  • TRIUMF: Apologies that I could not join. We have finally migrated all of our WNs from Torque+Maui to the HTCondor batch system, and the CEs from CREAM CE to ARC CE.

  • CERN computing services: NTR
  • CERN storage services: NTR
  • CERN databases: The CMSONR Active Data Guard database aliases were moved to new hardware last Tuesday to avoid the log-application issues that hit the old ADG. The issues recurred; one of the triggering queries was identified and its workload was moved to a separate ADG. Follow-up with Oracle Support is ongoing.
  • GGUS:
    • A new release is planned for Wed this week
      • A downtime has been scheduled for 07:00-09:00 UTC
      • Test alarms will be submitted as usual
  • Monitoring:
  • MW Officer:
  • Networks: GÉANT operations intervention on LHCONE; BGP sessions at risk of a brief flap, from 13/12/2017 18:00 UTC until 13/12/2017 20:30 UTC
  • Security: NTR

AOB:

Topic revision: r21 - 2017-11-27 - MaartenLitmaath