Week of 160912

WLCG Operations Call details

  • At CERN the meeting room is 513 R-068.

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Web
  • Whenever a particular topic needs to be discussed at the daily meeting and requires information from sites or experiments, it is highly recommended to announce it by email to wlcg-operations@cern.ch, to make sure that the relevant parties have the time to collect the required information or to invite the right people to the meeting.

Tier-1 downtimes

Experiments may experience problems if two or more of their Tier-1 sites are inaccessible at the same time. Therefore Tier-1 sites should do their best to avoid scheduling a downtime classified as "outage" in a time slot overlapping with an "outage" downtime already declared by another Tier-1 site supporting the same VO(s). The following procedure is recommended:
  1. A Tier-1 should check the downtimes calendar to see if another Tier-1 already has an "outage" downtime in the desired time slot.
  2. If there is a conflict, another time slot should be chosen.
  3. If stronger constraints do not allow choosing another time slot, the Tier-1 should point out the conflict to the SCOD mailing list and at the next WLCG operations call, to discuss it with the representatives of the experiments involved and the other Tier-1.

As an additional precaution, the SCOD will check the downtimes calendar for Tier-1 "outage" downtime conflicts at least once during his/her shift, for the current and the following two weeks; in case a conflict is found, it will be discussed at the next operations call, or offline if at least one relevant experiment or site contact is absent.
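
The conflict check in step 1 boils down to a time-interval overlap test between "outage" downtimes of Tier-1 sites supporting a common VO. Below is a minimal sketch in Python, assuming the downtime entries have already been extracted from the calendar into simple records; the field names and the example entries are hypothetical and for illustration only (the actual check is done by hand against the downtimes calendar).

```python
from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    # Two time slots overlap if each one starts before the other ends.
    return start_a < end_b and start_b < end_a

def find_conflicts(outages):
    # Return pairs of Tier-1 "outage" downtimes that overlap in time and
    # support at least one common VO.
    conflicts = []
    for i, a in enumerate(outages):
        for b in outages[i + 1:]:
            shared_vos = a["vos"] & b["vos"]
            if (a["site"] != b["site"]
                    and shared_vos
                    and overlaps(a["start"], a["end"], b["start"], b["end"])):
                conflicts.append((a["site"], b["site"], shared_vos))
    return conflicts

# Made-up example entries; dates and VO lists are illustrative only.
outages = [
    {"site": "IN2P3-CC", "vos": {"atlas", "lhcb"},
     "start": datetime(2016, 9, 20, 6), "end": datetime(2016, 9, 20, 18)},
    {"site": "PIC", "vos": {"atlas", "cms", "lhcb"},
     "start": datetime(2016, 9, 14, 7), "end": datetime(2016, 9, 14, 17)},
]
print(find_conflicts(outages))  # -> [] : no overlapping "outage" slots here
```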

Links to Tier-1 downtimes




  • local: Luca (SCOD+Storage), Sabine (ATLAS), Maarten (ALICE), Marcelo (LHCb), Julia (WLCG), Gavin (Computing), Sebastien (DB)
  • remote: Stefano (CMS), Eric (BNL), Fahui Lin (ASGC), Lucia (CNAF), Vincenzo (EGI), Rolf (IN2P3), Sang (KISTI), Dmytro (KIT), Ulf (NDGF), Onno (NL-T1), Kyle (OSG), Tiju (RAL), Di (TRIUMF), Jose (PIC), Victor (JINR), Eygene (RRC-KI)

Experiments round table:

  • ATLAS reports (raw view) -
    • Activities and global report
      • Production going well (above 250k running jobs), mostly event generation and simulation, but also analysis and reprocessing tests.
      • Quite high pressure from analysis jobs: a few users submitted huge productions, one of them with very high memory consumption (that task was aborted).
      • T0 running grid jobs when CPU is available.
      • Some staging activity to prepare the next reprocessing campaign.
    • Problems:
      • Jobs
        • T1 site problems: high failure rate at RAL due to one WN, high failure rate at NIKHEF (ongoing), one pool down at IN2P3-CC.
      • Data
        • T1 storage is full even with few secondaries: data has to be regularly rebalanced between the T1s.
        • FZK deletion failures due to a bug in the old dCache version when HTTPS is used (a permission error rather than "file not found" is reported when a file is missing): the site will upgrade to a new dCache version and in the meantime the SRM protocol will be used.
        • Most T2 site problems were related to data (transfer, deletion, access)
      • Central services
        • CVMFS Stratum-1 and CONDB glitches, but the services are working.
        • The AMI replica at CERN was down this weekend; the main service was working. It seems to have been a synchronization problem; the CERN servers were OK. Solved today.

  • CMS reports (raw view) -
    • Production activity low due to lack of requests
    • Now using the same pilot at all sites
    • No problems to report.

  • ALICE -
    • NTR

  • LHCb reports (raw view) -
    • Activity
      • Monte Carlo simulation, data reconstruction/stripping and user jobs on the Grid
    • Site Issues
      • T0:
        • The T-Systems cloud extension has been removed from the production mask.
        • CERN-PROD: multiple failures accessing and storing data to/from CASTOR srm-lhcb.cern.ch (GGUS:123821), in progress.
        • CERN-PROD: request for a dual-stack VOMS server for LHCb (GGUS:123799), in progress.
      • T1:
        • PIC: outage downtime declared due to network and dCache upgrades on 14th September (next Wednesday).
        • NL-T1: warning for a network test to be conducted tomorrow morning (13th September), 6:00-8:00.
        • RAL: will be at warning tomorrow (13th September) for maintenance on the tape library.
        • IN2P3: there were problems with memory consumption of Turbo jobs. Being investigated.

Sites / Services round table:

  • BNL: NTR
  • EGI: NTR
  • FNAL: The FNAL tape endpoint will be down on Sep 13 for an upgrade to dCache 2.13. The entire site will be down from midnight CERN time on the 17th until early in the morning (~5 AM CERN time) (5 PM FNAL time on the 16th to 23:55 FNAL time on the 17th).
  • GridPP:
  • IN2P3: The site will be in downtime all day for batch and dCache on September 20th. Batch will start draining the night before. For further details please refer to the downtime declarations.
  • KIT: NTR
  • NDGF: 20 TB of ALICE data are currently unavailable due to a hardware controller failure.
  • NL-T1:
    • Tomorrow SARA has a network failover test to prepare for the datacenter move (see GOCDB).
    • Reminder: the SARA datacenter move is planned for the first two weeks of October.
    • Waiting for more info on GGUS:123825
  • NRC-KI:
    • Minor work on the top-level BDIIs: upgrade of the BDII software.
    • There were hardware problems on one of the LHCb disk pools and files were unavailable; fixed during the Sunday-to-Monday night by Alexander Rogovskiy.
  • OSG: looking into a GGUS ticket exchange problem for tickets using the "related issue" field.
  • PIC: Site downtime on Wed 14th Sep for a dCache upgrade. The network will also be upgraded from 10 Gbit/s to 20 Gbit/s.
  • RAL: Maintenance on the tape library tomorrow, 08:00-17:00 local time (GOCDB warning). Tape access for reading will stop. Writes will be buffered on the disk cache and flushed to tape after the maintenance has completed.

  • CERN computing services:
    • Following up on problems with the CERN CREAM CE services seen in ETF.
  • CERN storage services:
    • CASTOR databases upgraded and rebooted (the user-facing daemon had to be restarted since we saw minor errors).
    • EOSCMS updated to latest version this morning (12.09.2016)
    • EOSLHCB will be updated tomorrow (13.09.2016)
  • CERN databases: The CMSONR and ADG databases will be patched tomorrow morning; there is a risk of a 10-minute downtime for ADG. CMSR and CMSARC will be patched Wednesday morning; there is also a risk of a 10-minute downtime for them.
  • GGUS:
  • Monitoring:
    • Draft reports for the August 2016 availability sent around
  • MW Officer: NTR
  • Networks:
  • Security: NTR


  • RU-VRF and Belgian sites: we had an offer from CERN for the VLAN transit in Amsterdam (thanks, Edoardo!). Shkelzhen wrote to Belnet on this topic, but had not included our NOC in the discussion, so I don't know the status (and have no contact persons at Belnet). perfSONAR at IIHE works; it shows that Belnet uses their exchange point to transit the traffic via RETN (our commercial provider). We could try to use the AMS-IX exchange, which would require reconfiguration on the Belnet side, but, again, for this we need a contact at Belnet who cares about their Grid sites/users. [Update from 18:00 MSK: Belnet responded and we are in the process of talking, but it seems there is no immediate solution without IIHE paying some additional money. Investigating the cheapest options...]