Week of 161010

WLCG Operations Call details

  • At CERN the meeting room is 513 R-068.

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Web
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the daily meeting, it is highly recommended to announce it by email to wlcg-operations@cern.ch, so that the relevant parties have time to collect the required information or to invite the right people to the meeting.

Tier-1 downtimes

Experiments may experience problems if two or more of their Tier-1 sites are inaccessible at the same time. Therefore Tier-1 sites should do their best to avoid scheduling a downtime classified as "outage" in a time slot overlapping with an "outage" downtime already declared by another Tier-1 site supporting the same VO(s). The following procedure is recommended:

  1. A Tier-1 should check the downtimes calendar to see if another Tier-1 already has an "outage" downtime in the desired time slot.
  2. If there is a conflict, another time slot should be chosen.
  3. If stronger constraints do not allow choosing another time slot, the Tier-1 should point out the conflict to the SCOD mailing list and at the next WLCG operations call, to discuss it with the representatives of the experiments involved and of the other Tier-1.

As an additional precaution, the SCOD will check the downtimes calendar for Tier-1 "outage" downtime conflicts at least once during his/her shift, covering the current and the following two weeks; if a conflict is found, it will be discussed at the next operations call, or offline if a relevant experiment or site contact is absent.
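
As an illustration of this overlap check, below is a minimal sketch in Python; the Downtime record layout and the example entries are made up for illustration, while in practice the data would come from the downtimes calendar (GOCDB).

    # Sketch: flag overlapping Tier-1 "outage" downtimes affecting the same VO.
    # Record layout and example entries are illustrative; real data would be
    # fetched from the GOCDB downtimes calendar.
    from collections import namedtuple
    from datetime import datetime
    from itertools import combinations

    Downtime = namedtuple("Downtime", "site vos start end")

    def conflicts(downtimes):
        """Return pairs of downtimes at different sites that overlap in time
        and affect at least one common VO."""
        found = []
        for a, b in combinations(downtimes, 2):
            overlap = a.start < b.end and b.start < a.end
            common = set(a.vos) & set(b.vos)
            if a.site != b.site and overlap and common:
                found.append((a, b, common))
        return found

    # Hypothetical example:
    dts = [
        Downtime("IN2P3-CC", {"atlas", "cms"}, datetime(2016, 10, 18, 8), datetime(2016, 10, 18, 18)),
        Downtime("RAL", {"atlas", "lhcb"}, datetime(2016, 10, 18, 12), datetime(2016, 10, 19, 12)),
    ]
    for a, b, vos in conflicts(dts):
        print("Conflict: %s and %s overlap for VO(s) %s" % (a.site, b.site, ", ".join(sorted(vos))))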

Links to Tier-1 downtimes

ALICE | ATLAS | CMS  | LHCB
      | BNL   | FNAL |

Monday

Attendance:

  • local: Andrea Sciabà (SCOD), Jesús López (storage), Ignacio Coterillo (databases), Ivan Glushkov (ATLAS), Andrea Manzi (middleware), Marian Babik (networks), Vincent Brillault (security), Maria Dimou (GGUS), Andrew McNab (LHCb)
  • remote: Tommaso Boccali (CMS), Antonio Falabella (CNAF), Dmytro Karpenko (NDGF), Eygene Ryabinkin (NRC-KI), FaHui Lin (ASGC), Gareth Smith (RAL), Guenter Grein (GGUS), Rolf Rumler (IN2P3-CC), Luca Lama (CNAF), Xin Zhao (BNL), Victor Zhiltsov (JINR), David Mason (FNAL), Hiro Ito (BNL), Dmitri Nilsen (KIT)
  • apologies: Dennis van Dok (NL-T1)

Experiments round table:

  • ATLAS reports -
    • EOS → CASTOR transfer rate tested, using xrootd instead of SRM (no checksum validation): ~5-6 GB/s (rate plot attached as chart44.png).
    • SARA downtime (until Oct 17): some running tasks had to be re-assigned. Unique datasets were identified; no effect on production.
    • Running low priority MC12c samples.

  • CMS reports -
    • Running at full speed (2016 data re-reco + MC for high-pileup preparation).
    • A calm week overall, apart from some external networking problems on Oct 7, promptly resolved (INC:1154149).
    • Some struggle with Caltech not being correctly resolved by CERN DNS, which claimed to be authoritative but was out of date (GGUS:124143); solved late on Friday when the configuration at CERN was modified. It will probably require a fix of the readiness plots (see the resolver-comparison sketch after this report).
    • Computing shifters are often confused by Kibana showing pink plots; it happens too frequently, and we need a fix or an explanation from the Kibana team (now INC:1156813).
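Regarding the DNS item above, the following is a minimal sketch of the kind of check that can expose a stale authoritative answer: it compares the A records returned by the default resolver with those from an external public resolver. It assumes the dnspython package (version 2.0 or later); the host name and the use of 8.8.8.8 are examples only.

    # Sketch: compare A records from the default resolver with those from an
    # external resolver, to spot a stale "authoritative" answer.
    # Assumes dnspython >= 2.0; host name and 8.8.8.8 are examples only.
    import dns.resolver

    HOST = "se01.example.caltech.edu"  # hypothetical host name

    def a_records(resolver):
        return sorted(rr.to_text() for rr in resolver.resolve(HOST, "A"))

    local = dns.resolver.Resolver()        # uses the system resolver configuration
    external = dns.resolver.Resolver()
    external.nameservers = ["8.8.8.8"]     # an external public resolver, as an example

    local_ips, external_ips = a_records(local), a_records(external)
    if local_ips != external_ips:
        print("Mismatch: local %s vs external %s" % (local_ips, external_ips))
    else:
        print("Resolvers agree: %s" % local_ips)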

  • ALICE -
    • Apologies: the operations experts are at CHEP
    • NTR, at least until Monday morning

  • LHCb reports -
    • Activity
      • Monte Carlo simulation, data reconstruction/stripping and user jobs on the Grid
      • Running smoothly with no plans for significant changes while many people are at CHEP.
Marian mentioned an open ticket to fix the LHCb VO feed. Andrew reported that the fix has been committed in DIRAC but it will not be deployed before CHEP finishes.

Sites / Services round table:

  • ASGC: ntr
  • BNL: a recomputation of the availability for September 11 will be requested, as the apparent unavailability was caused by an issue in a SAM test.
  • CNAF:
    • The computing element problems of the last two weeks are now solved; a detailed post-mortem is attached.
    • The DDN storage system suffered an I/O module failure on 21 September, impacting ALICE, ATLAS and AMS. Unscheduled downtime from 21-Sep-16 15:09:00 to 22-Sep-16 07:38:41 UTC; services started coming back online around midnight.
  • EGI:
  • FNAL: ntr
  • GridPP:
  • IN2P3: NTR
  • JINR: no site issues this week. dCache was upgraded from 2.10 to 2.13 on the Disk and Buffer/MSS instances last Friday. The local AAA redirector (federation host) stays at version 4.3.0, which looks more stable than 4.4.0.
Andrea M. asked if CMS saw any issue with 4.4.0, considering that CMS had asked for it to be deployed. Tommaso answered that no problems were reported.
  • KISTI:
  • KIT:
  • NDGF:
    • Oslo cluster still down after the maintenance last week; it turned out to be a wrong network cable. It is being fixed right now, but the cluster may still be down at the time of the weekly meeting.
  • NL-T1:
    • SARA datacenter move: all hardware has safely arrived in the new datacenter. As far as we can see, no data has been lost. Still working to get everything up and running, but we hope to finish according to schedule.
  • NRC-KI: NTR
  • OSG:
  • PIC:
  • RAL: the network router that manages our external data flows was replaced last Wednesday (Oct 5) as planned. This has increased the bandwidth to non-OPN connected sites (up to a possible 40 Gbit/s).
  • TRIUMF:

  • CERN computing services:
  • CERN storage services:
    • Investigating whether the crashes of EOS ATLAS are correlated with an activity that started on Saturday at 3 AM and finished this morning by 8 AM. The effect was about one hour of downtime.
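As a trivial illustration of this kind of correlation check, the sketch below counts how many crash timestamps fall inside the suspect activity window; the crash timestamps are made up.

    # Sketch: check whether EOS crash timestamps fall inside the window of the
    # suspect activity (Saturday 03:00 to Monday 08:00). Crash times are made up.
    from datetime import datetime

    window_start = datetime(2016, 10, 8, 3, 0)   # Saturday 3 AM
    window_end   = datetime(2016, 10, 10, 8, 0)  # Monday 8 AM

    crashes = [  # hypothetical crash timestamps extracted from the service logs
        datetime(2016, 10, 8, 4, 12),
        datetime(2016, 10, 9, 23, 47),
        datetime(2016, 10, 10, 9, 30),
    ]

    inside = [c for c in crashes if window_start <= c <= window_end]
    print("%d of %d crashes fall inside the activity window" % (len(inside), len(crashes)))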
  • CERN databases:
  • GGUS: the T0 ALARM test tickets from the GGUS release of 28/9 (10 days ago) are still open. We wish to have the T0 service managers' action on this. These tests have taken place every month since 2009; it should be routine by now to get an acknowledgement from the relevant support level.
Maria asked if the text of the test ticket should be improved; Andrea M. answered that it does not need to be changed and that CERN will simply make sure that next time the ticket gets answered quickly enough.
  • Monitoring:
  • MW Officer:
    • StoRM issue reported in GGUS:124293: after the upgrade to GPFS 4.2.1.1, StoRM stopped recognising the mounted file systems. While waiting for a fix, a workaround has been documented in the ticket by the developers.
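As an illustration of the underlying problem, here is a minimal sketch that checks whether the file systems a StoRM instance is expected to serve are actually visible as mount points; the mount point list is made up.

    # Sketch: verify that the file systems StoRM is expected to serve are
    # actually mounted. The mount point list is illustrative.
    import os

    EXPECTED_MOUNTS = ["/gpfs/atlas", "/gpfs/cms"]  # hypothetical GPFS mount points

    for path in EXPECTED_MOUNTS:
        status = "OK" if os.path.ismount(path) else "NOT MOUNTED"
        print("%-20s %s" % (path, status))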
  • Networks: IRFU reported low throughput for IPv6 and a ticket was opened, but it has not been touched since (GGUS:123930).
  • Security: while following up on the ongoing vulnerability (Advisory-SVG-2016-11476), it appeared that some sites were still using EMI-3. This repository has been dead for a year now; please make sure to use UMD instead!
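As an illustration of how a site administrator might spot such leftovers, below is a small sketch that scans the YUM repository definitions for references to the EMI repository host; the "emisoft" and "EMI/3" patterns are assumptions about the repository URLs, and this is not an official tool.

    # Sketch: flag YUM repo files that still reference the dead EMI repositories.
    # The "emisoft" / "EMI/3" patterns are assumptions; this is not an official tool.
    import glob

    for repo_file in glob.glob("/etc/yum.repos.d/*.repo"):
        with open(repo_file) as f:
            content = f.read()
        if "emisoft" in content or "EMI/3" in content:
            print("Possible EMI repository in %s - please switch to UMD" % repo_file)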

AOB:
