Week of 180416

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a topic that requires information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information or to invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: Kate (chair), Julia (WLCG), Ivan (ATLAS), Belinda (storage), Borja (monit), Alberto (monit), Gavin (comp)
  • remote: Xavier (KIT), Andrew (NIKHEF), Darren (RAL), Di (TRIUMF), Federico (LHCb), Christian (NDGF), Christoph (CMS), Dave (FNAL), Marcelo (CNAF), Xin (BNL)

Apologies: Balazs Konya (MW): he could only call in at 15:15, but by that time the meeting was already over.

Experiments round table:

  • ATLAS reports -
    • Overall - no major problems.
    • Production
      • Jobs: 305k/560k slots
      • Several generator job failures found (Sherpa, Pythia) - under investigation.
    • Transfers / Storage
      • xrootd 4.7 is incompatible with cache versions <= 2.16.48 (link). Please upgrade your cache version; as a workaround, force the use of xrootd 4.8.1.
      • Testing usage of T0 resources with SMT - ongoing.
      • EventService is creating datasets with 800k files, which overloads the DB. A solution is being worked on.
      • 200k / 80k files on EOS might be damaged - the impact is assessed to be low. Looking at adding an MD5 checksum to the files' metadata.
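The checksum verification mentioned in the ATLAS report amounts to computing a digest per file and storing it alongside the metadata. A minimal sketch of the computation (the function name and chunk size are illustrative, not from any ATLAS or EOS tool):

```python
import hashlib

def md5_checksum(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file by streaming it in 1 MiB
    chunks, so even multi-GB files never need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Comparing such a stored digest against a freshly computed one is what would let potentially damaged files be identified.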

  • CMS reports -
    • Good CPU utilization
      • ~160k cores for production
      • ~60k cores for analysis
    • No major issues to report

  • ALICE -
    • Apologies: the ALICE operations experts cannot attend today
    • Activity levels have been normal in recent days

  • LHCb reports -
    • Activity
      • HLT farm to be used for some more time in parallel with the trigger
    • Site Issues
      • SARA: data access problems (GGUS:134545) being worked on
      • CNAF: working with the site to resurrect last 60 files for the re-stripping

Sites / Services round table:

  • ASGC: nc
  • BNL: ongoing discussion on direct ticket exchange between GGUS and BNL-RT, without OSG as an intermediary
  • CNAF: NTR
  • EGI: nc
  • FNAL: Network issue last Monday: a misplaced router card caused intermittent problems, and the VOMS proxy was affected. Resolved Tuesday morning by re-seating the card.
  • IN2P3: nc
  • JINR: NTR
  • KISTI: nc
  • KIT:
    • Last Tuesday we added several IPv6 addresses to the KIT DNS for the CMS storage element. This was merely in preparation for full IPv6 deployment, but some sites tried to contact our services via IPv6 right away, without falling back to IPv4. We therefore dropped the new addresses again and will try activating IPv6 for the entire storage element once more tomorrow between 10:00 and 12:00 CEST (an at-risk downtime has been added to GOC-DB).
  • NDGF: HPC2N will upgrade pools tomorrow 13-14 CEST. ATLAS and ALICE data will be affected, but only for short periods.
  • NL-T1: A dCache storage node at SURFsara has failed. We are looking into the problem. The node holds ATLAS and LHCb files. LHCb has already submitted a GGUS ticket for this (GGUS:134545)
  • NRC-KI: nc
  • OSG: nc
  • PIC: nc
  • RAL: NTR
  • TRIUMF: NTR
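The client behaviour described in the KIT report (contacting a dual-stack host over IPv6 and only falling back to IPv4 when that fails) can be sketched as follows. This is a generic illustration, not code from any WLCG middleware; the function name is hypothetical:

```python
import socket

def connect_with_fallback(host, port, timeout=5.0):
    """Try each address returned by DNS in resolver order (typically
    IPv6 first on a dual-stack host), moving on to the next address
    when a connection attempt fails. Clients lacking this fallback
    break as soon as an unreachable AAAA record is published."""
    last_error = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_error = err
    raise last_error or OSError("no addresses found for %s" % host)
```

Clients that instead connect only to the first resolved address are the ones that failed when KIT published the new IPv6 records.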

  • CERN computing services: Short disruption to myproxy expected on Wednesday morning (OTG:0043356)
  • CERN storage services: NTR
  • CERN databases: NTR
  • GGUS:
    • Due to an unnoticed configuration change in MyOSG, GGUS could not fetch OSG site contact details after Feb 6.
    • US sites were therefore not notified about any GGUS tickets since then.
    • The problem was fixed on Thursday last week.
  • Monitoring:
    • Final Site Availability monthly report closed.
  • MW Officer: not aware of any new issues, the voms-clients-java-3.3.0 package problem reported earlier got fixed last week by Mattias Ellert.
  • Networks:
    • DESY inbound (GGUS:134470) from 10/4 to 13/4 - the issue now appears resolved; a campus firewall reconfiguration seems to have fixed it.
  • Security: NTR

AOB:

Topic revision: r17 - 2018-04-17 - MaartenLitmaath