Week of 190225

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information or can invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: Borja (Chair, Monitoring), Julia (WLCG), Luis (Computing), Maarten (ALICE), Michal (ATLAS), Vladimir (LHCb)
  • remote: Christian (NDGF), Dave (FNAL), Di (TRIUMF), Jeff (OSG), John (RAL), Marcelo (CNAF), Sang-Un (KISTI), Stephan (CMS), Xavier (KIT)

Experiments round table:

  • ATLAS reports ( raw view) -
    • Activities:
      • normal activities
    • Problems
      • transfers from NDGF-T1_DATADISK fail with "[SRM_FILE_UNAVAILABLE] File is not online" (GGUS:139816) - one of the IJS pool servers panicked overnight - fixed
      • lost heartbeats at TRIUMF (GGUS:139851) - jobs fail because they need more memory than they request - current recommendation is to ask the site to implement a 50% memory margin in the batch system - done
      • timeouts from TRIUMF-LCG2_MCTAPE (GGUS:139865) - hsm incoming partition setting adjusted
      • transfers to CERN-PROD_LOCALGROUPDISK failed with "500 mkdir() fail" (GGUS:139796) - atlas003 user does not have proper permissions - we are waiting for this to be set in EOS
  • CMS reports ( raw view) -
    • No major grid computing issue(s)
    • Running smoothly at about 230k cores
      • usual ~75% production, ~25% analysis

  • ALICE -
    • High activity
    • IN2P3-CC: CVMFS down on 1 VOBOX on Thu, fixed Fri (GGUS:139829)
    • KIT: 3 CEs in bad shape since Fri, fixed today (GGUS:139861)
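The 50% memory margin recommended to TRIUMF above could, for instance, be expressed as an HTCondor eviction policy; the exact mechanism depends on the site's batch system, so the snippet below is only an illustrative sketch of the idea, not the configuration the site actually deployed:

```
# Hypothetical HTCondor config sketch (not the site's actual settings):
# only remove a running job once its measured memory usage exceeds 150%
# of the memory it requested, i.e. allow a 50% margin over the request.
# MemoryUsage and RequestMemory are standard job ClassAd attributes.
SYSTEM_PERIODIC_REMOVE = $(SYSTEM_PERIODIC_REMOVE) || \
    (MemoryUsage =!= UNDEFINED && MemoryUsage > 1.5 * RequestMemory)
```

An equivalent margin could be applied in other batch systems (e.g. via cgroup limits or Slurm's memory enforcement), as long as the enforced limit sits 50% above the job's request.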

Sites / Services round table:

  • ASGC: NC
  • BNL: NTR
  • CNAF: There was an issue with one switch between CNAF and CINECA (a separate site hosting some CNAF resources) last Friday and the previous weekend. The problem seems to be fixed.

Marcelo asked whether the other experiments (besides LHCb, which reported it) had experienced the same issue, since it should have had an impact on all of them. ALICE answered that a quick look at the monitoring numbers showed nothing; LHCb recalled that it had caused peaks of failures at certain times.

  • EGI: NC
  • FNAL: NTR
  • IN2P3: NC
  • JINR: NTR
  • KISTI: NTR
  • KIT:
    • Issues with half of our ARC-CEs as a consequence of connectivity issues with the local NFS service on Friday last week (GGUS:139861). Resolved by a reboot of the CE nodes this morning.
  • NDGF: NTR
  • NL-T1: NC
  • NRC-KI: NC
  • OSG: NTR
  • PIC: NC
  • RAL: We are updating our ARC-CE machines to NorduGrid ARC 5.4.3-1. This morning we updated the FTS servers to version 3.8.3.
  • TRIUMF:
    • Last Thursday there were two 3-hour downtimes to fix the storage backend controller of the VM servers. Originally only one downtime was planned, but the problem reappeared one hour after the vendor did the maintenance, so another unscheduled downtime had to be declared for the vendor to replace the broken parts. Some grid services on the VM servers, such as Frontier, the CE and the top BDII, were affected.
    • Over the weekend a lot of transfers to tape timed out due to some hot tape pools; this should have been solved by adjusting the incoming partition settings of the tape pools.

  • CERN computing services: NTR
  • CERN storage services:
    • EOSLHCB: down this morning because of a switch issue (OTG:0048495)
    • EOSALICE: down for migration to QuarkDB backend on Thursday from 07h30 (CET) for ~4 hours
  • CERN databases: NC
  • GGUS: NTR
  • Monitoring: NTR
  • MW Officer: NC
  • Networks: GGUS:139866 GGUS:139874 - ESNet network incident impacting US to CERN connectivity (also impacted AMS) was due to a problem with the ESNet router at CERN - workaround was applied, root cause is still under investigation
  • Security: NC

AOB:

Topic revision: r18 - 2019-02-25 - MaartenLitmaath