Week of 190114

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a topic that requires information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information or can invite the right people to the meeting.

Best practices for scheduled downtimes



Attendance:
  • local: Julia (WLCG), Kate (WLCG, chair), Maarten (WLCG, ALICE), Petr (ATLAS), Gavin (computing), Renato (LHCb), Borja (monit), Alberto (monit)
  • remote: Xavier (KIT), Marcelo (INFN), Onno (NL-T1), John (RAL), Dave M (FNAL), Sang-Un (KISTI), Di (TRIUMF), Xin (BNL), Ville (NDGF), David B (IN2P3), Pepe (PIC), Jeff (OSG)

Experiments round table:

  • ATLAS reports -
    • Production - smooth operation with ~330k slots (no CERN P1 resources until April)
    • Storage - SCRATCHDISK usage at popular sites is reaching the allocated size
      • slow deletions at BNL dCache
    • Transfers - network MTU problem at CA Waterloo under investigation
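The Waterloo investigation is site-specific, but as generic background, the arithmetic behind the usual ping-based path-MTU check can be sketched as follows (a minimal illustration assuming plain IPv4 ICMP with no header options; the function name is ours, not from any tool):

```python
# Hypothetical helper: largest ICMP echo payload that fits in one
# unfragmented frame for a given MTU (IPv4, no IP options).
IP_HEADER = 20    # bytes, IPv4 header
ICMP_HEADER = 8   # bytes, ICMP echo header

def max_ping_payload(mtu: int) -> int:
    """Payload size to pass to 'ping -M do -s <N>' when probing this MTU."""
    return mtu - IP_HEADER - ICMP_HEADER

# Standard 1500-byte Ethernet MTU leaves room for a 1472-byte payload;
# an MTU mismatch on the path typically shows up as that ping failing
# while smaller payloads succeed.
print(max_ping_payload(1500))
```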

  • CMS reports -
    • Good CPU utilization during Xmas break
      • ~175k CPU cores for production
      • ~55k CPU cores for analysis
    • No major issues
    • Staging from Castor to EOS at CERN is now progressing
      • Thanks to the CERN storage team for fixing a transfer issue between Castor and EOS (RQF:1193668)

  • ALICE -
    • NTR

  • LHCb reports -
    • Activity
      • Data reconstruction for 2018 data
      • User and MC jobs
    • Site Issues

Sites / Services round table:

  • ASGC: nc
  • BNL: Several electrical maintenance days are scheduled in the data center, on 01/15, 01/17, 01/22 and 01/28. Parts of the computing farm will be shut down from time to time. No downtime declaration is needed; we plan to switch to event-service jobs to avoid early termination of the affected WNs.
  • CNAF: CMS ticket open (GGUS:139062) about failing transfers; the files do not exist at the source. To be discussed at the CMS meeting today.
  • EGI: nc
  • IN2P3: NTR
  • KIT: On Wednesday at 09:00 UTC, a 1-hour downtime for ALICE, ATLAS and LHCb due to a network reconfiguration
  • NDGF: Nothing to report
  • NL-T1: NTR
  • NRC-KI: nc
  • OSG: NTR
  • PIC: Tomorrow, a 2-hour at-risk downtime
  • RAL: NTR
  • TRIUMF: There was a scheduled site-wide power outage at TRIUMF on Sunday. All old WNs (~4800 cores) were down for about 12 hours. The storage system, grid services and new WNs (~7600 cores) stayed up, since they are at the new data centre. In addition, on Monday starting around 18:00 UTC we plan to replace one of the DDN controllers, which could not be replaced during the last downtime due to a customs clearance delay. A 'warning' downtime has been declared for this, since only part of the data will be unavailable for a short time.

  • CERN computing services:
    • OTG:0047775: NEW. The legacy lcg-voms2.cern.ch service will be decommissioned on 4 March 2019. Any VOMS client configurations still using the legacy service should be reconfigured to use the voms2.cern.ch service.
      • Ops Coordination will send broadcasts with instructions
    • OTG:0046088: REMINDER. LSF public decommissioning, Wednesday 30 January 2019. (Dedicated shares handled separately.)
    • OTG:0047300: REMINDER. Draining HTCondor public now for CC7 upgrades. Schedule:
      • end January 2019: 30% public/grid will be CC7
      • end March 2019: 50% public/grid will be CC7
      • 2nd April 2019: lxplus.cern.ch alias changes to CC7 (the lxplus6 service will remain accessible on lxplus6.cern.ch); default HTCondor target changes to CC7 for local submission
      • early June 2019: the remainder of the capacity will have been migrated
      • (Dedicated shares handled separately.)
For the VOMS decommissioning, Maarten commented that a broadcast will be needed, but the schedule seems feasible; EGI and OSG will have to be engaged. A similar situation already occurred during the end-of-year break, when a certificate expired on lcg-voms2.cern.ch without causing a real problem, showing that a change like this should not be difficult.
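As a hedged illustration of the client-side change (the port number and host certificate DN below are assumptions following the usual CERN VOMS layout, not taken from this report — sites should use the authoritative values published for each VO), a vomses entry would move from the legacy host to voms2.cern.ch like this:

```
# /etc/vomses — replace any line referencing lcg-voms2.cern.ch, e.g. for ATLAS:
#   old: "atlas" "lcg-voms2.cern.ch" "<port>" "<legacy host DN>" "atlas"
"atlas" "voms2.cern.ch" "15001" "/DC=ch/DC=cern/OU=computers/CN=voms2.cern.ch" "atlas"
```

The matching LSC file (e.g. /etc/grid-security/vomsdir/atlas/voms2.cern.ch.lsc) must list the server host DN and its CA DN on consecutive lines; any lcg-voms2.cern.ch LSC file can be removed once the migration is complete.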
  • CERN storage services: nc
  • CERN databases: nc
  • Monitoring: NTR
  • MW Officer: nc
  • Networks: OTG:0047753 - CERN LHCOPN router in Amsterdam was unreachable due to HW fault (Sun 4am to Mon 1pm).
  • Security: NTR


Topic revision: r17 - 2019-01-15 - MaartenLitmaath