Week of 180716

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a particular topic requiring input from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information, or can invite the right people to the meeting.

Best practices for scheduled downtimes


Attendance:
  • local: Kate (WLCG, DB, chair), Julia (WLCG), Maarten (ALICE, WLCG), Renato (LHCb), Petr (ATLAS), Olga (computing), Vincent (security), Alberto (monitoring), Paul (storage)
  • remote: Marcelo (CNAF), Darren (RAL), Alexander (NL-T1), Di (TRIUMF), Dave (FNAL), Xin (BNL), Ville (NDGF), David B (IN2P3)

Experiments round table:

  • ATLAS reports ( raw view) -
    • smooth production without major issues, ~280-300k job slots on average (+ HPC)
      • changes in job brokerage (queued vs. running) occasionally caused small drops in production jobs for some corner cases (e.g. not enough activated jobs for HPC)
      • a few minor issues / glitches with Rucio (production input not transferred on Sunday, expired-file deletion stuck on Saturday, a 1-hour Rucio replica-location failure)
    • transfers to NET2 (BU_ATLAS_Tier2) and a huge number of deletions overloaded their BeStMan storage (still not completely understood where they come from)
    • one storage server of the old RAL CASTOR storage is not recoverable (80k files, almost no primary data, already migrated)
    • BNL developed & applied procedure to detect / remove ATLAS dark data for their dCache storage
    • some information from the GOCDB downtime calendar was not correctly propagated to our storage endpoint blacklisting (e.g. for SRM-less sites)
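Dark-data detection of the kind BNL reports usually boils down to diffing a dump of the storage namespace against a dump of the experiment catalogue: whatever sits on disk but is unknown to the catalogue is "dark". A minimal sketch of that idea (file names and dump formats are illustrative assumptions, not BNL's actual tooling):

```python
# Sketch only: dark-data detection as a namespace-vs-catalogue diff.
# File names and one-path-per-line dump formats are hypothetical
# illustrations, not BNL's actual procedure.

def load_paths(dump_file):
    """Read one file path per line, ignoring blank lines."""
    with open(dump_file) as f:
        return {line.strip() for line in f if line.strip()}

def find_dark_data(storage_dump, catalogue_dump):
    """Return paths present on storage but unknown to the catalogue."""
    on_storage = load_paths(storage_dump)      # e.g. dCache namespace dump
    in_catalogue = load_paths(catalogue_dump)  # e.g. Rucio replica dump
    return sorted(on_storage - in_catalogue)

# Example usage (hypothetical file names):
#   dark = find_dark_data("dcache_namespace.txt", "rucio_replicas.txt")
```

In practice such a comparison must also allow a grace period for files in flight (recently written but not yet registered) before anything is deleted.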

  • CMS reports ( raw view) -
    • CMS has a Computing Management meeting today and tomorrow - nobody will be available for the call
    • High CPU utilization
    • Intense conversation about the xrootd setup at T1_FR_CCIN2P3 (GGUS:135931)
      • There are some not yet understood file access issues
    • No major items otherwise

  • ALICE -
    • NTR

  • LHCb reports ( raw view) -
    • Activity
      • Data reconstruction for 2018 data
      • User and MC jobs
    • Site Issues
      • IN2P3: There's a ticket for file transfer errors. "Better now", but this still needs to be investigated (GGUS:136067)
      • CNAF: Ticket opened (GGUS:136120) for failing pilots; under investigation. Another ticket for file transfer errors is in progress (GGUS:136123).
      • KIT: issues with pilots disappearing.
Petr commented that ATLAS also has issues with failed pilots at CNAF and with IN2P3 transfers to Romania. Maarten mentioned that tickets are not a necessary requirement for reporting issues.

Sites / Services round table:

  • ASGC: nc
  • BNL: HS06 coefficient for BNL site needs to be updated on ATLAS dashboard, ticket open with MONIT team (GGUS:136051)
  • CNAF: 2 tickets opened by LHCb:
    • Failing pilots - services were restarted and many pilots are running OK, but a large number of pilots are still failing; under investigation
    • Transfers failing - some file transfers fail on the put, then succeed after some time. StoRM has been unstable in recent weeks; the issue seems to come and go, we are investigating.
  • EGI: nc
  • FNAL: Downtime this week, 20th afternoon-21st morning (FNAL time)
  • IN2P3: NTR
  • KISTI: nc
  • KIT: nc
  • NDGF: "Triolith" cluster is down and will be replaced by "Bluegrass" cluster.
  • NL-T1: NTR
  • NRC-KI: nc
  • OSG: nc
  • PIC: nc
  • RAL: NTR
  • TRIUMF: Still in the process of migrating services to the new data center; most grid services are done.

  • CERN computing services: brief worker node instability due to an OTG
  • CERN storage services: NTR
  • CERN databases: Nothing to report
  • Monitoring:
    • Draft reports for the Jun 2018 availability were sent around a bit late by the WLCG Office. The deadline is now Friday the 20th.
  • MW Officer: nc
  • Networks: nc
  • Security: EGI's Security Service Challenge postponed to September


Topic revision: r19 - 2018-07-16 - MaartenLitmaath