Week of 181015

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a particular topic needs to be discussed at the operations meeting and requires information from sites or experiments, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have the time to collect the required information or can invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: Kate (WLCG, DB, chair), Julia (WLCG), Maarten (WLCG, ALICE), Gavin (comp), Borja (monit), Belinda (storage), Marian (networks), Alberto (monit)
  • remote: Tom (CMS), Andrew (NL-T1), David B. (IN2P3), Jeff (OSG), Di (TRIUMF), Darren (RAL), Dave (FNAL), Federico (LHCb), Pepe (PIC), Victor (JINR)

Experiments round table:

  • CMS reports -
    • Quite a productive week: over 160k cores in production (mainly Fall18 MC)
    • The disk situation is getting worrying, with > 80% of the central space used. We are planning some fast actions, including changing priorities in the various productions
    • Finalizing the HI run preparations; special software is needed to handle hybrid tracker collections and partial events
    • GGUS:137731: on Sunday there was an incident with the CMSR offline DB (one instance out of three)
      • surprisingly, it HAD an impact (aren't we supposed to be shielded by HA?)
      • we opened a TEAM ticket rather than an ALARM, since there was no clear impact on data taking; still, many thanks for the prompt actions
      • we would like to understand to what extent we can count on HA
Kate commented that this kind of issue is rare and most probably related to the unclean shutdown of the node. It should have been discovered by DB monitoring and acted upon; this will be followed up with the DB team. The assignment of GGUS tickets to DB supporters will also be followed up (for the moment only the DB Service Managers receive GGUS tickets).
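As an illustration of the kind of client-side check that can reveal a lost RAC instance, below is a minimal Python sketch that queries Oracle's gv$instance view and counts open instances. It assumes the cx_Oracle driver and SELECT access to the dynamic performance views; the user, password and DSN are placeholders, not real CMSR credentials.

    import sys
    import cx_Oracle

    EXPECTED_INSTANCES = 3  # CMSR is reported above to run three instances

    def count_open_instances(user, password, dsn):
        """Return the number of RAC instances currently in OPEN state."""
        conn = cx_Oracle.connect(user, password, dsn)
        try:
            cur = conn.cursor()
            # gv$instance has one row per running instance in the cluster
            cur.execute("SELECT inst_id, instance_name, status FROM gv$instance")
            rows = cur.fetchall()
        finally:
            conn.close()
        for inst_id, name, status in rows:
            print("instance %d (%s): %s" % (inst_id, name, status))
        return sum(1 for _, _, status in rows if status == "OPEN")

    if __name__ == "__main__":
        # placeholder connection parameters, not real CMSR credentials
        n_open = count_open_instances("monitor_user", "secret", "db.example.cern.ch/CMSR")
        sys.exit(0 if n_open == EXPECTED_INSTANCES else 1)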

  • ALICE -
    • NTR

  • LHCb reports -
    • Activity
      • Data reconstruction for 2018 data
      • User and MC jobs
    • Site Issues
      • NTR

Sites / Services round table:

  • ASGC: nc
  • BNL: nc
  • CNAF: the downtime on October 23rd affecting Storage and Farm for important upgrades is confirmed.
    • ATLAS and ALICE will be affected
    • LHCb and CMS should not be affected
    • The downtime is planned to be over by the end of the day
  • EGI: nc
  • FNAL: Follow-up on the bad link issue from the previous weeks.
Maarten asked if the issue could have been discovered earlier; Dave replied that this is being checked now. The discovery might have been delayed by the link still performing reasonably at times.
  • IN2P3: NTR
  • JINR: Broken files (bad checksums after a crash of a h/w RAID adapter) are being restored from tape. No lost files so far (see the checksum-scan sketch after this list).
  • KISTI: nc
  • KIT: We propose Tuesday next week for upgrading the dCache instance cmssrm-kit.gridka.de to version 3.2. A downtime will be announced as soon as we get confirmation from CMS.
  • NDGF: nc
  • NL-T1:
  • NRC-KI: nc
    • reported after the meeting:
    • RRC-KI-T1 will perform a tape system upgrade (first phase, hardware-only part) in the week starting 29.10; we should be ready no later than Friday, 02.11, and the tape system will be back in production. Technically, we will stop the library, install the new drives and tapes, organize the new logical library and resume operations. The new tapes/drives won't be immediately visible to the VOs: that will be the result of the second phase, with an estimated completion time near the end of this November.
    • Tape buffers will be up and running, and the tape instance of dCache will also be fully operable. If VOs need some specific data to be available in the tape buffers, they are welcome to say so in advance and we will try to provide it.
    • This Wednesday, 17.10, we will talk to IBM engineers and polish our upgrade plan; if something that precludes us from doing the upgrade is revealed, we will notify ASAP.
    • We have talked with ALICE, ATLAS and LHCb and settled on this time frame with no (big) objections. The downtime was created.
  • OSG: NTR
  • PIC: On Tuesday 6th of November, PIC will be in a complete scheduled downtime for the following intervention: Main Router OS upgrade and WN farm reboot (kernel upgrade), from 08:00 until 11:00 on 06-11-2018 (PIC local time).
  • RAL: NTR
  • TRIUMF: We are in the process of commissioning new compute nodes. 6912 new cores will be added as bare-metal SL7 WNs.
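
Regarding the JINR checksum problem above, a site-side corruption scan typically recomputes Adler32 checksums (the checksum commonly used in WLCG storage) and compares them with the catalogue values. The sketch below is a minimal illustration in Python; the 'path checksum' listing format and the file names are hypothetical.

    import zlib

    def adler32_of(path, chunk_size=1024 * 1024):
        """Compute the Adler32 checksum of a file, streamed in chunks."""
        value = 1  # Adler32 starts from 1, not 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                value = zlib.adler32(chunk, value)
        return "%08x" % (value & 0xFFFFFFFF)

    def find_corrupted(listing_path):
        """Yield files whose on-disk checksum differs from the catalogue value.

        'listing_path' is a hypothetical text file with lines of the form
        '<path> <adler32-hex>' exported from the storage catalogue.
        """
        with open(listing_path) as listing:
            for line in listing:
                path, expected = line.split()
                if adler32_of(path) != expected.lower():
                    yield path

    if __name__ == "__main__":
        for bad in find_corrupted("catalogue_dump.txt"):
            print("checksum mismatch, candidate for restore from tape:", bad)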

  • CERN computing services: NTR
  • CERN storage services: NTR
  • CERN databases: the CMSR database lost one instance yesterday evening. Clusterware didn't react properly and manual action was needed (this happens quite rarely). The problem was not discovered by monitoring, hence the delay (OTG:0046400)
  • GGUS: NTR
  • Monitoring:
    • Draft availability reports for September sent around
    • The REBUS database was not accessible on Thursday and Friday; it came back on Saturday morning
  • MW Officer: NTR
  • Networks: GGUS:137632 - CERN to FNAL network performance was capped at 100 Mbps; the problem was investigated and narrowed down to a segment at or close to FNAL, and the issue was resolved by FNAL (a throughput-probe sketch follows this list).
  • Security: NTR
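
For network issues like the CERN-FNAL one above, the usual first step is a point-to-point throughput measurement to confirm the cap. Below is a minimal Python wrapper around an iperf3 client test, assuming iperf3 is installed locally and an 'iperf3 -s' server is listening on the remote end; the server name and the 150 Mbit/s threshold are placeholders.

    import json
    import subprocess

    def measure_throughput(server, seconds=10):
        """Run an iperf3 client test against 'server', return received Mbit/s."""
        result = subprocess.run(
            ["iperf3", "-c", server, "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True,
        )
        # with -J, iperf3 prints a JSON report on stdout
        report = json.loads(result.stdout)
        bits_per_second = report["end"]["sum_received"]["bits_per_second"]
        return bits_per_second / 1e6

    if __name__ == "__main__":
        # placeholder endpoint, not a real perfSONAR/iperf3 host
        mbps = measure_throughput("perfsonar.example.org")
        print("measured throughput: %.1f Mbit/s" % mbps)
        if mbps < 150:  # well below expectations for a multi-Gbit/s path
            print("throughput looks capped, open a network ticket")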

AOB:
