Week of 180423

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a particular topic needs to be discussed at the operations meeting and requires information from sites or experiments, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure that the relevant parties have time to collect the required information or invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: Kate (chair, WLCG, DB), Julia (WLCG), Maarten (ALICE, WLCG), Andrew (LHCb), Borja (monit), Ivan (ATLAS), Gavin (comp), Juan (storage)
  • remote: Andrew (Nikhef), Balazs (MW), David B (IN2P3), Darren (RAL), Dmytro (NDGF), Marcelo (CNAF), Sang Un (KISTI), Xavier (KIT), Dave M (FNAL), Di (TRIUMF), Xin (BNL), Jean Roch (CMS), Pepe (PIC), Victor (JINR)

Experiments round table:

  • ATLAS reports -
    • Production
      • 304k / 489k slots average usage
      • Added ~5500 cores to the T0 cluster
      • Decided to attach TAPE endpoints to user analysis queues
  • CMS reports -
    • Usage on the low side, due to a decrease in pressure from a lack of runnable workload
      • 140k cores for production
      • 35k cores for analysis
    • Temporary site issues and outages (Florida, London-IC, Caltech, ...): looking forward to the integration of dynamic site blacklisting from HTCondor
    • FNAL disk rack incident. Data not available for a couple of weeks
      • CMS might think of ways to replicate the data automatically in such situations
    • RAL storage and transfers
      • hopefully in the tail of the migration to Echo
    • KIT xrootd outage: resolved
    • CERN xrootd outage and CE issues

  • ALICE -
    • High to very high activity in recent days
    • CERN: job submissions failed as of Sunday evening, fixed this morning (GGUS:134665)

  • LHCb reports -
    • Activity
      • HLT farm to be used for some more time in parallel with the trigger
    • Updates
      • Deploying LHCbDIRAC with GLUE2 support today
    • Site Issues
      • IN2P3: Some tape files lost (GGUS:134666), recopied from other sites.
      • PIC: Staging problems (GGUS:134667)
      • CNAF: LHCb completed data management actions after the long downtime.

Sites / Services round table:

  • ASGC: nc
  • BNL: NTR
  • CNAF: NTR
  • EGI: nc
  • FNAL: NTR
  • IN2P3: NTR
  • JINR: NTR
  • KISTI: NTR
  • KIT:
    • CMS' storage element is now IPv6 enabled. Deployment started on Tuesday; on Thursday we learned that our servers were merely IPv4/IPv6 dual-homed rather than true dual-stack, which shut out IPv4-only sites (via FTS). The dCache doors were made true dual-stack on Friday.
  • NDGF: Tape pools at HPC2N did not start after an intervention last Tuesday, due to a problem with Spectre firmware fixes. The pools themselves were fixed within an hour, but the endit daemon was forgotten, so nothing was pushed to tape. Alarms went off during the night; the issue was worked around by temporarily enabling ATLAS tape at UCPH. Hopefully it was caught before anyone else noticed.
  • NL-T1: NTR
  • NRC-KI: nc
  • OSG: nc
  • PIC: NTR
  • RAL: Scheduled upgrade (replacement) of the site firewall (25/04/2018 07:00-08:00). A short break in connectivity is expected within this window while connections are moved across to the new firewall.
  • TRIUMF: NTR
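The KIT item above hinges on the difference between a dual-homed host (reachable over IPv4 and IPv6, but not necessarily on the same service endpoint) and a true dual-stack service, where one hostname publishes both A and AAAA records so IPv4-only clients such as FTS agents at IPv4-only sites can still connect. A minimal sketch of a check for this, using only the standard library (the hostname is illustrative, not a real KIT endpoint):

```python
import socket

def address_families(host):
    """Return the set of address families (AF_INET, AF_INET6) the host resolves to."""
    infos = socket.getaddrinfo(host, None)
    return {info[0] for info in infos}

# A true dual-stack endpoint should resolve to both families;
# a host missing AF_INET here would be invisible to IPv4-only sites.
families = address_families("localhost")
print("IPv4:", socket.AF_INET in families)
print("IPv6:", socket.AF_INET6 in families)
```

In practice a site would run such a check against each dCache door hostname and alert if either family is missing.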

  • CERN computing services:
    • Site ARGUS certificate expiry last night affected Grid jobs at CERN for ~8 hours. We will switch on auto-enrolment for the ARGUS alias certificate, which will prevent recurrence. [ OTG:0043597 ] The same mechanism will be deployed for VOMS, as per Maarten's inquiry
  • CERN storage services:
    • FTS: Due to a DB migration, the FTS Pilot instance will be unavailable for 10 min on Thu 26 April during the time window 10:30 to 12:30 CEST. (OTG:0043599)
  • CERN databases: NTR
  • GGUS: OSG changes are being looked into
  • Monitoring: NTR
  • MW Officer: no issues to report
  • Networks: NTR
  • Security: NTR
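Incidents like the ARGUS certificate expiry can be caught ahead of time with a periodic expiry check. A hedged sketch using `openssl` (a throwaway self-signed certificate stands in for the real service certificate, whose path is site-specific):

```shell
# Generate a disposable self-signed cert valid for 30 days (demo stand-in
# for the real host certificate).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/argus-demo.key \
    -out /tmp/argus-demo.pem -days 30 -subj "/CN=argus-demo" 2>/dev/null

# Print the expiry date of the certificate.
openssl x509 -enddate -noout -in /tmp/argus-demo.pem

# Exit non-zero if the cert expires within 14 days; a cron job could
# turn that exit status into an alert well before the expiry bites.
if openssl x509 -checkend $((14*24*3600)) -noout -in /tmp/argus-demo.pem; then
    echo "certificate OK"
else
    echo "certificate expiring soon"
fi
```

The auto-enrolment mentioned above removes the need for manual renewal; a check like this remains useful as a safety net for any service certificate not covered by it.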

AOB:

Topic revision: r18 - 2018-04-23 - KateDziedziniewicz