Week of 230306
WLCG Operations Call details
- The connection details for remote participation are provided on this agenda page.
General Information
- The purpose of the meeting is:
- to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
- to announce or schedule interventions at Tier-1 sites;
- to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
- to provide important news about the middleware;
- to communicate any other information considered interesting for WLCG operations.
- The meeting should run from 15:00 Geneva time until 15:20, exceptionally until 15:30.
- The SCOD rota for the next few weeks is at ScodRota
- Whenever a particular topic needs to be discussed at the operations meeting requiring information from sites or experiments, it is highly recommended to announce it by email to the wlcg-scod list (at cern.ch), so that the SCOD can make sure that the relevant parties have time to collect the required information or can invite the right people to the meeting.
Best practices for scheduled downtimes
Monday
Attendance:
- remote: Kate (DB, chair), Julia (WLCG), Maarten (ALICE,WLCG), Onno (NL-T1), Xavier (KIT), Peter (ATLAS), Christoph (CMS), David (IN2P3), Maria A (computing), Panos (WLCG), Andrew (TRIUMF), Pepe (PIC), Doug (BNL), Darren (RAL), Daniele (CNAF), Marian (network), Dave (FNAL)
Experiments round table:
- ATLAS reports:
- Update to UK eScience affecting a few ATLAS sites, including transfers to INFN-T1 (GGUS:160759)
- Harvester will be updated to HTCondor 10 this week; a couple of sites will notice job draining tomorrow
- CMS reports:
- CMS has been affected by a security issue:
- Access tokens that allow (in principle) job/pilot submission were exposed publicly
- Sites were contacted via their security e-mail lists and given instructions
- So far no misuse has been identified
- RAL had a cooling problem on Mar 2nd from around 11am UK time (No GGUS ticket)
- Various parts of the disk storage and a few WNs switched themselves off, as they should, causing a big spike in the failure rate
- The site has been fully recovered since Mar 5th
- The status of some files is being checked regarding their migration to tape
Sites / Services round table:
- ASGC:
- BNL: Downtime for dCache upgrade to v8.2.15 this Wednesday, 8 March 2023, 14:00-20:00 UTC.
- CNAF: NTR
- EGI:
- FNAL: NTR
- IN2P3: The site will be in downtime for quarterly maintenance on Tuesday, March 14th. The SE will be off for a dCache upgrade and tape system maintenance. The downtime has been declared accordingly (GOCDB #33659)
- JINR:
- KISTI:
- KIT: Some of the Nexus routers could not be updated during last week's downtime, so a second attempt will be needed at a later date.
- NDGF:
- NL-T1: Upcoming SARA-MATRIX maintenances:
- NRC-KI:
- OSG:
- PIC: Tape downtime on Wednesday due to a new tape library installation (https://goc.egi.eu/portal/index.php?Page_Type=Downtime&id=33639)
- RAL: Antares/CTA is now in downtime for the frame extension to the Tape Library (https://goc.egi.eu/portal/index.php?Page_Type=Downtime&id=33618)
- TRIUMF: NTR
- CERN computing services: NTR
- CERN storage services:
- CERN databases:
- GGUS:
- all issues reported last week have been resolved
- Monitoring:
- Distributed draft SiteMon availability/reliability reports for February 2023
- Middleware: NTR
- Networks: NTR
- Security:
AOB: