Week of 180226
WLCG Operations Call details
- For remote participation we use the Vidyo system. Instructions can be found here.
General Information
- The purpose of the meeting is:
- to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
- to announce or schedule interventions at Tier-1 sites;
- to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
- to provide important news about the middleware;
- to communicate any other information considered interesting for WLCG operations.
- The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
- The SCOD rota for the next few weeks is at ScodRota
- General information about the WLCG Service can be accessed from the Operations Portal
- Whenever a particular topic needs to be discussed at the operations meeting requiring information from sites or experiments, it is highly recommended to announce it by email to wlcg-operations@cern.ch to make sure that the relevant parties have the time to collect the required information or to invite the right people to the meeting.
Best practices for scheduled downtimes
Monday
Attendance:
- local: Kate(DB, chair), Julia (WLCG), Maarten (ALICE, WLCG), Remy (storage), Jesus (storage), Vincent (sec), Alberto (mon), Gavin (comp), Andrea M (FTS), Marian (network), Rob (LHCb)
- remote: Andrew (NIKHEF), Christoph (CMS), Di Qing (TRIUMF), Kyle (OSG), Sang Un (KISTI), Xin Zhao (BNL), Christian (NDGF), Dave (FNAL), John (RAL), Marcelo (CNAF), David B (IN2P3), Samuel (KIT)
Experiments round table:
- CMS reports -
- Another Mid-Week Global Run (MWGR) at the end of last week
- Continued detector commissioning
- No computing issues
- Rather high load in CMS Global Pool
- ~167k cores for production
- ~42k cores for analysis
- Ran into a CVMFS issue (the sub-items below are likely correlated); a Stratum-1 freshness-check sketch follows this report
- CMS managed to run into the 8 TB quota limit (possibly the cause of the following issues)
- CVMFS deployment became slow INC:1605471
- Occasional warnings in the Stratum-1 monitoring
- Continued effort to improve tape staging performance at CERN: INC:1560174
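As referenced in the CVMFS item above, here is a minimal sketch of a Stratum-1 replica freshness check. The Stratum-1 host, port, repository name and staleness threshold are placeholder assumptions, and the single-letter manifest fields ('T' publication timestamp, 'S' revision) follow the usual .cvmfspublished layout; this is an illustration, not the monitoring actually used.

```python
"""Sketch: warn when a CVMFS replica on a Stratum-1 looks stale."""
import time
import urllib.request

STRATUM1 = "http://cvmfs-stratum-one.example.org:8000"  # hypothetical host
REPO = "cms.cern.ch"
MAX_AGE_HOURS = 6  # arbitrary staleness threshold

def replica_age_hours(stratum1: str, repo: str) -> float:
    """Return hours since the last publication seen by this Stratum-1."""
    url = f"{stratum1}/cvmfs/{repo}/.cvmfspublished"
    with urllib.request.urlopen(url, timeout=30) as resp:
        manifest = resp.read().decode("ascii", errors="replace")
    fields = {}
    for line in manifest.splitlines():
        if line == "--":          # signature section starts here
            break
        if line:
            fields[line[0]] = line[1:]
    published = int(fields["T"])  # unix timestamp of last publication
    return (time.time() - published) / 3600.0

if __name__ == "__main__":
    age = replica_age_hours(STRATUM1, REPO)
    status = "OK" if age < MAX_AGE_HOURS else "WARNING: replica looks stale"
    print(f"{REPO}: last publication {age:.1f} h ago -> {status}")
```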
- ALICE -
- Normal activity level on average last week
- CNAF: disk SE back in production since Fri!
- So far the data files that should be present look fine.
- A big cleanup should get the SE contents back in sync with the catalog.
- Computing resources are also looking good.
- We thank and congratulate CNAF on the success of their careful efforts!
- LHCb reports -
- Activity
- HLT farm fully running after dip over the weekend
- MC simulation and user jobs
- 2017 data restripping ongoing
- Started stripping 29 reprocessing
- Site Issues
Sites / Services round table:
- ASGC: NTR
- BNL: added new WNs (~80 kHS06) to the ATLAS farm last week
- CNAF:
- Working on the last details to open production for ALICE, ATLAS and CMS
- Today the LHCb procedure for migrating data to the new storage starts; the copy of the files is expected to take about a week
- Each experiment has opened tickets to follow up the restart of the services:
Kate commented that a SIR will be prepared by DBoD for the DIRAC DB issue.
Recovery followup
There are storage HW problems, so we had to declare an unscheduled downtime (GOCDB 24963, GOCDB 24958) until Monday. This affects the UI home directories, the PhEDEx agents and the LSF batch system shared FS. Support is working on the issue. A sketch for querying such GOCDB downtimes programmatically follows the CNAF status table and comments below.
| Service | VO | Status | Expected restart date | Readiness | GGUS ticket | CNAF comment | VO comment |
| Electric power line | - | Maint. | 20.02, 21.02 | One line in Production with UPS | | The second redundant line is still down | |
| Tape buffer | ALICE | OK | | Production | 133582 | All ALICE services at CNAF are running in the final configuration | Looks OK |
| Tape buffer | ATLAS | OK | | Production | 131742 | OK | |
| Tape buffer | CMS | OK | | Production | 133515 | OK | |
| Tape buffer | LHCb | Being rebuilt | Production in March | Not ready | 133673 | | |
| Disk | ALICE | Parity OK | | Production | 133582 | All ALICE services at CNAF are running in the final configuration | Looks OK |
| Disk | ATLAS | Parity OK | | Production | 131742 | OK | |
| Disk | CMS | Degraded parity | | Production | 133515 | RAID 5 in a few LUNs, RAID 6 in the others | Disks to be replaced |
| Disk | LHCb | Parity restored | Production in March | Not ready | 133673 | RAID 6 in all LUNs | Old degraded system powered up; it is working better than expected. FS mounted, but not accessible from outside CNAF; the copy to the brand-new system can start |
| Computing farm | - | | Ready | | CMS: configure Singularity on the CINECA farm only; ATLAS: FS mounting on farm nodes and restart of LSF queues | | Now all CEs in unscheduled downtime, see comments above |
Maarten asked whether the ALICE disk is ready for production. Marcelo confirmed it is; some minor details still need to be fixed.
Christoph asked whether CNAF can be added to the transfer system despite the degraded parity. To be checked.
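As mentioned in the downtime note above, here is a minimal sketch for listing a site's ongoing downtimes via the public GOCDB programmatic interface. The method name and parameters follow the public PI as far as known; the site name INFN-T1 is used as an example and the exact XML element names should be checked against the PI documentation.

```python
"""Sketch: list ongoing GOCDB downtimes for one site via the public PI."""
import urllib.request
import xml.etree.ElementTree as ET

GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"
SITE = "INFN-T1"  # example site

def list_downtimes(site: str):
    """Yield (severity, classification, hostname, description) tuples."""
    url = f"{GOCDB_PI}?method=get_downtime&topentity={site}&ongoing_only=yes"
    with urllib.request.urlopen(url, timeout=30) as resp:
        tree = ET.parse(resp)
    for dt in tree.getroot().findall("DOWNTIME"):
        yield (
            dt.findtext("SEVERITY"),        # e.g. OUTAGE / WARNING
            dt.findtext("CLASSIFICATION"),  # SCHEDULED / UNSCHEDULED
            dt.findtext("HOSTNAME"),
            dt.findtext("DESCRIPTION"),
        )

if __name__ == "__main__":
    for severity, classification, host, descr in list_downtimes(SITE):
        print(f"{classification or '?'}\t{severity or '?'}\t{host}: {descr}")
```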
- EGI: nc
- FNAL: NTR
- IN2P3: ALICE and LHCb have been on CentOS 7 for a few weeks. Access to the SL6 CEs will be removed, so they will be fully on CentOS 7. The CMS T2 has been on CentOS 7 since last week.
- JINR: NTR
- KISTI: NTR
- KIT: NTR
- NDGF: The Triolith CE will be down for a Slurm upgrade on Wednesday. Planned power cut at UCPH on Friday 09:00-13:00; nodes will be down, storage is expected to survive on UPS and generator.
- NL-T1: VM platform issues continue. The mitigation of all possible symptoms is ongoing. A temporary workaround was found, root cause is still being sought.
- NRC-KI: nc
- OSG: NTR
- PIC: nc
- RAL: CASTOR downtime on 1 March to patch the back-end databases.
- TRIUMF: Last Thursday the DDN storage firmware was updated, which should solve the file corruption issues. Roughly 15% of files were unavailable during the update (~5 hours).
- CERN computing services: NTR
- CERN storage services:
- FTS upgrade to v3.7.8 on Tue 27 Feb from 10:00 to 11:00 CET (OTG:0042488); the upgrade will be transparent to clients.
- CERN databases: The ALICE online DB was shut down proactively on 19 Feb due to expected power cuts in P2; it was restarted on 21 Feb.
- GGUS: NTR
- Monitoring: NTR
- MW Officer: NTR
- Networks:
- RO-02-NIPNE transfer issues were resolved last week (by migrating the storage to a 1500-byte MTU); a path-MTU probe sketch follows this item
- Progress was made on T2_PK_NCP performance to GRIDKA; the links to PIC, FNAL and JINR are still under investigation
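Following the RO-02-NIPNE item above, here is a minimal Linux-only sketch for probing the path MTU towards a storage host, one way such MTU mismatches can be spotted. The socket option values are the usual Linux ones and the host name is a placeholder; this is a diagnostic illustration, not the procedure used at the site.

```python
"""Sketch: estimate the path MTU towards a host (Linux only)."""
import socket

# Linux values for options not always exported by the socket module.
IP_MTU_DISCOVER = 10   # force "don't fragment" behaviour
IP_PMTUDISC_DO = 2
IP_MTU = 14            # read back the kernel's path-MTU estimate

def probe_path_mtu(host: str, port: int = 9) -> int:
    """Connect a UDP socket with DF set and read the cached path MTU."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect((host, port))
    try:
        try:
            # An oversized datagram triggers EMSGSIZE locally or an ICMP
            # "fragmentation needed" along the path, updating the estimate.
            s.send(b"x" * 2000)
        except OSError:
            pass
        return s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    finally:
        s.close()

if __name__ == "__main__":
    # Hypothetical endpoint; replace with the storage host under test.
    print(probe_path_mtu("storage.example.org"))
```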
- Security: EGI Advisory sent last week
- This is only an issue where unprivileged user namespaces are enabled (non-SUID Singularity); a sketch for checking the relevant kernel settings follows this item
- See the SVG's email for detailed information (still TLP:AMBER)
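As referenced in the advisory item above, here is a minimal sketch for checking the kernel knobs that govern unprivileged user namespaces (relevant to non-SUID Singularity). The sysctl paths are the common RHEL/CentOS and Debian-style ones; kernels that lack a knob simply do not expose the corresponding file.

```python
"""Sketch: report kernel settings controlling unprivileged user namespaces."""
from pathlib import Path

SYSCTLS = {
    # RHEL/CentOS 7.4+: 0 disables unprivileged user namespaces
    "user.max_user_namespaces": Path("/proc/sys/user/max_user_namespaces"),
    # Debian-style kernels: 0 disables the unprivileged clone
    "kernel.unprivileged_userns_clone": Path(
        "/proc/sys/kernel/unprivileged_userns_clone"
    ),
}

def report() -> None:
    for name, path in SYSCTLS.items():
        if path.exists():
            print(f"{name} = {path.read_text().strip()}")
        else:
            print(f"{name}: not present on this kernel")

if __name__ == "__main__":
    report()
```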
AOB: