Week of 180312
WLCG Operations Call details
- For remote participation we use the Vidyo system. Instructions can be found here.
General Information
- The purpose of the meeting is:
- to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
- to announce or schedule interventions at Tier-1 sites;
- to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
- to provide important news about the middleware;
- to communicate any other information considered interesting for WLCG operations.
- The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
- The SCOD rota for the next few weeks is at ScodRota
- General information about the WLCG Service can be accessed from the Operations Portal
- Whenever a particular topic needs to be discussed at the operations meeting and requires information from sites or experiments, it is highly recommended to announce it by email to wlcg-operations@cern.ch, to make sure that the relevant parties have the time to collect the required information or to invite the right people to the meeting.
Best practices for scheduled downtimes
Monday
Attendance:
- local: Borja (monitoring), Cristian (storage), Gavin (computing), Ivan (ATLAS), Maarten (SCOD + ALICE)
- remote: Alexander (NLT1), David B (IN2P3-CC), Di (TRIUMF), Gareth (RAL), Jean-Roch (CMS), Jens (NDGF), Kyle (OSG), Marcelo (CNAF), Sang-Un (KISTI), Xavier (KIT)
Experiments round table:
- ATLAS reports
- ATLAS Site jamboree & ATLAS S&C technical meeting - last week
- Rucio workshop the week before
- Production - 400k out of 600k cores utilized
- Storage - almost full; waiting for the DAOD obsoletion campaign
- CMS reports
- Pretty good utilization, 240k-220k cores overall
- 180k-160k for production
- 70%/30% production/analysis split
- An incident with the firewall of the central collector on March 8 (issue understood, an alarm is now set) led to a ramp-down to 120k cores
- Recurrent issue with /store/unmerged/ deletion to be followed up on
- Abnormal level of stage-out issues, correlated with the Singularity deployment; we will have to better understand how to make this kind of roll-out smoother.
- ALICE
- Lowish activity on average
- LHCb reports
- Activity
- HLT farm fully running
- 2017 data re-stripping ongoing
- Stripping 29 reprocessing is ongoing
- Site Issues
- CNAF: coming back to life, but storage not working since Sunday evening
- Tier2D
- Problems for users with UK certificates were solved by upgrading the xrootd server
- Maarten: xrootd >= 4.8.0 was needed
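- As an aside, a minimal sketch of how a site could verify which xrootd version its server is running, assuming the XRootD Python bindings are installed; the hostname and port below are placeholders, not taken from the report:

    # Minimal sketch: ask an xrootd server for its version via a CONFIG query.
    # The endpoint below is a placeholder; substitute any site xrootd door.
    from XRootD import client
    from XRootD.client.flags import QueryCode

    fs = client.FileSystem('root://xrootd.example.org:1094')
    status, response = fs.query(QueryCode.CONFIG, 'version')
    if status.ok:
        # Response may be bytes; normalise before printing.
        version = response.decode() if isinstance(response, bytes) else response
        print('Server reports version:', version.strip())  # should be >= 4.8.0 per the note above
    else:
        print('Query failed:', status.message)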
Sites / Services round table:
- ASGC: An unexpected power outage occurred at ASGC from 13:35 to 14:05 on 11 March 2018, because the data-centre generator did not work properly during a power-supply interruption by the power company. Most servers have been recovered so far, but some disk servers are still down due to hardware issues, resulting in data transfer failures. An unscheduled downtime has been declared from now until 2018-03-14 23:59:00 for service recovery.
- BNL:
- CNAF:
- LHCb
- Performed tests running MC - passed
- Performed tests running Stripping productions - passed
- File replication has started and went well until Sunday afternoon
- There is an issue with the LHCb StoRM frontend; a downtime was declared, the issue is being resolved and should be OK tomorrow
- CMS
- Also had trouble communicating with StoRM; in this case it was a GridFTP issue, which has been solved, so it should be working now.
- Singularity is showing an issue; the latest communications indicate it is on the CMS side.
- Since all the experiments are working (with minor tuning), from next week the CNAF report will return to normal, without the need for the experiment situation sheet:
Recovery followup
| Service | VO | Status | Expected restart date | Readiness | GGUS ticket | CNAF comment | VO comment |
| Electric power line | - | OK | | Production | | First line in Production with UPS. The second line is working. | |
| Tape buffer | ALICE | OK | | Production | 133582 | All ALICE services at CNAF are running in the final configuration. | Looks OK |
| Tape buffer | ATLAS | OK | | Production | 131742 | OK | |
| Tape buffer | CMS | OK | | Production | 133515 | OK | |
| Tape buffer | LHCb | OK | Production | Production | 133673 | OK | |
| Disk | ALICE | Parity OK | | Production | 133582 | All ALICE services at CNAF are running in the final configuration. | Looks OK |
| Disk | ATLAS | Parity OK | | Production | 131742 | OK | |
| Disk | CMS | Degraded parity | | Production | 133515 | RAID5 in a few LUNs, RAID6 in the others | Disks to be replaced |
| Disk | LHCb | Parity OK | Production | Production | 133673 | RAID6 in all LUNs | Data fully copied to new storage; critical tapes arrived and tested; MC and Stripping jobs running; staging and replication working except for the hiccup described above |
| Computing farm | - | | | Ready | | CMS: configure Singularity on only the CINECA farm (DONE); ATLAS: FS mounting on farm nodes and restart of LSF queues. | All CEs are operational |
- EGI:
- FNAL: NTR
- IN2P3: Tomorrow, IN2P3-CC will be in scheduled maintenance for the whole day.
- JINR: NTR
- KISTI: NTR
- KIT: NTR
- NDGF: NTR
- NL-T1:
- NRC-KI:
- OSG: NTR
- PIC:
- RAL: Since Echo (Ceph) storage was enabled for IPv6 there have been problems accessing it via the RAL FTS service. A problem has now been found in the IPv6 configuration on some nodes and we are hopeful this will resolve the FTS-Echo problems imminently. Also: we had to delay an intervention on our Castor storage for updates to be applied to the back-end databases. This is being rescheduled for Tuesday 27th March. It will affect Castor storage (including tape access). Access to Echo storage is not affected.
- TRIUMF: NTR
- CERN computing services: NTR
- CERN storage services:
- There were crashes of EOS-ATLAS and EOS-CMS last week
- Due to ongoing repack activities, tape recalls may be slow
- CERN databases:
- GGUS: NTR
- Monitoring:
- Draft reports for the February availability sent around
- MW Officer:
- Networks: NTR
- Security: NTR
AOB: