Week of 190902
WLCG Operations Call details
- For remote participation we use the Vidyo system. Instructions can be found here.
General Information
- The purpose of the meeting is:
- to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
- to announce or schedule interventions at Tier-1 sites;
- to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
- to provide important news about the middleware;
- to communicate any other information considered interesting for WLCG operations.
- The meeting should run from 15:00 Geneva time until 15:20, exceptionally until 15:30.
- The SCOD rota for the next few weeks is at ScodRota
- General information about the WLCG Service can be accessed from the Operations Portal
- Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure that the relevant parties have time to collect the required information or invite the right people to the meeting.
Best practices for scheduled downtimes
Monday
Attendance:
- local: Remy (Storage), Vincent (Security), Olga (Compute), Michal (ATLAS), Alberto (Monitoring), Maarten (ALICE)
- remote: Renato (LHCb), Christoph (CMS), Darren (RAL), Dave (FNAL), Mike (ASGC), Onno (NL-T1), Sang-Un (KISTI), Ville (NDGF), David (IN2P3), Marcelo (CNAF)
Experiments round table:
- ATLAS reports -
- Activities:
- pilot2/singularity migration converging, remaining sites/queues handled one by one
- tail of ongoing reprocessing campaign (using inputs from tape)
- Issues:
- Transfers from NDGF-T1_DATATAPE were failing with "Changing file state because request state has changed" (GGUS:142926)
- CMS reports -
- Some MC workflows causing trouble at sites due to high I/O on local WN disk
- Heavy "untar" of Grid packs
- Becomes troublesome, when many jobs start at the same time
- LHCb reports -
- Activity:
- MC, user jobs and data re-stripping.
- Massive staging at all T1s
- Issues:
- RAL:
- GGUS:142350; under investigation. User jobs have increased and there is no queue, but the issue seems to persist.
Sites / Services round table:
- ASGC: NTR
- BNL: NC
- CNAF: NTR
- EGI: NC
- FNAL: NTR
- IN2P3: Massive staging from LHCb: 100 TB over the last 2 days has filled the dCache cache (43 TB). More than 2k requests were pending this morning (2 September 2019). The requests have since been released and the transfer efficiency is back to normal. The pin lifetime is around 10 h, while the dCache timeout is configured to 48 h; LHCb needs to check the timeout set on their side for the "srmBringOnline" requests.
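- For reference, a minimal sketch of how the pin lifetime and request timeout of an "srmBringOnline" (staging) request could be set through the gfal2 Python bindings; the endpoint, file path and numeric values below are illustrative assumptions, not the actual LHCb or IN2P3 configuration.

```python
# Minimal sketch (assumption: gfal2 Python bindings are available; the SURL
# and the numeric values are illustrative, not the real LHCb/IN2P3 settings).
import gfal2

ctx = gfal2.creat_context()

# Hypothetical SURL of a file to be staged from tape to the disk cache.
surl = "srm://example-se.example.org/pnfs/example.org/data/lhcb/somefile"

pin_lifetime = 48 * 3600     # seconds the staged replica should stay pinned on disk
request_timeout = 48 * 3600  # how long the bring-online request may take overall

# Asynchronous bring-online (staging) request: returns a status code and a
# request token that can later be polled with bring_online_poll().
status, token = ctx.bring_online(surl, pin_lifetime, request_timeout, True)
print("bring_online status:", status, "token:", token)
```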
- JINR: NTR
- KISTI: NTR
- KIT: NC
- NDGF: NTR
- NL-T1: NTR
- NRC-KI: NC
- OSG: NC
- PIC: NC
- RAL: NTR
- TRIUMF: NTR
- CERN computing services: Reduced capacity in HTCondor on Monday/Tuesday (OTG:0051883) due to a planned intervention (OTG:0051379).
- CERN storage services:
- CERN databases: NTR
- GGUS: NTR
- Monitoring: NTR
- MW Officer: NTR
- Networks: NTR
- Security: NTR
AOB: