Week of 181119

WLCG Operations Call details

  • For remote participation we use the Vidyo system. Instructions can be found here.

General Information

  • The purpose of the meeting is:
    • to report significant operational issues (i.e. issues which can or did degrade experiment or site operations) which are ongoing or were resolved after the previous meeting;
    • to announce or schedule interventions at Tier-1 sites;
    • to inform about recent or upcoming changes in the experiment activities or systems having a visible impact on sites;
    • to provide important news about the middleware;
    • to communicate any other information considered interesting for WLCG operations.
  • The meeting should run from 15:00 Geneva time until 15:20, exceptionally to 15:30.
  • The SCOD rota for the next few weeks is at ScodRota
  • General information about the WLCG Service can be accessed from the Operations Portal
  • Whenever a particular topic requiring information from sites or experiments needs to be discussed at the operations meeting, it is highly recommended to announce it by email to wlcg-scod@cern.ch, so that the SCOD can make sure the relevant parties have time to collect the required information or to invite the right people to the meeting.

Best practices for scheduled downtimes

Monday

Attendance:

  • local: Julia (WLCG), Kate (WLCG, chair, DB), Maarten (WLCG, ALICE), Ivan (ATLAS), Borja (monit), Alberto (monit), Vincent (security), Marian (network), Marcelo (LHCb/CNAF)
  • remote: Jens (NDGF), John (RAL), Xavier (KIT), Di (TRIUMF), Christoph (CMS), Xin (BNL), Jeff (OSG), Pepe (PIC)

Experiments round table:

  • ATLAS reports -
    • HI data taking - proceeding as expected:
        • The limiting factor is writing to tape - 2.5 GB/s
        • In case of higher LHC efficiency, the SFO-CASTOR handshake will be switched off.
        • So far all data has been saved successfully.
    • DB: ADCR problems on Thursday - solved. GGUS:138318
    • NDGF Rucio problems reported last week - solved.
Xin asked if the problems with BNL transfers are still ongoing. Ivan replied that there were some issues in the morning. A ticket will be opened to follow up.

  • CMS reports -
    • Heavy Ion run ongoing
      • Tape archiving of RAW at CERN roughly keeping up, at 2.5-3 GB/s
      • Other activities developing tails (as expected and planned for)
        • RAW data archiving at FNAL
        • 'Prompt' reconstruction
    • SAM test submission stuck: GGUS:138351
    • Continued investigation of (remote) file access issues at RAL: GGUS:137650
    • Very good CPU utilization
      • ~180k cores Production
      • ~50k cores Analysis
Marian commented that he checked SAM but found no visible culprit. The local HTCondor has different backends; the one for ARC CEs keeps crashing, for CMS only (both production and test instances). Maarten suggested a site upgrade might be the reason. Marian replied that the ping to the ARC CEs is what crashes, each time on a different CE. The ARC CE tests will temporarily be stopped to avoid the crashes.

  • ALICE -
    • NTR

  • LHCb reports -
    • Data access issues at PIC
    • A ticket was opened with SARA concerning data transfers

Sites / Services round table:

  • ASGC: nc
  • BNL: slow transfers last week; increasing the priority solved some issues, the others will be followed up with a ticket
  • CNAF: NTR
  • EGI: nc
  • FNAL: NTR
  • IN2P3: NTR
  • JINR: 1680 cores were recently added to the Tier-1
  • KISTI: nc
  • KIT:
    • Updated dCache for CMS last Thursday to fix a bug where pools would disable themselves if files were deleted by CMS before they could be flushed to tape.
    • Downtime tomorrow to update dCache, GPFS and Postgres for the LHCb SE, lhcbsrm-kit.gridka.de.
  • NDGF: One tape library is down for a complete hardware replacement, scheduled for November 19-26 but hopefully back online by the end of the week. About 3 PB of ATLAS tape data will be offline during the intervention.
  • NL-T1: nc
  • NRC-KI: nc
  • OSG: NTR
  • PIC: A micro power cut took place in Barcelona; the UPS system did not behave properly, so cooling failed on Saturday. The UPS is now being checked. The site should be back, apart from a few disk servers.
  • RAL: There is a planned outage of CASTOR tomorrow to patch the database systems.
  • TRIUMF: NTR

  • CERN computing services: NTR
  • CERN storage services: nc
  • CERN databases: NTR
  • GGUS: NTR
  • Monitoring:
    • Final reports for the October availability were sent around
    • Issues with CMS availability computation solved
    • Issues with QA availability computation solved
    • GGUS:138351 - ETF CMS stopped testing resources on Friday; the issue is still under investigation
  • MW Officer: nc
  • Networks: NTR
  • Security: NTR

AOB:

Topic revision: r15 - 2018-11-19 - MaartenLitmaath
 