Summary of December 2019 GDB, December 11th, 2019


Agenda

For all slides etc. see: Agenda

Introduction

Speaker: Ian Collier (Science and Technology Facilities Council STFC (GB))

slides

  • March GDB at Taipei
  • Formally welcome Mattias and Pepe as new chairs
  • Let Mattias and Pepe know about any updates
Discussion:
  • Mattias: Could not imagine running for chair without a steering group
  • Maarten: Use the opportunity to thank Ian for his helmsmanship, for getting us ready for our own H2020, and for interesting years ahead. Hope Ian will be around in the future.
    • Ian: Don't have totally new objectives, will be here at least in January and take it from there
  • Pepe: Will be a pleasure for us to do this over the next few years, hopefully it will be at the same level as you have done!

SVG Deployment Expert group

Speaker: Linda Cornwall (Science and Technology Facilities Council STFC (GB))

slides

Discussion:

  • Maarten: Hope that the traffic will be even less than Linda said. It is not that we have a vulnerability every week. It will peak at times - a few mails, but about the same topic. Our experience over many years is that vulnerabilities don't come every week; they are randomly spread, and we treat each one as quickly as possible with the help of the DEG.

CTA Deployment and Migration from CASTOR

Speaker: Michael Davis (CERN)

slides

Discussion

  • Maarten: You mentioned the LHC experiments are the biggest customers. Could imagine other experiments are interested, once they are ready before their next run.
  • Michael: Not waiting until the end of 2020 to talk to smaller experiments; will start talking to them early next year. The easiest use case is a smaller experiment that already uses FTS, which could be migrated after we do ATLAS first.
  • Vincent: What about experiments not using FTS?
  • Michael: There is a long tail of experiments; we need to talk to them individually to understand their data management software and workflows. Already working with ALICE, who don't use FTS; need to work that out.
  • Liviu: SSD buffer: 250 TB per experiment. Is that enough for ATLAS?
  • Michael: The bandwidth in will be 10 Gb/s, and the buffer is sized for exactly that, based on how much data is being written in. ATLAS in the first instance write to the EOS buffer in the pit, from which data is slowly moved onto tape over months; they are not writing directly from the data queue.
  • Mattias: Regarding non-SRM TPC: our site doesn't use xrootd, https only. The question is, am I an outlier?
  • Maarten: dCache?
  • Mattias: It can do it, but we are not interested in testing it
  • Michael: Know that the latest dCache can do xrootd TPC
  • Maarten: You're [Mattias] not concerned with LHCb, others are. [dCache] 5.2 or higher should be in good shape. Don't think we're worried about the scenarios that Michael has presented. We still have time to sort out niggles that come up. The DOMA work has been ongoing for 2 years now; we haven't just started recently and hoped for the best, but have spent time thinking about this. We know that Run 3 is going to be late, which buys us some time. All in all I think we should be good.
  • Pepe: [I understand that you're] going to be busy with migrations. Do you have plans to plug CTA into other storage like dCache?
  • Michael: dCache have talked to us. Happy to support that but not on our development plan. Hope that eventually have dCache as backend, but that's for dCache devs.
  • IanC: Observe that at RAL we considered using a different backend but will be using EOS.

Accounting: status, plans and data validation

Speaker: Julia Andreeva (CERN)

slides

Discussion

  • Oxana: When writing a report, it would be nice to see how much data ATLAS has collected, etc. Data collected, how much is in copies, etc.
  • Julia: Disk usage?
  • Oxana: When you write a report you want the data; the disk/tape/medium info is not so important. What is more important is what data has been collected and the copies - it would be interesting to know what percentage is copies. Not per experiment/anything.
  • Julia: This was out of scope of this work. We never had proper storage accounting before - yes within experiments, but not generally. As a first approximation, the goal is to get trustworthy numbers for data on disk/tape. The actual content of the data (analysis, second copy) maybe in the future; for the time being the focus is on reliable data.
  • Maarten: Very experiment specific. For a funding agency it shouldn't matter - that it is data the experiment cares about is what counts; the exact purpose probably isn't.
  • Oxana: We need to write up our data usage, and would like to show our number [of copies]. Yes it is experiment specific, but it would be very useful.
  • Maarten: It's maybe not rocket science, but the devil is in the details
    • Oxana: It doesn't have to be precise
  • Maarten: Would dispute that this should be exposed to third parties. What would they use it for - that one site has a large number of copies/AODs/etc.?
  • Julia: Each experiment has a policy for how many copies. The implementation of the storage can also impact the number of copies (internal policy)
  • Pepe: This is something that site admins have been dreaming of. I already sent you some comments. For tape metrics, we had a way to build a JSON file, asking sites to provide the data. We need to interpret the data in the right way, and to validate it.
  • Julia: Need to discuss in the archival working group, and review how the JSON is doing
  • Pepe: Spacetokens are part of the pledge.
  • Julia: Thinking about the CRIC topology. We don't split storage into different areas, but could also have a flag for whether it is pledged or not. Then in the SRR one can see what should be in the pledge or not. WLCG doesn't take into account local storage which is not pledged. While it may be [good] to show people, we don't have this capability yet. First we need to have the flag in CRIC, then the SRR can use it: pledge/buffer, etc. We can find a way to deal with it.
  • Pepe: Sites can always go and amend the report
  • Julia: Can tell you that 2 sites didn't do this, but we can chase them
  • Mattias: On validation of the data: are there ready-made graphs or an API, to get site data to include?
  • Julia: You can do that already. I don't have a recipe to hand, but I think it is possible
  • Mattias: If this could be documented it would be very useful.
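As an illustrative sketch only, the JSON-based storage reporting discussed above has roughly the following shape. This is not the authoritative SRR schema: the field names, the site name, and in particular the "pledged" flag are assumptions added here to show the kind of per-share annotation the discussion proposes:

```json
{
  "storageservice": {
    "name": "EXAMPLE-SITE-disk",
    "implementation": "dCache",
    "latestupdate": 1576022400,
    "storageshares": [
      {
        "name": "ATLASDATADISK",
        "totalsize": 2500000000000000,
        "usedsize": 1800000000000000,
        "vos": ["atlas"],
        "pledged": true
      }
    ]
  }
}
```

A per-share flag such as "pledged" (hypothetical here) would let the SRR distinguish pledged storage from local, non-pledged areas once the corresponding flag exists in CRIC.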

SOC WG update

Speaker: David Crooks (Science and Technology Facilities Council STFC (GB))

slides

Discussion

  • Maarten: Would the next milestone be reporting that in a given incident the SOC capabilities were in play?
  • David: It's definitely part of our planning that as the prototype capabilities grow we look at how they are used operationally - this is a new tool for the operational teams. Had this in mind during last SSC and definitely in future ones that we identify which information came from the prototype SOCs, and perhaps hence which capabilities we might add to those deployments. At the WG level we absolutely need to stay in touch with how the prototypes are developing and how they are being used - this will in many cases be a new capability for sites.
  • Vincent: During a recent OSG/EGI CSIRT exercise, could have usefully used MISP to share IoCs.

Monit Project

Speaker: Borja Garrido Bear (CERN)

slides

Discussion

  • Maarten: Many already know this
  • Pepe: Looking into the CMS monitoring, the new dashboards, I found some inconsistencies - how would you like to hear about these?
  • Borja: We are working with the experiments; each one has set up a WG. For inconsistencies, the first point of contact should be the experiment WG; if they can't fix it they will contact us.
Topic revision: r2 - 2020-02-10 - JosepFlix
 