
Summary of May GDB, May 8th, 2019 (@ EGI Conference, Amsterdam)

Agenda

Introduction - Ian Collier

slides

When the room was asked about the FNAL/DUNE-hosted GDB in September, there was general nodding and no objections. Dave Kelsey highlighted that a FIM4R (Federated Identity Management for Research) meeting may also be being planned in Chicago around the same time.

Dave Kelsey (DK) - TCM in Tallinn should be added to the list

Catalin C - CVMFS in June at CERN can be added to the list

IC requested that the details of any missing meetings be emailed to him.

Benchmarking Update - Domenico Giordano (DG)

Presented remotely

slides

Volker Guelzow (VG) - Do you expect companies to run the benchmark? Who will run it? If companies, what will you allow in terms of optimised libraries and code modifications?

DG - This goes beyond preparation of the tool and needs wider discussion in WLCG. We are trying to make the barrier to running the tool as low as possible. To this end we are also making clear to the experiments that their code must be publicly licensed and the sample data must be suitable for public access. Another aspect of the answer to this question: if vendors are able to identify areas of optimisation, then why not get this feedback? It will be useful for the general optimisation of the experiments' code.

Brian Bockelman (BB) - HEPSPEC06 is useful because it has been the same for 10 years; talking about continuous integration and a benchmark together seems a bit strange. Do you see this benchmark as something for HEP, WLCG, or CERN?

DG - We can guarantee the same stability: you can choose to use the same container version for the next 10 years. However, that is not an advantage, as the experiment code evolves. There is interest in including the latest applications in the latest version of the benchmark.
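DG's point about stability via containers can be illustrated with a minimal sketch of pinning a benchmark image to a fixed version rather than a floating tag. The registry, image name, and tag below are purely hypothetical placeholders, not the actual HEP Benchmark Suite image:

```shell
# Illustrative only: pin a containerised benchmark to an immutable
# version reference so the identical workload can be re-run years later.
# The registry, image name, and tag are hypothetical placeholders.
IMAGE_TAG="registry.example.org/hep-benchmark:v1.0"   # fixed version, never "latest"

# In practice one would pass this pinned reference to the container
# runtime (e.g. "docker run --rm ${IMAGE_TAG}"); echoing it here stands
# in for that invocation.
echo "benchmark image pinned to: ${IMAGE_TAG}"
```

Pinning by content digest (`image@sha256:…`) is stronger still, since a tag can be re-pointed by the publisher while a digest cannot.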

BB - Differences in versions have an impact on pledges; there is a need for a fixed point of reference.

Maarten Litmaath (ML) - HEPSPEC is diverging more and more from our code base and is becoming less representative of it. There will need to be a process for moving between versions of the benchmark.

BB - The benchmark needs to be used for at least 5 years for use cases like pledges.

DG - Slide 3: for many years there was correlation with HEPSPEC, but we are now diverging, so there is no good justification for remaining on HEPSPEC.

Helge M - there are good arguments for having one and only one benchmark and keeping it stable for as long as possible. Last time we coupled ourselves to the idea of SPEC CPU, and we have been stable on that for over a decade. Whilst there are alarm bells being rung, we are only seeing a 20% divergence, at worst, from this for our workloads. This is a success story, but it is unclear if we can repeat this success. We should be clear that changing the benchmark every year would be a mistake, but maybe 10 years is not possible. We must seek to have one benchmark and not end up with multiple benchmarks to reflect different workloads, which ultimately vary by only a few percent.

BB - We have created a tool that creates benchmarks; perhaps we have a standard reference for WLCG, but we can provide the ability to generate new benchmarks for others to use.

HM - We need to be careful, as the benchmark is not only used for purposes such as pledges but also for procurement. There is a risk of causing confusion with the companies we work with if there are multiple benchmarks.

VG - Agreed that community-specific benchmarks may be OK, but benchmarking costs money, so we need to make sure we don't ask companies to run multiple benchmarks.

DG - We are working on ensuring the report from the benchmark provides detail on e.g. the code version. The discussion here goes beyond the work that is being done.

IC - On the point of who runs the benchmark, surely both sides run it: we procure to meet a set benchmark and provide the benchmark to the vendors. We then run the benchmark ourselves to validate the procurement.

Brief discussions in the room suggested this wasn't the case for all sites.

Mattias Wadenstein - We need to think about what the trigger point for adopting a new benchmark is, e.g. major changes in hardware.

IC - This new framework allows us to change the benchmark when we feel the need, unlike the current situation where we are reliant on outside developers. It might also be that the framework can be transferred to other communities.

ML - We are at EGI, and EGI adopted HEPSPEC06. If we do this right, we can provide this benchmark to EGI and they will also benefit.

BB - raised a concern that if the benchmark is hosted at cern.ch other communities may ignore it.

IC - It may make sense in the future to change that; it is in our control. The approach should be theoretically extensible.

IC - It should be noted that there is an action from the last Management Board for the benchmarking and cost modelling groups to work together.

Middleware Evolution - Erik Mattias Wadenstein, Maarten Litmaath

slides

IC - It is often easy to forget that this kind of migration is a normal thing for us. That is not to say it requires no work or attention, and people do need to learn things; but we do know how to do it.

Report on CREAM CE Migration Workshop - Jose Flix Molina

slides

Slide 5 - DK: the security evaluation is complete (in response to "being worked out").

IC - It may be interesting to look not just at the number of deployments, but at the amount of resources behind them.

-- IanCollier - 2019-05-08

Topic revision: r2 - 2019-05-14 - IanCollier