---+!! Summary of April GDB, April 11, 2018 %RED%DRAFT%ENDCOLOR%

%TOC%

---++ Agenda

https://indico.cern.ch/event/651352/

---++ Introduction - I. Collier

[[https://indico.cern.ch/event/651352/contributions/2960302/attachments/1630456/2600435/GDB-Introduction-2018-04-11.pdf][presentation]]

   * Mattias: the !NorduGrid workshop mentioned in the slides is actually the ARC workshop.

---++ Naples Workshop Summary - I. Collier

[[https://indico.cern.ch/event/651352/contributions/2960305/attachments/1630797/2599738/Naples-Workshop-Snapshot-20180411.pptx][presentation]]

---++ WLCG Strategy & Discussion - I. Bird

[[https://indico.cern.ch/event/651352/contributions/2960318/attachments/1630863/2600699/WLCG-StrategyGDB-110418.pdf][presentation]]

   * Mattias: it will be necessary to port software not only to new CPU architectures, but also to new libraries and compilers.
   * Romain: do not underestimate the enormous cost of identity management: it should be taken into account early on.
   * Ian B.: even if it is not very visible in the strategy document, we explained the need for a transition to the authentication mechanisms used elsewhere, and projects have been submitted to prototype work in this respect. The document will also be updated in the future.

---++ Naples - Common Data Management - next steps - S. Campana

[[https://indico.cern.ch/event/651352/contributions/2960308/attachments/1630962/2599804/WLCGDataManagementGDB-04-2018.pptx][presentation]]

   * Markus E.: will these working groups take over planning and development from, e.g., the dCache project?
   * Ian B.: the dCache team has been fully involved in these discussions from the beginning, and Patrick agrees with these plans.
   * Simone: the replacement of !GridFTP with HTTP and !WebDAV could be accomplished in the short term, coordinated between the GDB and WLCG operations coordination, as it is mostly a deployment and configuration issue (see the sketch at the end of this section).
   * Andreas: if Rucio is mentioned as a "very promising candidate" for a common solution, does this mean that it has already been decided to adopt it as such?
   * Simone: it is up to CMS, which is in the process of evaluating Rucio, to make a decision, as presented at the Rucio workshop.
   * Markus E.: what are the plans for the experiment DDM systems? Should the architecture not be defined first?
   * Simone: the experiments are part of these working groups; this concerns in particular the end-to-end integration working group. We will need some R&D, and defining an architecture will be one of the first steps, but understanding the components is a prerequisite.
   * Torre: in this context we will also address the tight relationship between data management and workload management.
   * Alessandro: how can we optimise information sharing among the different working groups? Would it not be better to have a single integration group and a single technology group, each with different activities, rather than many groups?
   * Ian B.: we know what tasks need to be done, but the organisational details still need to be fully defined.
   * Maria: during the ongoing O&C week, CMS is discussing how to get organised. We will refine the data management plans as we get input from the experiments.
   * Oxana: do you expect commitment from sites for setting up prototypes (without disturbing normal work)?
   * Simone: yes, we will need resources, hardware and people for the prototypes, initially from contingencies at facilities.
   * Ian C.: we have a good record of sites contributing to common work without breaking things, and not all sites will be asked to contribute, nor at all times.
   * Maria: we would like to kick off these activities in May.
   * Latchezar: do you plan to run these working groups in parallel with existing structures, or to absorb them? Too many working groups are difficult to manage for smaller collaborations. The mandate should include examining how to supersede current activities when there is an overlap.
   * Ian B., Ian C.: we should use current groups whenever possible and create new ones only when needed.
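As a purely illustrative aside (not part of the discussion) on the !GridFTP replacement point above: the sketch below shows a third-party copy driven through the gfal2 Python bindings. Once the storage endpoints expose HTTP/!WebDAV, switching protocols is essentially a change of URL scheme from =gsiftp://= to =davs://=, which is why this is largely a deployment and configuration matter. The hostnames and paths are hypothetical placeholders.

<verbatim>
# Minimal sketch, assuming gfal2-python is installed and a valid grid proxy
# is available; hostnames and paths are hypothetical placeholders.
import gfal2

src = "davs://source-se.example.org:443/vo/data/file.root"  # previously gsiftp://source-se.example.org/...
dst = "davs://dest-se.example.org:443/vo/data/file.root"

ctx = gfal2.creat_context()           # gfal2 context (note the library's spelling of "creat")
params = ctx.transfer_parameters()    # default third-party-copy parameters
params.overwrite = True               # replace the destination file if it already exists
params.timeout = 300                  # transfer timeout in seconds

ctx.filecopy(params, src, dst)        # the same call works regardless of the URL scheme
</verbatim>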
---++ Scientific Computing Forum report - H. Meinhard

[[https://indico.cern.ch/event/651352/contributions/2960306/attachments/1630987/2599865/2018-04-11-GDB-SCFreport.pdf][presentation]]

   * Latchezar: I see a big overlap between the SCF and the previously discussed groups.
   * Ian B.: the SCF is a discussion forum mainly intended for the funding agencies, with no decision making involved.

---++ Naples - Performance & Cost Modelling - next steps

   * Ian C.: is there enough effort available?
   * Markus: network experts would be welcome. Also, from the physics side, we would need someone with some knowledge of the HL-LHC parameters. Site managers from small sites would also be welcome; there is already representation from large sites.

---++ Naples - Analysis Facilities & Use Cases - next steps

   * Ian C.: do you see implications for the infrastructures, particularly on aspects we are not providing now?
   * Eduardo: SWAN and the infrastructure needed for scaling out interactive analysis is probably the main one. Some functionality for peeking at sparse data is another example.
   * Final remark: we need a set of topical meetings to discuss what different people are doing in the area of end-user analysis.

---++ Naples - Workload Management - next steps

   * Ian C.: is the pre-GDB a possible place to discuss this? It needs to be convenient also for the US time zone, so in the afternoon.
   * Simone, Torre, Ian, Ale: there are concrete topics in the DM/WM area that we should start addressing in working groups (or activities, or tasks): usage of high-latency media, usage of caches, and workflows in a data lake environment with CPUs not co-located with storage.
   * Alessandro DG: resource provisioning, i.e. submitting pilots (wrappers) to sites: the reality is that we have more than one solution per experiment for doing very similar things. Can we consolidate? (E.g. FTS is the single component interacting with storage; can't we have something similar for compute?) To be followed up.

---++ Network Virtualisation WG

   * Torre: making SDN end-to-end is a challenge; should we consider making it work at the level of the "lake" perimeter rather than across the full mesh?
   * Maria: yes, this is probably a good approach to start with.
   * Shawn: we can try to stay close to N2N, trying to show advantages for a selected set of sites.
   * Marian: the situation is very different depending on the R&E network provider. Some provide and share more, and some less.

-- Main.AndreaSciaba - 2018-04-11