Summary of GDB meeting, June 8, 2016 (CERN)



  • D4RI - the Digital Infrastructures for Research conference will have many co-located events

GDB steering group

  • no comments

Traceability WG



  • Oxana: walltime is under-accounted in volunteer computing; how is it normalized?
  • Michel: Are we moving to wallclock accounting? Yes, the pledges are already in wallclock.
  • Jeff: we should stop using the term "normalized time", and this would be a good time to end it.

Journal ideas

  • Currently most publications are in the CHEP proceedings, which receive very few citations.
  • Scoping is going to be hard and will require a lot of discussion.
  • The editorial line should be strict and should require some form of research content.
  • Perhaps we should not restrict ourselves to just physics?
  • The life sciences have a lot of interesting problems, mostly data-intensive issues and wide collaborations.
  • It should be open to data-intensive work, not just physics.

CVMFS release testing

  • sounds sensible

CVMFS and data federations

  • Does it increase WAN load? Yes and no; it might.
  • Will it require a lot of disk space on the WN? Yes; it is I/O-intensive on the WN.
  • How does performance compare to xrootd? Unknown.

WLCG and IPv6-only CPUs

Introduction - D. Kelsey

Google IPv6 stats: 12% of connections on average, but 43% in Belgium and 27% in the USA

  • Also, since June 1st Apple mandates that all apps in the App Store be IPv6 compliant (capable of working IPv6-only)

Motivation background

  • Oct. 2015: Canada asked whether pure IPv6 was possible for some new deployments being planned
  • Some IPv6-only resources may become available as opportunistic resources
    • 1 cloud provider offers a discount if resources are accessed through IPv6

Testbed work started a while ago; this year the WG engaged with the LHC experiments

  • 2015-16: moved from testbed to dual-stack in production
  • CERN: several significant spikes in IPv6 outbound traffic at the level of 4 Gbit/s

Yesterday pre-GDB

  • Present experience with dual-stack production services
  • Look at other developments: monitoring, security...
  • Experiment status and requests
  • 20 people in the room + 20 remote

LHCb - R. Nandakumar

The plan is to be ready for IPv6-only in (April) 2017: no major issue identified, on track

  • DIRAC was made IPv6 compliant at the end of 2014: the GridPP instance at Imperial is dual-stacked with no issues
    • The main issue has been Python libraries compiled without "--use-ipv6"
    • Still no tests with real IPv6-only WNs
  • New CERN VO boxes are dual-stacked
  • An open issue with submission from DIRAC to dual-stacked CREAM CEs (CERN, QMUL): being worked on
  • Work still needed for authorization of IPv6 WNs

Most sites still running IPv4-only services
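The Python build issue LHCb hit can be spotted at runtime. A minimal sketch using only the standard library (the helper name and the use of port 443 are illustrative assumptions, not DIRAC code):

```python
import socket

# True only if this interpreter was compiled with IPv6 support
# (the build-time option the LHCb report refers to).
print("IPv6 compiled in:", socket.has_ipv6)

def address_families(host, port=443):
    """Return the address families a host resolves to; a dual-stacked
    service would report both AF_INET and AF_INET6. Illustrative helper."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    return sorted({info[0].name for info in infos})

print(address_families("localhost"))
```

A quick check like this would also tell a site whether a given service is visible over IPv6 at all before any dual-stack testing starts.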

ALICE - C. Grigoras

Central services configured with IPv6 3 years ago, including DNS aliases

  • No problems, but most services are never contacted through IPv6
  • One exception sees 11% of its traffic through IPv6

AliEN is not yet IPv6-ready: in particular it still ships xrootd v3, but the upgrade is in progress

Support for IPv6-only WN requires all the storage to be dual-stacked: sites are *requested* to dual-stack their storage asap

CMS - A. Sciaba

11 sites have dual-stacked services and they are not causing problems

  • All are T2s except PIC
    • A small fraction of the storage
  • Several xrootd redirectors dual-stacked

Core services:

  • cmsweb, glideinWMS/pilot factories and HTCondor validated
    • Not all pilot factories are dual-stacked yet
  • CRAB is not yet IPv6 compliant
  • Frontier has issues but fixes are underway

CMS is happy with dual-stacked WNs, but requests sites to keep IPv4 connectivity on WNs until the end of Run 2 (even if degraded)

  • Sites encouraged to dual-stack their xrootd storage asap
  • CMS service maintainers are encouraged to dual-stack the services they are in charge of asap, after discussing with CMS
    • Most important services not yet dual-stacked: FTS, CVMFS, VOMS, PhEDEx Oracle

CMS is not yet ready to properly handle the situation where part of the storage is not reachable through IPv6: software development needed
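The kind of logic needed could look like the following sketch (an assumption of mine, not actual CMS code): prefer IPv6 addresses when connecting to a storage endpoint and fall back to IPv4, so partially dual-stacked storage stays usable from dual-stack WNs:

```python
import socket

def connect_prefer_ipv6(host, port, timeout=5.0):
    """Try IPv6 addresses first, then fall back to IPv4.
    Illustrative helper, not part of any CMS tool."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort the resolved addresses so AF_INET6 entries come first.
    infos.sort(key=lambda info: info[0] != socket.AF_INET6)
    last_err = None
    for family, socktype, proto, _, addr in infos:
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            return sock
        except OSError as err:
            sock.close()
            last_err = err
    raise last_err if last_err else OSError("no addresses for %s" % host)
```

On an IPv6-only WN the AF_INET attempts would simply fail, so only dual-stacked (or IPv6-only) storage remains reachable — which is why the experiments keep asking sites to dual-stack storage first.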

ATLAS - A. Dewhurst

A small number of sites have dual-stack services, including storage: no problems observed

Offline SW ready

  • The last issue, with the Frontier client, was fixed in May; a workaround is possible for older versions
    • Squid 3 is OK for ATLAS, even though a few flaws can reduce its effectiveness

Core services: all compliant, but the PanDA and Rucio servers still need to be dual-stacked at CERN

ATLAS new computing model: nucleus/satellite

  • A nucleus is in charge of data consolidation: any large site can be a nucleus for a particular task
    • A satellite only requires decent connectivity to its nucleus site
  • ATLAS would like to have several dual-stack nuclei asap

ATLAS encourages sites to upgrade their storage to dual stack asap

IPv6 WG Proposal - A. Dewhurst

As a whole, WLCG is significantly behind the commercial world in terms of IPv6 readiness

  • IPv6 becoming mainstream
  • It is mostly small sites with a limited number of allocated IPv4 addresses that are really pushing: big sites tend to have big chunks of addresses
    • Fortunately some large sites like CERN have made a huge effort to drive the move

The eventual goal is to replace IPv4 by IPv6: running two protocols is more complex than running one...

  • Would be good to dual stack a small number of services...
  • WN is the easiest resource to make IPv6 only
  • Idea is to require leading facilities to support dual stack services (e.g. T1s) and allow other sites to upgrade directly from IPv4 to IPv6
    • Still being discussed with the VOs: what is the minimum amount of dual-stack storage before a VO is comfortable with IPv6-only WNs?
    • It would be good if a site wanted to become IPv6-only for WNs this year: it would greatly help
  • A lot of testing has been done by the HEPiX IPv6 WG and no blocking issues have been identified
    • Key SW/protocols work: if a piece of software has no developer able to do the work, experiments should consider moving away from it

Agreements with VOs so far

  • All VOs encourage sites to dual stack their storage
  • All VOs working towards making their central services dual-stacked by April 2017
  • Shared services like CVMFS should be reachable through IPv6 by April 2017
  • T1s should provide dual-stack storage with 90% availability by April 2017
    • At least 1 GB/s by April 2017 and 10 GB/s by April 2018

-- MichelJouvin - 2016-06-08

Topic revision: r3 - 2016-07-20 - IanCollier