IPv6 Task Force

Mandate and goals

The imminent exhaustion of the IPv4 address space will eventually require the WLCG services to be migrated to an IPv6 infrastructure, on a timeline heavily dependent on the needs of individual sites. For this reason the HEPiX IPv6 Working Group was created in April 2011 with this mandate.

The WLCG Operations Coordination and Commissioning Team has created an IPv6 Task Force to work in close collaboration with the HEPiX IPv6 WG on the following aspects (listed in chronological order):

  • Define realistic IPv6 deployment scenarios for experiments and sites
  • Maintain a complete list of clients, experiment services and middleware used by the LHC experiments and WLCG
  • Identify contacts for each of the above and form a team of people to run tests
  • Define readiness criteria and coordinate testing according to the most relevant use cases
  • Recommend viable deployment scenarios

perfSONAR dashboard links

OSG IPv6 activities

The OSG IPv6 activities are detailed here.

WLCG Tier-2 IPv6 deployment status (2017-2018) [last checked on 2019-10-11]

  • Charts: chart.png, chart2.png, chart3.png (Tier-2 IPv6 deployment status over time)

VO T2 storage on IPv6 (%)
ALICE 85
ATLAS 59
CMS 89
LHCb 75
WLCG 73
(checked on 2019-10-11)

  • Charts: chart4.png, chart5.png

Site Region ALICE ATLAS CMS LHCb Status perfSONAR Storage Ticket Details
UKI-GridPP-Cloud-IC UK       Y Done NA NA GGUS:131599 The site is an extension of UKI-LT2-IC-HEP, has no pS or storage, and all its services are IPv6-enabled
UKI-LT2-Brunel UK   Y Y Y Done NA Tested GGUS:131600 Dual stack on all services for years
UKI-LT2-IC-HEP UK   Y Y Y Done NA Tested GGUS:131601 pS not deployed by choice of the site
UKI-LT2-QMUL UK   Y Y Y Done Dual stack Tested GGUS:131602  
UKI-LT2-RHUL UK   Y Y Y In progress Dual stack IPv4 GGUS:131603 Still waiting for the central IT service to provide DNS, which is outsourced to JANET
UKI-LT2-UCL-HEP UK   Y     Done Dual stack NA GGUS:131604  
UKI-NORTHGRID-LANCS-HEP UK   Y   Y Done Dual stack Tested GGUS:131605  
UKI-NORTHGRID-LIV-HEP UK   Y   Y On hold Dual stack IPv4 GGUS:131606 Unable to enable IPv6 on storage for fears of overloading routers and firewalls; need to wait for the upgrades planned for the next financial year
UKI-NORTHGRID-MAN-HEP UK   Y   Y Done Dual stack Dual stack GGUS:131607 Note: pS is currently off because it needs to be upgraded to the latest version
UKI-NORTHGRID-SHEF-HEP UK   Y   Y In progress IPv4 IPv4 GGUS:131608 Will deploy IPv6 on pS in the first week of June; the disk will be decommissioned and never put in dual stack
UKI-SCOTGRID-DURHAM UK   Y   Y Done Dual stack Dual stack GGUS:131609  
UKI-SCOTGRID-ECDF UK   Y   Y Done Dual stack Dual stack (partial) GGUS:131610 ECDF storage in dual stack, ECDF-RDF will never be in dual stack
UKI-SCOTGRID-GLASGOW UK   Y Y Y In progress Dual stack IPv4 GGUS:131611 The new DC is almost ready and it will have IPv6. ETA is 15 October
UKI-SOUTHGRID-BHAM-HEP UK Y Y   Y In progress IPv4 IPv4 GGUS:131612 Now have access to the DNS system; must learn how to use it
UKI-SOUTHGRID-BRIS-HEP UK   Y Y Y Done Dual stack Tested GGUS:131613  
UKI-SOUTHGRID-CAM-HEP UK   Y   Y Done Dual stack Tested GGUS:131614 No answer from ATLAS, assumed OK
UKI-SOUTHGRID-OX-HEP UK Y Y Y Y On hold Dual stack IPv4 GGUS:131615 Stated not to be a priority for the university
UKI-SOUTHGRID-RALPP UK   Y Y Y Done Dual stack Tested GGUS:131616  
UKI-SOUTHGRID-SUSX UK   Y     Done Dual stack Testing GGUS:131617 Deployment completed, to be checked by ATLAS
IN2P3-CPPM FRANCE   Y   Y Done Dual stack Testing GGUS:131782  
IN2P3-CC-T2 FRANCE   Y Y   Done Dual stack Dual stack GGUS:131781 Services shared with Tier-1
GRIF_IRFU FRANCE Y Y Y Y Done Dual stack Tested GGUS:131778  
GRIF_LLR FRANCE Y Y Y Y Done Dual stack Tested GGUS:131778  
GRIF_LPNHE FRANCE Y Y Y Y Done Dual stack Dual stack GGUS:131778  
GRIF_IPNO FRANCE Y Y Y Y Done Dual stack Tested GGUS:131778 No perfSONAR of its own; it shares the perfSONAR of GRIF_LAL
GRIF_LAL FRANCE Y Y Y Y Done Dual stack Dual stack GGUS:131778  
IN2P3-LAPP FRANCE   Y   Y Done Dual stack Testing GGUS:131784 Deployment completed, waiting for checks from ATLAS
IN2P3-LPC FRANCE Y Y   Y Done Dual stack Tested GGUS:131785 No answer from ATLAS
IN2P3-LPSC FRANCE Y Y     Done Dual stack Tested? GGUS:131786  
IN2P3-SUBATECH FRANCE Y       Done Dual stack Tested GGUS:131787 pS and EOS now dual stack, verified by ALICE. Ready to close the ticket
IN2P3-IRES FRANCE Y   Y   Done Dual stack Tested GGUS:131783  
HEPHY-UIBK IT   Y     Done NA NA GGUS:131779 The site does not have any storage accessible from the outside
Hephy-Vienna IT Y   Y   On hold IPv4 IPv4 GGUS:131780 The new data centre will be ready in Q2 2020
INFN-Bari IT Y Y Y Y Done Dual stack Tested GGUS:131788  
INFN-CATANIA IT Y       Done Testing Tested GGUS:131789  
INFN-FRASCATI IT   Y     Done Dual stack Tested? GGUS:131790  
INFN-LNL-2 IT Y   Y Y Done Dual stack Tested GGUS:131791  
INFN-MILANO-ATLASC IT   Y     Done Dual stack Tested GGUS:131792  
INFN-NAPOLI-ATLAS IT   Y   Y Done Dual stack Dual stack GGUS:131793  
INFN-PISA IT     Y Y In progress IPv4 Testing GGUS:136471 Need to reinstall pS and add IPv6 addresses; need to fix xrootd access
INFN-ROMA1 IT   Y     Done Dual stack Tested? GGUS:131795  
INFN-ROMA1-CMS IT     Y   On hold IPv4 IPv4 GGUS:131796 First, we need to upgrade dCache to a recent version and fix a problem with CVMFS. ETA unclear, need to hire a new technician
INFN-TORINO IT Y     Y In progress IPv4 IPv4 GGUS:131797 IPv6 peering being activated, waiting for a new switch module. ETA end of 2018 (probably earlier)
wuppertalprod DE   Y     On hold IPv4 IPv4 GGUS:131967 IPv6 address blocks being rolled out but slowly; deployment expected some time in 2019
GoeGrid DE   Y     In progress IPv4 IPv4 GGUS:131952 Need to reinstall pS; updates on the timescale soon
DESY-HH DE   Y Y Y Done Dual stack Tested GGUS:131950  
LRZ-LMU DE   Y     Done NA Tested GGUS:131957 No perfSonar for security reasons
MPPMU DE   Y     On hold NA IPv4 GGUS:131958 IPv6 deployment will not start before the end of this year and no date can be estimated
DESY-ZN DE   Y   Y On hold IPv4 IPv4 GGUS:131951 Router upgrade on Q1 2019
UNI-FREIBURG DE   Y     On hold IPv4 IPv4 GGUS:131964 Plans already laid down; slight concern about the firewall setup, to be done at the storage services level, and without assistance from the network department
RWTH-Aachen DE     Y   Done Dual stack Tested GGUS:131962  
NCG-INGRID-PT IBERGRID   Y Y   On hold IPv4 IPv4 GGUS:131959 Started to look into it, new ETA is end of 2019
IFCA-LCG2 IBERGRID     Y   Done Dual stack Tested GGUS:131955  
UAM-LCG2 IBERGRID   Y     Done Dual stack Tested GGUS:131963  
ifae IBERGRID   Y     Done Dual stack Tested GGUS:131954 Embedded in PIC Tier-1
USC-LCG2 IBERGRID       Y Done Dual stack Dual stack GGUS:131966  
IFIC-LCG2 IBERGRID   Y     Done Dual stack Dual stack GGUS:131956  
CIEMAT-LCG2 IBERGRID     Y   Done Dual stack Tested GGUS:131947  
CSCS-LCG2 CH   Y Y Y Done Testing Tested GGUS:131948  
UNIBE-LHEP CH   Y     On Hold NA IPv4 GGUS:131965 A major overhaul of the site infrastructure has pushed IPv6 back
praguelcg2 CZ Y Y     Done Dual stack Testing GGUS:131960 asked ATLAS and ALICE to check
BUDAPEST HU Y   Y   Done Dual stack Tested GGUS:131946  
CYFRONET-LCG2 PL Y Y   Y On hold IPv4 IPv4 GGUS:131949 ETA for storage is June 2018, no problems expected
PSNC PL Y Y   Y Done NA Tested GGUS:131961 Ticket kept open until VObox is dual stack, but deployment completed from the WLCG point of view
ICM PL     Y Y Done NA Tested GGUS:131953  
NCBJ PL       Y On hold IPv4 IPv4 GGUS:138521 Changed network provider, now ETA is mid-September for IPv6
GR-07-UOI-HEPLAB GRNET     Y   Done Dual stack Tested GGUS:132103  
GR-12-TEIKAV GRNET   Y     In progress NA IPv4 GGUS:132104 Still waiting for IPv6 from the university; no exact ETA, but it should be within a few months
SE-SNIC-T2 NDGF Y Y     Done NA Dual stack GGUS:132114  
FI_HIP_T2 NDGF     Y   Done Dual stack Tested GGUS:132101  
T2_Estonia NDGF     Y   Done Dual stack Tested GGUS:132116  
BelGrid-UCL NL     Y   Done IPv4 Tested GGUS:132100 No ETA for pS
BEgrid-ULB-VUB NL     Y   Done Dual stack Dual stack GGUS:132099  
RO-14-ITIM RO   Y     Done Dual stack Tested? GGUS:132112 Working on pS issues
RO-11-NIPNE RO       Y Done IPv4 NA GGUS:132110 pS is now tracked by a different ticket
NIHAM RO Y       Done IPv4 Tested GGUS:132107  
RO-07-NIPNE RO Y Y   Y Done Dual stack Tested GGUS:132109  
RO-13-ISS RO Y       Done NA Tested GGUS:132111  
RO-02-NIPNE RO   Y     In progress IPv4 Tested? GGUS:132108 All storage nodes dual-stacked; will proceed with pS next week after upgrading to 4.1.3; routing issues to be solved (same as at NIHAM)
RO-16-UAIC RO   Y     Done Dual stack NA GGUS:132113  
RO-03-UPB RO Y       Done Dual stack Tested    
SiGNET SI   Y     Done Dual stack Testing GGUS:132115 Deployment completed, waiting for the ATLAS confirmation
IEPSAS-Kosice SK Y Y     Done Dual stack Tested GGUS:132105  
FMPhI-UNIBA SK Y Y     Done Dual stack Tested GGUS:132102  
WEIZMANN-LCG2 IL   Y   Y In progress NA IPv4 GGUS:132118 Network people working on it, ETA end of 2018 seems likely
IL-TAU-HEP IL   Y   Y Done NA Testing GGUS:132106  
TECHNION-HEP IL   Y   Y Done Dual stack Testing GGUS:132117  
RU-SPbSU Russia Y     Y In progress IPv4 IPv4 GGUS:132276 Plan to install the hardware in September
ITEP Russia Y Y Y Y Done Dual stack Tested GGUS:132270  
ru-PNPI Russia Y Y Y Y Done Dual stack Dual stack GGUS:132274 IPv6 deployed and ticket closed before I could ask the experiments to check...
RU-Protvino-IHEP Russia Y Y Y Y Done Dual stack Tested? GGUS:132275 pS issues to be dealt with in another ticket; storage tested OK by CMS and LHCb
JINR-LCG2 Russia Y Y Y Y Done Dual stack Tested GGUS:132271  
RRC-KI Russia Y Y   Y In progress IPv4 IPv4 GGUS:132273  
Ru-Troitsk-INR-LCG2 Russia Y   Y Y Done NA Tested GGUS:132277  
UA-KNU UA Y       In progress NA IPv4 GGUS:132282 Still busy with reconfiguring devices for IPv6
UA-BITP UA Y       Done NA Tested GGUS:132280  
UA-ISMA UA Y       In progress NA IPv4 GGUS:132281 Still busy with reconfiguring devices for IPv6
Kharkov-KIPT-LCG2 UA     Y   Done Dual stack Tested GGUS:132272  
TR-03-METU TR     Y   Done Testing Tested GGUS:132278  
TR-10-ULAKBIM TR   Y     Done NA Testing GGUS:132279 To be checked
BEIJING-LCG2 CHINA   Y Y   Done Dual stack Testing GGUS:132266 Everything dual stack, waiting for experiment tests
ZA-CHPC AfricaArabia Y Y     Done NA Dual stack (partially) GGUS:132283 ALICE OK, ATLAS storage to be completely overhauled and therefore it will be deployed with IPv6 support
CA-SCINET-T2 Canada   Y     Done IPv4 IPv4 GGUS:132268 Site to be decommissioned in June 2018, so no need to deploy IPv6. A new site, CA-WATERLOO-T2, will be commissioned in March
CA-MCGILL-CLUMEQ-T2 Canada   Y     Done IPv4 IPv4 GGUS:132267 Site to be decommissioned and replaced by CA-WATERLOO-T2 in March
CA-WATERLOO-T2 Canada   Y     On hold IPv4 IPv4 GGUS:137950 IPv6 on low priority due to several more urgent issues; ETA is mid-2019
CA-VICTORIA-WESTGRID-T2 Canada   Y     In progress IPv4 IPv4 GGUS:132269 Maintenance intervention to install new network line cards foreseen for September 25
Australia-ATLAS AsiaPacific   Y     On hold IPv4 IPv4 GGUS:132472 Working on the network reconfiguration but IP range addressing and DNS are not yet enabled for IPv6
IN-DAE-VECC-02 AsiaPacific Y       Done NA Tested GGUS:132476  
TOKYO-LCG2 AsiaPacific   Y     Done Dual stack Tested GGUS:132481  
TW-FTT AsiaPacific   Y     Done Dual stack Dual stack GGUS:132482  
NCP-LCG2 AsiaPacific Y   Y   Done Dual stack Tested GGUS:132480  
INDIACMS-TIFR AsiaPacific     Y   Done Dual stack (local) Tested GGUS:132477 Storage tests passed, but FTS transfers started to fail, had to roll back IPv6
T2-TH-SUT AsiaPacific Y       Done Dual stack Tested GGUS:132486  
EELA-UTFSM LA   Y     Done Dual stack Tested GGUS:132474  
ICN-UNAM LA Y       On hold NA IPv4 GGUS:132475 The local NOC is working on the IPv6 deployment, which will be fully available around November
CBPF LA Y     Y In progress Dual stack Dual stack GGUS:132473 The new equipment required for IPv6 has arrived and is now being configured
SUPERCOMPUTO-UNAM LA Y       In progress NA Testing GGUS:132485 Will deploy a new xrootd storage element with the latest xrootd version, to which data from the old storage will be migrated
SAMPA LA Y     Y Done Dual stack Tested GGUS:132484  
T2_BR_SPRACE USCMS     Y   Done        
T2_BR_UERJ USCMS     Y   In progress        
T2_US_Caltech USCMS     Y   Done        
T2_US_Florida USCMS     Y   Done        
T2_US_MIT USCMS     Y   On hold       IPv6 not supported on campus, no ETA yet
T2_US_Nebraska USCMS     Y   Done        
T2_US_Purdue USCMS     Y   Done        
T2_US_UCSD USCMS     Y   Done        
T2_US_Vanderbilt USCMS     Y   In progress       GridFTP OK, xrootd no IPv6 addresses
T2_US_Wisconsin USCMS     Y   Done        
AGLT2 USATLAS   Y     Done        
MWT2 USATLAS   Y     In progress        
NET2 USATLAS   Y     In progress        
SWT2_OU USATLAS   Y     In progress        
SWT2_UTA USATLAS   Y     On hold        

Legend:

  • Status: No reply, on hold, in progress, done
  • perfSONAR: NA (not available at site), (only) IPv4, Dual stack
  • Storage: NA (not available at site), (only) IPv4, Dual stack (and not tested), Testing, Tested

Notes:

  • Tickets are submitted progressively over time, so not all sites are listed yet.

Some experiments track the IPv6 readiness status independently:

Experiment Specific checks

ATLAS

For ATLAS, before migrating your storage to IPv6, please send an email for information to atlas-adc-ddm-support at cern.ch and atlas-adc-dpa at cern.ch.
ATLAS set up an ETF IPv6-only testing node to check the behaviour of the sites. In general:
  • check the FTS monitoring with the IPv6 filter to make sure transfers are succeeding;
  • check PanDA and HammerCloud to make sure there are no changes in failure rates due to the IPv6 migration.

Reports

Report 14/09/2017

A support unit for IPv6 has been created in GGUS. Some experts from the HEPiX IPv6 working group are volunteering to be members of it.

A WLCG broadcast will be sent very soon with this content:

The WLCG management and the LHC experiments approved several months ago (+) a deployment plan for IPv6 (++) which requires that:

  • all Tier-1 sites provide dual-stack access to their storage resources by April 1st 2018
  • all Stratum-1 and FTS instances for WLCG need to be dual-stack by April 1st 2018
  • the vast majority of Tier-2 sites provide dual-stack access to their storage resources by the end of Run2 (end of 2018).

All WLCG sites are therefore invited to plan accordingly in case they have not yet met these requirements. Individual tickets will be sent in the coming weeks to Tier-2 sites (Tier-1 sites are already tracked separately) to track their progress.

Various support channels are available.

Interested sites may also join the HEPiX IPv6 working group (https://hepix-ipv6.web.cern.ch/), which provides some documentation.

(+) https://espace.cern.ch/WLCG-document-repository/Boards/MB/Minutes/2016/MB-Minutes-160920-v1.pdf

(++) https://indico.cern.ch/event/467577/contributions/1976037/attachments/1340008/2017561/Kelsey20sep16.pdf

Report 29/09/2016

See slides.

Report 02/06/2016

Next week's pre-GDB is devoted to IPv6, as is a two-hour slot in the GDB. The main topics to be discussed are:

  • Experiment requirements
  • Status of support for IPv6-only CPUs
  • Experience on dual-stack services
  • Monitoring and IPv6
  • Security and IPv6
  • Status of WLCG tiers and LHCOPN/LHCONE

Report 05/11/2015

  • Deploying an instance of ETF (new implementation of Nagios for SAM) to test the nodes in the IPv6 testbed

Report 17/09/2015

Update on the status of IPv6 deployment in WLCG (from Bruno Hoeft)

Tier-1
Site LHCOPN IPv6 peering LHCONE IPv6 peering perfSONAR via IPv6
ASGC - - -
BNL not on their priority list
CH-CERN yes yes LHC[OPN/ONE]
DE-KIT yes yes LHC[OPN/ONE]
FNAL yes yes LHC[OPN/ONE] but not yet visible in Dashboard
FR-CCIN2P3 yes yes LHC[OPN/ONE] but not yet visible in Dashboard
IT-INFN-CNAF - yes LHCONE
NDGF yes yes LHC[OPN/ONE]
ES-PIC yes yes LHCOPN
KISTI started but no peering implemented
NL-T1 no peering implemented
TRIUMF IPv6 peering planned at end of 2015
RRC-KI-T1 - - -

Tier-2
Site LHCONE IPv6 peering perfSONAR
DESY yes LHCONE
CEA SACLAY yes -
ARNES yes -
WISC-MADISON yes -
UK sites QMUL peers with LHCONE but not for IPv6
Prague FZU IPv6 still working but the previous contact person left
There are additional IPv6 perfSONAR servers at Tier-2 centres, but not via LHCONE.

Report 07/05/2015

  • LHCb: DIRAC was made IPv6-compatible back in November, but testing only started in April: a DIRAC installation on a dual-stack machine is running at CERN. It was successfully verified that it can be contacted from IPv6 and IPv4 nodes and can run jobs submitted from LXPLUS. However, 50% of client connections failed (hidden by the automatic retries), which was found to be caused by a CERN Python library returning a wrong IPv6 address.

Report 02/04/2015

  • FTS3 testbed operational, with servers at KIT and Imperial College both working fine
  • The following sites activated IPv6:
    • LHCOPN: CERN, KIT, NDGF, PIC, NL-T1, IN2P3-CC, HIP
    • LHCONE: CERN, CEA Saclay, IN2P3 -CC, IJS (NDGF site)
  • OSG is testing (among other middleware) glideinWMS. The central manager, frontend and schedd machines have to be dual stack and can talk to IPv4, IPv6 and dual-stack startds. glideinWMS must tell wget that it prefers IPv6 (details)
  • OSG confirmed that Bestman2 is IPv6-compliant, but srmcp is not (it has not been patched for the extensions needed for IPv6)
  • squid 2 is not IPv6-compliant, while squid 3 is. OSG is still using squid 2
  • Duncan's dual-stack mesh includes several dual-stack perfSONAR instances (~14 sites included) (link)

Task Overview

Task Deadline Progress Affected VO Affected Sites Comment
WLCG applications readiness   60% All All Maintain software component readiness information in this table
User scenarios   100% All All Define the relevant user scenarios to be tested by the experiments
Experiment tests   ATLAS, CMS started All All Have the experiments test their main workload/data management tools and central services over IPv6

Scenarios

We can classify the actors in these categories:

Users
end users (human or robotic) using a client interface to interact with services
Jobs
user processes running on a batch node
Site services
services present at all sites (CE, SE, BDII, CVMFS, ARGUS, etc.)
Central services
services present at only a few sites (VOMS, MyProxy, Frontier, Nagios, etc.)

The following table describes the requirements of the corresponding nodes in terms of IP protocol, on a timescale of a few years from now.

Node Network Requirement
User IPv4 MUST work, as users can connect from anywhere
User IPv6 SHOULD work, but it would concern only very few users working from IPv6-only networks
User dual stack MUST work, it should be the most common case in a few years
Batch IPv4 MUST work, as some batch systems might not work on IPv6, or e.g. the site might want to use AFS internally
Batch IPv6 MUST work, as some sites might exceed their IPv4 allocation otherwise
Batch dual stack MUST work, as some sites might want to use legacy software but also be fully IPv6-ready (e.g. CERN)
Site service IPv4 MUST work, as many institutes will not adopt IPv6 for some years and backward compatibility is required
Site service IPv6 SHOULD work, but it will have to work when there will be new sites with only IPv6
Site service dual stack MUST work, it should be the most common case in a few years
Central service IPv4 MAY work, but central services can be expected to run at sites with an IPv6 infrastructure
Central service IPv6 MAY work, as above sites certainly have an IPv4 infrastructure
Central service dual stack MUST work, and all above sites are expected to be able to provide dual-stack nodes
Existing WLCG sites may have only IPv4 and will not be forced by WLCG to deploy IPv6 to continue working. This is obviously true for resources that WLCG cannot control (opportunistic, clouds, etc.). On the other hand, WLCG should allow new sites to deploy only IPv6 in a scenario where IPv4 addresses cannot be obtained. Therefore, a realistic scenario is one in which some sites are accessible only via IPv4, some only via IPv6 and some via both protocols. Similarly, users may have to work from nodes supporting only IPv4, only IPv6 or both.

An additional constraint comes from storage federations: sites using only one protocol will not be able to read data from sites using only the other. Therefore, sites wishing to participate in a storage federation will need to deploy their SEs in dual stack once sites with IPv6-only WNs become a reality.

In such a scenario, central services are obviously required to work in dual stack using both protocols and to be hosted at eligible sites.

All middleware used at a site must work via both protocols, to accommodate IPv4-only and IPv6-only sites. Sites are recommended to deploy the services they expose to the outside in dual stack, but it is not a requirement (except in the storage federation case).

To summarise, these are the testing scenarios to be considered:

  • central services MUST be deployed on dual stack nodes and tested using both protocols
  • site services MUST be deployed on dual stack nodes and tested using both protocols (which guarantees they work in IPv4/6 mode)
  • user clients and libraries MUST be deployed on dual stack nodes and tested using both protocols (which guarantees they work in IPv4/6 mode)
  • batch nodes MUST be deployed on IPv4, IPv6 or dual stack nodes (not all three configurations might be possible for a given site, though).

From now on, all services are assumed to run on dual-stack nodes. Moreover, when testing on a dual-stack testbed, tests need to be run forcing either IPv4 or IPv6 on the client node.
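Forcing one protocol on a dual-stack client can be done at the name-resolution level. A minimal Python sketch (the helper name `resolve` is ours for illustration, not part of any WLCG tool):

```python
import socket

def resolve(host, force=None):
    """Return the literal addresses of host, optionally forcing one protocol.

    force: None (no preference), "ipv4" or "ipv6".
    """
    family = {None: socket.AF_UNSPEC,
              "ipv4": socket.AF_INET,
              "ipv6": socket.AF_INET6}[force]
    infos = socket.getaddrinfo(host, None, family, socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr); the
    # first element of sockaddr is the literal address.
    return sorted({info[4][0] for info in infos})

# On a dual-stack node an unforced lookup may return both families,
# while forcing IPv4 restricts the result to A records:
print(resolve("localhost", force="ipv4"))
```

A test harness built on this idea can run the same transfer twice, once per forced family, and compare the outcomes.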

Use cases to test

Basic job submission

The user submits a job using the native middleware clients (CREAM client, Condor-G, etc.) or intermediate services (gLite WMS, glideinWMS, PanDA, DIRAC, AliEN, etc.).

User CE Batch Notes
IPv4 dual stack IPv4  
IPv4 dual stack dual stack  
IPv4 dual stack IPv6  
dual stack dual stack IPv4 also forcing IPv6 on user node
dual stack dual stack dual stack also forcing IPv6 on user node
dual stack dual stack IPv6 also forcing IPv6 on user node
All "auxiliary" services (ARGUS, VOMS, MyProxy, etc.) are supposed to work on dual stack, but may run on IPv4 initially for practical purposes, to avoid having a full dual-stack service stack right from the beginning. This remark is totally general and applies to all tests described below.

In case of intermediate services, the tests become much more complex given the higher number of services involved.

Basic data transfer

The user copies a file from their node to a SE and back.

User SE Notes
IPv4 dual stack  
dual stack dual stack also forcing IPv6 on user node
In this context, a batch node reading/writing to a local or remote SE is treated as a user node. The file copy MUST be tried with all protocols supported by the SE.

Third party data transfer

The user replicates a set of files between sites via FTS-3.

User SEs (source, destination) FTS-3 Notes
IPv4 dual stack dual stack in practice FTS3 could be IPv4 forever
dual stack dual stack dual stack in practice FTS3 could be IPv4 forever

Production data transfer

The user replicates a dataset using experiment-level tools (PhEDEx, DDM, DIRAC, etc.).

User SEs (source, destination) FTS-3 Experiment tool Notes
IPv4 dual stack dual stack dual stack  
dual stack dual stack dual stack dual stack  

Conditions data

A job accesses conditions data from a batch node via Frontier/squid.

Batch squid Frontier Notes
IPv4 dual stack dual stack  
IPv6 dual stack dual stack  
dual stack dual stack dual stack  

Experiment software

A job accesses experiment software in CVMFS from a batch node.

Batch squid Stratum0/1 Notes
IPv4 dual stack dual stack  
IPv6 dual stack dual stack  
dual stack dual stack dual stack  

Experiment workflow

A user runs a real workflow (event generation, simulation, reprocessing, analysis).

This test combines all previous tests into one.

Information system

A user queries the information system.

User BDII Notes
IPv4 dual stack  
dual stack dual stack  

Job monitoring

Monitoring information from jobs, coming either from central services or from batch nodes via messaging systems, is collected, stored and accessed by a user.

User Monitoring server Messaging system Batch Notes
IPv4 dual stack dual stack IPv4  
IPv4 dual stack dual stack IPv6  
IPv4 dual stack dual stack dual stack  
dual stack dual stack dual stack IPv4  
dual stack dual stack dual stack IPv6  
dual stack dual stack dual stack dual stack  

IPv6 compliance of WLCG services

AliEN

ARC

ARGUS

BDII

  • Contact: Maria Alandes
  • Status: BDII has been IPv6-compliant since the EMI 2 release (OpenLDAP OK since v2).
    • Further info on the investigation here: https://savannah.cern.ch/bugs/index.php?95839
    • In order to enable the IPv6 interface, the yaim variable BDII_IPV6_SUPPORT needs to be set to 'yes' (the default is 'no'). This is all described in the sysadmin guide and the corresponding release notes.
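For illustration, the yaim setting described above would appear in the site configuration roughly like this (site-info.def is the usual yaim configuration file; treat the exact file location as an assumption of your installation):

```shell
# site-info.def (yaim site configuration) -- sketch only
# Enable the IPv6 interface of the BDII; the default is "no".
BDII_IPV6_SUPPORT=yes
```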

BestMAN

CASTOR

cfengine

CMS Tag Collector

cmsweb

CREAM CE

CVMFS

Dashboard Google Earth

dCache

DIRAC

DPM

EGI Accounting Portal

EOS

Experiment Dashboards

Frontier

FTS

  • Contact: Michail Salichos for both FTS2 and FTS3
  • FTS2 Status:
  • FTS3 status:
    • looks good for FTS3 and its dependencies (modulo the globus issue mentioned above)
    • with the exception of Active MQ-cpp - the messaging side will need some attention for IPv6 support.

Ganglia

GFAL/lcg_util

  • Contact:
    • GFAL2: Adrien Devresse
    • gfal/lcg_util: Alejandro Alvarez Ayllon
  • Status: gfal2 is plugin based, so it all depends on the plugin
    • HTTP: neon supports IPv6
    • SRM: gsoap supports IPv6
    • GridFTP: it is enabled - https://its.cern.ch/jira/browse/LCGUTIL-4
    • DCAP: unknown
    • LFC and RFIO: should work
    • BDII: OpenLDAP does support IPv6
    • Note on gsoap: this is used for WS in a number of cases. It supports IPv6, but this has to be enabled at compile time, so it could be missing in certain builds.
  • Note: gfal/lcg_util is probably OK but has not been tested; by default it would not be fixed if broken.

glideinWMS

GOCDB

Gratia Accounting

Gridsite

GridView

Gstat

iCMS

LFC

  • See details for DPM

MonALISA

MyOSG

MyProxy

MyWLCG

Nagios

OpenAFS

PanDA

perfSONAR

PhEDEx

REBUS

SAM

Scientific Linux

STD IB and QA pages

StoRM

Ticket system (GGUS)

various D web tools

VOMS

gLite WMS

xroot

DualStack Virtual Machines at CERN

In the CERN Agile Infrastructure it is possible to request a Virtual Machine and set it up as a dual-stack node. This allows procuring "hardware" for testing any kind of service on dual stack. To set up a dual-stack VM at CERN, please follow the instructions on the DualStackCERNVirtualMachine page.
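Whether a machine actually came up dual stack can be checked without any grid middleware. A minimal sketch (the function name `local_stacks` is ours for illustration):

```python
import socket

def local_stacks():
    """Return the list of IP stacks this node can bind a socket on.

    Binding to the loopback address of each family is a rough but
    middleware-free check that the stack is configured on the node.
    """
    stacks = []
    for family, loopback, label in ((socket.AF_INET, "127.0.0.1", "IPv4"),
                                    (socket.AF_INET6, "::1", "IPv6")):
        try:
            with socket.socket(family, socket.SOCK_STREAM) as s:
                s.bind((loopback, 0))  # port 0: let the OS pick a free port
            stacks.append(label)
        except OSError:
            pass  # this address family is not configured on the node
    return stacks

print(local_stacks())  # a dual-stack VM should report both families
```

This only verifies the local configuration; external dual-stack reachability still requires DNS AAAA records and routing, which the instructions page covers.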

IPv6 Site Survey

The results of the 2014 IPv6 Site Survey are reported in...

SAM migration to IPv6

This page details the steps to be accomplished to use SAM to test IPv6 endpoints.

-- AndreaSciaba - 15-Jul-2013

Topic attachments
  • IPv6_use_cases.txt (4.3 K, 2013-09-04, AndreaSciaba): Andrea's use cases
  • chart.png (14.7 K, 2019-10-11, AndreaSciaba)
  • chart2.png (14.0 K, 2019-10-11, AndreaSciaba)
  • chart3.png (18.4 K, 2019-10-11, AndreaSciaba)
  • chart4.png (12.6 K, 2019-10-11, AndreaSciaba)
  • chart5.png (19.2 K, 2019-10-11, AndreaSciaba)
Topic revision: r150 - 2019-10-11 - AndreaSciaba