IPv6 Task Force
Mandate and goals
The imminent exhaustion of the IPv4 address space will eventually require migrating the WLCG services to an IPv6 infrastructure, on a timeline heavily dependent on the needs of individual sites. For this reason the HEPiX IPv6 Working Group was created in April 2011 with this mandate.
The WLCG Operations Coordination and Commissioning Team has created an IPv6 Task Force to work in close collaboration with the HEPiX IPv6 WG on the following aspects (listed in chronological order):
- Define realistic IPv6 deployment scenarios for experiments and sites
- Maintain a complete list of clients, experiment services and middleware used by the LHC experiments and WLCG
- Identify contacts for each of the above and form a team of people to run tests
- Define readiness criteria and coordinate testing according to the most relevant use cases
- Recommend viable deployment scenarios
perfSONAR dashboard links
OSG IPv6 activities
They are detailed here.
WLCG Tier-2 IPv6 deployment status (2017-2018) [last checked on 04-05-2023]
- chart.png
- chart2.png
- chart3.png (checked on 04-05-2023)
- chart4.png
- chart5.png
| Site | Region | ALICE | ATLAS | CMS | LHCb | Status | perfSONAR | Storage | Ticket | Details |
| BelGrid-UCL | NL | | | Y | | Done | IPv4 | Tested | GGUS:132100 | No ETA for pS |
| FMPhI-UNIBA | SK | Y | Y | | | Done | Dual stack | Tested | GGUS:132102 | |
| GoeGrid | DE | | Y | | | Done | IPv4? | Dual stack | GGUS:131952 | |
| IN2P3-CC-T2 | FRANCE | | Y | Y | | Done | Dual stack | Dual stack | GGUS:131781 | Services shared with Tier-1 |
| IN2P3-CPPM | FRANCE | | Y | | Y | Done | Dual stack | Testing | GGUS:131782 | |
| IN2P3-IRES | FRANCE | Y | | Y | | Done | Dual stack | Tested | GGUS:131783 | |
| IN2P3-LAPP | FRANCE | | Y | | Y | Done | Dual stack | Testing | GGUS:131784 | Deployment completed, waiting for checks from ATLAS |
| IN2P3-LPC | FRANCE | Y | Y | | Y | Done | Dual stack | Tested | GGUS:131785 | No answer from ATLAS |
| IN2P3-LPSC | FRANCE | Y | Y | | | Done | Dual stack | Tested? | GGUS:131786 | |
| IN2P3-SUBATECH | FRANCE | Y | | | | Done | Dual stack | Tested | GGUS:131787 | pS and EOS now dual stack, verified by ALICE. Ready to close the ticket |
| SiGNET | SI | | Y | | | Done | Dual stack | Testing | GGUS:132115 | Deployment completed, waiting for the ATLAS confirmation |
| AGLT2 | USATLAS | | Y | | | Done | | | | |
| Australia-ATLAS | AsiaPacific | | Y | | | On hold | IPv4 | IPv4 | GGUS:132472 | The site is in pure break-fix mode, no ETA |
| BEgrid-ULB-VUB | NL | | | Y | | Done | Dual stack | Dual stack | GGUS:132099 | |
| BEIJING-LCG2 | CHINA | | Y | Y | | Done | Dual stack | Testing | GGUS:132266 | Everything dual stack, waiting for experiment tests |
| BUDAPEST | HU | Y | | Y | | Done | Dual stack | Tested | GGUS:131946 | |
| CA-MCGILL-CLUMEQ-T2 | Canada | | Y | | | Done | IPv4 | IPv4 | GGUS:132267 | Site to be decommissioned and replaced by CA-WATERLOO-T2 in March |
| CA-SCINET-T2 | Canada | | Y | | | Done | IPv4 | IPv4 | GGUS:132268 | Site to be decommissioned in June 2018, so no need to deploy IPv6. A new site, CA-WATERLOO-T2, will be commissioned in March |
| CA-VICTORIA-WESTGRID-T2 | Canada | | Y | | | In progress | IPv4 | IPv4 | GGUS:132269 | Will first upgrade dCache and then configure IPv6 |
| CA-WATERLOO-T2 | Canada | | Y | | | On hold | IPv4 | IPv4 | GGUS:137950 | The site lost its network specialist and will have no equipment before 2024, so no ETA |
| CBPF | LA | A | | | Y | Done | Dual stack | Dual stack | GGUS:132473 | |
| CIEMAT-LCG2 | IBERGRID | | | Y | | Done | Dual stack | Tested | GGUS:131947 | |
| CSCS-LCG2 | CH | | Y | Y | Y | Done | Testing | Tested | GGUS:131948 | |
| CYFRONET-LCG2 | PL | Y | Y | | Y | Done | NA | Dual stack | GGUS:131949 | The site will decommission its ALICE SE |
| DESY-HH | DE | | Y | Y | Y | Done | Dual stack | Tested | GGUS:131950 | |
| DESY-ZN | DE | | Y | | Y | Done | IPv6 | Dual stack | GGUS:131951 | |
| EELA-UTFSM | LA | | Y | | | Done | Dual stack | Tested | GGUS:132474 | |
| FI_HIP_T2 | NDGF | | | Y | | Done | Dual stack | Tested | GGUS:132101 | |
| GR-07-UOI-HEPLAB | GRNET | | | Y | | Done | Dual stack | Tested | GGUS:132103 | |
| GR-12-TEIKAV | GRNET | | Y | | | Done (site suspended) | NA | IPv4 | GGUS:132104 | Still waiting for IPv6 from the university; no exact ETA, but it should be within a few months |
| GRIF_IPNO | FRANCE | Y | Y | Y | Y | Done | Dual stack | Tested | GGUS:131778 | No perfSONAR, as it shares the perfSONAR of GRIF_LAL |
| GRIF_IRFU | FRANCE | Y | Y | Y | Y | Done | Dual stack | Tested | GGUS:131778 | |
| GRIF_LAL | FRANCE | Y | Y | Y | Y | Done | Dual stack | Dual stack | GGUS:131778 | |
| GRIF_LLR | FRANCE | Y | Y | Y | Y | Done | Dual stack | Tested | GGUS:131778 | |
| GRIF_LPNHE | FRANCE | Y | Y | Y | Y | Done | Dual stack | Dual stack | GGUS:131778 | |
| HEPHY-UIBK | IT | | Y | | | Done | NA | NA | GGUS:131779 | The site does not have any storage accessible from the outside |
| Hephy-Vienna | IT | Y | | Y | | Done | Dual stack | Dual stack | GGUS:131780 | |
| ICM | PL | | | Y | Y | Done | NA | Tested | GGUS:131953 | |
| ICN-UNAM (suspended) | LA | Y | | | | On hold | NA | IPv4 | GGUS:132475 | Working on upgrading the site; when done, IPv6 will be fully supported |
| IEPSAS-Kosice | SK | Y | Y | | | Done | Dual stack | Tested | GGUS:132105 | |
| ifae | IBERGRID | | Y | | | Done | Dual stack | Tested | GGUS:131954 | Embedded in PIC Tier-1 |
| IFCA-LCG2 | IBERGRID | | | Y | | Done | Dual stack | Tested | GGUS:131955 | |
| IFIC-LCG2 | IBERGRID | | Y | | | Done | Dual stack | Dual stack | GGUS:131956 | |
| IL-TAU-HEP | IL | | Y | | Y | Done | NA | Testing | GGUS:132106 | |
| IN-DAE-VECC-02 | AsiaPacific | Y | | | | Done | NA | Tested | GGUS:132476 | |
| INDIACMS-TIFR | AsiaPacific | | | Y | | Done | Dual stack (local) | Tested | GGUS:132477 | Storage tests passed, but FTS transfers started to fail; had to roll back IPv6 |
| INFN-Bari | IT | Y | Y | Y | Y | Done | Dual stack | Tested | GGUS:131788 | |
| INFN-CATANIA | IT | Y | | | | Done | Testing | Tested | GGUS:131789 | |
| INFN-FRASCATI | IT | | Y | | | Done | Dual stack | Tested? | GGUS:131790 | |
| INFN-LNL-2 | IT | Y | | Y | Y | Done | Dual stack | Tested | GGUS:131791 | |
| INFN-MILANO-ATLASC | IT | | Y | | | Done | Dual stack | Tested | GGUS:131792 | |
| INFN-NAPOLI-ATLAS | IT | | Y | | Y | Done | Dual stack | Dual stack | GGUS:131793 | |
| INFN-PISA | IT | | | Y | Y | Done | IPv4 | Tested | GGUS:136471 | |
| INFN-ROMA1 | IT | | Y | | | Done | Dual stack | Tested? | GGUS:131795 | |
| INFN-ROMA1-CMS | IT | | | Y | | Done | Dual stack | Tested | GGUS:131796 | |
| INFN-TORINO | IT | Y | | | Y | In progress | IPv4 | IPv4 | GGUS:131797 | IPv6 deployment in the pipeline; an ETA will be provided soon |
| ITEP | Russia | Y | Y | Y | Y | Done | Dual stack | Tested | GGUS:132270 | |
| JINR-LCG2 | Russia | Y | Y | Y | Y | Done | Dual stack | Tested | GGUS:132271 | |
| Kharkov-KIPT-LCG2 | UA | | | Y | | Done | Dual stack | Tested | GGUS:132272 | |
| LRZ-LMU | DE | | Y | | | Done | NA | Tested | GGUS:131957 | No perfSONAR for security reasons |
| MPPMU | DE | | Y | | | Done | NA | Dual stack | GGUS:131958 | |
| MWT2 | USATLAS | | Y | | | Done | | | | |
| NCBJ | PL | | | | Y | In progress | ? | Dual stack | GGUS:138521 | Sorting out peering/routing issues; otherwise it should work |
| NCG-INGRID-PT | IBERGRID | | Y | Y | | Done | NA | Tested | GGUS:131959 | |
| NCP-LCG2 | AsiaPacific | Y | | Y | | Done | Dual stack | Tested | GGUS:132480 | |
| NET2 | USATLAS | | Y | | | Done | | | | |
| NIHAM | RO | Y | | | | Done | IPv4 | Tested | GGUS:132107 | |
| praguelcg2 | CZ | Y | Y | | | Done | Dual stack | Testing | GGUS:131960 | Asked ATLAS and ALICE to check |
| PSNC | PL | Y | Y | | Y | Done | NA | Tested | GGUS:131961 | Ticket kept open until the VObox is dual stack, but deployment completed from the WLCG point of view |
| RO-02-NIPNE | RO | | Y | | | Done | IPv4 | Tested? | GGUS:132108 | Site suspended in GOCDB |
| RO-03-UPB | RO | Y | | | | Done | Dual stack | Tested | | |
| RO-07-NIPNE | RO | Y | Y | | Y | Done | Dual stack | Tested | GGUS:132109 | |
| RO-11-NIPNE | RO | | | | Y | Done | IPv4 | NA | GGUS:132110 | pS is now tracked by a different ticket |
| RO-13-ISS | RO | Y | | | | Done | NA | Tested | GGUS:132111 | |
| RO-14-ITIM | RO | | Y | | | Done | Dual stack | Tested? | GGUS:132112 | Working on pS issues |
| RO-16-UAIC | RO | | Y | | | Done | Dual stack | NA | GGUS:132113 | |
| RRC-KI | Russia | Y | Y | | Y | Done (site suspended) | IPv4 | IPv4 | GGUS:132273 | |
| ru-PNPI | Russia | Y | Y | Y | Y | Done | Dual stack | Dual stack | GGUS:132274 | IPv6 deployed and ticket closed before the experiments could be asked to check |
| RU-Protvino-IHEP | Russia | Y | Y | Y | Y | Done | Dual stack | Tested? | GGUS:132275 | pS issues to be dealt with in another ticket; storage tested OK by CMS and LHCb |
| RU-SPbSU | Russia | Y | | | Y | Done | IPv4 | Tested | | |
| Ru-Troitsk-INR-LCG2 | Russia | Y | | Y | Y | Done | NA | Tested | GGUS:132277 | |
| RWTH-Aachen | DE | | | Y | | Done | Dual stack | Tested | GGUS:131962 | |
| SAMPA | LA | Y | | | Y | Done | Dual stack | Tested | GGUS:132484 | |
| SE-SNIC-T2 | NDGF | Y | Y | | | Done | NA | Dual stack | GGUS:132114 | |
| SUPERCOMPUTO-UNAM (suspended) | LA | Y | | | | Done | NA | Testing | GGUS:132485 | Will deploy a new xrootd storage element, to have the latest xrootd version, to which data from the old storage will be migrated. NOTE: site is unresponsive |
| SWT2_OU | USATLAS | | Y | | | Done | | | | |
| SWT2_UTA | USATLAS | | Y | | | Done | | | | |
| T2_BR_SPRACE | USCMS | | | Y | | Done | | | | |
| T2_BR_UERJ | USCMS | | | Y | | Done | | | | |
| T2_Estonia | NDGF | | | Y | | Done | Dual stack | Tested | GGUS:132116 | |
| T2_US_Caltech | USCMS | | | Y | | Done | | | | |
| T2_US_Florida | USCMS | | | Y | | Done | | | | |
| T2_US_MIT | USCMS | | | Y | | In progress | | | GGUS:156428 | Connecting the xrootd servers to IPv6 |
| T2_US_Nebraska | USCMS | | | Y | | Done | | | | |
| T2_US_Purdue | USCMS | | | Y | | Done | | | | |
| T2_US_UCSD | USCMS | | | Y | | Done | | | | |
| T2_US_Vanderbilt | USCMS | | | Y | | Done | | | | |
| T2_US_Wisconsin | USCMS | | | Y | | Done | | | | |
| T2-TH-SUT | AsiaPacific | Y | | | | Done | Dual stack | Tested | GGUS:132486 | |
| TECHNION-HEP | IL | | Y | | Y | Done | Dual stack | Testing | GGUS:132117 | |
| TOKYO-LCG2 | AsiaPacific | | Y | | | Done | Dual stack | Tested | GGUS:132481 | |
| TR-03-METU | TR | | | Y | | Done | Testing | Tested | GGUS:132278 | |
| TR-10-ULAKBIM | TR | | Y | | | Done | NA | Testing | GGUS:132279 | To be checked |
| TW-FTT | AsiaPacific | | Y | | | Done | Dual stack | Dual stack | GGUS:132482 | |
| UA-BITP | UA | Y | | | | Done | NA | Tested | GGUS:132280 | |
| UA-ISMA | UA | Y | | | | Done | NA | Tested | GGUS:132281 | |
| UA-KNU | UA | Y | | | | Done | NA | Tested | GGUS:132282 | SE not used by ALICE |
| UAM-LCG2 | IBERGRID | | Y | | | Done | Dual stack | Tested | GGUS:131963 | |
| UKI-GridPP-Cloud-IC | UK | | | | Y | Done | NA | NA | GGUS:131599 | The site is an extension of UKI-LT2-IC-HEP, has no pS or storage, and all services are IPv6-enabled |
| UKI-LT2-Brunel | UK | | Y | Y | Y | Done | NA | Tested | GGUS:131600 | Dual stack on all services for years; pS not deployed by choice of the site |
| UKI-LT2-IC-HEP | UK | | Y | Y | Y | Done | Dual stack | Tested | GGUS:131601 | |
| UKI-LT2-QMUL | UK | | Y | Y | Y | Done | Dual stack | Tested | GGUS:131602 | |
| UKI-LT2-RHUL | UK | | Y | Y | Y | On hold | Dual stack | IPv4 | GGUS:131603 | Work on hold indefinitely due to difficulties related to COVID-19 and home working |
| UKI-LT2-UCL-HEP | UK | | Y | | | Done | Dual stack | NA | GGUS:131604 | |
| UKI-NORTHGRID-LANCS-HEP | UK | | Y | | Y | Done | Dual stack | Tested | GGUS:131605 | |
| UKI-NORTHGRID-LIV-HEP | UK | | Y | | Y | In progress | Dual stack | IPv4 | GGUS:131606 | Deployment completed but waiting for the ATLAS green light |
| UKI-NORTHGRID-MAN-HEP | UK | | Y | | Y | Done | Dual stack | Dual stack | GGUS:131607 | Note: pS is currently off because it needs to be upgraded to the latest version |
| UKI-NORTHGRID-SHEF-HEP | UK | | Y | | Y | Done | Dual stack | NA | GGUS:131608 | |
| UKI-SCOTGRID-DURHAM | UK | | Y | | Y | Done | Dual stack | Dual stack | GGUS:131609 | |
| UKI-SCOTGRID-ECDF | UK | | Y | | Y | Done | Dual stack | Dual stack (partial) | GGUS:131610 | ECDF storage is dual stack; ECDF-RDF will never be dual stack |
| UKI-SCOTGRID-GLASGOW | UK | | Y | Y | Y | In progress | Dual stack | IPv4 | GGUS:131611 | The campus-wide network upgrade is proceeding, but it is too early for IPv6 provisioning |
| UKI-SOUTHGRID-BHAM-HEP | UK | Y | Y | | Y | In progress | Dual stack | IPv4 | GGUS:131612 | IPv6 being deployed on site; perfSONAR seems OK, next step is storage |
| UKI-SOUTHGRID-BRIS-HEP | UK | | Y | Y | Y | Done | Dual stack | Tested | GGUS:131613 | |
| UKI-SOUTHGRID-CAM-HEP | UK | | Y | | Y | Done | Dual stack | Tested | GGUS:131614 | No answer from ATLAS, assumed OK |
| UKI-SOUTHGRID-OX-HEP | UK | Y | Y | Y | Y | Done | Dual stack | NA | GGUS:131615 | |
| UKI-SOUTHGRID-RALPP | UK | | Y | Y | Y | Done | Dual stack | Tested | GGUS:131616 | |
| UKI-SOUTHGRID-SUSX | UK | | Y | | | Done | Dual stack | Testing | GGUS:131617 | Deployment completed, to be checked by ATLAS |
| UNI-FREIBURG | DE | | Y | | | In progress | Dual stack | IPv4 | GGUS:131964 | Already deploying IPv6; now working on the topology plan and configuring the perfSONAR nodes |
| UNIBE-LHEP | CH | | Y | | | Done | NA | IPv4 | GGUS:131965 | |
| USC-LCG2 | IBERGRID | | | | Y | Done | Dual stack | Dual stack | GGUS:131966 | |
| WEIZMANN-LCG2 | IL | | Y | | Y | Done | NA | IPv4 | GGUS:132118 | |
| wuppertalprod | DE | | Y | | | Done | ? | Dual stack | GGUS:131967 | |
| ZA-CHPC | AfricaArabia | Y | Y | | | Done | NA | Dual stack (partial) | GGUS:132283 | ALICE OK; ATLAS storage to be completely overhauled and it will then be deployed with IPv6 support |
Legend:
- Status: No reply, On hold, In progress, Done
- perfSONAR: NA (not available at the site), IPv4 (only), Dual stack
- Storage: NA (not available at the site), IPv4 (only), Dual stack (not yet tested), Testing, Tested
Notes:
- Tickets are submitted progressively over time, so not all sites are listed yet.
Some experiments track the IPv6 readiness status independently:
Experiment-specific checks
ATLAS
For ATLAS, before migrating your storage to IPv6, please send an email for information to atlas-adc-ddm-support at cern.ch and atlas-adc-dpa at cern.ch.
ATLAS set up an ETF IPv6-only testing node to check the behaviour of the sites. In general:
- check the FTS monitoring with the IPv6 filter to make sure transfers are succeeding
- check PanDA and HammerCloud to make sure there are no changes in failure rates due to the IPv6 migration
Reports
Report 14/09/2017
An IPv6 support unit has been created in GGUS; some experts from the HEPiX IPv6 working group have volunteered to be members of it.
A WLCG broadcast will be sent very soon with this content:
The WLCG management and the LHC experiments approved several months ago (+) a deployment plan for IPv6 (++), which requires that:
- all Tier-1 sites provide dual-stack access to their storage resources by April 1st 2018
- all Stratum-1 and FTS instances for WLCG need to be dual-stack by April 1st 2018
- the vast majority of Tier-2 sites provide dual-stack access to their storage resources by the end of Run2 (end of 2018).
All WLCG sites are therefore invited to plan accordingly in case they have not yet met these requirements. Individual tickets will be sent in the coming weeks to Tier-2 sites (Tier-1 sites are already tracked separately) to track their progress.
Various support channels are available.
Interested sites may also join the HEPiX IPv6 working group (https://hepix-ipv6.web.cern.ch/), which provides some documentation.
(+)
https://espace.cern.ch/WLCG-document-repository/Boards/MB/Minutes/2016/MB-Minutes-160920-v1.pdf
(++)
https://indico.cern.ch/event/467577/contributions/1976037/attachments/1340008/2017561/Kelsey20sep16.pdf
Report 29/09/2016
See slides.
Report 02/06/2016
Next week's pre-GDB is devoted to IPv6, as well as a two-hour slot in the GDB. The main topics to be discussed are:
- Experiment requirements
- Status of support to IPv6-only CPUs
- Experience on dual-stack services
- Monitoring and IPv6
- Security and IPv6
- Status of WLCG tiers and LHCOPN/LHCONE
Report 05/11/2015
- Deploying an instance of ETF (new implementation of Nagios for SAM) to test the nodes in the IPv6 testbed
Report 17/09/2015
Update on the status of IPv6 deployment in WLCG (from Bruno Hoeft)
| Tier-1 site | LHCOPN IPv6 peering | LHCONE IPv6 peering | perfSONAR via IPv6 |
| ASGC | - | - | - |
| BNL | not on their priority list | | |
| CH-CERN | yes | yes | LHC[OPN/ONE] |
| DE-KIT | yes | yes | LHC[OPN/ONE] |
| FNAL | yes | yes | LHC[OPN/ONE] but not yet visible in Dashboard |
| FR-CCIN2P3 | yes | yes | LHC[OPN/ONE] but not yet visible in Dashboard |
| IT-INFN-CNAF | - | yes | LHCONE |
| NDGF | yes | yes | LHC[OPN/ONE] |
| ES-PIC | yes | yes | LHCOPN |
| KISTI | started but no peering implemented | | |
| NL-T1 | no peering implemented | | |
| TRIUMF | IPv6 peering planned at end of 2015 | | |
| RRC-KI-T1 | - | - | - |
| Tier-2 site | LHCONE IPv6 peering | perfSONAR |
| DESY | yes | LHCONE |
| CEA SACLAY | yes | - |
| ARNES | yes | - |
| WISC-MADISON | yes | - |
| UK sites | QMUL peers with LHCONE but not for IPv6 | |
| Prague FZU | IPv6 still working but the previous contact person left | |
There are additional IPv6 perfSONAR servers at Tier-2 centres, but not via LHCONE.
Report 07/05/2015
- LHCb: DIRAC was made IPv6-compatible back in November, but testing only started in April: a DIRAC installation on a dual-stack machine is running at CERN. It was successfully verified that it can be contacted from both IPv6 and IPv4 nodes and can run jobs submitted from LXPLUS. However, 50% of client connections failed, which was hidden by automatic retries; this was traced to a CERN Python library returning a wrong IPv6 address.
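The DIRAC failure mode above (half of the connections failing, masked by retries) is typical of clients that resolve a dual-stack host and then blindly use a single, possibly broken, address. A minimal defensive sketch (not DIRAC's actual code) is to iterate over every address `getaddrinfo` returns, whatever the family, before giving up:

```python
import socket

def connect_any(host, port, timeout=5.0):
    """Try every resolved address (IPv6 and IPv4) in order until one
    connects, instead of failing on the first unusable family."""
    last_err = None
    for info in socket.getaddrinfo(host, port, socket.AF_UNSPEC,
                                   socket.SOCK_STREAM):
        try:
            # info[4] is the sockaddr; its first two fields are (host, port)
            return socket.create_connection(info[4][:2], timeout=timeout)
        except OSError as err:
            last_err = err
    raise last_err if last_err else OSError("no addresses for %s" % host)
```

A library that picks only the first returned address, or returns a malformed IPv6 literal, produces exactly the intermittent failures described in the report.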
Report 02/04/2015
- FTS3 testbed operational, with servers at KIT and Imperial College both working fine
- The following sites activated IPv6:
- LHCOPN: CERN, KIT, NDGF, PIC, NL-T1, IN2P3-CC, HIP
- LHCONE: CERN, CEA Saclay, IN2P3 -CC, IJS (NDGF site)
- OSG is testing (among other middleware) glideinWMS. The central manager, frontend and schedd machines have to be dual stack and can talk to IPv4, IPv6 and dual-stack startd's. glideinWMS must tell wget that it prefers IPv6 (details)
- OSG confirmed that Bestman2 is IPv6-compliant, but srmcp is not (it has not been patched for the extensions needed for IPv6)
- squid 2 is not IPv6-compliant, while squid 3 is. OSG is still using squid 2
- Duncan's dual-stack mesh includes several dual-stack perfSONAR instances (~14 sites included) (link)
Task Overview
| Task | Deadline | Progress | Affected VOs | Affected sites | Comment |
| WLCG applications readiness | | 60% | All | All | Maintain software component readiness information in this table |
| User scenarios | | 100% | All | All | Define the relevant user scenarios to be tested by the experiments |
| Experiment tests | | ATLAS, CMS started | All | All | Have the experiments test their main workload/data management tools and central services over IPv6 |
Scenarios
We can classify the actors in these categories:
- Users: end users (human or robotic) using a client interface to interact with services
- Jobs: user processes running on a batch node
- Site services: services present at all sites (CE, SE, BDII, CVMFS, ARGUS, etc.)
- Central services: services present at only a few sites (VOMS, MyProxy, Frontier, Nagios, etc.)
The following table describes the requirements of the corresponding nodes in terms of IP protocol, on a timescale of a few years from now.
| Node | Network | Requirement |
| User | IPv4 | MUST work, as users can connect from anywhere |
| User | IPv6 | SHOULD work, but it would concern only very few users working from IPv6-only networks |
| User | dual stack | MUST work, as it should be the most common case in a few years |
| Batch | IPv4 | MUST work, as some batch systems might not work on IPv6, or e.g. the site might want to use AFS internally |
| Batch | IPv6 | MUST work, as some sites might exceed their IPv4 allocation otherwise |
| Batch | dual stack | MUST work, as some sites might want to use legacy software but also be fully IPv6-ready (e.g. CERN) |
| Site service | IPv4 | MUST work, as many institutes will not adopt IPv6 for some years and backward compatibility is required |
| Site service | IPv6 | SHOULD work, as it will become necessary once there are IPv6-only sites |
| Site service | dual stack | MUST work, as it should be the most common case in a few years |
| Central service | IPv4 | MAY work, as central services can be expected to run at sites with an IPv6 infrastructure |
| Central service | IPv6 | MAY work, as the above sites certainly have an IPv4 infrastructure |
| Central service | dual stack | MUST work, as all the above sites are expected to be able to provide dual-stack nodes |
Existing WLCG sites may have only IPv4 and will not be forced by WLCG to deploy IPv6 to continue working. This is obviously true for resources that WLCG cannot control (opportunistic, clouds, etc.).
On the other hand, WLCG should allow new sites to deploy only IPv6 in a scenario where IPv4 addresses cannot be obtained.
Therefore, a realistic scenario is such that some sites will be accessible only via IPv4, some only via IPv6 and some via both protocols. Similarly, users may have to work from nodes supporting only IPv4, only IPv6 or both.
An additional constraint comes from storage federations: sites using only one protocol will not be able to read data from sites using only the other protocol. Therefore, sites wishing to participate in a storage federation will need to deploy their SEs in dual stack once sites with IPv6-only WNs become a reality.
In such a scenario, central services are obviously required to work in dual stack, using both protocols, and to be hosted at eligible sites.
All middleware used at a site must work via both protocols, to accommodate IPv4-only and IPv6-only sites. A site is recommended to deploy the services it exposes to the outside in dual stack, but it is not a requirement (except in the storage federation case).
To summarise, these are the testing scenarios to be considered:
- central services MUST be deployed on dual stack nodes and tested using both protocols
- site services MUST be deployed on dual stack nodes and tested using both protocols (which guarantees they work in IPv4/6 mode)
- user clients and libraries MUST be deployed on dual stack nodes and tested using both protocols (which guarantees they work in IPv4/6 mode)
- batch nodes MUST be deployed on IPv4, IPv6 or dual-stack nodes (though not all three configurations might be possible at a given site)
From now on, all services are assumed to run on dual-stack nodes. Moreover, when testing on a dual-stack testbed, tests need to be run forcing either IPv4 or IPv6 on the client node.
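Forcing one protocol from the client node usually comes down to restricting name resolution to a single address family. A minimal sketch of the idea with the standard `socket` module (service endpoints in real tests would be the dual-stack testbed hosts):

```python
import socket

def resolve(host, family):
    """Return the addresses of host for one family only
    (socket.AF_INET forces IPv4, socket.AF_INET6 forces IPv6)."""
    try:
        infos = socket.getaddrinfo(host, None, family, socket.SOCK_STREAM)
    except socket.gaierror:
        # No address of the requested family: the forced-protocol test fails here
        return []
    return sorted({info[4][0] for info in infos})
```

A dual-stack service host should return a non-empty list for both families; an empty list for one family means that leg of the test matrix cannot be exercised against it.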
Use cases to test
Basic job submission
The user submits a job using the native middleware clients (CREAM client, Condor-G, etc.) or intermediate services (gLite WMS, glideinWMS, PanDA, DIRAC, AliEN, etc.).
| User | CE | Batch | Notes |
| IPv4 | dual stack | IPv4 | |
| IPv4 | dual stack | dual stack | |
| IPv4 | dual stack | IPv6 | |
| dual stack | dual stack | IPv4 | also forcing IPv6 on the user node |
| dual stack | dual stack | dual stack | also forcing IPv6 on the user node |
| dual stack | dual stack | IPv6 | also forcing IPv6 on the user node |
All "auxiliary" services (ARGUS, VOMS, MyProxy, etc.) are supposed to work on dual stack, but may run on IPv4 initially for practical purposes, to avoid requiring a full dual-stack service stack right from the beginning.
This remark is fully general and applies to all tests described below.
In the case of intermediate services, the tests become much more complex, given the higher number of services involved.
Basic data transfer
The user copies a file from their node to an SE and back.
| User | SE | Notes |
| IPv4 | dual stack | |
| dual stack | dual stack | also forcing IPv6 on the user node |
In this context, a batch node reading/writing to a local or remote SE is treated as a user node. The file copy MUST be tried with all protocols supported by the SE.
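A quick preliminary check before running the actual copies is to verify that each protocol door of the SE is reachable over each IP family. A sketch with hypothetical endpoints (the host name and port numbers below are placeholders; a real site would use its own SE host and its published SRM/xrootd/HTTP ports):

```python
import socket

def check_se(host, ports, timeout=5.0):
    """Report, per protocol port and per IP family, whether a plain
    TCP connection to the SE succeeds."""
    results = {}
    for port in ports:
        for family, label in ((socket.AF_INET, "IPv4"),
                              (socket.AF_INET6, "IPv6")):
            ok = False
            try:
                for info in socket.getaddrinfo(host, port, family,
                                               socket.SOCK_STREAM):
                    try:
                        with socket.create_connection(info[4][:2],
                                                      timeout=timeout):
                            ok = True
                            break
                    except OSError:
                        pass
            except socket.gaierror:
                pass  # no address of this family published in DNS
            results[(port, label)] = ok
    return results

# Example (hypothetical): check_se("se.example.org", [8446, 1094, 443])
```

This only proves TCP reachability; the file copy itself still has to be tried with every transfer protocol the SE supports, as stated above.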
Third party data transfer
The user replicates a bunch of files between sites via FTS-3.
Production data transfer
The user replicates a dataset using experiment-level tools (PhEDEx, DDM, DIRAC, etc.).
Conditions data
A job accesses conditions data from a batch node via Frontier/squid.
Experiment software
A job accesses experiment software in CVMFS from a batch node.
Experiment workflow
A user runs a real workflow (event generation, simulation, reprocessing, analysis).
This test combines all previous tests into one.
Information system
A user queries the information system.
Job monitoring
Monitoring information from jobs, coming either from central services or from batch nodes via messaging systems, is collected, stored and accessed by a user.
| User | Monitoring server | Messaging system | Batch | Notes |
| IPv4 | dual stack | dual stack | IPv4 | |
| IPv4 | dual stack | dual stack | IPv6 | |
| IPv4 | dual stack | dual stack | dual stack | |
| dual stack | dual stack | dual stack | IPv4 | |
| dual stack | dual stack | dual stack | IPv6 | |
| dual stack | dual stack | dual stack | dual stack | |
IPv6 compliance of WLCG services
AliEN
ARC
ARGUS
BDII
- Contact: Maria Alandes
- Status: BDII has been IPv6-compliant since the EMI 2 release (OpenLDAP OK since v2)
- Further info on the investigation: https://savannah.cern.ch/bugs/index.php?95839
- To enable the IPv6 interface, the YAIM variable BDII_IPV6_SUPPORT needs to be set to 'yes' (the default is 'no'). This is all described in the sysadmin guide and the corresponding release notes.
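As a sketch of the configuration step above, the YAIM switch is a single line in the site configuration; the query shown in the comment is a hypothetical check against a made-up IPv6 address, using the standard bracketed-literal LDAP URL form:

```shell
# site-info.def fragment: enable the BDII IPv6 interface (default is 'no')
BDII_IPV6_SUPPORT=yes

# After rerunning YAIM, the BDII can be queried over IPv6, e.g. (hypothetical host):
# ldapsearch -x -LLL -H "ldap://[2001:db8::10]:2170" -b o=grid
```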
CASTOR
cfengine
CMS Tag Collector
cmsweb
CVMFS
Dashboard Google Earth
dCache
DIRAC
DPM
- Contact: Fabrizio Furano
- Status:
- SRM and RFIO need a configuration workaround
- Glasgow are evaluating DPM on IPv6 as part of their HEPiX involvement (Sam Skipsey). Quite a lot is working; details should come from Sam.
- Dependencies:
- MySQL 5.5 (the minimum version with IPv6 support) is not available for SL6; DPM has been successfully deployed with MariaDB (IPv6-compliant), which works out of the box
- xrootd frontend: awaiting v4
- apache >= 2.2 (used for the HTTP/DAV interface) supports IPv6
EGI Accounting Portal
EOS
Experiment Dashboards
Frontier
FTS
- Contact: Michail Salichos for both FTS2 and FTS3
- FTS2 Status:
- FTS3 status:
- looks good for FTS3 and its dependencies (modulo the globus issue mentioned above)
- with the exception of ActiveMQ-CPP: the messaging side will need some attention for IPv6 support
Ganglia
GFAL/lcg_util
- Contact:
- GFAL2: Adrien Devresse
- gfal/lcg_util: Alejandro Alvarez Ayllon
- Status: GFAL2 is plugin-based, so it all depends on the plugin:
- HTTP: neon supports IPv6
- SRM: gsoap supports IPv6
- GridFTP: it is enabled - https://its.cern.ch/jira/browse/LCGUTIL-4
- DCAP: unknown
- LFC and RFIO: should work
- BDII: OpenLDAP does support IPv6
- Note on gsoap: it is used for web services in a number of cases. It supports IPv6, but this has to be enabled at compile time, so in certain builds it could be missing.
- Note: gfal/lcg_util is probably OK but has not been tested; by default it would not be fixed if broken.
glideinWMS
GOCDB
Gratia Accounting
Gridsite
Gstat
iCMS
LFC
Nagios
perfSONAR
REBUS
SAM
Scientific Linux
STD IB and QA pages
Ticket system (GGUS)
various D web tools
gLite WMS
xroot
DualStack Virtual Machines at CERN
In the CERN Agile Infrastructure it is possible to request a Virtual Machine and set it up as a dual-stack node. This allows procuring "hardware" for testing any kind of service on dual stack. To set up a dual-stack VM at CERN, please follow the instructions on the DualStackCERNVirtualMachine page.
IPv6 Site Survey
The results of the 2014 IPv6 Site Survey are reported in...
SAM migration to IPv6
This page details the steps to be accomplished to use SAM to test IPv6 endpoints.
--
AndreaSciaba - 15-Jul-2013