Difference: SEmigration (1 vs. 10)

Revision 10 - 2013-02-26 - SandraSaornil

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Revision 9 - 2010-09-06 - PeterJones

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Revision 8 - 2008-09-17 - RobertoSantinel

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"
Contents

Migration of <SE> to a new technology

Proposal based on a discussion held on June 26:

 
  1. Define a DIRAC SE in the CS: e.g. <newSE> describing that new SE (e.g. RAL-tape_Castor)
  2. Add <newSE> in the SEs supported by the site
Line: 24 to 34
  -- PhilippeCharpentier - 08 Oct 2007

RAL Migration: dCache to CASTOR: Intervention Plan

 

A. AIM

The primary motivation for this intervention is the migration of RAL data to a new storage technology: from dCache to Castor. This intervention requires orchestrated action between the LFC and FTS admins, the RAL storage admins, and the LHCb Data Manager.

B. CO-ORDINATION

Marianne Bargiotti will coordinate the intervention. She will be responsible for ensuring its completeness, confirming everyone's availability, and executing the plan according to the schedule. People involved:

C. DESCRIPTION

SRM endpoint SAPATH StorageClass
ralsrmb.rl.ac.uk:8443/srm/managerv1 /castor.ads.rl.ac.uk/prod/lhcb T1D0
ralsrmc.rl.ac.uk:8443/srm/managerv1 /castor.ads.rl.ac.uk/prod/lhcb T0D1
 
  1. Installation of a new storage technology at RAL for all storage classes (pool of CASTOR endpoints holding different service classes): the endpoint is already up and running and roughly tested by LHCb. See https://www.gridpp.ac.uk/wiki/RAL_Tier1_CASTOR_SRM for further reference. We envisage a later migration to SRM v2.2 that will reflect the need for space tokens. We note that space tokens should NOT depend on the namespace, as agreed at the SRM FNAL workshop.
 
  1. RAL downtime (~weeks) and update of the DIRAC Configuration System for the definition of the new DIRAC SEs. (LHCb DM)
  1. Reconfiguration of the FTS service at all the involved T1 sites (CERN, SARA/NIKHEF, GridKa, IN2P3, CNAF, PIC, RAL) in the receiving channel. Time scale: after the announcement, right before the LFC substitution. (LHCb DM and site managers)
 
  1. Two operations in parallel:
    • RAL physical migration of data from dCache to Castor. Mechanism & tools to perform this need further discussion with RAL.
    • LFC update of the hostname string and SA Path for all the replicas in the central LFC catalog (LFC-support): the re-addressing of the existing replicas to the correct new SRM endpoint and SAPath will be done in collaboration with M. Bargiotti, who will provide the relevant data files, divided accordingly, to LFC support.
  2. Testing of the new endpoint for data transfer and data access through LHCb applications.
  3. Unbanning of RAL from DIRAC CS
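
The LFC update above amounts to rewriting every RAL replica SURL from the old dCache endpoint and SAPath to the new Castor values in the table of section C. A minimal sketch in Python: the old dCache endpoint and SAPath below are illustrative placeholders (the text does not give them), while the new endpoints and SAPath are the ones from the table.

```python
# Sketch of the LFC replica rewrite for the RAL dCache -> Castor migration.
# NEW_ENDPOINT/NEW_SAPATH come from the endpoint table in section C;
# OLD_ENDPOINT/OLD_SAPATH are hypothetical placeholders for the dCache setup.

NEW_ENDPOINT = {
    "T1D0": "ralsrmb.rl.ac.uk:8443/srm/managerv1",
    "T0D1": "ralsrmc.rl.ac.uk:8443/srm/managerv1",
}
NEW_SAPATH = "/castor.ads.rl.ac.uk/prod/lhcb"

OLD_ENDPOINT = "ralsrma.rl.ac.uk:8443/srm/managerv1"   # illustrative
OLD_SAPATH = "/pnfs/gridpp.rl.ac.uk/data/lhcb"         # illustrative

def migrate_surl(surl: str, storage_class: str) -> str:
    """Rewrite one replica SURL from the old dCache endpoint/SAPath to the
    new Castor endpoint/SAPath of the given storage class (T1D0 or T0D1)."""
    prefix = "srm://" + OLD_ENDPOINT + OLD_SAPATH
    if not surl.startswith(prefix):
        return surl  # not a RAL dCache replica; leave untouched
    rest = surl[len(prefix):]
    return "srm://" + NEW_ENDPOINT[storage_class] + NEW_SAPATH + rest
```

In the real intervention this substitution would be applied by LFC support over the file lists provided by the Data Manager, one list per storage class.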

D. PARAMETERS

 
  1. Type of intervention: long
  2. Duration: one week
Line: 60 to 70
 
  1. Announcement: Monday operations meeting on October 29th.
  2. Broadcast frequency: at the intervention start/end.

E. SERVICES INVOLVED

 
  1. FTS services at different T1 sites: communication report at the beginning and at the end of the intervention for the reconfiguration of the FTS services involved.
  2. DIRAC WMS
Line: 68 to 78
  -- Marianne.Bargiotti - 29 Oct 2007

Intervention plan for LHCb migration at PIC for Tape storage class (T1D0) and change of SRM endpoint for Disk storage (T0D1)

 

A. AIM

The primary motivation for this intervention is the migration of PIC tape data to a new storage technology: from Castor to Enstore. During this intervention a modification of the PIC disk SRM endpoint will also be applied. This intervention requires orchestrated action between the LFC and FTS admins, the PIC storage admins, and the LHCb Data Manager.

B. CO-ORDINATION

Marianne Bargiotti will coordinate the intervention. She will be responsible for ensuring its completeness, confirming everyone's availability, and executing the plan according to the schedule. People involved:

C. DESCRIPTION

 
  1. Installation of a new storage technology at PIC for tape storage class. Installation of new SRM endpoint for disk storage class.
  2. PIC downtime and both storage classes banned in the DIRAC Configuration System (CS). Update of the DIRAC CS for the definition of the new DIRAC SEs. (LHCb DM)
  1. Reconfiguration of the FTS service at all the involved T1 sites (CERN, SARA/NIKHEF, GridKA, IN2P3, CNAF, PIC, RAL) in the receiving channel. Time scale: after the announcement, right before the LFC substitution. (LHCb DM and site managers)
 
  1. Three operations in parallel:
    • PIC physical migration of data from Castor to dCache (PIC Admins)
    • LFC update of the hostname string for all the tape replicas in the central LFC catalog (LFC-support): from castorsrm.pic.es/castor/pic.es/grid to srm.pic.es/pnfs/pic.es/data/lhcb/tape.
  2. LFC update: substitution of the hostname string for PIC-disk: from srm-disk.pic.es to srm.pic.es
  3. Testing of the new endpoints for data transfer and data access through LHCb applications.
  4. Unbanning of PIC-tape and disk from DIRAC CS
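
The two LFC substitutions above (the tape hostname-plus-path rename, then the plain disk hostname change) can be sketched as one rewrite function. The mappings are the ones quoted in the text; the example SURL layout is illustrative.

```python
# Sketch of the LFC substitutions for the PIC intervention (section C).
# Mappings are taken from the text; SURL shapes are illustrative only.

TAPE_OLD = "castorsrm.pic.es/castor/pic.es/grid"
TAPE_NEW = "srm.pic.es/pnfs/pic.es/data/lhcb/tape"
DISK_OLD = "srm-disk.pic.es"
DISK_NEW = "srm.pic.es"

def migrate_pic_surl(surl: str) -> str:
    """Apply the tape rename first (the more specific match),
    otherwise the plain disk hostname substitution."""
    if TAPE_OLD in surl:
        return surl.replace(TAPE_OLD, TAPE_NEW, 1)
    if DISK_OLD in surl:
        return surl.replace(DISK_OLD, DISK_NEW, 1)
    return surl
```

Ordering matters here only in the sense that a tape SURL must never be treated as a disk one; checking the longer, more specific pattern first keeps the two substitutions independent.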

D. PARAMETERS

 
  1. Type of intervention: short
  2. Duration (estimate): two days
Line: 99 to 109
 
  1. Announcement: Monday operations meeting on XXXXX
  2. Broadcast frequency: at the intervention start/end.

E. SERVICES INVOLVED

 
  1. FTS services at different T1 sites: communication report at the beginning and at the end of the intervention for the reconfiguration of the FTS services involved.
  2. DIRAC WMS
  3. LHCb DM


Intervention plan for the migration of LHCb from SRM v1 to SRM v2

A. AIM

The primary motivation for this intervention is the definitive phase-out of SRMv1 endpoints from the scope of LHCb (DIRAC2). The urgency of this intervention is driven by several sites (mainly RAL and CERN) pushing to retire the SRMv1 endpoints still used by DIRAC2. During this intervention a modification of the old Stager in DIRAC2 is required in order to interact with the newest endpoints, which do not talk the same "slang" as the old SRMv1 endpoints. The main goal is to complete it by the end of September.

B. CO-ORDINATION

  People involved:
  • LHCb DIRAC2 modification: Philippe Charpentier, Stuart Paterson and Raja Nandakumar
  • LHCb-IT link: Joel Closier

C. DESCRIPTION

 
  1. Philippe: test what SRMv2 returns and collect all requests that must be forwarded to sites and changes to be rolled out in DIRAC. Test details reported at http://lblogbook.cern.ch/Operations/246
 
  1. From the analysis of these tests it follows that:
    1. Request RAL's default pool on srm-lhcb to be changed to lhcbRawRdst (or any other T1D0 pool) instead of "lhcbUser". Done
    2. Deploy the new build of DIRAC2 on all VOBOXes (Raja, done at IN2P3)
 
    1. In DIRAC3's CS, define the <site>-tape SEs with the SRM v2 endpoints and STD = LHCb_RDST; define <site>-disk SEs with the SRM v2 endpoints and STD = LHCb_MC_DST except at CERN (LHCb_MC_M-DST). This will immediately allow DIRAC3 to access DC06 data without too many disk2disk copies.
  1. Going faster with the "durable" endpoints by stopping any production of DSTs in DIRAC2
  2. Modification of the Stager Agent to support SRMv2 syntax to be put in production.
Line: 136 to 146
 
    1. ralsrmb.rl.ac.uk to srm-lhcb.gridpp.rl.ac.uk
  1. Rename SURLs in the LFC with the above changes

D. PARAMETERS

 
  1. Type of intervention: short
  2. Duration (estimate): two days
Line: 144 to 154
 
  1. Announcement: Monday operations meeting on ???
  2. Broadcast frequency: at the intervention start/end.

E. SERVICES INVOLVED

 
  1. LHCb DM and WMS
  1. LFC at CERN (master)
  2. LFC at various sites, stream-replicated
 

-- Marianne.Bargiotti - 12 Nov 2007


Revision 7 - 2008-09-08 - PhilippeCharpentier

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"
Added:
>
>

Contents

 

Migration of <SE> to a new technology

Proposal based on a discussion held on June 26:
 
  1. Define a DIRAC SE in the CS: e.g. <newSE> describing that new SE (e.g. RAL-tape_Castor)
  2. Add <newSE> in the SEs supported by the site
Line: 63 to 68
  -- Marianne.Bargiotti - 29 Oct 2007
Changed:
<
<

INTERVENTION PLAN ON LHCB MIGRATION AT PIC FOR TAPE STORAGE CLASS (T1D0) and CHANGE OF SRM ENDPOINT FOR DISK STORAGE CLASS (T0D1)

>
>

Intervention plan for LHCb migration at PIC for Tape storage class (T1D0) and change of SRM endpoint for Disk storage (T0D1)

 

A. AIM

The primary motivation for this intervention is the migration of PIC tape data to new storage technology: from Castor to Enstore. During this intervention a modification on PIC disk SRM endpoint will be applied too. This intervention does require an orchestrated action between LFC, FTS admins, PIC Storage admin and LHCb Data Manager.
Line: 100 to 105
 
  1. DIRAC WMS
  2. LHCb DM

Changed:
<
<

INTERVENTION PLAN ON DEFINITIVE LHCB MIGRATION FROM SRMv1 to SRMv2

>
>

Intervention plan for the migration of LHCb from SRM v1 to SRM v2

 

A. AIM

The primary motivation for this intervention is the definitive phase out of SRMv1 enpoints from the scope of LHCb (DIRAC2). The severity of this intervention is steered by several sites (mainly RAL and CERN) pushing for retiring the SRMv1 endpoints still used by DIRAC2. During this intervention a modification of the old Stager in DIRAC2 is required in order to be able to interact with newest endpoints that do not talk the same "slang" as the old SRMv1 endpoint. The main goal is to get it over by the end of September.

Revision 6 - 2008-09-08 - RobertoSantinel

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Migration of <SE> to a new technology

Line: 118 to 118
 
  1. On the analysis of these tests it results
    1. Request RAL's default pool on srm-lhcb to be changed to lhcbRawRdst (or any other T1D0 pool) instead of ."lhcbUser". Done
    2. Deploy the new build of DIRAC2 on all VOBOXes (Raja, done at IN2P3)
Changed:
<
<
    1. In DIRAC3's CS, define the <site>-tape SEs with the SRM v2 endpoints and STD= LHCb_RDST; define <site>-disk SEs with the SRM v2 endpoints and STD =LHCb_MC_DST except at CERN (LHCb_MC_M-DST). This will immediately allow DIRAC3 to access DC06 data without too many disk2disk copy.
  1. Modification of the Stager Agent to support SRMv2 syntax to be put in production.
>
>
    1. In DIRAC3's CS, define the <site>-tape SEs with the SRM v2 endpoints and STD= LHCb_RDST; define <site>-disk SEs with the SRM v2 endpoints and STD =LHCb_MC_DST except at CERN (LHCb_MC_M-DST). This will immediately allow DIRAC3 to access DC06 data without too many disk2disk copy.
  1. Going faster with the "durable" endpoints by stopping any production of DSTs in DIRAC2
  2. Modification of the Stager Agent to support SRMv2 syntax to be put in production.
 
  1. In DIRAC2's CS, change the endpoints of CERN, CNAF, PIC and RAL for the -tapeSE with the following changes:
    1. srm.cern.ch to srm-lhcb.cern.ch
    2. castorsrm.cr.cnaf.infn.it to srm-v2.cr.cnaf.infn.it

Revision 5 - 2008-09-08 - RobertoSantinel

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Migration of <SE> to a new technology

Proposal based on a discussion held on June 26:

  1. Define a DIRAC SE in the CS: e.g. <newSE> describing that new SE (e.g. RAL-tape_Castor)
  2. Add <newSE> in the SEs supported by the site
  3. Replicate O(100) files from <SE> to <newSE>
  4. Submit test jobs reading those files, making sure the replica that is chosen is that on the new SE. Is this feasible?
  5. Define <oldSE> in the CS with the same characteristics as <SE>, put it as a close SE to the site
  6. Define <SE> as an alias of <newSE> in the CS. From that moment all transfers (pending or new ones) will go to the new SE but with <newSE> as name
  7. Rename hostname from <SE> to <oldSE> in the LFC (at NIKHEF <oldSE> could be SARA-disk, at RAL it could be RAL-tape_dCache)
  8. Define <SE> as <newSE> (no longer an alias) and remove <newSE> from CS
  9. Rename in LFC hostname <newSE> of files replicated in 3 to <SE>
  10. Complete the replications from <oldSE> to <SE>
  11. Remove replicas (in SE and LFC) for <oldSE>
  12. Remove <oldSE> from the CS
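
The CS/LFC bookkeeping in steps 5-9 of the list above can be illustrated with plain dictionaries standing in for the DIRAC Configuration System and the LFC. This is a toy model of the state transitions only, not the real DIRAC or LFC APIs; the RAL names match the examples in the text.

```python
# Toy walk-through of steps 5-9: the CS alias switch and the LFC renames.
# cs maps DIRAC SE name -> endpoint; lfc maps LFN -> SE name of its replica.
cs = {"RAL-tape": "dcache-endpoint", "RAL-tape_Castor": "castor-endpoint"}
lfc = {"/lhcb/f1": "RAL-tape",          # ordinary replica still on dCache
       "/lhcb/f2": "RAL-tape_Castor"}   # test replica made in step 3

# Step 5: define <oldSE> with the same characteristics as <SE>
cs["RAL-tape_dCache"] = cs["RAL-tape"]
# Step 6: make <SE> an alias of <newSE> (new transfers go to Castor)
cs["RAL-tape"] = cs["RAL-tape_Castor"]
# Step 7: rename hostname <SE> -> <oldSE> in the LFC
lfc = {p: ("RAL-tape_dCache" if se == "RAL-tape" else se) for p, se in lfc.items()}
# Step 8: <SE> now IS the new SE; drop the temporary <newSE> entry
del cs["RAL-tape_Castor"]
# Step 9: rename <newSE> -> <SE> for the replicas made in step 3
lfc = {p: ("RAL-tape" if se == "RAL-tape_Castor" else se) for p, se in lfc.items()}
```

After these steps, replicas still to be copied are labelled RAL-tape_dCache and the completed ones RAL-tape, which is exactly the precondition for steps 10-12.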
  -- PhilippeCharpentier - 08 Oct 2007
Line: 25 to 25
 The primary motivation for this intervention is the migration of RAL data to new storage technology: from dCache to Castor. This intervention does require an orchestrated action between LFC, FTS admins, RAL Storage admin and LHCb Data Manager.

B. CO-ORDINATION

Marianne Bargiotti will coordinate the intervention. She'll be responsible for making sure about its completeness, about everyone availability, and for the execution of the plan according to the schedule. People involved:
 
Line: 36 to 35
 

C. DESCRIPTION

SRM endpoint SAPATH StorageClass
ralsrmb.rl.ac.uk:8443/srm/managerv1 /castor.ads.rl.ac.uk/prod/lhcb T1D0
ralsrmc.rl.ac.uk:8443/srm/managerv1 /castor.ads.rl.ac.uk/prod/lhcb T0D1
 
  1. Installation of a new storage technology at RAL for the all storage classes at RAL (pool of CASTOR endpoints holding different service classes): the endpoint is already up and running and roughly tested by LHCb. See https://www.gridpp.ac.uk/wiki/RAL_Tier1_CASTOR_SRM for further reference. We envisage a later migration to SRM v2.2 that will reflect the need for space tokens. We note that space tokens should NOT depend on the namespace, as agreed at the SRM FNAL workshop.
  2. RAL downtime (~weeks) and update of the DIRAC Configuration System for the definition of the new DIRAC SEs.(LHCb DM)
  1. Reconfiguration of the FTS service at all the involved T1 sites (CERN,SARA/NIKHEF, GridKa, IN2P3, CNAF, PIC, RAL) in the receiving channel. Time scale: after the announcement, right before the LFC substitution. (LHCb DM and site managers)
 
  1. Two operation in parallel:
    • RAL physical migration of data from dCache to Castor. Mechanism & tools to perform this need further discussion with RAL.
 
    • LFC update of hostname string and SA Path for all the replicas in the central LFC catalog (LFC-support): the addressing of the existing replicas to the correct new SRM endpoint and SAPath will be done in collaboration with M.Bargiotti, who will provide to LFC support the relevant data files divided accordingly.
  1. Testing of the new endpoint for data transfer and data access through LHCb applications.
  2. Unbanning of RAL from DIRAC CS
Line: 64 to 63
  -- Marianne.Bargiotti - 29 Oct 2007
Deleted:
<
<
 

INTERVENTION PLAN ON LHCB MIGRATION AT PIC FOR TAPE STORAGE CLASS (T1D0) and CHANGE OF SRM ENDPOINT FOR DISK STORAGE CLASS (T0D1)


A. AIM

The primary motivation for this intervention is the migration of PIC tape data to new storage technology: from Castor to Enstore. During this intervention a modification on PIC disk SRM endpoint will be applied too. This intervention does require an orchestrated action between LFC, FTS admins, PIC Storage admin and LHCb Data Manager.
 

B. CO-ORDINATION

Marianne Bargiotti will coordinate the intervention. She'll be responsible for making sure about its completeness, about everyone availability, and for the execution of the plan according to the schedule. People involved:
 

C. DESCRIPTION

 
  1. Installation of a new storage technology at PIC for tape storage class. Installation of new SRM endpoint for disk storage class.
  2. PIC downtime and both storage classes banned in the DIRAC Configuration System (CS) . Update of the DIRAC CS for the definition of the new DIRAC SEs.(LHCb DM).
  3. Reconfiguration of the FTS service at all the involved T1 sites (CERN,SARA/NIKHEF, GridKA, IN2P3, CNAF, PIC, RAL) in the receiving channel. Time scale: after the announcement, right before the LFC substitution. (LHCb DM and site managers)
  1. Three operation in parallel: * PIC physical migration of data from Castor to dCache (PIC Admins) * LFC update of hostname string for all the tape replicas in the central LFC catalog (LFC-support): from castorsrm.pic.es/castor/pic.es/grid to srm.pic.es/pnfs/pic.es/data/lhcb/tape.
 
  1. LFC update: substitution of the hostname string for PIC-disk: from srm-disk.pic.es to srm.pic.es
  2. Testing of the new endpoints for data transfer and data access through LHCb applications.
  3. Unbanning of PIC-tape and disk from DIRAC CS

D. PARAMETERS

 
  1. Type of intervention: short
  2. Duration (estimate): two days
Line: 100 to 94
 
  1. Announcement: monday operation meeting on XXXXX
  2. Broadcast frequency: at the intervention start/end.

E. SERVICES INVOLVED

 
  1. FTS services at different T1 sites: communication report at the beginning and at the end of the intervention for the reconfiguration of the FTS services involved.
  2. DIRAC WMS
  3. LHCb DM
Added:
>
>

INTERVENTION PLAN ON DEFINITIVE LHCB MIGRATION FROM SRMv1 to SRMv2

A. AIM

The primary motivation for this intervention is the definitive phase out of SRMv1 enpoints from the scope of LHCb (DIRAC2). The severity of this intervention is steered by several sites (mainly RAL and CERN) pushing for retiring the SRMv1 endpoints still used by DIRAC2. During this intervention a modification of the old Stager in DIRAC2 is required in order to be able to interact with newest endpoints that do not talk the same "slang" as the old SRMv1 endpoint. The main goal is to get it over by the end of September.

B. CO-ORDINATION

People involved:

C. DESCRIPTION

  1. Philippe: tests what SRMv2 returns and collect all request that must be forwarded to sites and changes to be rolled out in DIRAC. Tests details reported at http://lblogbook.cern.ch/Operations/246
  2. On the analysis of these tests it results
    1. Request RAL's default pool on srm-lhcb to be changed to lhcbRawRdst (or any other T1D0 pool) instead of ."lhcbUser". Done
    2. Deploy the new build of DIRAC2 on all VOBOXes (Raja, done at IN2P3)
    3. In DIRAC3's CS, define the <site>-tape SEs with the SRM v2 endpoints and STD= LHCb_RDST; define <site>-disk SEs with the SRM v2 endpoints and STD =LHCb_MC_DST except at CERN (LHCb_MC_M-DST). This will immediately allow DIRAC3 to access DC06 data without too many disk2disk copy.
  3. Modification of the Stager Agent to support SRMv2 syntax to be put in production.
  4. In DIRAC2's CS, change the endpoints of CERN, CNAF, PIC and RAL for the -tapeSE with the following changes:
    1. srm.cern.ch to srm-lhcb.cern.ch
    2. castorsrm.cr.cnaf.infn.it to srm-v2.cr.cnaf.infn.it
    3. srm.pic.es to srmlhcb.pic.es
    4. ralsrmb.rl.ac.uk to srm-lhcb.gridpp.rl.ac.uk
  5. Rename SURLs in the LFC with the above changes
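
Steps 4 and 5 together amount to a host-for-host substitution over the catalogue entries. A sketch of that mapping, assuming SURLs of the simple form srm://<host>/<path> (ports and web-service paths are omitted here for brevity; the four hostname pairs are the ones listed above):

```python
# The four -tapeSE endpoint changes of step 4 as a lookup table,
# plus the SURL rename of step 5. Hostnames come from the text;
# the example SURLs and the srm://<host>/<path> shape are illustrative.

ENDPOINT_MAP = {
    "srm.cern.ch": "srm-lhcb.cern.ch",
    "castorsrm.cr.cnaf.infn.it": "srm-v2.cr.cnaf.infn.it",
    "srm.pic.es": "srmlhcb.pic.es",
    "ralsrmb.rl.ac.uk": "srm-lhcb.gridpp.rl.ac.uk",
}

def rename_surl(surl: str) -> str:
    """Replace the host part of an srm:// SURL according to ENDPOINT_MAP;
    SURLs for unlisted hosts are returned unchanged."""
    prefix = "srm://"
    if not surl.startswith(prefix):
        return surl
    host, sep, path = surl[len(prefix):].partition("/")
    return prefix + ENDPOINT_MAP.get(host, host) + sep + path
```

Because only the hostname changes (the namespace path is untouched), the same table drives both the DIRAC2 CS update and the LFC rename.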

D. PARAMETERS

  1. Type of intervention: short
  2. Duration (estimate): two days
  3. Intervention starting date (before October, not yet fully decided)
  4. Announcement: monday operation meeting on ???
  5. Broadcast frequency: at the intervention start/end.

E. SERVICES INVOLVED

  1. LHCb DM and WMS
  2. LFC

  -- Marianne.Bargiotti - 12 Nov 2007

Revision 4 - 2007-11-12 - MarianneBargiotti

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Migration of <SE> to a new technology

Line: 63 to 63
 
  1. LHCb DM

-- Marianne.Bargiotti - 29 Oct 2007

Added:
>
>

INTERVENTION PLAN ON LHCB MIGRATION AT PIC FOR TAPE STORAGE CLASS (T1D0) and CHANGE OF SRM ENDPOINT FOR DISK STORAGE CLASS (T0D1)

A. AIM

The primary motivation for this intervention is the migration of PIC tape data to new storage technology: from Castor to Enstore. During this intervention a modification on PIC disk SRM endpoint will be applied too. This intervention does require an orchestrated action between LFC, FTS admins, PIC Storage admin and LHCb Data Manager.

B. CO-ORDINATION

Marianne Bargiotti will coordinate the intervention. She'll be responsible for making sure about its completeness, about everyone availability, and for the execution of the plan according to the schedule. People involved:

C. DESCRIPTION

  1. Installation of a new storage technology at PIC for tape storage class. Installation of new SRM endpoint for disk storage class.
  2. PIC downtime and both storage classes banned in the DIRAC Configuration System (CS) . Update of the DIRAC CS for the definition of the new DIRAC SEs.(LHCb DM).
  3. Reconfiguration of the FTS service at all the involved T1 sites (CERN,SARA/NIKHEF, GridKA, IN2P3, CNAF, PIC, RAL) in the receiving channel. Time scale: after the announcement, right before the LFC substitution. (LHCb DM and site managers)
  4. Three operation in parallel: * PIC physical migration of data from Castor to dCache (PIC Admins) * LFC update of hostname string for all the tape replicas in the central LFC catalog (LFC-support): from castorsrm.pic.es/castor/pic.es/grid to srm.pic.es/pnfs/pic.es/data/lhcb/tape.
  5. LFC update: substitution of the hostname string for PIC-disk: from srm-disk.pic.es to srm.pic.es
  6. Testing of the new endpoints for data transfer and data access through LHCb applications.
  7. Unbanning of PIC-tape and disk from DIRAC CS

D. PARAMETERS

  1. Type of intervention: short
  2. Duration (estimate): two days
  3. Intervention starting date: PIC migration from Castor to dCache starting on XXXX,
  4. Announcement: monday operation meeting on XXXXX
  5. Broadcast frequency: at the intervention start/end.

E. SERVICES INVOLVED

  1. FTS services at different T1 sites: communication report at the beginning and at the end of the intervention for the reconfiguration of the FTS services involved.
  2. DIRAC WMS
  3. LHCb DM

-- Marianne.Bargiotti - 12 Nov 2007

Revision 3 - 2007-10-31 - NickBrook

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Migration of <SE> to a new technology

Line: 35 to 35
 

C. DESCRIPTION

Added:
>
>
SRM endpoint SAPATH StorageClass
ralsrmb.rl.ac.uk:8443/srm/managerv1 /castor.ads.rl.ac.uk/prod/lhcb T1D0
ralsrmc.rl.ac.uk:8443/srm/managerv1 /castor.ads.rl.ac.uk/prod/lhcb T0D1
 
  1. Installation of a new storage technology at RAL for the all storage classes at RAL (pool of CASTOR endpoints holding different service classes): the endpoint is already up and running and roughly tested by LHCb. See https://www.gridpp.ac.uk/wiki/RAL_Tier1_CASTOR_SRM for further reference. We envisage a later migration to SRM v2.2 that will reflect the need for space tokens. We note that space tokens should NOT depend on the namespace, as agreed at the SRM FNAL workshop.
  2. RAL downtime (~weeks) and update of the DIRAC Configuration System for the definition of the new DIRAC SEs.(LHCb DM)
Changed:
<
<
  1. Reconfiguration of the FTS service at all the involved T1 sites (CERN,SARA/NIKHEF, GRIDKA, Lyon, CNAF, PIC, RAL) in the receiving channel. Time scale: after the announcement, right before the LFC substitution. (LHCb DM and site managers)
>
>
  1. Reconfiguration of the FTS service at all the involved T1 sites (CERN,SARA/NIKHEF, GridKa, IN2P3, CNAF, PIC, RAL) in the receiving channel. Time scale: after the announcement, right before the LFC substitution. (LHCb DM and site managers)
 
  1. Two operation in parallel:
    • RAL physical migration of data from dCache to Castor. Mechanism & tools to perform this need further discussion with RAL.
    • LFC update of hostname string and SA Path for all the replicas in the central LFC catalog (LFC-support): the addressing of the existing replicas to the correct new SRM endpoint and SAPath will be done in collaboration with M.Bargiotti, who will provide to LFC support the relevant data files divided accordingly.
Line: 57 to 61
 
  1. FTS services at different T1 sites: communication report at the beginning and at the end of the intervention for the reconfiguration of the FTS services involved.
  2. DIRAC WMS
  3. LHCb DM
Added:
>
>
-- Marianne.Bargiotti - 29 Oct 2007

Revision 2 - 2007-10-30 - NickBrook

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Migration of <SE> to a new technology

Line: 17 to 17
 
  1. Remove replicas (in SE and LFC) for <oldSE>
  2. Remove <oldSE> from the CS
Added:
>
>
-- PhilippeCharpentier - 08 Oct 2007
 
Added:
>
>

RAL Migration: dCache to CASTOR: Intervention Plan

 
Added:
>
>

A. AIM

The primary motivation for this intervention is the migration of RAL data to new storage technology: from dCache to Castor. This intervention does require an orchestrated action between LFC, FTS admins, RAL Storage admin and LHCb Data Manager.
 
Changed:
<
<
-- PhilippeCharpentier - 08 Oct 2007
>
>

B. CO-ORDINATION

Marianne Bargiotti will coordinate the intervention. She'll be responsible for making sure about its completeness, about everyone availability, and for the execution of the plan according to the schedule. People involved:

C. DESCRIPTION

  1. Installation of a new storage technology at RAL for the all storage classes at RAL (pool of CASTOR endpoints holding different service classes): the endpoint is already up and running and roughly tested by LHCb. See https://www.gridpp.ac.uk/wiki/RAL_Tier1_CASTOR_SRM for further reference. We envisage a later migration to SRM v2.2 that will reflect the need for space tokens. We note that space tokens should NOT depend on the namespace, as agreed at the SRM FNAL workshop.
  2. RAL downtime (~weeks) and update of the DIRAC Configuration System for the definition of the new DIRAC SEs.(LHCb DM)
  3. Reconfiguration of the FTS service at all the involved T1 sites (CERN,SARA/NIKHEF, GRIDKA, Lyon, CNAF, PIC, RAL) in the receiving channel. Time scale: after the announcement, right before the LFC substitution. (LHCb DM and site managers)
  4. Two operation in parallel:
    • RAL physical migration of data from dCache to Castor. Mechanism & tools to perform this need further discussion with RAL.
    • LFC update of hostname string and SA Path for all the replicas in the central LFC catalog (LFC-support): the addressing of the existing replicas to the correct new SRM endpoint and SAPath will be done in collaboration with M.Bargiotti, who will provide to LFC support the relevant data files divided accordingly.
  5. Testing of the new endpoint for data transfer and data access through LHCb applications.
  6. Unbanning of RAL from DIRAC CS

D. PARAMETERS

  1. Type of intervention: long
  2. Duration: one week
  3. Intervention starting date: RAL migration from dCache to Castor starting on ????,
  4. Announcement: monday operation meeting on october 29th.
  5. Broadcast frequency: at the intervention start/end.

E. SERVICES INVOLVED

  1. FTS services at different T1 sites: communication report at the beginning and at the end of the intervention for the reconfiguration of the FTS services involved.
  2. DIRAC WMS
  3. LHCb DM

Revision 1 - 2007-10-08 - PhilippeCharpentier

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="LHCbComputing"

Migration of <SE> to a new technology

Proposal based on a discussion held on June 26:

  1. Define a DIRAC SE in the CS: e.g. <newSE> describing that new SE (e.g. RAL-tape_Castor)
  2. Add <newSE> in the SEs supported by the site
  3. Replicate O(100) files from <SE> to <newSE>
  4. Submit test jobs reading those files, making sure the replica that is chosen is that on the new SE. Is this feasible?
  5. Define <oldSE> in the CS with the same characteristics as <SE>, put it as a close SE to the site
  6. Define <SE> as an alias of <newSE> in the CS. From that moment all transfers (pending or new ones) will go to the new SE but with <newSE> as name
  7. Rename hostname from <SE> to <oldSE> in the LFC (at NIKHEF <oldSE> could be SARA-disk, at RAL it could be RAL-tape_dCache)
  8. Define <SE> as <newSE> (no longer an alias) and remove <newSE> from CS
  9. Rename in LFC hostname <newSE> of files replicated in 3 to <SE>
  10. Complete the replications from <oldSE> to <SE>
  11. Remove replicas (in SE and LFC) for <oldSE>
  12. Remove <oldSE> from the CS

-- PhilippeCharpentier - 08 Oct 2007

 
This site is powered by the TWiki collaboration platform. Copyright © 2008-2019 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.