xCastor2 - 2nd generation of an xrootd interface to Castor2

Introduction

The 1st generation xrootd interface aimed to support the use cases of ALICE and didn't provide multi user/service class/stager support. Therefore it was not usable as a generic read/write interface for all experiments. The goal of the 2nd generation implementation was to add the missing features and to simplify the setup and deployment model (PEER/MANAGER setups). Moreover, the new interface fully integrates the Castor stager and nameserver, including authorization, secure authentication (krb5 & GSI/VOMS) and scheduling policies.

How does it work?

xCastor2.gif

A client connects and authenticates to a manager node. The manager authorizes the client access using the Castor nameserver. Meta data operations are executed from the manager towards the nameserver. Open operations are redirected with a signed token and the physical filename to a selected disk server. The manager applies policies to each request to define the location of a file, either via a cache or via Castor services. For every open, the disk server verifies the manager signature for the requested file and allows or denies access to the physical file. Disk servers are passive and don't subscribe to manager nodes.
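
For illustration, the redirection is transparent to the client: a plain copy against the load-balanced manager alias (the CASTOR path below is just a placeholder) is served by whichever disk server the manager selects, and the '-d' flag described in the client usage section shows the redirection sequence:

xrdcp root://x2castor.cern.ch//castor/cern.ch/..... /tmp/mycopy -d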

Status of implementation

The new interface implementation was done during the last 2 weeks of August 2008 and basic functional tests have been successful so far. With the help of Dennis, smarter solutions have been found for some of the problematic areas to simplify the communication between xrootd and Castor services. This included a few changes in the xrootd protocol implementation inside Castor.

Diskserver

Every disk server runs a generic xrootd configured with the XrdxCastor2Ofs filesystem plugin. There is no stager- or service-class-specific configuration on a disk server; the configuration is identical for all disk servers. The Ofs plugin is configured to use unix uid/host authentication on the disk server. The only supported commands are:

  • open
  • read/write
  • close

Every open has to provide, as part of the URL query information, a signed token which is issued by the generic xCastor2 manager nodes. The token is only valid for a configurable lifetime, for the uid/host pair which was used to retrieve it from the xCastor2 manager, and for one physical filename. All other filesystem commands (rmdir/mkdir/opendir/readdir/rm/sync.....) have an empty implementation. There is a special implementation of the 'stat' call:

  • stat

The stat call is used by the xCastor2 managers to verify that a physical filename is still on a disk server. The call only succeeds if the requested physical file is on one of the file systems listed in /etc/castor/status and the disk server and the filesystem are in production status. In special configurations this allows the scheduling of read operations to be bypassed while still honouring the draining state of disk servers.

The deployment on all disk servers requires the installation of the base package, the xcastor2fs plugin, the service scripts and a public key RPM for token verification. Only the xrootd service has to be started (no olbd or cmsd). The generic configuration file '/etc/xrd.cf' is part of the xcastor2fs plugin and no further NCM component is needed.

###########################################################
# load the xCastor2 OFS plugin as the filesystem library (async I/O disabled)
xrootd.fslib /opt/xrootd/lib/libXrdxCastor2Ofs.so
xrootd.async off
###########################################################
# security framework with plain unix uid/host authentication
xrootd.seclib /opt/xrootd/lib/libXrdSec.so
sec.protocol /opt/xrootd/lib unix
###########################################################
# exported namespace and trace settings
all.export / nolock
all.trace none
ofs.trace none
###########################################################
# listen on the standard xrootd port and enable token authorization
# via the XrdxCastor2ServerAcc library
xrd.port 1094
ofs.authlib /opt/xrootd/lib/libXrdxCastor2ServerAcc.so
ofs.authorize
###########################################################
# filesystem base path; require signed capabilities verified with the managers' public key
xcastor2.fs /
xcastor2.capability true
xcastor2.publickey /opt/xrootd/keys/pkey.pem
xcastor2.ratelimiter true
###########################################################
# proc/statistics directory and the manager accounts allowed to query it
xcastor2.proc /var/log/xroot/server/
xcastor2.procuser root@lxbra0301
xcastor2.procuser root@lxbra0302
###########################################################
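
On a disk server it is then sufficient to start the xrootd service via the init script (no olbd or cmsd); a minimal sketch, assuming the xrd service script described later in this page:

service xrd start
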
xCastor2 Manager


The xCastor2 manager node(s) interface to the Castor2 services. In the 1st generation interface this functionality was located on the peer node and on the disk servers. An xCastor2 manager is no longer associated with a specific pool, stager or service class; it can serve all existing Castor instances from a single node. The manager runs only the xrootd service with the XrdxCastor2Fs Ofs plugin library. The functionality is best explained along the lines of the manager configuration file.

Authentication
The manager supports unix, krb5, GSI & VOMS authentication. Unix authentication is currently disabled. CERN certificates can be mapped automatically to AFS user names. External DNs & VOMS roles are mapped 'manually' via map files. The system supports secondary groups for all authentication schemes. These groups and their membership have to be defined only on xCastor2 manager nodes, either as real physical groups in /etc/group or virtually in the xCastor2 mapping & configuration files. The directives for authentication in the manager configuration file are shown here:

############################################################################
xcastor2.mapcerncertificates true
xcastor2.gridmapfile /etc/grid-security/grid-mapfile
xcastor2.vomsmapfile /etc/grid-security/voms-mapfile
# virtual roles
xcastor2.role apeters :root:

############################################################################
xrootd.seclib /opt/xrootd/lib/libXrdSec.so

# UNIX authentication
sec.protocol /opt/xrootd/lib/ unix

# SSL authentication
sec.protocol /opt/xrootd/lib ssl -d:10

# KRB  authentication
sec.protocol /opt/xrootd/lib krb5 host/<host>@CERN.CH
# you can tie the authentication methods to cern host reg expr.
sec.protbind * only krb5 ssl
# f.e.
sec.protbind lxfsrc* only krb5
############################################################################
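
For illustration, a grid-mapfile entry maps a certificate DN to a local account; the DN and account name below are hypothetical:

"/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=jdoe/CN=123456/CN=John Doe" jdoe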

Namespace Mapping
The configuration file allows namespace remappings to be defined. In the standard case we define '/' -> '/', e.g. '/castor/cern.ch' -> '/castor/cern.ch':

############################################################################
xcastor2.nsmap / /

Standard xrootd configuration parameters

############################################################################
# the Ofs plugin
xrootd.fslib /opt/xrootd/lib/libXrdxCastor2Fs.so
xrd.async off
xrd.sched mint 16 maxt 1024 idle 128
all.export /
all.role manager
# currently we have two xCastor2 manager nodes, which are loadbalanced as 'x2castor.cern.ch'
all.manager lxbra0301.cern.ch 2131
all.manager lxbra0302.cern.ch 2131
# the file descriptor limits
oss.fdlimit 16384 32768
############################################################################

Stager Policy Interface
The interface supports policies for each stager-serviceclass pair.
The following scheduling policies can be defined:

  • schedall
    • read and write operations are always scheduled via Castor. This allows the xrootd protocol to be operated in the same manner as rfio is used today (e.g. for CDR).
  • schedwrite
    • only write operations are scheduled. Read operations obtain the best disk server via an unscheduled 'prepare2get' call to the stager.

There are further policies, which can be combined with one of the scheduling policies:

  • nohsm
    • if a client opens a file which is currently not staged, the open returns without transparently staging the file. Explicit stage requests are allowed.
  • ronly
    • the policy doesn't allow any writes
  • nostage
    • no staging is allowed, i.e. explicit stage requests are blocked
  • cache
    • file locations on disk servers can be cached by manager nodes. The most recent location of every written file is stored in the manager cache. Every read which issued a prepare2get to locate the disk server/pfn for a specific lfn is also cached. The location cache is used only for reads. If a file has been seen in the location cache, the manager issues a stat to the disk server to verify that the file is still on the node and that the node/filesystem is not in drain status. If the stat fails, the read triggers the normal unscheduled or scheduled read through Castor (prepare2get or get). Currently the cache is implemented in the local filesystem using symbolic links. If needed it could also be provided by the cluster management service daemon.
############################################################################
# note: syntax is <stagerhost>::<serviceclass> schedall|schedread|schedwrite
# [,nohsm][,cache]
xcastor2.stagerpolicy castorcms::default schedall
xcastor2.stagerpolicy castoralice::default schedwrite,nohsm
xcastor2.stagerpolicy castorcert1::xrootd schedwrite,nohsm
xcastor2.stagerpolicy castorcert2::xrootd schedwrite,nohsm,cache
# directory to cache locations
xcastor2.locationcache /pool
############################################################################

Default Stager Map
While rfcp and libshift honour the client settings of STAGE_HOST and STAGE_SVCCLASS, xrdcp and libXrdClient don't. Therefore it is useful to specify some defaults in case the
client didn't explicitly set the stage host & service class to use. This has been implemented in the following way (the listed mapping is not final):

############################################################################
# note: the directories have to be terminated with a trailing '/' 
#       the value can be <stagehost> or <stagehost>::<serviceclass>
xcastor2.stagermap /castor/cern.ch/alice/        castoralice
xcastor2.stagermap /castor/cern.ch/atlas/        castoratlas
xcastor2.stagermap /castor/cern.ch/cms/          castorcms
xcastor2.stagermap /castor/cern.ch/lhcb/         castorlhcb
xcastor2.stagermap /castor/cern.ch/dev/          castorcert2::xrootd
xcastor2.stagermap default                       castorpublic

The mapping is done from namespace directories to stager/service class pairs.
The mapping interface now also allows fine-grained assignment of service classes/stagers, and lists of stagers to try, by path, identity & path+identity mappings.
Moreover, policies can now be defined for each stager/serviceclass/identity combination. Examples:

############################################################################
#everything under atlas goes to t3 stager
xcastor2.stagermap /castor/cern.ch/atlas/        cernt3::default

#everything under atlas for group 1337 tries first the t3 stager, then the t0 stager
xcastor2.stagermap /castor/cern.ch/atlas/::gid:1337 cernt3::default,castoratlas::atlasuser

############################################################################
# ALL people cannot stage in t3
xcastor2.stagerpolicy cernt3::default schedwrite,cache,nohsm,nostage

# BUT group 1337 can stage
xcastor2.stagerpolicy cernt3::default::gid:1337 schedwrite,cache,nohsm

# group 1337 can read from the t0 pool, but only via scheduling
xcastor2.stagerpolicy castoratlas::atlasuser::gid:1337 schedall,nohsm,nostage,ronly 
############################################################################

There is also the possibility to use wildcards in the stagermap entries. A wildcard in the service class means that all user-specified service classes are also accepted; the same holds for the stager host. The pass-all mapping is therefore:

xcastor2.stagermap / *::*


The rfcp functionality can be reintroduced by defining an 'x2cp' function which reads the environment variables and rewrites

x2cp <local-file> /castor/cern.ch

as:

xrdcp <local-file> root://x2castor.cern.ch//castor/cern.ch/..... -ODstageHost=<stagehost>&svcClass=<svcClass>.
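
A minimal shell sketch of such a wrapper is shown below, assuming the load-balanced alias x2castor.cern.ch and the STAGE_HOST/STAGE_SVCCLASS environment variables used by rfcp; the function is illustrative and not necessarily identical to the shipped x2cp utility:

x2cp() {
  # $1 = local file, $2 = absolute /castor/... path
  # append the stager host & service class from the rfcp environment as opaque information
  if [ -n "$STAGE_HOST" ]; then
    xrdcp "$1" "root://x2castor.cern.ch/$2" -OD"stageHost=${STAGE_HOST}&svcClass=${STAGE_SVCCLASS}"
  else
    xrdcp "$1" "root://x2castor.cern.ch/$2"
  fi
}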

For ROOT this could be negotiated so that TXNetFile automatically appends the query tags for '/castor/' URLs in one of the next releases. Otherwise the default mapping on the server side will probably cover the standard cases.

There is also the possibility to introduce fixed mappings for certain uids or gids:

xcastor2.stagermap uid:12345 castoralice
xcastor2.stagermap gid:1338  castorcms::xrootd

Notice that you have to use the numeric IDs, not the user name or group name.
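
The numeric IDs can be looked up with the standard tools, for example:

# numeric uid of a user
id -u <username>
# numeric gid of a group (third field)
getent group <groupname>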

Mapping & Policy Rule Matching

Mapping rules have the previously introduced syntax:

xcastor2.stagermap #key# #value# 


#key# can be

  • #path#
  • uid:#uid#
  • gid:#gid#
  • default
  • #path#::uid:#uid#
  • #path#::gid:#gid#

#value# can be a single entry or a comma-separated list of

  • #stager# [ assumes ::default ]
  • #stager#::#svcclass#
  • #stager#::*

The matching of map rules uses the following algorithm:

  • find the rule which matches the deepest path ( key = #path# )
  • if there is a mapping by uid, it overwrites the previously matched rule ( key = uid:#uid# )
  • if there is a mapping by gid, it overwrites the previously matched rule ( key = gid:#gid# )
  • the deepest mapping defined by path+uid overwrites the previously matched rule ( key = #path#::uid:#uid# )
  • the deepest mapping defined by path+gid overwrites the previously matched rule ( key = #path#::gid:#gid# )
  • if there was no match before, the 'default' mapping rule is used ( path set to 'default' ) ( key = default )

Only the last selected mapping is used to match the client's stager/svcClass request. If the user request is not allowed by this rule, no other rule applies.
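
As a worked illustration, reusing the example rules from above (the stagers and the group id are only examples):

xcastor2.stagermap /castor/cern.ch/atlas/            cernt3::default
xcastor2.stagermap /castor/cern.ch/atlas/::gid:1337  cernt3::default,castoratlas::atlasuser
xcastor2.stagermap default                           castorpublic
# a request below /castor/cern.ch/atlas/ from a client in group 1337 first matches the path
# rule, which is then overwritten by the deeper path+gid rule
#   -> the stager list cernt3::default,castoratlas::atlasuser is tried
# the same request from a client in another group keeps the plain path rule -> cernt3::default
# a request outside /castor/cern.ch/atlas/ matches no path rule and falls back to 'default' -> castorpublic
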
Policy rules have the previously introduced syntax:

xcastor2.stagerpolicy #key# #value# 

#key# can be:

  • #stager#::#svcclass#
  • #stager#::#svcclass#::uid:#uid#
  • #stager#::#svcclass#::gid:#gid#
  • #stager#::*

#value# has been described beforehand.

The matching of policy rules uses the following algorithm (a worked illustration follows the list):

  • try to find a user specific policy ( #stager#::#svcclass#::uid:#uid# )
  • if not found try to find a group specific policy (#stager#::#svcclass#::gid:#gid# )
  • if not found try to find the exact policy (#stager#::#svcclass#)
  • if not found try to find a wildcard policy ( #stager#::* )
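
As a worked illustration, reusing the example policies from above:

xcastor2.stagerpolicy cernt3::default            schedwrite,cache,nohsm,nostage
xcastor2.stagerpolicy cernt3::default::gid:1337  schedwrite,cache,nohsm
# a client in group 1337 accessing cernt3::default matches the gid-specific policy first
#   -> staging is allowed
# any other client matches the exact policy cernt3::default, where 'nostage' blocks staging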



Other xCastor2 plugin parameters

############################################################################
# specifies the xrootd port on disk servers
xcastor2.targetport 1094
# issues token/capabilities to do authorization on the disk servers
xcastor2.capability true
# location of the private key for capabilities (signature private key)
xcastor2.privatekey /opt/xrootd/keys/key.pem
# location of the private key for a specific stager
xcastor2.privatekey /opt/xrootd/keys/key-stager1.pem castoralice
xcastor2.privatekey /opt/xrootd/keys/key-stager2.pem castorcms
# validity of issued tokens - synchronize the clocks!
xcastor2.tokenlocktime 10
# location of the proc interface to see access statistics
xcastor2.proc /var/log/xroot/manager1094
############################################################################

Multi Service Configuration and current Manager deployment
While most experiments can be served by the same manager node, ALICE requires a special authorization plugin on the manager to honour the ALICE file catalog security envelope and to map all clients to the 'aliprod' account. The service scripts have been modified to allow several instances of xrootd to be started with individual configuration files. The service syntax is: service xrd [start|stop|restart ...] <port>. If no port is specified, '/etc/xrd.cf' is used as config file and /var/log/xroot/manager as log directory. If e.g. port 1094 is specified, '/etc/xrd.1094.cf' is used and /var/log/xrootd/manager1094/ as log directory.

  • manager hosts
    • lxbra0301.cern.ch
    • lxbra0302.cern.ch
    • x2castor.cern.ch

The multi service configuration can be used e.g. to allow all authentication types on port 1094, only krb5 on port 1095 and ALICE token authorization on port 1096.

Castor2 Modifications and Status Changes
  • Modification of /etc/castor/status
    • the status file has been modified to use real mount points instead of FS1-FS<X> definitions for draining states
  • Castor2 xrootd protocol plugin
    • the protocol wrapper has been changed to work in a similar manner as for rfio ( calls getUpdateDone, getPutDone etc. )
    • the protocol wrapper executes the 'x2castorjob <reqid>' script (until 2.1.8-9)
    • the protocol wrapper acts as a localhost TCP/IP server allowing xrootd to communicate open/close messages (since 2.1.8-10)

x2castorjob (only until version 2.1.8-9):
This PERL script blocks an LSF slot during the scheduled access to a file via xrootd. When the client opens a file on a disk server, the disk server xrootd sends SIGUSR1 to the x2castorjob carrying the same request id as included in the client token. If the signal
does not arrive within a period of 10s, x2castorjob exits with -1 and the slot is freed. When the client closes the file, a second SIGUSR1 is sent to x2castorjob, which exits with 0 and frees the job slot. If the second signal does not arrive within 48 hours, the script exits with -1 and frees the job slot. The logfile of x2castorjob can be found under /var/log/xroot/server/x2castorjob.log and is rotated by logrotate.

First byte written:
When a file is opened in write mode and the first byte has been written, the disk server informs the manager node that the file has been modified. The manager executes the FirstWrite call towards the stager to report the modification.

ALICE Security Model

To fit the token authorization into the new setup, the token authorization plugin, which was an OFS plugin, has been rewritten to run as a pure authorization library plugin in xrootd. A separate manager instance is running for ALICE, doing only unix authentication and enforcing ALICE tokens to open or delete files. Stat calls always pass unauthenticated. All users contacting an ALICE redirector instance are mapped statically to one UID in Castor ( aliprod ).

Global Redirector Subscription

With small modifications to the cmsd code, subscribing the Castor2 manager nodes to a global redirector has been exercised. This will soon be used by ALICE to locate files in all subscribed storage elements.

Third Party Copy

Generic Implementation

Third party copy is implemented as a synchronous remote copy. The third party copy client (xrd3cp) uses the following sequence:

  • create a unique transfer id (uuid)
  • verify that source file is staged (isfileonline)
  • open destination file (standard file open with truncate) and keep the file open
  • open source file (standard file open) and keep the file open
  • set the 'schedule' state on the source via the admin plugin command
  • the source xrootd schedules a thread which runs a standard XrdClient towards the destination carrying a 'copy' tag in the url
  • the destination xrootd allows access to the destination only while the client keeps the destination file open and the transfer uuid matches
  • the client displays by default a progress bar and terminates when the transfer is finished or an error occurred

The following figure shows the sequence (the 'isfileonline' step is skipped). The transfer state control of the xrd3cp client is also not included.
thirdparty2.png

Server Implementation

Each disk server has a predefined number of transfer slots and a target rate at which transfers run. The transfers are PUSHes from source to destination and are executed by a scheduled thread running inside the xrootd server. Transfers run only if the client keeps source and destination open. If the client disconnects, running transfers stop and go into error status. A daemon restart interrupts all running transfers and leads to a transfer error without possibility of recovery (synchronous implementation).

The OFS plug-in running on disk servers supports the following new configuration directives (given are the defaults – no change of configuration necessary to get these values):

xcastor2.thirdparty yes
xcastor2.thirdparty.slots 5
xcastor2.thirdparty.slotrate 10
xcastor2.thirdparty.statedirectory /var/log/xroot/server/transfer

An executed transfer typically creates 4 files in the state directory:

# the transfer definition
/var/log/xroot/server/transfer1/fff35a24-449b-11de-9e04-000ffe9c324f
# the transfer log file
/var/log/xroot/server/transfer1/fff35a24-449b-11de-9e04-000ffe9c324f.log
# the progress bar file ( only source side)
/var/log/xroot/server/transfer1/fff35a24-449b-11de-9e04-000ffe9c324f.progress
# the state of a transfer in (int) format
/var/log/xroot/server/transfer1/fff35a24-449b-11de-9e04-000ffe9c324f.state

These state files are automatically removed after 24 hours.
The number of slots and the nominal transfer rate per slot can be modified on the fly (i.e. without daemon restart) via a new convenience script run on each disk server:

# set ten slots
x2proc thirdpartycopyslots 10

# set a rate of 25 Mb/s
x2proc thirdpartycopyslotrate 25

Client Tool

The CLI for third party copies has the following syntax:

xrd3cp root://<source-host>//<source> root://<dest-host>//<dest>

usage: xrd3cp [-c] [-d] [-D] [-v] [-l <uuid>] [-s <uuid>] <SOURCE-URL> <DST-URL>
        -d       enable xrd3cp debug
        -D       enable XrdClient debug
        -v       verbose details about the transfer
        -n       disable the progress bar
        -l <uuid>        retrieve transfer logs from both ends for transfer <uuid>
        -s <uuid>        retrieve transfer status from both ends for transfer <uuid>
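
For illustration, a transfer and a subsequent status query might look as follows; the hosts and paths are placeholders and the uuid is the one reported by the verbose transfer:

# run a third party copy with verbose transfer details
xrd3cp -v root://<source-host>//castor/cern.ch/..... root://<dest-host>//castor/cern.ch/.....
# retrieve the transfer status from both ends for a given transfer uuid
xrd3cp -s <uuid> root://<source-host>//castor/cern.ch/..... root://<dest-host>//castor/cern.ch/.....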


Deployment

Server-side deployment requires the usual xrootd-xcastor2fs RPM which now additionally depends on the xrootd-libtransfermanager RPM.
Client-side deployment requires the xrootd-xrd3cp RPM.

Performance Tests

Here are some results from the tests run on ITDC to test the new xrootd/Castor2 interface. There are 12 disk servers with 50 LSF slots each.
I read & write one-byte files with 240-700 xrdcp clients distributed over 6-12 client machines.

Open Rates with 480 clients:

Write (scheduled through Castor)                :               15  files/s [ LSF scheduling limit ]
Read with castor bypass (krb5 authenticated)    :               30  files/s [ krb5 replay cache limited ]
Read with castor bypass (unix authenticated)    :               480 files/s [ limited by single nameserver instance ]; after reconfiguration of NS/stager > 700 (not yet limited, client limited)
Read with castor bypass (ssl+sessions)          :               430 files/s
Read with castor prep2get (unix authenticated)  :               300 files/s
Read with castor scheduling (unix authenticated):               15  files/s [ LSF scheduling limit ]

Delays added by the redirector under 100% load:
Write (scheduled through Castor)                :               2.7 sec per open
Read with castor bypass(unix authenticated)     :               0.6 sec per open
Read with castor prep2get(unix authenticated)   :               1.1 sec per open
Read with castor scheduling(unix authenticated) :               9.5 sec per open

Delays added by the redirector when (quasi) idle:
Write (scheduled through Castor)                :               500 ms per open
Read with castor bypass (unix)                  :               14  ms per open (13ms Cns_stat)
Read with castor bypass (x509)                  :               14  ms per open (13ms Cns_stat)
Read with castor prep2get                       :               80  ms per open
Read with castor scheduling                     :               510 ms per open

Total 1-byte copy time seen at the client (server idle) for unix/krb5/ssl/ssl-session authentication
Unix    44ms
krb5    53ms
ssl     66ms (first authentication to establish session)
ssl*    49ms (using session mechanism)


[ ssl supports grid & voms proxies - in the test I use a CERN proxy certificate ]

Remarks

  1. the open for scheduled read/write is not limited by LSF but by the lifetime of each scheduled wrapper job! An increase of the number of slots per server (currently 10) would improve these rates until we reach the LSF scheduling limitation.
  2. the kerberos limitation is not too bad since applications keep the security context up and don't authenticate with each file open like xrdcp does.
  3. configuring security towards the nameserver will have a very strong performance impact


Stager API Issue

There is an issue in the API: under certain circumstances it runs into an 800% CPU loop, while normally, even under heavy client pressure, it uses just 5%. The problem comes from the fact that the default port range for callbacks was only 100 ports. Extending the port range helped to avoid the high load situation. I have already discussed with Giuseppe how the code could be improved to choose a free port, but a large port range helps to avoid the bind loops.

Third Party Copy Tests

Phase 1 - Software Validation


The first test consisted of the validation of the tool itself. 200 clients were used to inject files of 500 Mb size as fast as possible into C2ITDC::DISKONLY [6 disk servers]. From C2ITDC::DISKONLY data was transferred to C2ITDC::ITDC [9 disk servers] via third party transfers. In 14 hours ~16 TB have been injected and 16 TB have been transferred from C2ITDC::DISKONLY to C2ITDC::ITDC. As visible in the second plot, not all data has been moved to tape in real time due to a lack of available tapes (& drives). No errors have been observed. A small memory leak has been identified on the receiving end and was fixed in the code. The third party parameters were set to 5 slots per disk server and a rate maximum of 20 Mb/s. The LSF slot parameter on these service classes was 8 slots per disk server. The write was scheduled on both pools while the read (the source) was not scheduled on the DISKONLY pool. The IO rates show some degradation after 8 hours; the reason was two full filesystems. [The injection speed is coupled to the third party transfer speed in this test because every client does 1. 'injection into diskonly', 2. 'third-party-transfer to tape pool'.]

Lemon Monitoring C2ITDC::DISKONLY Phase 1



thirdparty.png

Lemon Monitoring C2ITDC::ITDC

thirdparty1.png

Phase 2 - Software Longterm-Stability Validation

The second phase of testing will run on identical hardware using an equal mix of three file sizes: 10 Mb, 500 Mb & 2.5 GB. The ratio injection:third-party-transfer will again be 1:1. The test should run a minimum of 3 days, but will be extended if the space allows it. For practical purposes clients have to be restarted every 24 hours (krb5 limit); the server side will stay untouched.
During Phase 2 the flux regulation from C2ITDC::DISKONLY to C2ITDC::ITDC should be demonstrated by modifying the stream rate parameters:

wassh -l root -c c2itdc/diskonly '/opt/xrootd/bin/x2proc thirdpartycopyslotrate 1'

This should result in a transfer rate reduction to 6 x 5 x 1 Mb/s = 30 Mb/s.

Phase 2 - Report

During phase 2 a blocker bug in the xrootd/stagerJob communication has been revealed. Although transfers were running, a high percentage of files on the target end had an inconsistency between stager & namespace file size due to a race condition in the stagerJob startup (see Savannah bug #51101). These inconsistencies slow down the migrator for the tape export.

Phase 3 - Rerun of Phase 1

Phase 1 with 1 GB files was run starting 2.6. at ~6:00 for 5 1/2 hours. An evident change in the tape export rate is visible: the rate now follows precisely the pool input rate. No missing state transitions to write-open and write-close are visible in the log files anymore. All injected files in the pool now have the proper filesize and checksum in the namespace. The 3rd-party copy has so far a success rate of 100%. The transfer times for a single 1 GB file range from 1 to 12 minutes. 4000 1 GB files have been uploaded and transferred in 5 1/2 hours.

Lemon-Monitoring Disk-Only Pool

thirdparty3.png

Lemon-Monitoring Tape Pool

thirdparty4.png

Lemon Monitoring - Tape Backlog at Test end


thirdparty5.png
When the third party copies fade out, the tape backlog gets visible with a short peak where the full bandwidth is taken for tape migration.

Phase 4 - Rerun of Phase 2

Phase 4 started 2.6. at 15:30. The client data injection is set up with 120 clients in parallel:

  • 40 clients uploading 160x 500 Mb files to the diskonly pool, 3rd party transfer to tape pool, deletion on diskonly pool
  • 40 clients uploading 32 x 2.5 Gb files to the diskonly pool, 3rd party transfer to tape pool, deletion on diskonly pool
  • 40 clients uploading 800 x 100 Mb files to the diskonly pool, 3rd party transfer to tape pool, deletion on diskonly pool

This pattern lasts 8-12 hours (assuming 200-300 Mb/s) and has been repeated several times to extend the testing phase and gain
enough confidence in the stability. The total testing time has been around 72 hours.


Deployment & Configuration Model

Hardware

  • Head nodes don't require special hardware in terms of CPU or memory
    • if configured for caching (for open rates --> 1000/s) a filesystem should be formatted with many inodes as the cache directory (only symbolic links are stored - no files)
    • if configured for caching, the more RAM the better
  • Head nodes can have a simple round-robin alias - a load balancing alias is not needed

Package Installation

  • All disk servers deploy the following RPMs
    • xrootd base RPM
    • xrootd xCastor2 plugin RPM
    • xrootd service RPM
    • xrootd monitoring RPM
    • (depending on the deployment model also xrootd public key RPM)
  • Head node installation is identical to disk server installation

Configuration

  • Disk Server Configuration is generic and included in the plugin RPM ( once the head node names are known the RPM has to be 'finalized' )
  • Head Node Configuration is suggested to be done 'manually' in the beginning

Security Infrastructure Setup

  • Disk servers need the public (service) key of the head node(s) [ only one identical key for all head nodes ]
    /opt/xrootd/etc/keys/pkey.pem
  • Head nodes store a private (service) key whose public partner is distributed on the disk servers
    /opt/xrootd/etc/keys/key.pem
    
  • 512 bit key [0.5ms latency] - 4096 bit key [8 ms latency] ==> 512 bit keys are fine
    • Creation Command
      rm -rf key.pem cert.pem certreq.pem pkey.pem && \
      openssl genrsa -rand 12938467 -out key.pem 512 && \
      openssl req -new -inform PEM -key key.pem -outform PEM -out certreq.pem && \
      openssl x509 -days 3650 -signkey key.pem -in certreq.pem -req -out cert.pem && \
      openssl x509 -pubkey -in cert.pem > pkey.pem && rm -rf cert.pem certreq.pem && \
      echo "Your new keypair is private-key: ${PWD}/key.pem public-key: ${PWD}/pkey.pem"
      

Security Setup

  • Unix Authentication
    • needs CERN /etc/passwd /etc/group files
  • Krb5 Authentication
    • needs the CERN /etc/krb5.conf, only on head nodes
  • SSL Authentication
    • needs host certificate in /etc/grid-security/ only on headnodes
    • needs /etc/grid-security/certificates directory only on headnodes

Multiple Instances on Manager Nodes

The service script /etc/init.d/xrd supports the startup of multiple xrootd managers on individual ports. If only one service needs to be configured, the startup script uses /etc/xrd.cf as the xrootd configuration file. If you want more than one service,
write the port numbers, one per line, into the file /etc/sysconfig/xrd-service-ports. Then provide for each port number a configuration file /etc/xrd.<port#>.cf. The startup script always deals with all configured instances. If you want to act on only one instance, you can write e.g. '/etc/init.d/xrd start 1094' to start the xrootd instance on port 1094. The log files then automatically go into directories like /var/log/manager<port>/xrdlog. The core & admin paths are changed respectively from ./manager/. to ./manager<port>/.
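
A minimal sketch of a two-instance setup following the description above (the port numbers are examples):

# declare the ports of the instances
echo 1094 >  /etc/sysconfig/xrd-service-ports
echo 1095 >> /etc/sysconfig/xrd-service-ports
# provide /etc/xrd.1094.cf and /etc/xrd.1095.cf, then start all configured instances
/etc/init.d/xrd start
# restart only the instance running on port 1095
/etc/init.d/xrd restart 1095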

Configuration of ALICE Redirectors

############################################################################
xcastor2.nsmap / /
xcastor2.fs /
# -- load ALICE authorization library
xcastor2.authlib /opt/xrootd/lib/libXrdAliceTokenAcc.so
# -- enforce authorization in xcastor2fs plugin
xcastor2.authorize 1
# -- some special hosts of the DAQ system can access without authorization,
# they are specified via exact, ??, * or range matches
alicetokenacc.noauthzhost alicedaq[01-10].cern.ch
alicetokenacc.noauthzhost alicedaq*.cern.ch
alicetokenacc.noauthzhost alicedaq??.cern.ch
alicetokenacc.noauthzhost alicedaq01.cern.ch
# -- ALICE acts always as user 'aliprod' in Castor
# all users are mapped to aliprod
xcastor2.role * :aliprod:
############################################################################
xrootd.seclib /opt/xrootd/lib/libXrdSec.so
# Only UNIX authentication ! - No Krb5 !
sec.protocol /opt/xrootd/lib/ unix
############################################################################
sec.protbind * only unix
############################################################################

If the storage defines a mapping e.g. the storage manages an internal prefix, the authorization plugin needs an additional parameter to 'undo' the prefix, e.g.:

xcastor2.nsmap / /castor/cern.ch/alice/storage/
alicetokenacc.truncateprefix /castor/cern.ch/alice/storage

Configuration of Global Redirectors

To set up global redirection, every manager node has to run the cmsd service in addition to the xrd service. Global redirector nodes also run the xrd + cmsd services with a custom configuration file.
Global redirector configuration files are very simple. Global redirectors don't configure any authentication mechanism. Via the 'export' directive the redirected namespace can be limited further.

#################################################################
# ---------------------------------------------------------------
# our exported namespace
all.export /castor
# ---------------------------------------------------------------
# we are a global redirector
all.role meta manager
# ---------------------------------------------------------------
# if we want debug ....
#all.trace all debug
# ---------------------------------------------------------------
# we are the global redirector hosts
all.meta lxbra0301.cern.ch
all.meta lxbra0302.cern.ch
# ---------------------------------------------------------------
# use libXrdOfs to enable global redirection
xrootd.fslib /opt/xrootd/lib/libXrdOfs.so
# ---------------------------------------------------------------
# we use a fast startup and wait at least 500ms for fast responses
cms.delay startup 5 hold 500
# ---------------------------------------------------------------
#################################################################


You can deploy redundant global redirectors. The attaching manager nodes have to reference the global redirectors via entries in their configuration file like:

all.manager meta <global redirector name>:2131

There is currently one restriction for this setup: manager nodes subscribing to global redirectors are advertised as running on port 1094. Therefore don't change the manager node's xrootd port.
It might be sufficient to deploy a single global redirector pair combining ALL experiment pools.

Configuration of 'real-time' filesize on close

With the default configuration the filesize appears in the namespace approximately 3s after a file has been closed. For certain use cases this is not acceptable. To enable in-time updates of the file size on close, add the following parameters to the configuration files.
The disk server configuration file:

xcastor2.setfilesizeonclose true

The manager configuration file has to allow ALL disk servers to authenticate with UNIX authentication. You can specify host lists with wildcards:

sec.protbind lxc2disk* unix

Configuration of a location cache for read performance boosts

If a stager policy defines the 'cache' keyword, every file created and every file looked up for a stager/service class pair is cached in a local directory containing symbolic links. If there is cache information for a file, the manager node verifies the existence of the cached location via a stat call towards the disk server. This usually takes only 1ms and doesn't interact with the stager daemon. To allow manager nodes to do the stat, each manager node has to be listed as a proc user in the disk server configuration file:

Disk Server Configuration File Example:

xcastor2.procuser root@managernode1
xcastor2.procuser root@managernode2

Manager Configuration Example:

xcastor2.locationcache /var/tmp/xroot-locationcache
xcastor2.stagerpolicy castorcms::* schedwrite,nohsm,cache

Be careful! The location cache directory has to exist before you start up the manager xrd service!
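
For example, using the cache directory from the manager configuration above:

mkdir -p /var/tmp/xroot-locationcache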

Configuration to use individual signatures in different stagers

One redirector can create capabilities for several stagers with individual private keys. The mapping of private keys to stager names has to be defined in the configuration file like:

xcastor2.privatekey /opt/xrootd/keys/key-atlas.pem castoratlas
xcastor2.privatekey /opt/xrootd/keys/key-cms.pem castorcms
xcastor2.privatekey /opt/xrootd/keys/key-t3.pem cernt3

If no stager is given in the configuration line, the key is assumed to be a default key which has to be used if no other key definition matches.

Configuration of 'Persistency on successful close'

xrootd natively supports the POSC functionality: if a file upload is interrupted (Ctrl-C, crash etc.) the file gets cleaned up if the POSC flag is turned on. The same functionality is possible for 'xrdcp' and 'xrd3cp' uploads in CASTOR if the server config file is extended with the following directive:

xcastor2.posc true

A prerequisite for this to work is that manager nodes allow 'unix' authentication for disk server nodes. The disk server nodes call back to the manager to clean up an interrupted transfer under the ID of the initiating client. The unix authentication is allowed by adding as the first protbind rule:

sec.protbind lxfs*.cern.ch only unix

There is an additional security mechanism in place: only hosts which previously received a redirection can clean up a file in the namespace.

xCastor2 Client Usage/Testing

File Copy Commands

Upload a file to an xCastor2 pool using automatic stager/svcclass mapping:

xrdcp /tmp/myfile root://<stagerhost>//castor/cern.ch/.....

Upload a file to an xCastor2 pool providing manual stager/svcclass mapping:

xrdcp /tmp/myfile root://<stagerhost>//castor/cern.ch/..... -ODstageHost=<stagerHost>\&svcClass=<svcClass> 

Download a file from an xCastor2 pool using automatic stager/svcclass mapping:

xrdcp root://<stagerhost>//castor/cern.ch/..... /tmp/myfile

Download a file from an xCastor2 pool using manual stager/svcclass mapping:

xrdcp root://<stagerhost>//castor/cern.ch/..... /tmp/myfile -OSstageHost=<stagerHost>\&svcClass=<svcClass>

To debug the sequence of redirection, add the '-d' flag at the end of the command line.

To set the mode bits automatically during a file upload use:

xrdcp /tmp/myfile root://<stagerhost>//castor/cern.ch/..... -ODmode=444

This example sets the permissions to read-only for all.

Meta Data Commands

The meta data commands are exported via the xrd busy box function. The typical syntax is 'xrd <host> <cmd> [args...]'. The xrd command of versions older than 03/2009 is a pure interactive shell; since 03/2009 it can also be used as a command line tool.
Examples:

# stat a directory
xrd castorcms stat /castor/cern.ch/cms
# stat a file - the mode bits also indicate whether the file is staged
xrd castorcms stat /castor/cern.ch/cms/higgs.root
# check if a file is online
xrd castorcms isfileonline /castor/cern.ch/cms/higgs.root
# create a directory
xrd castorcms mkdir /castor/cern.ch/cms/newdir
# delete a file
xrd castorcms rm /castor/cern.ch/cms/higgs.root
# change permissions ---- Attention: it is impossible to set a file world writable via xrd chmod
xrd castorcms chmod /castor/cern.ch/cms/higgs.root 4
# check if a file is staged
xrd castorcms isfileonline /castor/cern.ch/cms/higgs.root

For stager deletions two additional opaque flags can be specified for an 'xrd rm' call:

# remove a file from the associated stager but leave in the namespace
xrd rm /castor/cern.ch/cms/higgs.root?stagerm=1&nodelete
# remove a file from the associated stager and force removal from the namespace
xrd rm /castor/cern.ch/cms/higgs.root?stagerm=1
# remove a file from the namespace - the physical space on a pool gets freed asynchronously later
xrd rm /castor/cern.ch/cms/higgs.root

To free space immediately, the 1st or 2nd example should be used.


xCastor2 Packages- Release for Castor Release 2.1.8-2 - platform x86_64

RPMs are available on afs and in the software repository. The CVS Tag of the plugin code is v2_1_8_2.

  • xrootd base
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-CVS20080517_pext-8.x86_64.rpm
  • xrootd castor plugin
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2fs-1.0.2-12.x86_64.rpm
  • xrootd castor monitoring service [ deployed only on manager/headnodes ]
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-castormon-1.0.2-3.x86_64.rpm 
  • xrootd service scripts
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-service-simple-1.0.3-3.noarch.rpm

xCastor2 Packages- Release for Castor Release 2.1.8-6 - platform x86_64

RPMs are available on afs and in the software repository. The CVS Tag of the plugin code is v2_1_8_6. The /afs paths point to SLC4 rpms. For SLC5, use /afs/..../rpms/slc5/..

  • xrootd base
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-20090306.1107.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-debuginfo-20090306.1107-1.x86_64.rpm
    
  • xrootd castor plugin
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2fs-1.0.5-1.x86_64.rpm
    
  • xrootd castor utilities ( stagerget, stagerqry equivalent & x2cp )
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2util-1.0.5-1.x86_64.rpm
  • xrootd authz plugin (alice specific)
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-alicetokenacc-1.0.0-1.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/tokenauthz-1.1.5-1.x86_64.rpm

xCastor2 Packages- Release for Castor Release 2.1.8-7 - platform x86_64

RPMs are available on afs and in the software repository. The CVS Tag of the plugin code revision 4 is v2_1_8_7d. The /afs paths point to SLC4 rpms. For SLC5, use /afs/..../rpms/slc5/...

  • xrootd base
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-20090306.1107-2.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-debuginfo-20090306.1107-2.x86_64.rpm
    
  • xrootd castor plugin
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2fs-1.0.6-11.x86_64.rpm
    
  • xrootd castor utilities ( stagerget, stagerqry equivalent & x2cp )
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2util-1.0.6-3.x86_64.rpm
  • xrootd authz plugin (alice specific)
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-alicetokenacc-1.1.0-1.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/tokenauthz-1.1.5-1.x86_64.rpm

xCastor2 Packages- Release for Castor Release 2.1.8-8 - platform x86_64

RPMs are available on afs and in the software repository. The /afs paths point to SLC4 rpms. For SLC5, use /afs/..../rpms/slc5/...

  • xrootd base
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-20090306.1107-2.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-debuginfo-20090306.1107-2.x86_64.rpm
    
  • xrootd castor plugin
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2fs-1.0.8-2.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-libtransfermanager-1.0.1-2.x86_64.rpm
  • xrootd castor utilities ( stagerget, stagerqry equivalent & x2cp )
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2util-1.0.6-3.x86_64.rpm
    
  • xrootd 3rd party copy client ( xrd3cp )
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xrd3cp-1.0.0-2.x86_64.rpm
    
  • xrootd authz plugin (alice specific)
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-alicetokenacc-1.1.0-1.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/tokenauthz-1.1.5-1.x86_64.rpm

The source code will not be committed to the main branch in CVS, but a source tar ball is available here: /afs/cern.ch/project/dm/xCastor2/src/2.1.8-8/xrootd-xcastor2fs-1.0.8.tar.gz.

xCastor2 Packages- Release for Castor Release 2.1.8-10 - platform x86_64

RPMs are available on afs and in the software repository. The /afs paths point to SLC4 rpms. For SLC5, use /afs/..../rpms/slc5/...

  • xrootd base
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-20090629.1236-1.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-debuginfo-20090629.1236-1.x86_64.rpm
    
  • xrootd castor plugin (tag: v2_1_9_4)
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2fs-1.0.9-5.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-libtransfermanager-1.0.1-4.x86_64.rpm
  • xrootd castor utilities ( stagerget, stagerqry equivalent & x2cp )
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2util-1.0.6-3.x86_64.rpm
    
  • xrootd 3rd party copy client ( xrd3cp )
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xrd3cp-1.0.0-4.x86_64.rpm
    
  • xrootd authz plugin (alice specific)
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-alicetokenacc-1.1.0-1.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/tokenauthz-1.1.5-1.x86_64.rpm
  • xrootd ssl authentication plugin
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-secssl-3.0.4-5.x86_64.rpm

xCastor2 Packages- Release for Castor Release 2.1.9 - platform x86_64

RPMs are available on afs and in the software repository. The /afs paths point to SLC4 rpms. For SLC5, use /afs/..../rpms/slc5/...

  • xrootd base
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-20090629.1236-9.x86_64.rpm
    
  • xrootd castor plugin
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2fs-1.0.9-15.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-libtransfermanager-1.0.1-6.x86_64.rpm
  • xrootd castor utilities ( stagerget, stagerqry equivalent & x2cp )
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xcastor2util-1.0.6-3.x86_64.rpm
    
  • xrootd 3rd party copy client ( xrd3cp )
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-xrd3cp-1.0.0-4.x86_64.rpm
    
  • xrootd authz plugin (alice specific)
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-alicetokenacc-1.1.0-3.x86_64.rpm
    /afs/cern.ch/project/dm/xCastor2/rpms/tokenauthz-1.1.5-1.x86_64.rpm
  • xrootd ssl authentication plugin
    /afs/cern.ch/project/dm/xCastor2/rpms/xrootd-secssl-4.4.0-0.x86_64.rpm

xCastor2 Packages- Release for Castor Release 2.1.10 - platform x86_64

RPMs are available in the software repository.

  • xrootd base
    xrootd-server-3.0.2 xrootd-devel-3.0.2
    
  • xrootd castor plugin
    xrootd-xcastor2fs-1.0.9-18 xrootd-libtransfermanager-1.0.3-0
    

The ALICE plugins are available here:

  • SLC4
    http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/rpms/slc4/tokenauthz-1.1.5-1.x86_64.rpm
    http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/rpms/slc4/xrootd-alicetokenacc-1.1.0-6.x86_64.rpm
  • SLC5
    http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/rpms/slc5/tokenauthz-1.1.5-1.x86_64.rpm
    http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/rpms/slc5/xrootd-alicetokenacc-1.1.0-6.x86_64.rpm

xCastor2 Packages- Release for Castor Release 2.1.11 - platform x86_64

RPMs are available in the software repository.

  • xrootd base
    xrootd-server-3.0.4 xrootd-devel-3.0.4
    
  • xrootd castor plugin
    xrootd-xcastor2fs_2111-1.1.0-1 xrootd-libtransfermanager-1.0.4-0
    

xCastor2 Packages- Release for Castor Release 2.1.12 - platform x86_64

RPMs are available in the software repository.

  • xrootd base
    xrootd-server-3.0.4 xrootd-devel-3.0.4
    
  • xrootd castor plugin
    xrootd-xcastor2fs_2112-1.1.0-1 xrootd-libtransfermanager-1.0.4-0
    

The ALICE plugins are available here:

  • SLC4
    http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/rpms/slc4/tokenauthz-1.1.5-1.x86_64.rpm
    http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/rpms/slc4/xrootd-alicetokenacc-1.1.0-6.x86_64.rpm
  • SLC5
    http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/rpms/slc5/tokenauthz-1.1.5-1.x86_64.rpm
    http://project-arda-dev.web.cern.ch/project-arda-dev/xrootd/rpms/slc5/xrootd-alicetokenacc-1.1.0-6.x86_64.rpm

-- AndreasPeters - 13 June 2011

Topic attachments
  • thirdparty.png (40.4 K, 2009-05-27, AndreasPeters)
  • thirdparty1.png (36.7 K, 2009-05-27, AndreasPeters)
  • thirdparty2.png (4.5 K, 2009-05-27, AndreasPeters)
  • thirdparty3.png (25.3 K, 2009-06-02, AndreasPeters)
  • thirdparty4.png (23.9 K, 2009-06-02, AndreasPeters)
  • thirdparty5.png (5.1 K, 2009-06-02, AndreasPeters)
  • xCastor2-FIO.pdf (137.0 K, 2008-10-15, AndreasPeters) - Presentation about Deployment Issues for xCastor2 (morning meeting)
  • xCastor2.gif (66.8 K, 2008-09-18, AndreasPeters)
  • xrd.example (2.3 K, 2012-02-17, AndreasPeters) - Manager Configuration File Template
  • xrootd-release-notes-2.1.10.txt (28.9 K, 2011-03-10, AndreasPeters) - xrootd release notes v2.1.10
  • xrootd-release-notes-2.1.11.txt (30.5 K, 2011-08-01, AndreasPeters) - xrootd release notes v2.1.11
  • xrootd-release-notes-2.1.11_12.txt (31.3 K, 2012-02-17, AndreasPeters) - xrootd release notes v2.1.11/12
  • xrootd-release-notes-2.1.8-10.txt (18.0 K, 2010-01-11) - xrootd release notes v2.1.8-10
  • xrootd-release-notes-2.1.8-2.txt (1.9 K, 2008-10-09, AndreasPeters) - xrootd release notes v2.1.8-2
  • xrootd-release-notes-2.1.8-6.txt (4.7 K, 2009-03-10, AndreasPeters) - xrootd release notes v2.1.8-6
  • xrootd-release-notes-2.1.8-7.txt (10.3 K, 2009-06-02, AndreasPeters)
  • xrootd-release-notes-2.1.8-8.txt (13.0 K, 2009-06-08, AndreasPeters) - xrootd release notes v2.1.8-8
  • xrootd-release-notes-2.1.8-p6.txt (2.4 K, 2009-02-12, AndreasPeters) - Release Notes for the 2.1.8-6 pre-release
  • xrootd-release-notes-2.1.9.txt (25.4 K, 2010-10-21, AndreasPeters) - xrootd release notes v2.1.9