Joining a dCache-based SE to the Xrootd service

This document covers joining a dCache-based storage element to the CMS Xrootd service. This page assumes four things:

  1. You are using dCache.
  2. All your pool nodes are on the public internet.
  3. The LFN->PFN mapping for your site is as simple as adding a prefix.
  4. The machine runs EL7 (CentOS 7).

The architecture setup is diagrammed below:

[Figure: XroodDcacheIntegrationV2.png, showing the dCache Xrootd door and federation host architecture]

This architecture uses the built-in dCache Xrootd door and adds a "federation host". The federation host integrates the native dCache door with the global federation: all clients are redirected first to the dCache xrootd door and from there to the individual pools. GSI security and namespace translation are performed by dCache itself. At no point does data have to be "proxied", which improves scalability and removes complexity from the system.

Installation

rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh https://repo.opensciencegrid.org/osg/3.5/osg-3.5-el7-release-latest.rpm
If you want to install mostly from EPEL, give the EPEL repository a numerically lower (i.e. higher-precedence) priority than the OSG repository's 98 by adding or adjusting "priority=95" in /etc/yum.repos.d/epel.repo. If you want all the components from OSG, set the EPEL priority to "priority=99".
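
For example, the relevant stanza of /etc/yum.repos.d/epel.repo would then look roughly like this (a sketch; only the priority line is added by hand, the rest ships with the epel-release RPM and may differ in detail):

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
priority=95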

Next, install the xrootd RPMs. This will add the xrootd user if it does not already exist; sites using centralized account management may want to create this user beforehand.

yum install --enablerepo=osg-upcoming-development xrootd
yum install --enablerepo=osg-upcoming-development xrootd-lcmaps
yum install --enablerepo=osg-upcoming-development xrootd-cmstfc    # only required if the LFN->PFN mapping is not a simple prefix
yum install --enablerepo=osg-upcoming-development xrootd-selinux
yum install --enablerepo=osg-upcoming-development osg-ca-certs-updater
yum install --enablerepo=osg-upcoming-development fetch-crl
yum install fuse
yum install fuse-libs
yum install --enablerepo=osg-upcoming-development xrootd*

yum install --enablerepo=osg-upcoming-development scitokens-cpp xrootd-scitokens x509-scitokens-issuer-client
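
To verify that the key packages were installed and to record their versions (output varies by release), you can query rpm afterwards:

rpm -q xrootd xrootd-lcmaps xrootd-cmstfc xrootd-scitokens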

Configuration of dCache Xrootd Door

You will need to set up your dCache Xrootd door according to the instructions in the dCache book. For simple unauthenticated access, it is sufficient to set the root path so that dCache performs the LFN to PFN translation by prepending a prefix. Add something like the following, adjusted to your local setup, to the layout file of the Xrootd door:

xrootdRootPath=/pnfs/example.com/data/cms
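
For illustration, with this root path in place the door translates an LFN by prepending the prefix (the file name below is hypothetical):

# client opens the LFN:
#   /store/user/jdoe/test.root
# the door serves the PFN:
#   /pnfs/example.com/data/cms/store/user/jdoe/test.root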

The following configuration parameters should be added to /etc/dcache/dcache.conf. The site name should be your CMS site name. Note that these settings also need to be present on all pool nodes so that they generate proper monitoring messages.

pool.mover.xrootd.plugins=edu.uchicago.monitor
# The following two lines are the values for EU sites
xrootd.monitor.detailed=cms-aaa-eu-collector.cern.ch:9330:60
xrootd.monitor.summary=xrootd.t2.ucsd.edu:9931:60
xrootd.monitor.vo=CMS
xrootd.monitor.site=T2_XY_MySite
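
A quick way to confirm that monitoring packets actually leave a node is to watch for outgoing UDP traffic to the collector ports given above (assumes tcpdump is installed on the node):

tcpdump -n udp port 9330 or udp port 9931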

The following should be added to the layout file of the machine(s) that host(s) the xrootd door(s), /etc/dcache/layouts/dcache-my-xrootd-door.layout.conf (adjust the host name). The location of the TFC file (typically named storage.xml) may need to be adjusted. The protocol may also differ for your TFC; in the end it is just an identifier.

 [xrootd-${host.name}Domain]
 [xrootd-${host.name}Domain/xrootd]
 xrootd.plugins=gplazma:gsi,authz:cms-tfc
 xrootd.cms.tfc.path=/etc/dcache/storage.xml
 xrootd.cms.tfc.protocol=xrootd
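
The referenced TFC file uses the standard CMS trivial file catalog format. A minimal sketch for a prefix-only site, assuming the example prefix used above (adjust path-match and result to your setup):

<storage-mapping>
  <!-- prefix-only rule: an LFN such as /store/foo maps to a PFN under the dCache namespace -->
  <lfn-to-pfn protocol="xrootd"
              path-match="/+(.*)"
              result="/pnfs/example.com/data/cms/$1"/>
</storage-mapping>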

Configuration of Xrootd Redirector

To configure the xrootd redirector, create and edit /etc/xrootd/xrootd-clustered.cfg. The two essential directives are:
oss.localroot /pnfs/example.com/data/cms
xrootd.redirect xrootd-door.example.com:1094 /
Set xrootd-door.example.com to the hostname of dCache's xrootd door and /pnfs/example.com/data/cms to match your xrootdRootPath above. Here is a complete example of /etc/xrootd/xrootd-clustered.cfg:
xrd.port 1094
all.role server
all.manager any xrootd-cms.infn.it+ 1213
all.sitename T2_DE_DESY
xrootd.redirect dcache-cms-xrootd.desy.de:1094 /
all.export / nostage
cms.allow host *
xrootd.trace emsg login stall redirect
ofs.trace all
xrd.trace conn
cms.trace all
cms.space linger 0 recalc 30 min 2% 1g 5% 2g
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
ofs.authorize 1
acc.authdb /etc/xrootd/Authfile
sec.protocol /usr/lib64 gsi -d:1 -crl:0 -authzfun:libXrdLcmaps.so -authzfunparms:--loglevel,1 -gmapopt:10 -gmapto:0
xrootd.seclib /usr/lib64/libXrdSec.so
xrootd.fslib /usr/lib64/libXrdOfs.so
all.adminpath /var/run/xrootd
all.pidpath /var/run/xrootd
cms.delay startup 10
cms.fxhold 60s
if exec xrootd
   xrd.report xrootd.t2.ucsd.edu:9931 every 60s all sync
   xrootd.monitor all fstat 60s lfn ops ssq xfr 5 ident 5m dest fstat info user CMS-AAA-EU-COLLECTOR.cern.ch:9330
fi
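
Once the redirector is running, a quick sanity check is to ask it for its configured site name (queried here against the local service; adjust host and port as needed):

xrdfs localhost:1094 query config sitename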

Configuration of CRL

If you used the RPM version of fetch-crl, you will need to enable and start the fetch-crl-cron and fetch-crl-boot services. To start them:

systemctl start fetch-crl-boot # this may take a while to run
systemctl start fetch-crl-cron

To enable on boot:

systemctl enable fetch-crl-cron
systemctl enable fetch-crl-boot
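
You can verify that CRLs were actually fetched by looking for .r0 files in the standard certificates directory:

ls /etc/grid-security/certificates/*.r0 | head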

Configuration of a Federation Host

You can also run this on the dCache xrootd door; in that case, make sure the federation service and the dCache xrootd door use different ports.

Set up the configuration in /etc/xrootd/xrootd-clustered.cfg:

# Port specifications; only the redirector needs to use a well-known port
# Change as needed for firewalls.
# Make sure that the dCache xrootd door and the federation use different ports when deployed on the same host.
xrd.port 1094

# The roles this server will play.
all.role server

# European Redirector
all.manager any xrootd-cms.infn.it+ 1213

# Site name for monitoring
all.sitename T2_XY_SiteName

# redirect to dCache xrootd door (adjust hostname and port) 
xrootd.redirect dcache-xrootd-door.mysite.com:1094 /

# Allow any path to be exported
all.export / nostage

# Hosts allowed to use this xrootd cluster
cms.allow host *

### Standard directives
# Simple sites probably don't need to touch these.
# Logging verbosity
xrootd.trace emsg login stall redirect
ofs.trace all
xrd.trace conn
cms.trace all

# Some tuning for disk space monitoring
# A pure redirector needs no real storage space
cms.space linger 0 recalc 30 min 2% 1g 5% 2g

# Integrate with the CMS TFC, placed in /etc/xrootd/storage.xml - the protocol may differ, depending on your actual TFC
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct

# Turn on authorization
ofs.authorize 1
acc.authdb /etc/xrootd/Authfile
#acc.audit deny grant

# Require GSI on Federation host
sec.protocol /usr/lib64 gsi -d:1 -crl:0 -authzfun:libXrdLcmaps.so -authzfunparms:--loglevel,1 -gmapopt:10 -gmapto:0

xrootd.seclib /usr/lib64/libXrdSec.so
xrootd.fslib /usr/lib64/libXrdOfs.so
all.adminpath /var/run/xrootd
all.pidpath /var/run/xrootd

cms.delay startup 10
cms.fxhold 60s
#cms.perf int 30s pgm /usr/bin/XrdOlbMonPerf 30

if exec xrootd
# Summary monitoring configuration
   xrd.report xrootd.t2.ucsd.edu:9931 every 60s all sync
# Detailed monitoring configuration
   xrootd.monitor all fstat 60s lfn ops ssq xfr 5 ident 5m dest fstat info user CMS-AAA-EU-COLLECTOR.cern.ch:9330
fi
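
If the federation host is co-located with the dCache xrootd door, only the port directive needs to change; for example (1095 is an arbitrary free port, the door keeps the default 1094):

# federation xrootd on a non-default port, dCache door keeps 1094
xrd.port 1095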

Configuring Authenticated Access

Authentication in dCache is usually done via gPlazma. The GSI-enabled door needs a host certificate. The dCache xrootd door can be deployed alongside other doors, e.g. GridFTP. Please follow the dCache book to configure gPlazma.

Put the proper mappings and usernames in /etc/grid-security/grid-vorolemap. This needs adaptation to your local setup! (Only the CMS part is shown; if other VOs are needed on the Xrootd door, add them accordingly.)

## CMS ##
# Need mapping for each VOMS Group(!), roles only for special mapping
"*" "/cms/Role=lcgadmin" cmsusr001
"*" "/cms/Role=production" cmsprd001
"*" "/cms/Role=priorityuser" cmsana001
"*" "/cms/Role=pilot" cmsusr001
"*" "/cms/Role=hiproduction" cmsprd001
"*" "/cms/dcms/Role=cmsphedex" cmsprd001
"*" "/cms/integration" cmsusr001
"*" "/cms/becms" cmsusr001
"*" "/cms/dcms" cmsusr001
"*" "/cms/escms" cmsusr001
"*" "/cms/ptcms" cmsusr001
"*" "/cms/itcms" cmsusr001
"*" "/cms/frcms" cmsusr001
"*" "/cms/production" cmsusr001
"*" "/cms/muon" cmsusr001
"*" "/cms/twcms" cmsusr001
"*" "/cms/uscms" cmsusr001
"*" "/cms/ALARM" cmsusr001
"*" "/cms/TEAM" cmsusr001
"*" "/cms/dbs" cmsusr001
"*" "/cms/uscms/Role=cmsphedex" cmsusr001
"*" "/cms" cmsusr001

Set up /etc/grid-security/storage-authzdb. Carefully check the usernames, UIDs and GIDs; they must match your local setup. (Again, only the CMS part is shown.) Please refer once more to the dCache book for details on how to set it up.

authorize cmsusr001 read-write 40501 4050 / / /
authorize cmsprd001 read-write 40751 4075 / / /
authorize cmsana001 read-write 40951 4060 / / / 
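
For reference, the fields of each line are, per the dCache book:

# authorize <username> <read-only|read-write> <UID> <GID> <home> <root> <fsroot>
authorize cmsusr001 read-write 40501 4050 / / /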

You can do some first testing of the GSI-enabled Xrootd door:

xrdcp -d 2 -f xroot://xrootd-door.mydomain.org//store/user/<Your_HN_name>/<Your_Testfile> /dev/null

Some useful debugging information can usually be found in the billing logs of your dCache instance. Note that these live on the dCache head node, which is usually not the host you are installing the Xrootd door on.

/var/lib/dcache/billing/<YEAR>/

Configuring the CMS TFC Plugin in dCache

Recent dCache releases provide a TFC plugin, so that you can send an LFN open request to the dCache xrootd door and the door will resolve it to a PFN based on the TFC rules.

Configuration Example of the dCache Common Configuration

An example of the dCache common configuration, /etc/dcache/dcache.conf, looks like:
dcache.layout=${host.name}
dcache.namespace=chimera
chimera.db.user = chimera
chimera.db.url = jdbc:postgresql://t3dcachedb04.psi.ch/chimera?prepareThreshold=3
dcache.user=dcache
dcache.paths.billing=/var/log/dcache
pnfsVerifyAllLookups=true
dcache.java.memory.heap=2048m
dcache.java.memory.direct=2048m
net.inetaddr.lifetime=1800
net.wan.port.min=20000
net.wan.port.max=25000
net.lan.port.min=33115
net.lan.port.max=33145
broker.host=t3se02.psi.ch
poolIoQueue=wan,xrootd
waitForFiles=${path}/setup
lfs=precious
tags=hostname=${host.name}
metaDataRepository=org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
useGPlazmaAuthorizationModule=false
useGPlazmaAuthorizationCell=true
gsiftpIoQueue=wan
xrootdIoQueue=xrootd
remoteGsiftpIoQueue=wan
srmDatabaseHost=t3dcachedb04.psi.ch
srmDbName=dcache
srmDbUser=srmdcache
srmDbPassword=
srmSpaceManagerEnabled=yes
srmDbLogEnabled=true
srmRequestHistoryDatabaseEnabled=true
ftpPort=${portBase}126
kerberosFtpPort=${portBase}127
spaceManagerDatabaseHost=t3dcachedb04.psi.ch
pinManagerDbHost=t3dcachedb04.psi.ch
defaultPnfsServer=t3dcachedb04.psi.ch
SpaceManagerReserveSpaceForNonSRMTransfers=true
SpaceManagerLinkGroupAuthorizationFileName=/etc/dcache/LinkGroupAuthorization.conf
dcache.log.dir=/var/log/dcache
billingDbHost=t3dcachedb04.psi.ch
billingDbUser=srmdcache
billingDbPass=
billingDbName=billing
billingMaxInsertsBeforeCommit=10000
billingMaxTimeBeforeCommitInSecs=5
info-provider.site-unique-id=T3_CH_PSI
info-provider.se-unique-id=t3se02.psi.ch
info-provider.se-name=SRM endpoint for T3_CH_PSI
info-provider.glue-se-status=Production
info-provider.dcache-quality-level=production
info-provider.dcache-architecture=multidisk
info-provider.http.host = t3dcachedb04
poolmanager.cache-hit-messages.enabled=true
dcache.log.server.host=t3dcachedb04
alarms.store.db.type=rdbms
webadmin.alarm.cleaner.enabled=false
poolqplots.enabled=true
dcache.log.mode=new

Configuration of dCache gPlazma2

An example of the dCache gPlazma2 configuration, /etc/dcache/gplazma.conf, looks like:
auth     optional   x509
auth     optional   voms
map      requisite  vorolemap
map      requisite  authzdb
session  requisite  authzdb

Operating xrootd

PNFS must be mounted for the xrootd federation host to function. Mount it manually, and configure /etc/fstab so the mount happens on boot if desired.
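
A sketch of a corresponding /etc/fstab entry, assuming a Chimera namespace exported through the dCache NFS door (host name, export path and options are placeholders; check the dCache book for your setup):

dcache-nfs-door.example.com:/pnfs  /pnfs  nfs4  minorversion=1  0 0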

There are two services, xrootd and cmsd, which must both be running for the site to participate in the xrootd service. With the systemd units shipped in the xrootd RPMs on EL7, these are template units instantiated with the configuration name ("clustered" for /etc/xrootd/xrootd-clustered.cfg):

systemctl start xrootd@clustered
systemctl start cmsd@clustered

Log files are kept in /var/log/xrootd/clustered/{cmsd,xrootd}.log, and are auto-rotated.
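
To confirm that both daemons came up and stayed up (unit names as above):

systemctl status xrootd@clustered cmsd@clustered
tail -n 50 /var/log/xrootd/clustered/xrootd.log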

After startup, the xrootd and cmsd daemons drop privilege to the xrootd user.

Port usage:

The following information is probably needed for sites with strict firewalls:
  • The xrootd server listens on the TCP port set with xrd.port (1094 in the examples above; pick a different port such as 1095 if it shares a host with the dCache Xrootd door, which uses the default 1094).
  • The cmsd server needs outgoing TCP port 1213 to xrootd-cms.infn.it (EU) or cmsxrootd.fnal.gov (US).
  • Summary statistics are sent via UDP to xrootd.t2.ucsd.edu:9931; detailed monitoring is sent via UDP to cms-aaa-eu-collector.cern.ch:9330 (EU).
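
For a site using firewalld on EL7, opening the listening port could look like this (assuming the federation xrootd uses 1094; outgoing connections are typically unrestricted):

firewall-cmd --permanent --add-port=1094/tcp
firewall-cmd --reload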

Testing the Installation

The newly installed server can be tested directly using:
xrdcp -d 1 -f xroot://local_hostname.example.com//store/foo/bar /dev/null
You will need a grid certificate installed in your user account for the above to work.
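
A valid VOMS proxy is typically needed as well; a minimal sequence, assuming the voms-clients tools are installed:

voms-proxy-init -voms cms
xrdcp -d 1 -f xroot://local_hostname.example.com//store/foo/bar /dev/null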

You can then see if your server is participating properly in the xrootd service by checking:

xrdcp root://xrootd-itb.unl.edu//store/foo/bar /tmp/bar2
where /store/foo/bar is a file unique to your site.