Joining a dCache-based SE to the Xrootd service.

This document covers joining a dCache-based storage element to the CMS Xrootd storage federation (AAA). This page assumes three things:

  1. You are using a recent dCache version (3.2 or newer).
  2. All your pool nodes are on the public internet.
  3. Only EL7 installations are covered (although other Linux flavors should be fine).

If you have pool nodes on a private network, you can still use this page to configure a proxy. For scalability reasons, however, this is not recommended.

The architecture of the setup is diagrammed below:

[Diagram: XroodDcacheIntegrationV2.png]
This architecture uses the built-in dCache Xrootd door and adds a "federation host", which runs the native xrootd components. This host integrates the dCache door with the global federation, but all clients are effectively redirected first to the dCache xrootd door and then to the individual pools. GSI security and namespace translation are performed by dCache itself. Optionally, the xrootd federation host can also be GSI-enabled, to avoid exposing the namespace unprotected. At no point does data have to be "proxied", which improves scalability and removes complexity from the entire system.

Installation of Federation Host

For almost all configurations you need at least a few RPMs that are provided via the OSG repository. The xrootd components can also be installed via EPEL, which is likely the preferred way for non-OSG sites.

The federation host needs a Grid host certificate to authenticate itself. Procedures to obtain Grid certificates vary from country to country and are therefore not covered here.

Please also refer to the OSG admin documentation.

Since some OSG packages also have dependencies on EPEL, you need to install it:

yum install

Install the OSG software repository.

yum install

If you want to install mostly from EPEL, set the EPEL repository to a priority lower than 98 by adding or adjusting "priority=95" in /etc/yum.repos.d/epel.repo. If you want all the components from OSG, set the EPEL priority to "priority=99".
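For example, the relevant stanza of /etc/yum.repos.d/epel.repo would then look like the sketch below (only the priority line is the addition; mirrorlist/gpgkey lines are omitted here):

```ini
# /etc/yum.repos.d/epel.repo (excerpt)
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
enabled=1
gpgcheck=1
# Prefer EPEL over OSG (OSG repos typically use priority=98):
priority=95
```

Note that repository priorities only take effect when the yum priorities plugin is installed and enabled.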

The host also needs the root certificates of the various certification authorities. There are several ways to obtain those.

Install and enable certificate revocation lists

yum install fetch-crl

systemctl start fetch-crl-cron

systemctl enable fetch-crl-cron

Install xrootd and components...


First, set up your dCache Xrootd door according to the instructions in the dCache book. For simple unauthenticated access it is sufficient to set a proper root path prefix, so that dCache performs the LFN-to-PFN translation. Add something appropriate to your local setup to the layout file of the Xrootd door.
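As a sketch — the property name xrootdRootPath follows the older dCache releases this page targets (it is referenced again below), and the path is a hypothetical example — the door's layout file entry could look like:

```ini
[xrootdDomain]
[xrootdDomain/xrootd]
# Hypothetical site path; dCache strips this prefix when translating LFN to PFN:
xrootdRootPath=/pnfs/example.org/data/cms
```

Check the property name against the dCache book for your release; newer releases use a different (dotted) property naming scheme.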


Configuring authenticated access is a bit more complex and is covered in its own section below.

Next, cp /etc/xrootd/xrootd.sample.dcache.cfg /etc/xrootd/xrootd-clustered.cfg and edit the resulting config file.

oss.localroot /pnfs/
xrootd.redirect /
Set the redirect target to the hostname of dCache's Xrootd door, and set /pnfs/ to match your xrootdRootPath above.
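Putting the pieces together, a minimal /etc/xrootd/xrootd-clustered.cfg for the federation host could look like the sketch below. All hostnames and the /pnfs prefix are placeholders; the actual redirector address must be taken from current CMS AAA documentation:

```text
# Federation host sketch -- hostnames and paths are placeholders
all.role server
xrd.port 1095
# Regional CMS redirector (placeholder name):
all.manager redirector.example.org+ 1213
# Local prefix; must match the root path configured on the dCache door:
oss.localroot /pnfs/example.org/data
# Send clients on to the dCache Xrootd door (default port 1094):
xrootd.redirect dcache-door.example.org:1094 /
all.export / nostage readonly
```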

Operating xrootd

PNFS must be mounted for the xrootd federation host to function. Mount this manually, and configure /etc/fstab so this happens on boot if desired.
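For example (the NFS server name is a placeholder; the mount options mirror the mount shown later on this page):

```text
# /etc/fstab entry -- server name is a placeholder
dcachedb.example.org:/pnfs  /pnfs  nfs  ro,nolock,intr,noac,hard,nfsvers=3  0 0
```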

There are two init services, xrootd and cmsd, which must both be working for the site to participate in the xrootd service:

service xrootd start
service cmsd start

Everything is controlled by standard init scripts (available commands are start, stop, restart, status, and condrestart). To enable both services on boot, run:

chkconfig --level 345 xrootd on
chkconfig --level 345 cmsd on

Log files are kept in /var/log/xrootd/{cmsd,xrootd}.log, and are auto-rotated.

After startup, the xrootd and cmsd daemons drop privilege to the xrootd user.

If you used the RPM version of fetch-crl, you will need to enable and start the fetch-crl-cron and fetch-crl-boot services. To start:

service fetch-crl-cron start
service fetch-crl-boot start # This may take a while to run

To enable on boot:

chkconfig --level 345 fetch-crl-cron on
chkconfig --level 345 fetch-crl-boot on

Port usage:

The following information is needed for sites with strict firewalls:
  • The xrootd server listens on TCP port 1095 (this is not the default port for Xrootd; we assume that the dCache Xrootd door uses the default, 1094).
  • The cmsd server needs outgoing TCP port 1213 to the regional CMS redirector.
  • Usage statistics are sent to the CMS monitoring collectors on UDP ports 9931 and 9930.

Testing the install.

The newly installed server can be tested directly using:

xrdcp -d 1 -f xroot:// /dev/null

You will need a grid certificate installed in your user account for the above to work.

You can then check that your server is participating properly in the xrootd service:

xrdcp root:// /tmp/bar2

where /store/foo/bar is a path unique to your site.

Configuring Authenticated Access

Authentication in dCache is (usually) done using gPlazma. The GSI-enabled door needs a host certificate. This howto covers gPlazma version 1 only. Since you need special rules for the Xrootd door used with the CMS redirector, you need to configure a dedicated gPlazma module for this door, while the remaining instance can keep using the shared gPlazma cell. Note that you need a recent 1.9.12 release of dCache; 1.9.12-21 is known to work. (Some early 1.9.12 releases had issues with configuring module over cell usage.) Add the following to the layout file (usually found in /opt/d-cache/etc/layout/):

# Adjust the path according to your site:
# You might consider having xrootd in a dedicated queue (adjust to your setup):
# xrootdIoQueue=dcap-q
# You might want to set timeouts; the optimal value is a matter of tuning:
# xrootdMoverTimeout=28800000

For GPLAZMA you need to adjust some settings stored in /opt/d-cache/etc/dcachesrm-gplazma.policy or in /etc/dcache/dcachesrm-gplazma.policy:

# All others are OFF


# Built-in gPLAZMAlite grid VO role mapping
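As a sketch of the gPlazma1 policy syntax, enabling only the built-in vorole mapping might look like the following. The exact key names are assumptions and should be verified against the policy file shipped with your dCache release:

```ini
# /etc/dcache/dcachesrm-gplazma.policy (sketch; key names are assumptions)
gplazmalite-vorole-mapping="ON"
gplazmalite-vorole-mapping-priority="1"
# All others are OFF:
saml-vo-mapping="OFF"
kpwd="OFF"
grid-mapfile="OFF"
```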

Put the proper mappings and usernames in /etc/grid-security/grid-vorolemap; this needs adaptation to your local setup. (Only the CMS part is shown; if other VOs are needed on the Xrootd door, add them accordingly.)

## CMS ##
# Need mapping for each VOMS Group(!), roles only for special mapping
"*" "/cms/Role=lcgadmin" cmsusr001
"*" "/cms/Role=production" cmsprd001
"*" "/cms/Role=priorityuser" cmsana001
"*" "/cms/Role=pilot" cmsusr001
"*" "/cms/Role=hiproduction" cmsprd001
"*" "/cms/dcms/Role=cmsphedex" cmsprd001
"*" "/cms/integration" cmsusr001
"*" "/cms/becms" cmsusr001
"*" "/cms/dcms" cmsusr001
"*" "/cms/escms" cmsusr001
"*" "/cms/ptcms" cmsusr001
"*" "/cms/itcms" cmsusr001
"*" "/cms/frcms" cmsusr001
"*" "/cms/production" cmsusr001
"*" "/cms/muon" cmsusr001
"*" "/cms/twcms" cmsusr001
"*" "/cms/uscms" cmsusr001
"*" "/cms/ALARM" cmsusr001
"*" "/cms/TEAM" cmsusr001
"*" "/cms/dbs" cmsusr001
"*" "/cms/uscms/Role=cmsphedex" cmsusr001
"*" "/cms" cmsusr001

Now comes the important part for the path prefixes, in /etc/grid-security/storage-authzdb. Carefully check the usernames, UIDs and GIDs; they must fit your local setup. (Again, only the CMS part is shown.)

authorize cmsusr001 read-write 40501 4050 /pnfs/ /pnfs/ /
authorize cmsprd001 read-write 40751 4075 /pnfs/ /pnfs/ /
authorize cmsana001 read-write 40951 4060 /pnfs/ /pnfs/ / 
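For reference, each storage-authzdb line follows the pattern authorize <username> <read-only|read-write> <uid> <gid> <home> <root> <fsroot>; the truncated /pnfs/ fields above are the home and root paths. A hypothetical complete entry (all values are site-specific examples):

```text
# authorize <user> <access> <uid> <gid> <home> <root> <fsroot> -- example values
authorize cmsusr001 read-write 40501 4050 /pnfs/example.org/data /pnfs/example.org/data /
```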

You can do some first testing of the GSI-enabled Xrootd door:

xrdcp -d 2 -f xroot://<Your_HN_name>/<Your_Testfile> /dev/null

Some useful debugging information can usually be found in the billing logs of your dCache instance. Note that the billing host is usually not the host on which you are installing the Xrootd door.


Configuring the CMS TFC Plugin in D-Cache

Recent dCache releases provide a TFC plugin, so that you can send an LFN open request to the dCache xrootd door and the door will resolve it to a PFN based on the TFC rules.
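To illustrate what the plugin does, the simplest TFC rule of the kind shown later on this page (path-match="/+(.*)" with a /pnfs result) can be emulated in a few lines of shell. The /pnfs prefix here is a hypothetical example:

```shell
# Emulate a TFC "direct" lfn-to-pfn rule: strip leading slashes and
# prepend the site's /pnfs prefix (hypothetical example prefix).
lfn_to_pfn() {
  echo "$1" | sed -E 's|^/+(.*)|/pnfs/example.org/data/\1|'
}

lfn_to_pfn /store/user/alice/test.root
# -> /pnfs/example.org/data/store/user/alice/test.root
```

The real plugin evaluates the full rule chain from storage.xml, of course; this only shows the shape of the translation.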

Older dCache Releases (up to 2.4)

The following information is not valid for recent supported releases; it is kept here for reference only.

You need dCache 1.9.12-25 or later, 2.2, or 2.4. For 1.9.12 and 2.2 you need to install the xrootd4j backport plugin from dCache, which provides some xrootd features of the 2.4 release in 1.9.12-25+ and 2.2.

# Download the xrootd4j-backport package
cd /tmp
wget -O xrootd4j-backport-2.4-SNAPSHOT.tar.gz
# Install into /usr/local/share/dcache/plugins
mkdir -p  /usr/local/share/dcache/plugins
cd /usr/local/share/dcache/plugins
tar -xzvf /tmp/xrootd4j-backport-2.4-SNAPSHOT.tar.gz

Install the cmstfc plugin.

cd /tmp
wget -O xrootd4j-cms-plugin-1.0-SNAPSHOT.tar.gz
cd  /usr/local/share/dcache/plugins
tar -xzvf /tmp/xrootd4j-cms-plugin-1.0-SNAPSHOT.tar.gz

In the layout file (found typically in /opt/d-cache/etc/layouts) of the door, you have to add these lines:

# Unauthenticated
# Authenticated according to gplazma
# xrootdPlugins=gplazma:gsi,authz:cms-tfc
# Change this according to your location:
# Must be coherent with your TFC in storage.xml:

On the xrootd federation host you can use the xrootd CMS TFC plugin by configuring it in /etc/xrootd/xrootd.cfg (or similar, e.g. /etc/xrootd/xrootd-clustered.cfg). Make sure that there is no oss.localroot statement, which you might still have from an old setup that worked with a prefix only.

# Integrate with CMS TFC, placed in /etc/xrootd/storage.xml
oss.namelib /usr/lib64/ file:/etc/xrootd/storage.xml?protocol=direct

Recent dCache Releases 2.6, 2.10, 2.13

For the host that runs the xrootd door you need the TFC plugin. It is provided in the dCache download area. The RPM can be installed like this:

rpm -ivh xrootd4j-cms-plugin-1.3.7-1.noarch.rpm

The following configuration parameters should be added to /etc/dcache/dcache.conf. The site name should be your CMS site name.

# The following two lines are the values for EU sites

The following should be added to the layout file of the machine(s) that host(s) the xrootd door(s), /etc/dcache/layouts/dcache-my-xrootd-door.layout.conf (adjust the host name). The location of the TFC file (typically named storage.xml) might be adjusted. The protocol might also be different for your TFC; in the end it is just an identifier.
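A sketch of the corresponding layout entry. The plugin list follows the old-style xrootdPlugins=gplazma:gsi,authz:cms-tfc line shown earlier on this page; the TFC path/protocol property names are assumptions and should be checked against the plugin's documentation for your release:

```ini
[xrootdDomain/xrootd]
# Enable GSI authentication plus the CMS TFC name-translation plugin:
xrootdPlugins=gplazma:gsi,authz:cms-tfc
# Location of the TFC and the protocol identifier used in it (assumed names):
xrootd.cms.tfc.path=/etc/dcache/storage.xml
xrootd.cms.tfc.protocol=direct
```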


Test your setup.

Configuring the Monitoring Plugin

dCache can emit monitoring information similar to the SLAC Xrootd implementation. The process of enabling this is documented on the following page:

Useful Links.

Xrootd, gPlazma2 and dcache-2.6.19-1


Jan 14th 2014, Fabio Martinelli: This is my personal experience with the combination [ SLAC Xrootd, gPlazma2 and dcache-2.6.19-1 ]. It was not approved by CMS; it simply worked for me and I thought it was worth reporting.


The dCache admin can avoid managing both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell for the dCache Xrootd cell as well. To achieve that, make the following configurations. Be aware that writes via xrootd are not allowed, because of the empty list xrootdAllowedWritePaths=

The Xrootd service requires /pnfs

The Xrootd service strictly requires the mount point /pnfs in order to find the /pnfs files (the dCache services themselves do not need /pnfs).
# mount | grep pnfs
dcachedb:/pnfs on /pnfs type nfs (ro,nolock,intr,noac,hard,nfsvers=3,addr=XXX.XXX.XXX.XXX)

Xrootd conf

[root@t3se02 dcache]# grep -v \# /etc/xrootd/xrootd-clustered.cfg | tr -s '\n'
xrd.port 1095
all.role server
all.manager any 1213
xrootd.redirect /
all.export / nostage readonly
cms.allow host *
xrootd.trace emsg login stall redirect
ofs.trace all
xrd.trace all
cms.trace all
#oss.namelib /usr/lib64/ file:/etc/xrootd/storage.xml?protocol=direct
oss.namelib  /usr/lib64/ file:/cvmfs/
ofs.authorize 1 
acc.authdb /etc/xrootd/Authfile
xrootd.seclib /usr/lib64/
xrootd.fslib /usr/lib64/
all.adminpath /var/run/xrootd
all.pidpath /var/run/xrootd
cms.delay startup 10
cms.fxhold 60s every 60s all sync
xrootd.monitor all auth flush io 60s ident 5m mbuff 8k rbuff 4k rnums 3 window 10s dest files io info user redir
all.sitename  T3_CH_PSI



storage.xml

<lfn-to-pfn protocol="direct" destination-match=".*" path-match="/+(.*)" result="/pnfs/$1"/>
<lfn-to-pfn protocol="dcap" destination-match=".*" chain="direct" path-match="/+(.*)" result="dcap://$1"/>
<lfn-to-pfn protocol="srm" destination-match=".*" chain="direct" path-match="/+(.*)" result="srm://$1"/>
<lfn-to-pfn protocol="srmv2" destination-match=".*" chain="direct" path-match="/+(.*)" result="srm://$1"/>
<lfn-to-pfn protocol="xrootd" destination-match=".*" path-match="/+store/(.*)" result="root://$1"/>
<pfn-to-lfn protocol="direct" destination-match=".*" path-match="/pnfs/(.*)" result="/$1"/>
<pfn-to-lfn protocol="dcap" destination-match=".*" chain="direct" path-match="dcap://(.*)" result="$1"/>
<pfn-to-lfn protocol="srm" destination-match=".*" path-match="srm://\?SFN=/pnfs/(.*)" result="/$1"/>
<pfn-to-lfn protocol="srmv2" destination-match=".*" path-match="srm://\?SFN=/pnfs/(.*)" result="/$1"/>
<pfn-to-lfn protocol="xrootd" destination-match=".*" path-match="root://(.*)" result="/store/$1"/>


dCache common conf

[root@t3se02 dcache]# grep -v \# /etc/dcache/dcache.conf | tr -s '\n'
chimera.db.user = chimera
chimera.db.url = jdbc:postgresql://
billingMaxTimeBeforeCommitInSecs=5
 endpoint for T3_CH_PSI
info-provider.dcache-architecture=multidisk
 = t3dcachedb04

dCache Xrootd node

The dCache Xrootd service is listening on the same node where I switched on the SLAC Xrootd service.
[root@t3se02 dcache]# grep -v \# /etc/dcache/layouts/t3se02.conf | tr -s '\n'

dCache gPlazma2 node

[root@t3dcachedb04 dcache]# grep -v \# /etc/dcache/layouts/t3dcachedb04.conf | tr -s '\n'


dCache gPlazma2 conf

[root@t3dcachedb04 dcache]# cat /etc/dcache/gplazma.conf
auth     optional   x509 
auth     optional   voms 
map      requisite  vorolemap 
map      requisite  authzdb 
session  requisite  authzdb

dCache gPlazma2 logs

During an xrdcp interaction you will find rows like these in the gPlazma2 logs:
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login AUTH voms] Certificate verification: Verifying certificate 'DC=ch,DC=cern,OU=computers,'
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP vorolemap] Source changed. Recreating map.
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP vorolemap] VOMS authorization successful for user with DN: /DC=com/DC=quovadisglobal/DC=grid/DC=switch/DC=users/C=CH/O=Paul-Scherrer-Institut (PSI)/CN=Fabio Martinelli and FQAN: /cms for user name: martinelli_f.
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP authzdb] Source changed. Recreating map.

xrdcp example

[martinel@lxplus0485 ~]$ xrdcp -d 1 -f root://  /tmp && rm -f /tmp/test.root
131121 11:37:32 21109 Xrd: main: (C) 2004-2011 by the XRootD collaboration. Version: v3.3.4
131121 11:37:32 21109 Xrd: Create: (C) 2004-2010 by the Xrootd group. XrdClient $Revision$ - Xrootd version: v3.3.4
131121 11:37:32 21109 Xrd: ShowUrls: The converted URLs count is 1
131121 11:37:32 21109 Xrd: ShowUrls: URL n.1: root://
131121 11:37:32 21109 Xrd: ShowUrls: The converted URLs count is 1
131121 11:37:32 21109 Xrd: ShowUrls: URL n.1: root://
sec_Client: protocol request for host token='&P=gsi,v:10300,c:ssl,ca:2f3fadf6.0'
sec_PM: Loading gsi protocol object from
131121 11:37:32 21109 secgsi_InitOpts: *** ------------------------------------------------------------ ***
131121 11:37:32 21109 secgsi_InitOpts:  Mode: client
131121 11:37:32 21109 secgsi_InitOpts:  Debug: 1
131121 11:37:32 21109 secgsi_InitOpts:  CA dir: /etc/grid-security/certificates/
131121 11:37:32 21109 secgsi_InitOpts:  CA verification level: 1
131121 11:37:32 21109 secgsi_InitOpts:  CRL dir: /etc/grid-security/certificates/
131121 11:37:32 21109 secgsi_InitOpts:  CRL extension: .r0
131121 11:37:32 21109 secgsi_InitOpts:  CRL check level: 1
131121 11:37:32 21109 secgsi_InitOpts:  CRL refresh time: 86400
131121 11:37:32 21109 secgsi_InitOpts:  Certificate: /afs/
131121 11:37:32 21109 secgsi_InitOpts:  Key: /afs/
131121 11:37:32 21109 secgsi_InitOpts:  Proxy file: //afs/
131121 11:37:32 21109 secgsi_InitOpts:  Proxy validity: 12:00
131121 11:37:32 21109 secgsi_InitOpts:  Proxy dep length: 0
131121 11:37:32 21109 secgsi_InitOpts:  Proxy bits: 512
131121 11:37:32 21109 secgsi_InitOpts:  Proxy sign option: 1
131121 11:37:32 21109 secgsi_InitOpts:  Proxy delegation option: 0
131121 11:37:32 21109 secgsi_InitOpts:  Allowed server names: [*/][/*]
131121 11:37:32 21109 secgsi_InitOpts:  Crypto modules: ssl
131121 11:37:32 21109 secgsi_InitOpts:  Ciphers: aes-128-cbc:bf-cbc:des-ede3-cbc
131121 11:37:32 21109 secgsi_InitOpts:  MDigests: sha1:md5
131121 11:37:32 21109 secgsi_InitOpts: *** ------------------------------------------------------------ ***
sec_PM: Using gsi protocol, args='v:10300,c:ssl,ca:2f3fadf6.0'
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:32 21109 Xrd: Open: Access to server granted.
131121 11:37:32 21109 Xrd: Open: Opening the remote file /store/user/martinelli_f/test.root
131121 11:37:32 21109 Xrd: Open: File open in progress.
131121 11:37:32 21112 Xrd: HandleServerError: Received redirection to []. Token=[]]. Opaque=[].
131121 11:37:33 21112 Xrd: HandleServerError: Received redirection to []. Token=[]]. Opaque=[].
131121 11:37:33 21112 Xrd: Connect: can't open connection to []
131121 11:37:33 21112 Xrd: XrdNetFile: Error creating logical connection to
131121 11:37:33 21112 Xrd: GoToAnotherServer: Error connecting to [
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to []. Token=[]]. Opaque=[].
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to []. Token=[]]. Opaque=[].
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to []. Token=[]]. Opaque=[].
sec_Client: protocol request for host token='&P=gsi,v:10200,c:ssl,ca:e72045ce'
sec_PM: Using gsi protocol, args='v:10200,c:ssl,ca:e72045ce'
131121 11:37:38 21112 cryptossl_X509::IsCA: certificate has 7 extensions
131121 11:37:38 21112 secgsi_VerifyCA: Warning: CA certificate not self-signed and integrity not checked: assuming OK (d800b164.0)
131121 11:37:38 21112 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to []. Token=[]]. Opaque=[&org.dcache.uuid=38ea88f9-6f38-47d8-95e3-76b90a1eacbc].
131121 11:37:38 21109 Xrd: main: root:// --> /tmp//test.root
131121 11:37:38 21119 Xrd: Read: Hole in the cache: offs=0, len=8388608
[xrootd] Total 460.67 MB	|====================| 100.00 % [27.4 MB/s]
Low level caching info:
 BytesSubmitted=483049545 BytesHit=483049545

XrdClient counters:
 ReadBytes:                 483049545
 WrittenBytes:              0
 WriteRequests:             0
 ReadRequests:              58
 ReadMisses:                1
 ReadHits:                  57
 ReadMissRate:              0.017241
 ReadVRequests:             0
 ReadVSubRequests:          0
 ReadVSubChunks:            0
 ReadVBytes:                0
 ReadVAsyncRequests:        0
 ReadVAsyncSubRequests:     0
 ReadVAsyncSubChunks:       0
 ReadVAsyncBytes:           0
 ReadAsyncRequests:         114
 ReadAsyncBytes:            474660937
Topic revision: r32 - 2019-08-06 - ChristophWissing