Difference: DcacheXrootd (1 vs. 30)

Revision 30 - 2016-11-04 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 319 to 319
 ofs.trace all xrd.trace all cms.trace all
Changed:
<
<
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
>
>
#oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/cvmfs/cms.cern.ch/SITECONF/local/PhEDEx/storage.xml?protocol=direct
 ofs.authorize 1 acc.authdb /etc/xrootd/Authfile xrootd.seclib /usr/lib64/libXrdSec.so
Line: 332 to 333
 xrootd.monitor all auth flush io 60s ident 5m mbuff 8k rbuff 4k rnums 3 window 10s dest files io info user redir CMS-AAA-EU-COLLECTOR.cern.ch:9330 all.sitename T3_CH_PSI
Changed:
<
<

storage.xml

[root@t3se02 xrootd]# ls -l /etc/xrootd/storage.xml
lrwxrwxrwx 1 root root 46 Oct 30 13:25 /etc/xrootd/storage.xml -> /swshare/cms/SITECONF/local/PhEDEx/storage.xml
>
>

/cvmfs/cms.cern.ch/SITECONF/local/PhEDEx/storage.xml

<storage-mapping>

<!--
  <lfn-to-pfn protocol="direct" destination-match=".*"
    path-match="/+store/PhEDEx_LoadTest07/LoadTest07_[^_]+_CSCS/[^/]+/[^/]+/(.*)_.*_.*"
       result="/pnfs/psi.ch/cms/trivcat/store/phedex_monarctest/monarctest_CSCS-DISK1/$1"/>
  -->

<lfn-to-pfn protocol="direct" destination-match=".*" path-match="/+(.*)" result="/pnfs/psi.ch/cms/trivcat/$1"/>

<lfn-to-pfn protocol="dcap" destination-match=".*" chain="direct" path-match="/+(.*)" result="dcap://t3se01.psi.ch/$1"/>

<lfn-to-pfn protocol="srm" destination-match=".*" chain="direct" path-match="/+(.*)" result="srm://t3se01.psi.ch:8443/srm/managerv1?SFN=/$1"/>

<lfn-to-pfn protocol="srmv2" destination-match=".*" chain="direct" path-match="/+(.*)" result="srm://t3se01.psi.ch:8443/srm/managerv2?SFN=/$1"/>

<!-- https://twiki.cern.ch/twiki/bin/view/Main/ConfiguringFallback  -->
<lfn-to-pfn protocol="xrootd" destination-match=".*" path-match="/+store/(.*)" result="root://xrootd-cms.infn.it//store/$1"/>

<pfn-to-lfn protocol="direct" destination-match=".*" path-match="/pnfs/psi.ch/cms/trivcat/(.*)" result="/$1"/>

<pfn-to-lfn protocol="dcap" destination-match=".*" chain="direct" path-match="dcap://t3se01.psi.ch(.*)" result="$1"/>

<pfn-to-lfn protocol="srm" destination-match=".*" path-match="srm://t3se01.psi.ch:8443/srm/managerv1\?SFN=/pnfs/psi.ch/cms/trivcat/(.*)" result="/$1"/>

<pfn-to-lfn protocol="srmv2" destination-match=".*" path-match="srm://t3se01.psi.ch:8443/srm/managerv2\?SFN=/pnfs/psi.ch/cms/trivcat/(.*)" result="/$1"/>

<!-- https://twiki.cern.ch/twiki/bin/view/Main/ConfiguringFallback  -->
<pfn-to-lfn protocol="xrootd" destination-match=".*" path-match="root://xrootd-cms.infn.it//store/(.*)" result="/store/$1"/>

</storage-mapping>
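The lfn-to-pfn rules above are ordinary regex match-and-substitute mappings. A minimal Python sketch of two of them (illustration only; the real translation is done by the TFC plugin, and Python's `re` semantics are assumed to be close enough for this example):

```python
import re

# Toy re-implementation of the "direct" and "xrootd" lfn-to-pfn rules
# from the storage.xml above (illustration only).
RULES = {
    "direct": (r"/+(.*)", r"/pnfs/psi.ch/cms/trivcat/\1"),
    "xrootd": (r"/+store/(.*)", r"root://xrootd-cms.infn.it//store/\1"),
}

def lfn_to_pfn(lfn, protocol="direct"):
    path_match, result = RULES[protocol]
    m = re.fullmatch(path_match, lfn)
    # expand() substitutes \1 etc. from the matched groups
    return m.expand(result) if m else None

print(lfn_to_pfn("/store/user/martinelli_f/test.root"))
# /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/test.root
```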

 

dCache common conf

[root@t3se02 dcache]# grep -v \# /etc/dcache/dcache.conf | tr -s '\n'

Revision 29 - 2016-10-05 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 295 to 295
 

Xrootd, gPlazma2 and dcache-2.6.19-1

Disclaimer

Changed:
<
<
14th Jan 2014 Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.19-1 ] and it was not approved by CMS, it simply worked for me and I thought it was worth to report my experiences here ( it took me a day of tests ).
>
>
Jan 14th 2014, Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.19-1 ]; it was not approved by CMS, it simply worked for me, and I thought it was worth reporting my experience.
 

Intro

Changed:
<
<
The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell; to achieve that I did the following configurations; be aware that writes by xrootd are not allowed because of the empty list xrootdAllowedWritePaths=
>
>
The dCache Admin can avoid managing both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell for the dCache Xrootd cell as well; to achieve that, apply the following configuration. Be aware that writes via xrootd are not allowed because of the empty list xrootdAllowedWritePaths=
 

The Xrootd service requires /pnfs

Changed:
<
<
The Xrootd service strictly requires the mount point /pnfs to find the /pnfs files ( the dCache services don't need it instead )
>
>
The Xrootd service strictly requires the mount point /pnfs in order to find the /pnfs files (the dCache services themselves do not need /pnfs)
# mount | grep pnfs

Revision 27 - 2016-09-09 - EnginEren

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 320 to 320
 xrd.trace all cms.trace all oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
Added:
>
>
ofs.authorize 1
acc.authdb /etc/xrootd/Authfile
 xrootd.seclib /usr/lib64/libXrdSec.so xrootd.fslib /usr/lib64/libXrdOfs.so all.adminpath /var/run/xrootd
Line: 327 to 329
 cms.delay startup 10 cms.fxhold 60s xrd.report xrootd.t2.ucsd.edu:9931 every 60s all sync
Changed:
<
<
xrootd.monitor all auth flush io 60s ident 5m mbuff 8k rbuff 4k rnums 3 window 10s dest files io info user redir xrootd.t2.ucsd.edu:9930
>
>
xrootd.monitor all auth flush io 60s ident 5m mbuff 8k rbuff 4k rnums 3 window 10s dest files io info user redir CMS-AAA-EU-COLLECTOR.cern.ch:9330
 all.sitename T3_CH_PSI

storage.xml

Revision 26 - 2016-08-15 - ChristophWissing

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 203 to 203
 Recent D-Cache releases provide a TFC plugin: you can send an LFN open request to the D-Cache xrootd door, and the door will resolve it to a PFN based on the TFC rules.
Changed:
<
<
This is quite new, released in Dec 2012, and not yet tested in production. Sites are encouraged to try it. Report your experiences to WAN access Hypernews.
>
>

Older dCache Releases (up to 2.4)

The following information is not valid for recent supported releases. It is kept just for reference.

 You need D-Cache 1.9.12-25 or beyond, 2.2 or 2.4. For the recent 1.9.12 and 2.2 you need to install the "Xrootd4j-Plugin" from D-Cache, which provides some xrootd features of the 2.4 release in 1.9.12-25+ and 2.2.
Line: 245 to 254
 # Integrate with CMS TFC, placed in /etc/xrootd/storage.xml oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
Added:
>
>

Recent dCache Releases 2.6, 2.10, 2.13

For the host that runs the xrootd door you need the TFC plugin. It is provided in the Download Area from dcache.org. The RPM can be installed like this

rpm -ivh xrootd4j-cms-plugin-1.3.7-1.noarch.rpm

The following configuration parameters should be added to /etc/dcache/dcache.conf. The site name should be your CMS site name.

pool.mover.xrootd.plugins=edu.uchicago.monitor
# The following two lines are the values for EU sites
xrootd.monitor.detailed=cms-aaa-eu-collector.cern.ch:9330:60
xrootd.monitor.summary=xrootd.t2.ucsd.edu:9931:60
xrootd.monitor.vo=CMS
xrootd.monitor.site=T2_XY_MySite
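Judging from the examples above, the detailed-monitor value packs collector host, port, and reporting interval into one colon-separated string (check the plugin documentation for the authoritative format). A tiny parser to illustrate that reading:

```python
def parse_collector(spec):
    """Split a host:port:interval monitoring spec (illustrative only;
    dCache parses this value internally)."""
    host, port, interval = spec.rsplit(":", 2)
    return host, int(port), int(interval)

print(parse_collector("cms-aaa-eu-collector.cern.ch:9330:60"))
```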

The following should be added to the layout file of the machine(s) that host(s) the xrootd door(s), /etc/dcache/layouts/dcache-my-xrootd-door.layout.conf (adjust the host name). The location of the TFC file (typically named storage.xml) might be adjusted. The protocol might also be different for your TFC; it is just an identifier in the end.

 [xrootd-${host.name}Domain]
 [xrootd-${host.name}Domain/xrootd]
 xrootd.plugins=gplazma:gsi,authz:cms-tfc
 xrootd.cms.tfc.path=/etc/dcache/storage.xml
 xrootd.cms.tfc.protocol=xrootd
  Test your setup.

Revision 25 - 2015-02-05 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 261 to 261
 14th Jan 2014 Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.19-1 ] and it was not approved by CMS, it simply worked for me and I thought it was worth to report my experiences here ( it took me a day of tests ).

Intro

The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell; to achieve that I did the following configurations; be aware that writes by xrootd are not allowed because of the empty list xrootdAllowedWritePaths=
Added:
>
>

The Xrootd service requires /pnfs

The Xrootd service strictly requires the mount point /pnfs in order to find the /pnfs files (the dCache services themselves do not need it).
# mount | grep pnfs
dcachedb:/pnfs on /pnfs type nfs (ro,nolock,intr,noac,hard,nfsvers=3,addr=XXX.XXX.XXX.XXX)
 

Xrootd conf

[root@t3se02 dcache]# grep -v \# /etc/xrootd/xrootd-clustered.cfg | tr -s '\n'
xrd.port 1095
all.role server

Changed:
<
<
all.manager xrootd.ba.infn.it:1213
>
>
all.manager any xrootd-cms.infn.it+ 1213
 xrootd.redirect t3se02.psi.ch:1094 / all.export / nostage readonly cms.allow host *

Revision 22 - 2014-01-14 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 256 to 256
 

Useful Links.

Changed:
<
<

Xrootd, gPlazma2 and dcache-2.6.16-1

>
>

Xrootd, gPlazma2 and dcache-2.6.19-1

 

Disclaimer

Changed:
<
<
21-11-2013 Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.16-1 ], it was not tested by other sites, it was not approved by CMS, it simply worked for me and I thought it was worth to report my experiences here ( it took me 1 day of tests ).
>
>
14th Jan 2014 Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.19-1 ] and it was not approved by CMS, it simply worked for me and I thought it was worth to report my experiences here ( it took me a day of tests ).
 

Intro

Changed:
<
<
The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell; to achieve that I did the following configurations on my T3; be aware that writes by xrootd are not allowed because of the empty list xrootdAllowedWritePaths=
>
>
The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell; to achieve that I did the following configurations; be aware that writes by xrootd are not allowed because of the empty list xrootdAllowedWritePaths=
 

Xrootd conf


Revision 21 - 2013-11-21 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 260 to 260
 

Disclaimer

21-11-2013 Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.16-1 ], it was not tested by other sites, it was not approved by CMS, it simply worked for me and I thought it was worth to report my experiences here ( it took me 1 day of tests ).

Intro

Changed:
<
<
The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell; to achieve that I did the following configurations on my T3; be aware that writes by xrootd are not allowed because of the empty list xrootdAllowedWritePaths
>
>
The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell; to achieve that I did the following configurations on my T3; be aware that writes by xrootd are not allowed because of the empty list xrootdAllowedWritePaths=
 

Xrootd conf


Line: 492 to 492
 131121 11:37:38 21112 cryptossl_X509::IsCA: certificate has 7 extensions 131121 11:37:38 21112 secgsi_VerifyCA: Warning: CA certificate not self-signed and integrity not checked: assuming OK (d800b164.0) 131121 11:37:38 21112 cryptossl_X509::IsCA: certificate has 8 extensions
Changed:
<
<
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [192.33.123.52:20533]. Token=[]]. Opaque=[&org.dcache.uuid=38ea88f9-6f38-47d8-95e3-76b90a1eacbc].
>
>
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [192.33.123.52:20533]. Token=[]]. Opaque=[&org.dcache.uuid=38ea88f9-6f38-47d8-95e3-76b90a1eacbc].
 
131121 11:37:38 21109 Xrd: main: root://xrootd.ba.infn.it//store/user/martinelli_f/test.root -->
/tmp//test.root 131121 11:37:38 21119 Xrd: Read: Hole in the cache: offs=0, len=8388608 [xrootd] Total 460.67 MB |====================| 100.00 % [27.4 MB/s]

Revision 20 - 2013-11-21 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 256 to 256
 

Useful Links.

Changed:
<
<

Xrootd, gPlazma2 and dcache-2.6.11-1

>
>

Xrootd, gPlazma2 and dcache-2.6.16-1

 

Disclaimer

Changed:
<
<
30-10-2013 Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.11-1 ], it was not tested by other sites, it was not approved by CMS, it simply worked for me and I thought it was worth to report my experience here ( it took me 1 day of tests ).
>
>
21-11-2013 Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.16-1 ], it was not tested by other sites, it was not approved by CMS, it simply worked for me and I thought it was worth to report my experiences here ( it took me 1 day of tests ).
 

Intro

The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell; to achieve that I did the following configurations on my T3; be aware that writes by xrootd are not allowed because of the empty list xrootdAllowedWritePaths

Xrootd conf

Line: 388 to 388
 xrootdMoverTimeout=28800000 xrootdPlugins=gplazma:gsi,authz:cms-tfc xrootd.cms.tfc.path=/etc/xrootd/storage.xml
Changed:
<
<
xrootd.cms.tfc.protocol=direct
>
>
xrootd.cms.tfc.protocol=direct
 
<!--/twistyPlugin-->

dCache gPlazma2 node


Line: 436 to 436
 01.29.20 [pool-2-thread-28] [Xrootd-t3se02 Login MAP vorolemap] VOMS authorization successful for user with DN: /DC=com/DC=quovadisglobal/DC=grid/DC=switch/DC=users/C=CH/O=Paul-Scherrer-Institut (PSI)/CN=Fabio Martinelli and FQAN: /cms for user name: martinelli_f. 01.29.20 [pool-2-thread-28] [Xrootd-t3se02 Login MAP authzdb] Source changed. Recreating map.
Added:
>
>

xrdcp example

[martinel@lxplus0485 ~]$ xrdcp -d 1 -f root://xrootd.ba.infn.it//store/user/martinelli_f/test.root  /tmp && rm -f /tmp/test.root
131121 11:37:32 21109 Xrd: main: (C) 2004-2011 by the XRootD collaboration. Version: v3.3.4
131121 11:37:32 21109 Xrd: Create: (C) 2004-2010 by the Xrootd group. XrdClient $Revision$ - Xrootd version: v3.3.4
131121 11:37:32 21109 Xrd: ShowUrls: The converted URLs count is 1
131121 11:37:32 21109 Xrd: ShowUrls: URL n.1: root://xrootd.ba.infn.it:1094//store/user/martinelli_f/test.root.
131121 11:37:32 21109 Xrd: ShowUrls: The converted URLs count is 1
131121 11:37:32 21109 Xrd: ShowUrls: URL n.1: root://xrootd.ba.infn.it:1094//store/user/martinelli_f/test.root.
sec_Client: protocol request for host xrootd.ba.infn.it token='&P=gsi,v:10300,c:ssl,ca:2f3fadf6.0'
sec_PM: Loading gsi protocol object from libXrdSecgsi.so
131121 11:37:32 21109 secgsi_InitOpts: *** ------------------------------------------------------------ ***
131121 11:37:32 21109 secgsi_InitOpts:  Mode: client
131121 11:37:32 21109 secgsi_InitOpts:  Debug: 1
131121 11:37:32 21109 secgsi_InitOpts:  CA dir: /etc/grid-security/certificates/
131121 11:37:32 21109 secgsi_InitOpts:  CA verification level: 1
131121 11:37:32 21109 secgsi_InitOpts:  CRL dir: /etc/grid-security/certificates/
131121 11:37:32 21109 secgsi_InitOpts:  CRL extension: .r0
131121 11:37:32 21109 secgsi_InitOpts:  CRL check level: 1
131121 11:37:32 21109 secgsi_InitOpts:  CRL refresh time: 86400
131121 11:37:32 21109 secgsi_InitOpts:  Certificate: /afs/cern.ch/user/m/martinel/.globus/usercert.pem
131121 11:37:32 21109 secgsi_InitOpts:  Key: /afs/cern.ch/user/m/martinel/.globus/userkey.pem
131121 11:37:32 21109 secgsi_InitOpts:  Proxy file: //afs/cern.ch/user/m/martinel/.x509up_u17202
131121 11:37:32 21109 secgsi_InitOpts:  Proxy validity: 12:00
131121 11:37:32 21109 secgsi_InitOpts:  Proxy dep length: 0
131121 11:37:32 21109 secgsi_InitOpts:  Proxy bits: 512
131121 11:37:32 21109 secgsi_InitOpts:  Proxy sign option: 1
131121 11:37:32 21109 secgsi_InitOpts:  Proxy delegation option: 0
131121 11:37:32 21109 secgsi_InitOpts:  Allowed server names: [*/][/*]
131121 11:37:32 21109 secgsi_InitOpts:  Crypto modules: ssl
131121 11:37:32 21109 secgsi_InitOpts:  Ciphers: aes-128-cbc:bf-cbc:des-ede3-cbc
131121 11:37:32 21109 secgsi_InitOpts:  MDigests: sha1:md5
131121 11:37:32 21109 secgsi_InitOpts: *** ------------------------------------------------------------ ***
sec_PM: Using gsi protocol, args='v:10300,c:ssl,ca:2f3fadf6.0'
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:32 21109 Xrd: Open: Access to server granted.
131121 11:37:32 21109 Xrd: Open: Opening the remote file /store/user/martinelli_f/test.root
131121 11:37:32 21109 Xrd: Open: File open in progress.
131121 11:37:32 21112 Xrd: HandleServerError: Received redirection to [t3se01.psi.ch:1095]. Token=[]]. Opaque=[].
131121 11:37:33 21112 Xrd: HandleServerError: Received redirection to [t3se01.psi.ch:1094]. Token=[]]. Opaque=[].
131121 11:37:33 21112 Xrd: Connect: can't open connection to [t3se01.psi.ch:1094]
131121 11:37:33 21112 Xrd: XrdNetFile: Error creating logical connection to t3se01.psi.ch:1094
131121 11:37:33 21112 Xrd: GoToAnotherServer: Error connecting to [t3se01.psi.ch:1094
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [xrootd.ba.infn.it:1094]. Token=[]]. Opaque=[].
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [t3se02.psi.ch:1095]. Token=[]]. Opaque=[].
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [t3se02.psi.ch:1094]. Token=[]]. Opaque=[].
sec_Client: protocol request for host t3se02.psi.ch token='&P=gsi,v:10200,c:ssl,ca:e72045ce'
sec_PM: Using gsi protocol, args='v:10200,c:ssl,ca:e72045ce'
131121 11:37:38 21112 cryptossl_X509::IsCA: certificate has 7 extensions
131121 11:37:38 21112 secgsi_VerifyCA: Warning: CA certificate not self-signed and integrity not checked: assuming OK (d800b164.0)
131121 11:37:38 21112 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [192.33.123.52:20533]. Token=[]]. Opaque=[&org.dcache.uuid=38ea88f9-6f38-47d8-95e3-76b90a1eacbc].
131121 11:37:38 21109 Xrd: main: root://xrootd.ba.infn.it//store/user/martinelli_f/test.root -->
/tmp//test.root 131121 11:37:38 21119 Xrd: Read: Hole in the cache: offs=0, len=8388608 [xrootd] Total 460.67 MB |====================| 100.00 % [27.4 MB/s] Low level caching info: StallsRate=0.797909 StallsCount=229 ReadsCounter=287 BytesUsefulness=1 BytesSubmitted=483049545 BytesHit=483049545 XrdClient counters: ReadBytes: 483049545 WrittenBytes: 0 WriteRequests: 0 ReadRequests: 58 ReadMisses: 1 ReadHits: 57 ReadMissRate: 0.017241 ReadVRequests: 0 ReadVSubRequests: 0 ReadVSubChunks: 0 ReadVBytes: 0 ReadVAsyncRequests: 0 ReadVAsyncSubRequests: 0 ReadVAsyncSubChunks: 0 ReadVAsyncBytes: 0 ReadAsyncRequests: 114 ReadAsyncBytes: 474660937
 
META FILEATTACHMENT attachment="XrootdDcacheIntegration.png" attr="" comment="" date="1299694394" name="XrootdDcacheIntegration.png" path="XrootdDcacheIntegration.png" size="45366" stream="XrootdDcacheIntegration.png" tmpFilename="/usr/tmp/CGItemp29671" user="bbockelm" version="1"
META FILEATTACHMENT attachment="XroodDcacheIntegrationV2.png" attr="" comment="" date="1320200246" name="XroodDcacheIntegrationV2.png" path="XroodDcacheIntegrationV2.png" size="29202" stream="XroodDcacheIntegrationV2.png" tmpFilename="/usr/tmp/CGItemp42552" user="bbockelm" version="3"

Revision 19 - 2013-11-12 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 275 to 275
 ofs.trace all xrd.trace all cms.trace all
Changed:
<
<
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
>
>
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
 xrootd.seclib /usr/lib64/libXrdSec.so xrootd.fslib /usr/lib64/libXrdOfs.so all.adminpath /var/run/xrootd
Line: 386 to 386
 xrootdAllowedReadPaths=/ xrootdAllowedWritePaths= xrootdMoverTimeout=28800000
Changed:
<
<
xrootdPlugins=gplazma:gsi,authz:cms-tfc
>
>
xrootdPlugins=gplazma:gsi,authz:cms-tfc
 xrootd.cms.tfc.path=/etc/xrootd/storage.xml xrootd.cms.tfc.protocol=direct
<!--/twistyPlugin-->

Revision 18 - 2013-11-04 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 258 to 258
 

Xrootd, gPlazma2 and dcache-2.6.11-1

Disclaimer

Changed:
<
<
30-10-2013 Fabio Martinelli: This is my personal experience with the triple [ Xrootd, gPlazma2 and dcache-2.6.11-1 ], it was not tested by other sites, it was not approved by CMS, it simply worked for me and I thought it was worth to be reported here ( took me 1 day of tests ).
>
>
30-10-2013 Fabio Martinelli: This is my personal experience with the triple [ SLAC Xrootd, gPlazma2 and dcache-2.6.11-1 ], it was not tested by other sites, it was not approved by CMS, it simply worked for me and I thought it was worth to report my experience here ( it took me 1 day of tests ).
 

Intro

Changed:
<
<
The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell: to achieve that I made the following configurations:
>
>
The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell; to achieve that I did the following configurations on my T3; be aware that writes by xrootd are not allowed because of the empty list xrootdAllowedWritePaths
 

Xrootd conf


Line: 350 to 350
 dcache.log.server.host=t3dcachedb04 alarms.store.db.type=rdbms webadmin.alarm.cleaner.enabled=false
Deleted:
<
<
generatePlots=true
 poolqplots.enabled=true dcache.log.mode=new
Line: 411 to 410
 [${host.name}-Domain-nfs/nfsv3] [${host.name}-Domain-httpd] authenticated=false
Added:
>
>
billingToDb=yes
generatePlots=true
 [${host.name}-Domain-httpd/httpd] [${host.name}-Domain-httpd/statistics] [${host.name}-Domain-httpd/billing]

Revision 17 - 2013-10-30 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 256 to 256
 

Useful Links.

Added:
>
>

Xrootd, gPlazma2 and dcache-2.6.11-1

Disclaimer

30-10-2013 Fabio Martinelli: This is my personal experience with the triple [ Xrootd, gPlazma2 and dcache-2.6.11-1 ], it was not tested by other sites, it was not approved by CMS, it simply worked for me and I thought it was worth to be reported here ( took me 1 day of tests ).

Intro

The dCache 2.6 Admin can avoid to manage both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell also for the dCache Xrootd cell: to achieve that I made the following configurations:

Xrootd conf

[root@t3se02 dcache]# grep -v \# /etc/xrootd/xrootd-clustered.cfg | tr -s '\n'
xrd.port 1095
all.role server
all.manager xrootd.ba.infn.it:1213
xrootd.redirect t3se02.psi.ch:1094 /
all.export / nostage readonly
cms.allow host *
xrootd.trace emsg login stall redirect
ofs.trace all
xrd.trace all
cms.trace all
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
xrootd.seclib /usr/lib64/libXrdSec.so
xrootd.fslib /usr/lib64/libXrdOfs.so
all.adminpath /var/run/xrootd
all.pidpath /var/run/xrootd
cms.delay startup 10
cms.fxhold 60s
xrd.report xrootd.t2.ucsd.edu:9931 every 60s all sync
xrootd.monitor all auth flush io 60s ident 5m mbuff 8k rbuff 4k rnums 3 window 10s dest files io info user redir xrootd.t2.ucsd.edu:9930

storage.xml

[root@t3se02 xrootd]# ls -l /etc/xrootd/storage.xml
lrwxrwxrwx 1 root root 46 Oct 30 13:25 /etc/xrootd/storage.xml -> /swshare/cms/SITECONF/local/PhEDEx/storage.xml

dCache common conf

[root@t3se02 dcache]# grep -v \# /etc/dcache/dcache.conf | tr -s '\n'
dcache.layout=${host.name}
dcache.namespace=chimera
chimera.db.user = chimera
chimera.db.url = jdbc:postgresql://t3dcachedb04.psi.ch/chimera?prepareThreshold=3
dcache.user=dcache
dcache.paths.billing=/var/log/dcache
pnfsVerifyAllLookups=true
dcache.java.memory.heap=2048m
dcache.java.memory.direct=2048m
net.inetaddr.lifetime=1800
net.wan.port.min=20000
net.wan.port.max=25000
net.lan.port.min=33115
net.lan.port.max=33145
broker.host=t3se02.psi.ch
poolIoQueue=wan,xrootd
waitForFiles=${path}/setup
lfs=precious
tags=hostname=${host.name}
metaDataRepository=org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
useGPlazmaAuthorizationModule=false
useGPlazmaAuthorizationCell=true
gsiftpIoQueue=wan
xrootdIoQueue=xrootd
remoteGsiftpIoQueue=wan
srmDatabaseHost=t3dcachedb04.psi.ch
srmDbName=dcache
srmDbUser=srmdcache
srmDbPassword=
srmSpaceManagerEnabled=yes
srmDbLogEnabled=true
srmRequestHistoryDatabaseEnabled=true
ftpPort=${portBase}126
kerberosFtpPort=${portBase}127
spaceManagerDatabaseHost=t3dcachedb04.psi.ch
pinManagerDbHost=t3dcachedb04.psi.ch
defaultPnfsServer=t3dcachedb04.psi.ch
SpaceManagerReserveSpaceForNonSRMTransfers=true
SpaceManagerLinkGroupAuthorizationFileName=/etc/dcache/LinkGroupAuthorization.conf
dcache.log.dir=/var/log/dcache
billingDbHost=t3dcachedb04.psi.ch
billingDbUser=srmdcache
billingDbPass=
billingDbName=billing
billingMaxInsertsBeforeCommit=10000
billingMaxTimeBeforeCommitInSecs=5
info-provider.site-unique-id=T3_CH_PSI
info-provider.se-unique-id=t3se02.psi.ch
info-provider.se-name=SRM endpoint for T3_CH_PSI
info-provider.glue-se-status=Production
info-provider.dcache-quality-level=production
info-provider.dcache-architecture=multidisk
info-provider.http.host = t3dcachedb04
poolmanager.cache-hit-messages.enabled=true
dcache.log.server.host=t3dcachedb04
alarms.store.db.type=rdbms
webadmin.alarm.cleaner.enabled=false
generatePlots=true
poolqplots.enabled=true
dcache.log.mode=new

dCache Xrootd node

The dCache Xrootd service is listening on the same node where I switched on the SLAC Xrootd service.
[root@t3se02 dcache]# grep -v \# /etc/dcache/layouts/t3se02.conf | tr -s '\n'
dcache.log.level.file=debug
[${host.name}-Domain-dcap]
[${host.name}-Domain-dcap/dcap]
[${host.name}-Domain-gridftp]
[${host.name}-Domain-gridftp/gridftp]
[${host.name}-Domain-gsidcap]
[${host.name}-Domain-gsidcap/gsidcap]
[${host.name}-Domain-srm]
[${host.name}-Domain-srm/srm]
[${host.name}-Domain-srm/spacemanager]
[${host.name}-Domain-srm/transfermanagers]
[${host.name}-Domain-utility]
[${host.name}-Domain-utility/gsi-pam]
[${host.name}-Domain-utility/pinmanager]
[${host.name}-Domain-dir]
[${host.name}-Domain-dir/dir]
[${host.name}-Domain-info]
[${host.name}-Domain-info/info]
[dCacheDomain]
[dCacheDomain/poolmanager]
[dCacheDomain/broadcast]
[dCacheDomain/loginbroker]
[dCacheDomain/topo]
[${host.name}-Domain-xrootd]
[${host.name}-Domain-xrootd/xrootd]
xrootdPort=1094
xrootdAllowedReadPaths=/
xrootdAllowedWritePaths=
xrootdMoverTimeout=28800000 
xrootdPlugins=gplazma:gsi,authz:cms-tfc
xrootd.cms.tfc.path=/etc/xrootd/storage.xml
xrootd.cms.tfc.protocol=direct

dCache gPlazma2 node

[root@t3dcachedb04 dcache]# grep -v \# /etc/dcache/layouts/t3dcachedb04.conf | tr -s '\n'

dcache.log.level.file=debug
[${host.name}-Domain-gPlazma]
[${host.name}-Domain-gPlazma/gplazma]
[${host.name}-Domain-namespace]
[${host.name}-Domain-namespace/pnfsmanager]
[${host.name}-Domain-namespace/cleaner]
[${host.name}-Domain-adminDoor]
[${host.name}-Domain-adminDoor/admin]
sshVersion=ssh2
admin.ssh2AdminPort=22224
adminHistoryFile=/var/log/dcache/adminshell_history
[${host.name}-Domain-nfs]
dcache.user=root
[${host.name}-Domain-nfs/nfsv3]
[${host.name}-Domain-httpd]
authenticated=false
[${host.name}-Domain-httpd/httpd]
[${host.name}-Domain-httpd/statistics]
[${host.name}-Domain-httpd/billing]
[${host.name}-Domain-httpd/srm-loginbroker]
[${host.name}-Domain-alarms]
[${host.name}-Domain-alarms/alarms]

dCache gPlazma2 conf

[root@t3dcachedb04 dcache]# cat /etc/dcache/gplazma.conf
auth     optional   x509 
auth     optional   voms 
map      requisite  vorolemap 
map      requisite  authzdb 
session  requisite  authzdb
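The `optional`/`requisite` controls in gplazma.conf behave PAM-style: a failing optional plugin is ignored, while a failing requisite plugin aborts the phase immediately. A toy sketch of that control flow (illustration only; dCache's real gPlazma2 engine evaluates auth, map, and session phases separately and is more involved):

```python
def run_phase(stack, outcome):
    """stack: list of (control, plugin); outcome: plugin name -> bool.
    Returns True if the phase as a whole succeeds (toy semantics)."""
    ok = False
    for control, plugin in stack:
        if outcome.get(plugin, False):
            ok = True
        elif control == "requisite":
            return False  # a failed requisite plugin aborts the phase
        # failed "optional" plugins are simply ignored
    return ok

# Stack mirroring the gplazma.conf above
stack = [("optional", "x509"), ("optional", "voms"),
         ("requisite", "vorolemap"), ("requisite", "authzdb")]
```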

dCache gPlazma2 logs

During an xrdcp interaction you will find rows like these in the gPlazma2 logs:
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login AUTH voms] Certificate verification: Verifying certificate 'DC=ch,DC=cern,OU=computers,CN=voms.cern.ch'
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP vorolemap] Source changed. Recreating map.
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP vorolemap] VOMS authorization successful for user with DN: /DC=com/DC=quovadisglobal/DC=grid/DC=switch/DC=users/C=CH/O=Paul-Scherrer-Institut (PSI)/CN=Fabio Martinelli and FQAN: /cms for user name: martinelli_f.
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP authzdb] Source changed. Recreating map.
 

Revision 16 - 2013-10-22 - FabioMartinelli

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 52 to 52
 

Configuration

First, setup your dCache Xrootd door according to the instructions in the dCache book.

Changed:
<
<
For the simple unauthenticated access it sufficient to add a proper prefix in order to make sure you set the root path so dCache will do the LFN to PFN translation.
>
>
For simple unauthenticated access it is sufficient to add a proper prefix: set the root path so that dCache will do the LFN-to-PFN translation.
 Add something according to your local setup to the layout file of the Xrootd door.
xrootdRootPath=/pnfs/example.com/data/cms
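With only a root path set, the door's translation is plain prefix concatenation, in contrast to the regex-driven TFC rules. A one-line sketch (the example.com path is the placeholder from the layout snippet above, not a real site):

```python
# With xrootdRootPath set, the door simply prepends it to the
# requested LFN (sketch; example.com is a placeholder path).
def resolve(lfn, root_path="/pnfs/example.com/data/cms"):
    return root_path + lfn

print(resolve("/store/relval/sample.root"))
# /pnfs/example.com/data/cms/store/relval/sample.root
```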
Line: 141 to 141
 # xrootdMoverTimeout=28800000
Changed:
<
<
For GPLAZMA you need to adjust some settings in /opt/d-cache/etc/dcachesrm-gplazma.policy:
>
>
For GPLAZMA you need to adjust some settings stored in /opt/d-cache/etc/dcachesrm-gplazma.policy or in /etc/dcache/dcachesrm-gplazma.policy:
 
 gplazmalite-vorole-mapping="ON"
# All others are OFF
Line: 239 to 239
 xrootd.cms.tfc.protocol=root
Changed:
<
<
On the xrootd federation host you can use the xrootd CMS TFC plugin, by configuring it in /etc/xrootd/xrootd.cfg (or similar). Make sure that there is no oss.localroot statement, which you might have from an old setup that works with a prefix only.
>
>
On the xrootd federation host you can use the xrootd CMS TFC plugin, by configuring it in /etc/xrootd/xrootd.cfg (or similar like /etc/xrootd/xrootd-clustered.cfg ). Make sure that there is no oss.localroot statement, which you might have from an old setup that works with a prefix only.
 
# Integrate with CMS TFC, placed in /etc/xrootd/storage.xml

Revision 14 2013-03-18 - BrianBockelman

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 19 to 19
 

Installation

Changed:
<
<
First, install the OSG software repository:
>
>
First, install the OSG software repository. For SL6:
 
Changed:
<
<
rpm -Uhv http://repo.grid.iu.edu/osg-release-latest.rpm
>
>
rpm -Uhv http://repo.grid.iu.edu/osg-el6-release-latest.rpm

For SL5:

rpm -Uhv http://repo.grid.iu.edu/osg-el5-release-latest.rpm
 

Next, install the xrootd RPM. This will add the xrootd user if it does not already exist - sites using centralized account management may want to create this user beforehand.

Line: 32 to 37
  Warning: The CMS transition to 3.1.0 from previous versions is not a clean upgrade (as we switched to the CERN-based packaging). We believe this is a one-time-only event. Unfortunately, folks will need to remove all local copies of xrootd before installing if you have xrootd < 3.1.0.
Changed:
<
<
If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG repo:
>
>
If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG repo. For SL6:
yum install fetch-crl3 osg-ca-certs

For SL5:

 
yum install fetch-crl osg-ca-certs
Changed:
<
<
If this is a brand new host, you may need to run fetch-crl to update CRLs before starting Xrootd.
>
>
If this is a brand new host, you may need to run fetch-crl or fetch-crl3 to update CRLs before starting Xrootd.
 

Configuration

Revision 13 2013-01-14 - ChristophWissing

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 189 to 189
 /opt/d-cache/billing/2012/09/
Added:
>
>

Configuring the CMS TFC Plugin in D-Cache

Recent D-Cache releases provide a TFC plugin, so you can send an LFN open request to the D-Cache xrootd door and the door will resolve it to a PFN based on the TFC rules.

This is quite new, released in Dec 2012, and not yet tested in production. Sites are encouraged to try it. Report your experiences to WAN access Hypernews.

You need D-Cache 1.9.12-25 or later, 2.2, or 2.4. For the recent 1.9.12 and 2.2 releases you need to install the "Xrootd4j-Plugin" from D-Cache, which provides some xrootd features of the 2.4 release in 1.9.12-25+ and 2.2.

# Download the xrootd4j-backport package
cd /tmp
wget -O xrootd4j-backport-2.4-SNAPSHOT.tar.gz http://ftp1.ndgf.org:2880/behrmann/downloads/xrootd4j-backport-2.4-SNAPSHOT.tar.gz
# Install into /usr/local/share/dcache/plugins
mkdir -p  /usr/local/share/dcache/plugins
cd /usr/local/share/dcache/plugins
tar -xzvf /tmp/xrootd4j-backport-2.4-SNAPSHOT.tar.gz

Install the cmstfc plugin.

cd /tmp
wget -O xrootd4j-cms-plugin-1.0-SNAPSHOT.tar.gz https://github.com/downloads/dCache/xrootd4j-cms-plugin/xrootd4j-cms-plugin-1.0-SNAPSHOT.tar.gz
cd  /usr/local/share/dcache/plugins
tar -xzvf /tmp/xrootd4j-cms-plugin-1.0-SNAPSHOT.tar.gz

In the layout file (found typically in /opt/d-cache/etc/layouts) of the door, you have to add these lines:

# Unauthenticated
xrootdPlugins=gplazma:none,authz:cms-tfc
# Authenticated according to gplazma
# xrootdPlugins=gplazma:gsi,authz:cms-tfc
# Change this according to your location:
xrootd.cms.tfc.path=/etc/dcache/storage.xml
# Must be coherent with your TFC in storage.xml:
xrootd.cms.tfc.protocol=root
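The protocol named in xrootd.cms.tfc.protocol must correspond to a rule in the referenced storage.xml. A minimal sketch of such a rule, using a placeholder site prefix (your real rule lives in your site's storage.xml):

```xml
<storage-mapping>
  <!-- Rule matching xrootd.cms.tfc.protocol=root; the prefix is a placeholder -->
  <lfn-to-pfn protocol="root" destination-match=".*"
              path-match="/+(.*)"
              result="/pnfs/example.com/data/cms/$1"/>
</storage-mapping>
```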

On the xrootd federation host you can use the xrootd CMS TFC plugin, by configuring it in /etc/xrootd/xrootd.cfg (or similar). Make sure that there is no oss.localroot statement, which you might have from an old setup that works with a prefix only.

# Integrate with CMS TFC, placed in /etc/xrootd/storage.xml
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
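Since a leftover oss.localroot conflicts with the TFC name translation, a quick grep-based check can catch it. This is a hypothetical helper, demonstrated here on a temporary file rather than your real config:

```shell
# has_localroot FILE -> exit 0 if an active oss.localroot directive is present
has_localroot() {
  grep -qE '^[[:space:]]*oss\.localroot' "$1"
}

# Demo on a temporary file containing only the TFC plugin line
cfg=$(mktemp)
printf 'oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct\n' > "$cfg"
if has_localroot "$cfg"; then
  echo "conflict: remove oss.localroot before enabling the TFC plugin"
else
  echo "ok: no active oss.localroot"
fi
rm -f "$cfg"
```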

Test your setup.

 

Useful Links.

Revision 12 2013-01-07 - JieChen

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Line: 189 to 189
 /opt/d-cache/billing/2012/09/
Added:
>
>

Useful Links.

 
META FILEATTACHMENT attachment="XrootdDcacheIntegration.png" attr="" comment="" date="1299694394" name="XrootdDcacheIntegration.png" path="XrootdDcacheIntegration.png" size="45366" stream="XrootdDcacheIntegration.png" tmpFilename="/usr/tmp/CGItemp29671" user="bbockelm" version="1"
META FILEATTACHMENT attachment="XroodDcacheIntegrationV2.png" attr="" comment="" date="1320200246" name="XroodDcacheIntegrationV2.png" path="XroodDcacheIntegrationV2.png" size="29202" stream="XroodDcacheIntegrationV2.png" tmpFilename="/usr/tmp/CGItemp42552" user="bbockelm" version="3"

Revision 10 2012-10-05 - ChristophWissing

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Added:
>
>
 

Joining a dCache-based SE to the Xrootd service.

This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd-itb.unl.edu. This page assumes three things:

Line: 37 to 39
 

Configuration

Changed:
<
<
First, setup your dCache Xrootd door according to the instructions in the dCache book. Make sure you set the root path so dCache will do the LFN to PFN translation.
>
>
First, set up your dCache Xrootd door according to the instructions in the dCache book. For simple unauthenticated access it is sufficient to add a proper prefix, i.e. to set the root path so that dCache will do the LFN-to-PFN translation. Add something according to your local setup to the layout file of the Xrootd door.
 
xrootdRootPath=/pnfs/example.com/data/cms
Added:
>
>
Configuring Authenticated Access is a bit more complex.
 Next, cp /etc/xrootd/xrootd.sample.dcache.cfg /etc/xrootd/xrootd-clustered.cfg and edit the resulting config file.
oss.localroot /pnfs/example.com/data/cms
Line: 103 to 109
  where /store/foo/bar is unique to your site
Added:
>
>

Configuring Authenticated Access

Authentication in D-Cache is (usually) done using GPLAZMA. The door for GSI-enabled access needs a host certificate. This howto covers GPLAZMA version 1 only. Since you need special rules for the Xrootd door used for the CMS redirector, you need to configure usage of a GPLAZMA module for this door, while the remaining instance can use the same GPLAZMA cell. Note that you need a recent 1.9.12 release of D-Cache; 1.9.12-21 is known to work. (There are early 1.9.12 releases that had issues with configuring module over cell usage.) Add the following to the layout file (usually found in /opt/d-cache/etc/layout/)

useGPlazmaAuthorizationCell=false
useGPlazmaAuthorizationModule=true
xrootdIsReadOnly=true
# Adjust the path according to your site:
xrootdRootPath=/pnfs/desy.de/cms/tier2
xrootdAuthNPlugin=gsi
# You might consider to have xrootd in a selected queue (adjust to your setup):
# xrootdIoQueue=dcap-q 
# You might want to put timeouts - optimal value is matter of tuning
# xrootdMoverTimeout=28800000 

For GPLAZMA you need to adjust some settings in /opt/d-cache/etc/dcachesrm-gplazma.policy:

 gplazmalite-vorole-mapping="ON"
# All others are OFF

[...]

# Built-in gPLAZMAlite grid VO role mapping
gridVoRolemapPath="/etc/grid-security/grid-vorolemap"
gridVoRoleStorageAuthzPath="/etc/grid-security/storage-authzdb" 

Put proper mappings and usernames in /etc/grid-security/grid-vorolemap; this needs adaptation to your local setup. (Only the CMS part is shown; if other VOs are needed on the Xrootd door, add them accordingly.)

## CMS ##
# Need mapping for each VOMS Group(!), roles only for special mapping
"*" "/cms/Role=lcgadmin" cmsusr001
"*" "/cms/Role=production" cmsprd001
"*" "/cms/Role=priorityuser" cmsana001
"*" "/cms/Role=pilot" cmsusr001
"*" "/cms/Role=hiproduction" cmsprd001
"*" "/cms/dcms/Role=cmsphedex" cmsprd001
"*" "/cms/integration" cmsusr001
"*" "/cms/becms" cmsusr001
"*" "/cms/dcms" cmsusr001
"*" "/cms/escms" cmsusr001
"*" "/cms/ptcms" cmsusr001
"*" "/cms/itcms" cmsusr001
"*" "/cms/frcms" cmsusr001
"*" "/cms/production" cmsusr001
"*" "/cms/muon" cmsusr001
"*" "/cms/twcms" cmsusr001
"*" "/cms/uscms" cmsusr001
"*" "/cms/ALARM" cmsusr001
"*" "/cms/TEAM" cmsusr001
"*" "/cms/dbs" cmsusr001
"*" "/cms/uscms/Role=cmsphedex" cmsusr001
"*" "/cms" cmsusr001

Now comes the important part, the path prefixes in /etc/grid-security/storage-authzdb. Carefully check the usernames and UIDs/GIDs; they must fit your local setup. (Again, only the CMS part is shown.)

authorize cmsusr001 read-write 40501 4050 /pnfs/desy.de/cms/tier2 /pnfs/desy.de/cms/tier2 /
authorize cmsprd001 read-write 40751 4075 /pnfs/desy.de/cms/tier2 /pnfs/desy.de/cms/tier2 /
authorize cmsana001 read-write 40951 4060 /pnfs/desy.de/cms/tier2 /pnfs/desy.de/cms/tier2 / 

You can do some first testing of the GSI enabled Xrootd door:

xrdcp -d 2 -f xroot://xrootd-door.mydomain.org//store/user/<Your_HN_name>/<Your_Testfile> /dev/null

Some useful debugging results are usually found in the billing logs of your D-Cache instance. The host is usually not the host you are installing the Xrootd door on.

/opt/d-cache/billing/2012/09/
 
META FILEATTACHMENT attachment="XrootdDcacheIntegration.png" attr="" comment="" date="1299694394" name="XrootdDcacheIntegration.png" path="XrootdDcacheIntegration.png" size="45366" stream="XrootdDcacheIntegration.png" tmpFilename="/usr/tmp/CGItemp29671" user="bbockelm" version="1"
META FILEATTACHMENT attachment="XroodDcacheIntegrationV2.png" attr="" comment="" date="1320200246" name="XroodDcacheIntegrationV2.png" path="XroodDcacheIntegrationV2.png" size="29202" stream="XroodDcacheIntegrationV2.png" tmpFilename="/usr/tmp/CGItemp42552" user="bbockelm" version="3"

Revision 9 2012-09-04 - BrianBockelman

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"

Joining a dCache-based SE to the Xrootd service.

Line: 26 to 26
 
yum install --enablerepo=osg-contrib,osg-testing cms-xrootd-dcache
Changed:
<
<
The version of xrootd-server should be at least 3.1.0.
>
>
The version of xrootd-server should be at least 3.2.2.
  Warning: The CMS transition to 3.1.0 from previous versions is not a clean upgrade (as we switched to the CERN-based packaging). We believe this is a one-time-only event. Unfortunately, folks will need to remove all local copies of xrootd before installing if you have xrootd < 3.1.0.
Line: 42 to 42
 xrootdRootPath=/pnfs/example.com/data/cms
Changed:
<
<
Next, cp /etc/xrootd/xrootd.sample.dcache.cfg /etc/xrootd/xrootd.cfg and edit the resulting config file.
>
>
Next, cp /etc/xrootd/xrootd.sample.dcache.cfg /etc/xrootd/xrootd-clustered.cfg and edit the resulting config file.
 
oss.localroot /pnfs/example.com/data/cms
xrootd.redirect xrootd-door.example.com:1094 /
Line: 86 to 86
 

Port usage:

The following information is probably needed for sites with strict firewalls:
Changed:
<
<
  • The xrootd server listens on TCP port 1094.
>
>
  • The xrootd server listens on TCP port 1095 (this is not the default port for Xrootd; we assume that dCache Xrootd door uses the default).
 
  • The cmsd server needs outgoing TCP port 1213 to xrootd.unl.edu.
  • Usage statistics are sent to xrootd.t2.ucsd.edu on UDP ports 9931 and 9930.

Revision 8 2011-11-10 - ChristophWissing

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"

Joining a dCache-based SE to the Xrootd service.

Line: 24 to 24
  Next, install the xrootd RPM. This will add the xrootd user if it does not already exist - sites using centralized account management may want to create this user beforehand.
Changed:
<
<
yum install --enablerepo=osg-contrib,osg-testing cms-xrood-dcache
>
>
yum install --enablerepo=osg-contrib,osg-testing cms-xrootd-dcache
  The version of xrootd-server should be at least 3.1.0.

Revision 7 2011-11-02 - BrianBockelman

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"

Joining a dCache-based SE to the Xrootd service.

Changed:
<
<
This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd-itb.unl.edu. The architecture setup is diagrammed below:
>
>
This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd-itb.unl.edu. This page assumes three things:
  1. You are using dCache 1.9.12 or later.
  2. All your pool nodes are on the public internet.
  3. The LFN->PFN mapping for your site is as simple as adding a prefix.
 
Changed:
<
<
XrootdDcacheIntegration.png
>
>
If one of these is not true, use this page.
 
Changed:
<
<
This architecture uses the built-in dCache Xrootd door and adds a "proxy host". The proxy host gives the following missing functionality:
  1. GSI security
  2. Namespace translation (from site namespace to CMS namespace and vice-versa)
  3. Integration with the global federation.
It additionally allows a site to keep its dCache pools behind a firewall. Note that future versions of dCache provide (1); it may be possible to serve data directly from pools in the future.
>
>
The architecture setup is diagrammed below:
 
Changed:
<
<
After being redirected to the site, a user will contact the xrootd daemon of the Xrootd proxy host. The filesystem plugin for this host is libXrdPss.so; this library is simply a wrapper around the xrootd client. The proxy then acts as a client inside the network boundary; it first contacts the dCache door to locate the file (or other metadata operations), then contacts a data pool directly to receive the file.
>
>
XroodDcacheIntegrationV2.png

This architecture uses the built-in dCache Xrootd door and adds a "federation host". This host integrates the native dCache door with the global federation, but all clients are redirected first to the dCache xrootd door, then to the individual pools. GSI security and namespace translation are performed by dCache itself. At no point does data have to be "proxied", which should improve the scalability and remove complexity from the entire system.

 

Installation

Changed:
<
<
First, install the OSG Xrootd repository:
>
>
First, install the OSG software repository:
 
Changed:
<
<
if [ ! -e /etc/yum.repos.d/osg-xrootd.repo ]; then
  rpm -Uvh http://newman.ultralight.org/repos/xrootd/x86_64/osg-xrootd-1-1.noarch.rpm
fi
>
>
rpm -Uhv http://repo.grid.iu.edu/osg-release-latest.rpm
 
Changed:
<
<
Then, install Xrootd using yum. This will add the xrootd user if it does not already exist - ROCKS users might want to create this user beforehand.
>
>
Next, install the xrootd RPM. This will add the xrootd user if it does not already exist - sites using centralized account management may want to create this user beforehand.
 
Changed:
<
<
yum install xrootd xrootd-cmstfc
>
>
yum install --enablerepo=osg-contrib,osg-testing cms-xrood-dcache
 
Changed:
<
<
The version should be at least 3.0.3-0.pre9. If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG Xrootd repo:
yum install fetch-crl osg-ca-certs

Copy the template config file, /etc/xrootd/xrootd.sample.dcache.cfg to /etc/xrootd/xrootd.cfg.

>
>
The version of xrootd-server should be at least 3.1.0.
 
Changed:
<
<
Copy your site's storage.xml to /etc/xrootd/storage.xml. If you are unsure of what this means, please contact your site's CMS representative. Uncomment and update the oss.namelib line in xrootd.cfg to read:
>
>
Warning: The CMS transition to 3.1.0 from previous versions is not a clean upgrade (as we switched to the CERN-based packaging). We believe this is a one-time-only event. Unfortunately, folks will need to remove all local copies of xrootd before installing if you have xrootd < 3.1.0.
 
Changed:
<
<
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct

When using the protocol direct with the storage.xml, it should map CMS filenames starting with /store to something starting with /pnfs. Adjust the protocol name accordingly for your site.

Finally, create a copy of the host certs to be xrootd service certs:

>
>
If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG repo:
 
Changed:
<
<
mkdir -p /etc/grid-security/xrd
cp /etc/grid-security/hostcert.pem /etc/grid-security/xrd/xrdcert.pem
cp /etc/grid-security/hostkey.pem /etc/grid-security/xrd/xrdkey.pem
chown xrootd: -R /etc/grid-security/xrd
chmod 400 /etc/grid-security/xrd/xrdkey.pem # Yes, 400 is required
>
>
yum install fetch-crl osg-ca-certs
 
Changed:
<
<

Integrating with GUMS or SCAS

>
>

Configuration

 
Changed:
<
<
In order to integrate xrootd with GUMS (v1.3 or higher) or SCAS, install the following RPM:
>
>
First, setup your dCache Xrootd door according to the instructions in the dCache book. Make sure you set the root path so dCache will do the LFN to PFN translation.
 
Changed:
<
<
yum install xrootd-lcmaps
>
>
xrootdRootPath=/pnfs/example.com/data/cms
 
Deleted:
<
<
This will bring in several dependencies, including Globus libraries. This does not appear to conflict with OSG or gLite installs of these libraries.
 
Changed:
<
<
Next, copy/paste the following line from /etc/xrootd/lcmaps.cfg into /etc/xrootd/xrootd.cfg:
>
>
Next, cp /etc/xrootd/xrootd.sample.dcache.cfg /etc/xrootd/xrootd.cfg and edit the resulting config file.
 
Changed:
<
<
# sec.protocol /usr/lib64 gsi -certdir:/etc/grid-security/certificates -cert:/etc/grid-security/xrd/xrdcert.pem -key:/etc/grid-security/xrd/xrdkey.pem -crl:3 -authzfun:libXrdLcmaps.so -authzfunparms:--osg,--lcmapscfg,/etc/xrootd/lcmaps.cfg,--loglevel,0|useglobals --gmapopt:2 --gmapto:0
>
>
oss.localroot /pnfs/example.com/data/cms
xrootd.redirect xrootd-door.example.com:1094 /
 
Changed:
<
<
Further, update /etc/xrootd/lcmaps.cfg so the endpoint properly references your GUMS or SCAS server's XACML endpoint.
>
>
Set xrootd-door.example.com to the hostname of dCache's xrootd door and /pnfs/example.com/data/cms to match your xrootdRootPath above.
 

Operating xrootd

Added:
>
>
PNFS must be mounted for the xrootd federation host to function. Mount this manually, and configure /etc/fstab so this happens on boot if desired.
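A hypothetical /etc/fstab entry for the PNFS mount (the server name and mount options are assumptions; check your dCache documentation for the options appropriate to your version):

```
pnfs-server.example.com:/pnfs  /pnfs  nfs  ro,hard,intr  0 0
```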
 There are two init services, xrootd and cmsd, which must both be working for the site to participate in the xrootd service:
Line: 103 to 88
 The following information is probably needed for sites with strict firewalls:
  • The xrootd server listens on TCP port 1094.
  • The cmsd server needs outgoing TCP port 1213 to xrootd.unl.edu.
Changed:
<
<
  • Usage statistics are sent to xrootd.t2.ucsd.edu on UDP ports 3333 and 3334.
>
>
  • Usage statistics are sent to xrootd.t2.ucsd.edu on UDP ports 9931 and 9930.
 

Testing the install.

The newly installed server can be tested directly using:
Line: 114 to 99
  You can then see if your server is participating properly in the xrootd service by checking:
Changed:
<
<
xrdcp root://xrootd.unl.edu//store/foo/bar /tmp/bar2
>
>
xrdcp root://xrootd-itb.unl.edu//store/foo/bar /tmp/bar2
  where /store/foo/bar is unique to your site

META FILEATTACHMENT attachment="XrootdDcacheIntegration.png" attr="" comment="" date="1299694394" name="XrootdDcacheIntegration.png" path="XrootdDcacheIntegration.png" size="45366" stream="XrootdDcacheIntegration.png" tmpFilename="/usr/tmp/CGItemp29671" user="bbockelm" version="1"
Added:
>
>
META FILEATTACHMENT attachment="XroodDcacheIntegrationV2.png" attr="" comment="" date="1320200246" name="XroodDcacheIntegrationV2.png" path="XroodDcacheIntegrationV2.png" size="29202" stream="XroodDcacheIntegrationV2.png" tmpFilename="/usr/tmp/CGItemp42552" user="bbockelm" version="3"

Revision 6 2011-06-06 - BrianBockelman

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"

Joining a dCache-based SE to the Xrootd service.

Line: 64 to 64
 # sec.protocol /usr/lib64 gsi -certdir:/etc/grid-security/certificates -cert:/etc/grid-security/xrd/xrdcert.pem -key:/etc/grid-security/xrd/xrdkey.pem -crl:3 -authzfun:libXrdLcmaps.so -authzfunparms:--osg,--lcmapscfg,/etc/xrootd/lcmaps.cfg,--loglevel,0|useglobals --gmapopt:2 --gmapto:0
Added:
>
>
Further, update /etc/xrootd/lcmaps.cfg so the endpoint properly references your GUMS or SCAS server's XACML endpoint.
 

Operating xrootd

There are two init services, xrootd and cmsd, which must both be working for the site to participate in the xrootd service:

Revision 5 2011-03-09 - BrianBockelman

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"
Changed:
<
<

Joining a dCache-based cluster to the Xrootd service.

>
>

Joining a dCache-based SE to the Xrootd service.

 
Changed:
<
<
This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd.unl.edu. The architecture setup is diagrammed below:
>
>
This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd-itb.unl.edu. The architecture setup is diagrammed below:
  XrootdDcacheIntegration.png
Line: 27 to 27
 
yum install xrootd xrootd-cmstfc
Changed:
<
<
The version should be at least 3.0.3-0.pre8. If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG Xrootd repo:
>
>
The version should be at least 3.0.3-0.pre9. If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG Xrootd repo:
 
yum install fetch-crl osg-ca-certs
Line: 51 to 51
 chmod 400 /etc/grid-security/xrd/xrdkey.pem # Yes, 400 is required
Changed:
<
<

Integration with SCAS or GUMS

Integration with SCAS or GUMS can be done by following the instructions in HdfsXrootdInstall. Otherwise, a grid-mapfile will be used.
>
>

Integrating with GUMS or SCAS

In order to integrate xrootd with GUMS (v1.3 or higher) or SCAS, install the following RPM:

yum install xrootd-lcmaps
This will bring in several dependencies, including Globus libraries. This does not appear to conflict with OSG or gLite installs of these libraries.

Next, copy/paste the following line from /etc/xrootd/lcmaps.cfg into /etc/xrootd/xrootd.cfg:

# sec.protocol /usr/lib64 gsi -certdir:/etc/grid-security/certificates -cert:/etc/grid-security/xrd/xrdcert.pem -key:/etc/grid-security/xrd/xrdkey.pem -crl:3 -authzfun:libXrdLcmaps.so -authzfunparms:--osg,--lcmapscfg,/etc/xrootd/lcmaps.cfg,--loglevel,0|useglobals --gmapopt:2 --gmapto:0
 

Operating xrootd

Line: 96 to 106
 

Testing the install.

The newly installed server can be tested directly using:
Changed:
<
<
xrdcp xroot://local_hostname.example.com//store/foo/bar /tmp/bar
>
>
xrdcp -d 1 -f xroot://local_hostname.example.com//store/foo/bar /dev/null
  You will need a grid certificate installed in your user account for the above to work

Revision 4 2011-03-09 - BrianBockelman

Line: 1 to 1
 
META TOPICPARENT name="CmsXrootdArchitecture"

Joining a dCache-based cluster to the Xrootd service.

Changed:
<
<
This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd.unl.edu.
>
>
This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd.unl.edu. The architecture setup is diagrammed below:

XrootdDcacheIntegration.png

This architecture uses the built-in dCache Xrootd door and adds a "proxy host". The proxy host gives the following missing functionality:

  1. GSI security
  2. Namespace translation (from site namespace to CMS namespace and vice-versa)
  3. Integration with the global federation.
It additionally allows a site to keep its dCache pools behind a firewall. Note that future versions of dCache provide (1); it may be possible to serve data directly from pools in the future.

After being redirected to the site, a user will contact the xrootd daemon of the Xrootd proxy host. The filesystem plugin for this host is libXrdPss.so; this library is simply a wrapper around the xrootd client. The proxy then acts as a client inside the network boundary; it first contacts the dCache door to locate the file (or other metadata operations), then contacts a data pool directly to receive the file.

 

Installation

Line: 15 to 25
  Then, install Xrootd using yum. This will add the xrootd user if it does not already exist - ROCKS users might want to create this user beforehand.
Changed:
<
<
yum install xrootd-dcap
>
>
yum install xrootd xrootd-cmstfc
 
Changed:
<
<
The version should be at least 1.4.2-4. If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG Xrootd repo:
>
>
The version should be at least 3.0.3-0.pre8. If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG Xrootd repo:
 
yum install fetch-crl osg-ca-certs
Changed:
<
<
Copy the template config file, /etc/xrootd/xrootd.sample.dcap.cfg to /etc/xrootd/xrootd.cfg.
>
>
Copy the template config file, /etc/xrootd/xrootd.sample.dcache.cfg to /etc/xrootd/xrootd.cfg.
  Copy your site's storage.xml to /etc/xrootd/storage.xml. If you are unsure of what this means, please contact your site's CMS representative. Uncomment and update the oss.namelib line in xrootd.cfg to read:


Changed:
<
<
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=dcap
>
>
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
 
Added:
>
>
When using the protocol direct with the storage.xml, it should map CMS filenames starting with /store to something starting with /pnfs. Adjust the protocol name accordingly for your site.
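The effect of such a direct rule can be sketched as a simple regex rewrite (the site prefix below is a placeholder; real rules live in your storage.xml):

```python
import re

# Hypothetical "direct" TFC rule: path-match="/+(.*)",
# result="/pnfs/example.com/data/cms/$1" (the prefix is a placeholder)
PATH_MATCH = re.compile(r"/+(.*)")
RESULT = "/pnfs/example.com/data/cms/{}"

def lfn_to_pfn(lfn):
    m = PATH_MATCH.match(lfn)
    if not m:
        raise ValueError("LFN does not match the rule: %r" % lfn)
    return RESULT.format(m.group(1))

print(lfn_to_pfn("/store/foo/bar"))
# /pnfs/example.com/data/cms/store/foo/bar
```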
 Finally, create a copy of the host certs to be xrootd service certs:
mkdir -p /etc/grid-security/xrd
Line: 39 to 51
 chmod 400 /etc/grid-security/xrd/xrdkey.pem # Yes, 400 is required
Added:
>
>

Integration with SCAS or GUMS

Integration with SCAS or GUMS can be done by following the instructions in HdfsXrootdInstall. Otherwise, a grid-mapfile will be used.
 

Operating xrootd

There are two init services, xrootd and cmsd, which must both be working for the site to participate in the xrootd service:

Line: 48 to 63
 service cmsd start
Changed:
<
<
Everything is controlled by a proper init script (available commands are start, stop, restart, status, and condrestart).
>
>
Everything is controlled by a proper init script (available commands are start, stop, restart, status, and condrestart). To enable these on boot, run:

chkconfig --level 345 xrootd on
chkconfig --level 345 cmsd on
  Log files are kept in /var/log/xrootd/{cmsd,xrootd}.log, and are auto-rotated.

After startup, the xrootd and cmsd daemons drop privilege to the xrootd user.

Added:
>
>
If you used the RPM version of fetch-crl, you will need to enable and start the fetch-crl-cron and fetch-crl-boot services. To start:

service fetch-crl-cron start
service fetch-crl-boot start # This may take a while to run

To enable on boot:

chkconfig --level 345 fetch-crl-cron on
chkconfig --level 345 fetch-crl-boot on
 

Port usage:

The following information is probably needed for sites with strict firewalls:
  • The xrootd server listens on TCP port 1094.
  • The cmsd server needs outgoing TCP port 1213 to xrootd.unl.edu.
Changed:
<
<
  • Usage statistics are sent to xrootd.unl.edu on UDP ports 3333 and 3334.
>
>
  • Usage statistics are sent to xrootd.t2.ucsd.edu on UDP ports 3333 and 3334.
 

Testing the install.

The newly installed server can be tested directly using:
Line: 72 to 105
xrdcp root://xrootd.unl.edu//store/foo/bar /tmp/bar2
where /store/foo/bar is unique to your site
Added:
>
>
META FILEATTACHMENT attachment="XrootdDcacheIntegration.png" attr="" comment="" date="1299694394" name="XrootdDcacheIntegration.png" path="XrootdDcacheIntegration.png" size="45366" stream="XrootdDcacheIntegration.png" tmpFilename="/usr/tmp/CGItemp29671" user="bbockelm" version="1"

Revision 3 2010-10-17 - BrianBockelman

Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="BrianBockelman"
>
>
META TOPICPARENT name="CmsXrootdArchitecture"
 

Joining a dCache-based cluster to the Xrootd service.

This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd.unl.edu.

Installation

Changed:
<
<
You will need the following RPMs: The latter two are only needed if you do not have CA certificates already installed in /etc/grid-security/certificates.

The version should be at least 1.3.2-8.

>
>
First, install the OSG Xrootd repository:
if [ ! -e /etc/yum.repos.d/osg-xrootd.repo ]; then
  rpm -Uvh http://newman.ultralight.org/repos/xrootd/x86_64/osg-xrootd-1-1.noarch.rpm
fi

Then, install Xrootd using yum. This will add the xrootd user if it does not already exist - ROCKS users might want to create this user beforehand.

yum install xrootd-dcap
The version should be at least 1.4.2-4. If the node does not already have CA certificates and fetch-crl installed, you can also do this from the OSG Xrootd repo:
yum install fetch-crl osg-ca-certs
  Copy the template config file, /etc/xrootd/xrootd.sample.dcap.cfg to /etc/xrootd/xrootd.cfg.

Revision 2 2010-08-18 - BrianBockelman

Line: 1 to 1
 
META TOPICPARENT name="BrianBockelman"

Joining a dCache-based cluster to the Xrootd service.

Line: 7 to 7
 

Installation

You will need the following RPMs:

Changed:
<
<
>
>
  The latter two are only needed if you do not have CA certificates already installed in /etc/grid-security/certificates.
Changed:
<
<
The version should be at least 1.3.1-1.
>
>
The version should be at least 1.3.2-8.
 
Changed:
<
<
Copy the template config file, /etc/xrootd/xrootd.sample.hdfs.cfg to /etc/xrootd/xrootd.cfg. If your site requires storage.xml, uncomment (and possibly update) the oss.namelib line. Update the ofs.osslib line to read:

ofs.osslib /usr/lib64/libXrdDcap.so

If you do not update this out, your xrootd server will not start (Note: this will be fixed in future releases).

>
>
Copy the template config file, /etc/xrootd/xrootd.sample.dcap.cfg to /etc/xrootd/xrootd.cfg.
  Copy your site's storage.xml to /etc/xrootd/storage.xml. If you are unsure of what this means, please contact your site's CMS representative. Uncomment and update the oss.namelib line in xrootd.cfg to read:

Revision 1 2010-07-26 - BrianBockelman

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="BrianBockelman"

Joining a dCache-based cluster to the Xrootd service.

This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd.unl.edu.

Installation

You will need the following RPMs:

The latter two are only needed if you do not have CA certificates already installed in /etc/grid-security/certificates.

The version should be at least 1.3.1-1.

Copy the template config file, /etc/xrootd/xrootd.sample.hdfs.cfg to /etc/xrootd/xrootd.cfg. If your site requires storage.xml, uncomment (and possibly update) the oss.namelib line. Update the ofs.osslib line to read:

ofs.osslib /usr/lib64/libXrdDcap.so

If you do not update this, your xrootd server will not start (Note: this will be fixed in future releases).

Copy your site's storage.xml to /etc/xrootd/storage.xml. If you are unsure of what this means, please contact your site's CMS representative. Uncomment and update the oss.namelib line in xrootd.cfg to read:

oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=dcap

Finally, create a copy of the host certs to be xrootd service certs:

mkdir -p /etc/grid-security/xrd
cp /etc/grid-security/hostcert.pem /etc/grid-security/xrd/xrdcert.pem
cp /etc/grid-security/hostkey.pem /etc/grid-security/xrd/xrdkey.pem
chown xrootd: -R /etc/grid-security/xrd
chmod 400 /etc/grid-security/xrd/xrdkey.pem # Yes, 400 is required

Operating xrootd

There are two init services, xrootd and cmsd, which must both be working for the site to participate in the xrootd service:

service xrootd start
service cmsd start

Everything is controlled by a proper init script (available commands are start, stop, restart, status, and condrestart).

Log files are kept in /var/log/xrootd/{cmsd,xrootd}.log, and are auto-rotated.

After startup, the xrootd and cmsd daemons drop privilege to the xrootd user.

Port usage:

The following information is probably needed for sites with strict firewalls:
  • The xrootd server listens on TCP port 1094.
  • The cmsd server needs outgoing TCP port 1213 to xrootd.unl.edu.
  • Usage statistics are sent to xrootd.unl.edu on UDP ports 3333 and 3334.

Testing the install.

The newly installed server can be tested directly using:
xrdcp xroot://local_hostname.example.com//store/foo/bar /tmp/bar
You will need a grid certificate installed in your user account for the above to work

You can then see if your server is participating properly in the xrootd service by checking:

xrdcp root://xrootd.unl.edu//store/foo/bar /tmp/bar2
where /store/foo/bar is unique to your site
 