Joining a dCache-based SE to the Xrootd service.

This document covers joining a dCache-based storage element to the CMS Xrootd service based on the redirector xrootd-itb.unl.edu. This page assumes three things:

  1. You are using dCache 1.9.12 or later.
  2. All your pool nodes are on the public internet.
  3. The LFN->PFN mapping for your site is as simple as adding a prefix.
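The prefix-style mapping in assumption 3 amounts to plain string concatenation. A minimal sketch, assuming the hypothetical prefix /pnfs/example.com/data/cms (replace with your site's PNFS path):

```shell
#!/bin/sh
# Hypothetical site prefix - replace with your own PNFS path.
PREFIX=/pnfs/example.com/data/cms

# Map a CMS logical file name (LFN) to a physical file name (PFN)
# by prepending the site prefix.
lfn_to_pfn() {
    echo "${PREFIX}$1"
}

lfn_to_pfn /store/foo/bar
# -> /pnfs/example.com/data/cms/store/foo/bar
```

If your site's mapping needs anything beyond this (for example regex-based rules from the CMS trivial file catalog), follow the alternate instructions instead.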

If any of these is not true, follow the alternate instructions page instead.

The architecture setup is diagrammed below:

XroodDcacheIntegrationV2.png

This architecture uses the built-in dCache Xrootd door and adds a "federation host". This host integrates the native dCache door with the global federation, but all clients are redirected first to the dCache xrootd door, then to the individual pools. GSI security and namespace translation are performed by dCache itself. At no point does data have to be "proxied", which should improve the scalability and remove complexity from the entire system.

Installation

First, install the OSG software repository. For SL6:

rpm -Uhv http://repo.grid.iu.edu/osg-el6-release-latest.rpm

For SL5:

rpm -Uhv http://repo.grid.iu.edu/osg-el5-release-latest.rpm

Next, install the xrootd RPM. This will add the xrootd user if it does not already exist - sites using centralized account management may want to create this user beforehand.

yum install --enablerepo=osg-contrib,osg-testing cms-xrootd-dcache

The installed version of xrootd-server should be at least 3.2.2.

Warning: The CMS transition from earlier versions to 3.1.0 was not a clean upgrade, as the packaging switched to the CERN-based RPMs. We believe this was a one-time event. Unfortunately, if you have xrootd < 3.1.0, you will need to remove all local copies of xrootd before installing.

If the node does not already have CA certificates and fetch-crl installed, you can also install these from the OSG repo. For SL6:

yum install fetch-crl3 osg-ca-certs

For SL5:

yum install fetch-crl osg-ca-certs

If this is a brand new host, you may need to run fetch-crl or fetch-crl3 to update CRLs before starting Xrootd.

Configuration

First, set up your dCache Xrootd door according to the instructions in the dCache book. For simple unauthenticated access it is sufficient to set the root path, so that dCache performs the LFN to PFN translation by prepending a prefix. Add a line like the following, adjusted to your local setup, to the layout file of the Xrootd door:

xrootdRootPath=/pnfs/example.com/data/cms

Configuring authenticated access is more involved; see the section Configuring Authenticated Access below.

Next, copy /etc/xrootd/xrootd.sample.dcache.cfg to /etc/xrootd/xrootd-clustered.cfg and edit the resulting config file:

oss.localroot /pnfs/example.com/data/cms
xrootd.redirect xrootd-door.example.com:1094 /

Set xrootd-door.example.com to the hostname of your dCache Xrootd door and /pnfs/example.com/data/cms to match the xrootdRootPath above.

Operating xrootd

PNFS must be mounted for the xrootd federation host to function. Mount this manually, and configure /etc/fstab so this happens on boot if desired.
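For example, if your PNFS namespace is exported over NFSv3, an /etc/fstab entry might look like the following sketch (the server name and mount options are placeholders; adjust them to your setup):

```
# Read-only NFSv3 mount of the PNFS namespace for the federation host
pnfs-server.example.com:/pnfs  /pnfs  nfs  ro,intr,hard,nfsvers=3  0 0
```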

There are two init services, xrootd and cmsd, which must both be working for the site to participate in the xrootd service:

service xrootd start
service cmsd start

Everything is controlled by a proper init script (available commands are start, stop, restart, status, and condrestart). To enable these on boot, run:

chkconfig --level 345 xrootd on
chkconfig --level 345 cmsd on

Log files are kept in /var/log/xrootd/{cmsd,xrootd}.log, and are auto-rotated.

After startup, the xrootd and cmsd daemons drop privilege to the xrootd user.

If you used the RPM version of fetch-crl, you will need to enable and start the fetch-crl-cron and fetch-crl-boot services. To start:

service fetch-crl-cron start
service fetch-crl-boot start  # This may take a while to run

To enable on boot:

chkconfig --level 345 fetch-crl-cron on
chkconfig --level 345 fetch-crl-boot on

Port usage:

The following information is probably needed for sites with strict firewalls:
  • The xrootd server listens on TCP port 1095 (this is not the default port for Xrootd; we assume that dCache Xrootd door uses the default).
  • The cmsd server needs outgoing TCP port 1213 to xrootd.unl.edu.
  • Usage statistics are sent to xrootd.t2.ucsd.edu on UDP ports 9931 and 9930.
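As a sketch, the requirements above could translate into firewall rules like the following (iptables-restore style; chain names, and the assumption that the hostnames resolve at rule-load time, depend on your site's firewall policy):

```
# Inbound: xrootd server on the federation host
-A INPUT -p tcp --dport 1095 -j ACCEPT
# Outbound: cmsd to the redirector
-A OUTPUT -p tcp -d xrootd.unl.edu --dport 1213 -j ACCEPT
# Outbound: UDP usage statistics to the collector
-A OUTPUT -p udp -d xrootd.t2.ucsd.edu --dport 9930:9931 -j ACCEPT
```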

Testing the install.

The newly installed server can be tested directly using:

xrdcp -d 1 -f xroot://local_hostname.example.com//store/foo/bar /dev/null

You will need a grid certificate installed in your user account for the above to work.

You can then check whether your server is participating properly in the xrootd service:

xrdcp root://xrootd-itb.unl.edu//store/foo/bar /tmp/bar2

where /store/foo/bar is a file unique to your site.

Configuring Authenticated Access

Authentication in dCache is usually done using gPlazma. The door for GSI-enabled access needs a host certificate. This howto covers gPlazma version 1 only. Since the Xrootd door used for the CMS redirector needs special rules, configure this door to use a gPlazma module, while the remaining doors can share the same gPlazma cell. Note that you need a recent dCache 1.9.12 release; 1.9.12-21 is known to work. (Some early 1.9.12 releases had issues with configuring module versus cell usage.) Add the following to the layout file (usually found in /opt/d-cache/etc/layout/):

useGPlazmaAuthorizationCell=false
useGPlazmaAuthorizationModule=true
xrootdIsReadOnly=true
# Adjust the path according to your site:
xrootdRootPath=/pnfs/desy.de/cms/tier2
xrootdAuthNPlugin=gsi
# You might consider putting xrootd in a dedicated queue (adjust to your setup):
# xrootdIoQueue=dcap-q
# You might want to set a mover timeout - the optimal value is a matter of tuning:
# xrootdMoverTimeout=28800000

For gPlazma you need to adjust some settings stored in /opt/d-cache/etc/dcachesrm-gplazma.policy or /etc/dcache/dcachesrm-gplazma.policy:

 gplazmalite-vorole-mapping="ON"
# All others are OFF

[...]

# Built-in gPLAZMAlite grid VO role mapping
gridVoRolemapPath="/etc/grid-security/grid-vorolemap"
gridVoRoleStorageAuthzPath="/etc/grid-security/storage-authzdb" 

Put the proper mappings and usernames in /etc/grid-security/grid-vorolemap; this needs adaptation to your local setup. (Only the CMS part is shown; if other VOs are needed on the Xrootd door, add them accordingly.)

## CMS ##
# Need mapping for each VOMS Group(!), roles only for special mapping
"*" "/cms/Role=lcgadmin" cmsusr001
"*" "/cms/Role=production" cmsprd001
"*" "/cms/Role=priorityuser" cmsana001
"*" "/cms/Role=pilot" cmsusr001
"*" "/cms/Role=hiproduction" cmsprd001
"*" "/cms/dcms/Role=cmsphedex" cmsprd001
"*" "/cms/integration" cmsusr001
"*" "/cms/becms" cmsusr001
"*" "/cms/dcms" cmsusr001
"*" "/cms/escms" cmsusr001
"*" "/cms/ptcms" cmsusr001
"*" "/cms/itcms" cmsusr001
"*" "/cms/frcms" cmsusr001
"*" "/cms/production" cmsusr001
"*" "/cms/muon" cmsusr001
"*" "/cms/twcms" cmsusr001
"*" "/cms/uscms" cmsusr001
"*" "/cms/ALARM" cmsusr001
"*" "/cms/TEAM" cmsusr001
"*" "/cms/dbs" cmsusr001
"*" "/cms/uscms/Role=cmsphedex" cmsusr001
"*" "/cms" cmsusr001

Now comes the important part: the path prefix in /etc/grid-security/storage-authzdb. Carefully check the usernames, UIDs, and GIDs; they must fit your local setup. (Again, only the CMS part is shown.)

authorize cmsusr001 read-write 40501 4050 /pnfs/desy.de/cms/tier2 /pnfs/desy.de/cms/tier2 /
authorize cmsprd001 read-write 40751 4075 /pnfs/desy.de/cms/tier2 /pnfs/desy.de/cms/tier2 /
authorize cmsana001 read-write 40951 4060 /pnfs/desy.de/cms/tier2 /pnfs/desy.de/cms/tier2 / 
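For orientation, the fields of each authorize line are, in order: username, access mode, UID, GID, home directory, root path, and fsroot; dCache strips the root path during path translation. An annotated sketch of the first line above (check against the storage-authzdb documentation for your dCache release):

```
#         user      access     UID   GID  home                    root                    fsroot
authorize cmsusr001 read-write 40501 4050 /pnfs/desy.de/cms/tier2 /pnfs/desy.de/cms/tier2 /
```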

You can do some first testing of the GSI enabled Xrootd door:

xrdcp -d 2 -f xroot://xrootd-door.mydomain.org//store/user/<Your_HN_name>/<Your_Testfile> /dev/null

Some useful debugging information is usually found in the billing logs of your dCache instance. Note that the billing host is usually not the host on which you are installing the Xrootd door.

/opt/d-cache/billing/2012/09/

Configuring the CMS TFC Plugin in D-Cache

Recent dCache releases provide a TFC plugin, so that you can send an LFN open request to the dCache xrootd door and the door will resolve it to a PFN based on the TFC rules.

Older dCache Releases (up to 2.4)

The following information is not valid for recent supported releases; it is kept for reference only.

You need dCache 1.9.12-25 or later, 2.2, or 2.4. For recent 1.9.12 and 2.2 releases, you need to install the xrootd4j backport plugin from dCache, which provides some xrootd features of the 2.4 release in 1.9.12-25+ and 2.2.

# Download the xrootd4j-backport package
cd /tmp
wget -O xrootd4j-backport-2.4-SNAPSHOT.tar.gz http://ftp1.ndgf.org:2880/behrmann/downloads/xrootd4j-backport-2.4-SNAPSHOT.tar.gz
# Install into /usr/local/share/dcache/plugins
mkdir -p  /usr/local/share/dcache/plugins
cd /usr/local/share/dcache/plugins
tar -xzvf /tmp/xrootd4j-backport-2.4-SNAPSHOT.tar.gz

Install the cmstfc plugin.

cd /tmp
wget -O xrootd4j-cms-plugin-1.0-SNAPSHOT.tar.gz https://github.com/downloads/dCache/xrootd4j-cms-plugin/xrootd4j-cms-plugin-1.0-SNAPSHOT.tar.gz
cd  /usr/local/share/dcache/plugins
tar -xzvf /tmp/xrootd4j-cms-plugin-1.0-SNAPSHOT.tar.gz

In the layout file (found typically in /opt/d-cache/etc/layouts) of the door, you have to add these lines:

# Unauthenticated
xrootdPlugins=gplazma:none,authz:cms-tfc
# Authenticated according to gplazma
# xrootdPlugins=gplazma:gsi,authz:cms-tfc
# Change this according to your location:
xrootd.cms.tfc.path=/etc/dcache/storage.xml
# Must be coherent with your TFC in storage.xml:
xrootd.cms.tfc.protocol=root

On the xrootd federation host you can use the xrootd CMS TFC plugin by configuring it in /etc/xrootd/xrootd.cfg (or similar, such as /etc/xrootd/xrootd-clustered.cfg). Make sure there is no oss.localroot statement left over from an old prefix-only setup.

# Integrate with CMS TFC, placed in /etc/xrootd/storage.xml
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
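Both the door and the federation host point at the site's CMS trivial file catalog (storage.xml). For reference, a minimal prefix-style rule in that file might look like the following sketch (the PNFS path is a placeholder; the protocol attribute must match the protocol configured for the plugin):

```xml
<storage-mapping>
  <!-- Hypothetical prefix rule: /store/... LFNs map onto the local PNFS path -->
  <lfn-to-pfn protocol="direct" path-match="/+store/(.*)"
              result="/pnfs/example.com/data/cms/store/$1"/>
</storage-mapping>
```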

Recent dCache Releases 2.6, 2.10, 2.13

For the host that runs the xrootd door you need the TFC plugin. It is provided in the download area at dcache.org. The RPM can be installed like this:

rpm -ivh xrootd4j-cms-plugin-1.3.7-1.noarch.rpm

The following configuration parameters should be added to /etc/dcache/dcache.conf. The site name should be your CMS site name.

pool.mover.xrootd.plugins=edu.uchicago.monitor
# The following two lines are the values for EU sites
xrootd.monitor.detailed=cms-aaa-eu-collector.cern.ch:9330:60
xrootd.monitor.summary=xrootd.t2.ucsd.edu:9931:60
xrootd.monitor.vo=CMS
xrootd.monitor.site=T2_XY_MySite

The following should be added to the layout file of the machine(s) hosting the xrootd door(s), e.g. /etc/dcache/layouts/dcache-my-xrootd-door.layout.conf (adjust the host name). The location of the TFC file (typically named storage.xml) might need adjusting. The protocol might also differ for your TFC; in the end it is just an identifier.

 [xrootd-${host.name}Domain]
 [xrootd-${host.name}Domain/xrootd]
 xrootd.plugins=gplazma:gsi,authz:cms-tfc
 xrootd.cms.tfc.path=/etc/dcache/storage.xml
 xrootd.cms.tfc.protocol=xrootd

Test your setup.

Configuring the Monitoring Plugin

dCache can emit monitoring information similar to the SLAC Xrootd implementation. The process of enabling this is documented on the following page:

https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/FAXdCacheN2Nstorage

Useful Links.

Xrootd, gPlazma2 and dcache-2.6.19-1

Disclaimer

14th Jan 2014, Fabio Martinelli: This is my personal experience with the combination [ SLAC Xrootd, gPlazma2 and dcache-2.6.19-1 ]. It was not approved by CMS; it simply worked for me, and I thought it was worth reporting here (it took me a day of tests).

Intro

A dCache 2.6 admin can avoid managing both a gPlazma1 and a gPlazma2 configuration and simply use the gPlazma2 cell for the dCache Xrootd cell as well; to achieve that I applied the following configurations. Be aware that writes via xrootd are not allowed because of the empty list xrootdAllowedWritePaths=.

The Xrootd service requires /pnfs

The SLAC Xrootd service strictly requires the mount point /pnfs to find the files (the dCache services themselves do not need it):
# mount | grep pnfs
dcachedb:/pnfs on /pnfs type nfs (ro,nolock,intr,noac,hard,nfsvers=3,addr=XXX.XXX.XXX.XXX)

Xrootd conf

[root@t3se02 dcache]# grep -v \# /etc/xrootd/xrootd-clustered.cfg | tr -s '\n'
xrd.port 1095
all.role server
all.manager any xrootd-cms.infn.it+ 1213
xrootd.redirect t3se02.psi.ch:1094 /
all.export / nostage readonly
cms.allow host *
xrootd.trace emsg login stall redirect
ofs.trace all
xrd.trace all
cms.trace all
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=direct
xrootd.seclib /usr/lib64/libXrdSec.so
xrootd.fslib /usr/lib64/libXrdOfs.so
all.adminpath /var/run/xrootd
all.pidpath /var/run/xrootd
cms.delay startup 10
cms.fxhold 60s
xrd.report xrootd.t2.ucsd.edu:9931 every 60s all sync
xrootd.monitor all auth flush io 60s ident 5m mbuff 8k rbuff 4k rnums 3 window 10s dest files io info user redir xrootd.t2.ucsd.edu:9930
all.sitename  T3_CH_PSI

storage.xml

[root@t3se02 xrootd]# ls -l /etc/xrootd/storage.xml
lrwxrwxrwx 1 root root 46 Oct 30 13:25 /etc/xrootd/storage.xml -> /swshare/cms/SITECONF/local/PhEDEx/storage.xml

dCache common conf

[root@t3se02 dcache]# grep -v \# /etc/dcache/dcache.conf | tr -s '\n'
dcache.layout=${host.name}
dcache.namespace=chimera
chimera.db.user = chimera
chimera.db.url = jdbc:postgresql://t3dcachedb04.psi.ch/chimera?prepareThreshold=3
dcache.user=dcache
dcache.paths.billing=/var/log/dcache
pnfsVerifyAllLookups=true
dcache.java.memory.heap=2048m
dcache.java.memory.direct=2048m
net.inetaddr.lifetime=1800
net.wan.port.min=20000
net.wan.port.max=25000
net.lan.port.min=33115
net.lan.port.max=33145
broker.host=t3se02.psi.ch
poolIoQueue=wan,xrootd
waitForFiles=${path}/setup
lfs=precious
tags=hostname=${host.name}
metaDataRepository=org.dcache.pool.repository.meta.db.BerkeleyDBMetaDataRepository
useGPlazmaAuthorizationModule=false
useGPlazmaAuthorizationCell=true
gsiftpIoQueue=wan
xrootdIoQueue=xrootd
remoteGsiftpIoQueue=wan
srmDatabaseHost=t3dcachedb04.psi.ch
srmDbName=dcache
srmDbUser=srmdcache
srmDbPassword=
srmSpaceManagerEnabled=yes
srmDbLogEnabled=true
srmRequestHistoryDatabaseEnabled=true
ftpPort=${portBase}126
kerberosFtpPort=${portBase}127
spaceManagerDatabaseHost=t3dcachedb04.psi.ch
pinManagerDbHost=t3dcachedb04.psi.ch
defaultPnfsServer=t3dcachedb04.psi.ch
SpaceManagerReserveSpaceForNonSRMTransfers=true
SpaceManagerLinkGroupAuthorizationFileName=/etc/dcache/LinkGroupAuthorization.conf
dcache.log.dir=/var/log/dcache
billingDbHost=t3dcachedb04.psi.ch
billingDbUser=srmdcache
billingDbPass=
billingDbName=billing
billingMaxInsertsBeforeCommit=10000
billingMaxTimeBeforeCommitInSecs=5
info-provider.site-unique-id=T3_CH_PSI
info-provider.se-unique-id=t3se02.psi.ch
info-provider.se-name=SRM endpoint for T3_CH_PSI
info-provider.glue-se-status=Production
info-provider.dcache-quality-level=production
info-provider.dcache-architecture=multidisk
info-provider.http.host = t3dcachedb04
poolmanager.cache-hit-messages.enabled=true
dcache.log.server.host=t3dcachedb04
alarms.store.db.type=rdbms
webadmin.alarm.cleaner.enabled=false
poolqplots.enabled=true
dcache.log.mode=new

dCache Xrootd node

The dCache Xrootd service listens on the same node where I enabled the SLAC Xrootd service:
[root@t3se02 dcache]# grep -v \# /etc/dcache/layouts/t3se02.conf | tr -s '\n'
dcache.log.level.file=debug
[${host.name}-Domain-dcap]
[${host.name}-Domain-dcap/dcap]
[${host.name}-Domain-gridftp]
[${host.name}-Domain-gridftp/gridftp]
[${host.name}-Domain-gsidcap]
[${host.name}-Domain-gsidcap/gsidcap]
[${host.name}-Domain-srm]
[${host.name}-Domain-srm/srm]
[${host.name}-Domain-srm/spacemanager]
[${host.name}-Domain-srm/transfermanagers]
[${host.name}-Domain-utility]
[${host.name}-Domain-utility/gsi-pam]
[${host.name}-Domain-utility/pinmanager]
[${host.name}-Domain-dir]
[${host.name}-Domain-dir/dir]
[${host.name}-Domain-info]
[${host.name}-Domain-info/info]
[dCacheDomain]
[dCacheDomain/poolmanager]
[dCacheDomain/broadcast]
[dCacheDomain/loginbroker]
[dCacheDomain/topo]
[${host.name}-Domain-xrootd]
[${host.name}-Domain-xrootd/xrootd]
xrootdPort=1094
xrootdAllowedReadPaths=/
xrootdAllowedWritePaths=
xrootdMoverTimeout=28800000 
xrootdPlugins=gplazma:gsi,authz:cms-tfc
xrootd.cms.tfc.path=/etc/xrootd/storage.xml
xrootd.cms.tfc.protocol=direct

dCache gPlazma2 node

[root@t3dcachedb04 dcache]# grep -v \# /etc/dcache/layouts/t3dcachedb04.conf | tr -s '\n'

dcache.log.level.file=debug
[${host.name}-Domain-gPlazma]
[${host.name}-Domain-gPlazma/gplazma]
[${host.name}-Domain-namespace]
[${host.name}-Domain-namespace/pnfsmanager]
[${host.name}-Domain-namespace/cleaner]
[${host.name}-Domain-adminDoor]
[${host.name}-Domain-adminDoor/admin]
sshVersion=ssh2
admin.ssh2AdminPort=22224
adminHistoryFile=/var/log/dcache/adminshell_history
[${host.name}-Domain-nfs]
dcache.user=root
[${host.name}-Domain-nfs/nfsv3]
[${host.name}-Domain-httpd]
authenticated=false
billingToDb=yes
generatePlots=true
[${host.name}-Domain-httpd/httpd]
[${host.name}-Domain-httpd/statistics]
[${host.name}-Domain-httpd/billing]
[${host.name}-Domain-httpd/srm-loginbroker]
[${host.name}-Domain-alarms]
[${host.name}-Domain-alarms/alarms]

dCache gPlazma2 conf

[root@t3dcachedb04 dcache]# cat /etc/dcache/gplazma.conf
auth     optional   x509 
auth     optional   voms 
map      requisite  vorolemap 
map      requisite  authzdb 
session  requisite  authzdb

dCache gPlazma2 logs

During an xrdcp transfer you will find rows like these in the gPlazma2 logs:
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login AUTH voms] Certificate verification: Verifying certificate 'DC=ch,DC=cern,OU=computers,CN=voms.cern.ch'
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP vorolemap] Source changed. Recreating map.
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP vorolemap] VOMS authorization successful for user with DN: /DC=com/DC=quovadisglobal/DC=grid/DC=switch/DC=users/C=CH/O=Paul-Scherrer-Institut (PSI)/CN=Fabio Martinelli and FQAN: /cms for user name: martinelli_f.
01.29.20  [pool-2-thread-28] [Xrootd-t3se02 Login MAP authzdb] Source changed. Recreating map.

xrdcp example

[martinel@lxplus0485 ~]$ xrdcp -d 1 -f root://xrootd.ba.infn.it//store/user/martinelli_f/test.root  /tmp && rm -f /tmp/test.root
131121 11:37:32 21109 Xrd: main: (C) 2004-2011 by the XRootD collaboration. Version: v3.3.4
131121 11:37:32 21109 Xrd: Create: (C) 2004-2010 by the Xrootd group. XrdClient $Revision$ - Xrootd version: v3.3.4
131121 11:37:32 21109 Xrd: ShowUrls: The converted URLs count is 1
131121 11:37:32 21109 Xrd: ShowUrls: URL n.1: root://xrootd.ba.infn.it:1094//store/user/martinelli_f/test.root.
131121 11:37:32 21109 Xrd: ShowUrls: The converted URLs count is 1
131121 11:37:32 21109 Xrd: ShowUrls: URL n.1: root://xrootd.ba.infn.it:1094//store/user/martinelli_f/test.root.
sec_Client: protocol request for host xrootd.ba.infn.it token='&P=gsi,v:10300,c:ssl,ca:2f3fadf6.0'
sec_PM: Loading gsi protocol object from libXrdSecgsi.so
131121 11:37:32 21109 secgsi_InitOpts: *** ------------------------------------------------------------ ***
131121 11:37:32 21109 secgsi_InitOpts:  Mode: client
131121 11:37:32 21109 secgsi_InitOpts:  Debug: 1
131121 11:37:32 21109 secgsi_InitOpts:  CA dir: /etc/grid-security/certificates/
131121 11:37:32 21109 secgsi_InitOpts:  CA verification level: 1
131121 11:37:32 21109 secgsi_InitOpts:  CRL dir: /etc/grid-security/certificates/
131121 11:37:32 21109 secgsi_InitOpts:  CRL extension: .r0
131121 11:37:32 21109 secgsi_InitOpts:  CRL check level: 1
131121 11:37:32 21109 secgsi_InitOpts:  CRL refresh time: 86400
131121 11:37:32 21109 secgsi_InitOpts:  Certificate: /afs/cern.ch/user/m/martinel/.globus/usercert.pem
131121 11:37:32 21109 secgsi_InitOpts:  Key: /afs/cern.ch/user/m/martinel/.globus/userkey.pem
131121 11:37:32 21109 secgsi_InitOpts:  Proxy file: //afs/cern.ch/user/m/martinel/.x509up_u17202
131121 11:37:32 21109 secgsi_InitOpts:  Proxy validity: 12:00
131121 11:37:32 21109 secgsi_InitOpts:  Proxy dep length: 0
131121 11:37:32 21109 secgsi_InitOpts:  Proxy bits: 512
131121 11:37:32 21109 secgsi_InitOpts:  Proxy sign option: 1
131121 11:37:32 21109 secgsi_InitOpts:  Proxy delegation option: 0
131121 11:37:32 21109 secgsi_InitOpts:  Allowed server names: [*/][/*]
131121 11:37:32 21109 secgsi_InitOpts:  Crypto modules: ssl
131121 11:37:32 21109 secgsi_InitOpts:  Ciphers: aes-128-cbc:bf-cbc:des-ede3-cbc
131121 11:37:32 21109 secgsi_InitOpts:  MDigests: sha1:md5
131121 11:37:32 21109 secgsi_InitOpts: *** ------------------------------------------------------------ ***
sec_PM: Using gsi protocol, args='v:10300,c:ssl,ca:2f3fadf6.0'
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 4 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:32 21109 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:32 21109 Xrd: Open: Access to server granted.
131121 11:37:32 21109 Xrd: Open: Opening the remote file /store/user/martinelli_f/test.root
131121 11:37:32 21109 Xrd: Open: File open in progress.
131121 11:37:32 21112 Xrd: HandleServerError: Received redirection to [t3se01.psi.ch:1095]. Token=[]]. Opaque=[].
131121 11:37:33 21112 Xrd: HandleServerError: Received redirection to [t3se01.psi.ch:1094]. Token=[]]. Opaque=[].
131121 11:37:33 21112 Xrd: Connect: can't open connection to [t3se01.psi.ch:1094]
131121 11:37:33 21112 Xrd: XrdNetFile: Error creating logical connection to t3se01.psi.ch:1094
131121 11:37:33 21112 Xrd: GoToAnotherServer: Error connecting to [t3se01.psi.ch:1094
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [xrootd.ba.infn.it:1094]. Token=[]]. Opaque=[].
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [t3se02.psi.ch:1095]. Token=[]]. Opaque=[].
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [t3se02.psi.ch:1094]. Token=[]]. Opaque=[].
sec_Client: protocol request for host t3se02.psi.ch token='&P=gsi,v:10200,c:ssl,ca:e72045ce'
sec_PM: Using gsi protocol, args='v:10200,c:ssl,ca:e72045ce'
131121 11:37:38 21112 cryptossl_X509::IsCA: certificate has 7 extensions
131121 11:37:38 21112 secgsi_VerifyCA: Warning: CA certificate not self-signed and integrity not checked: assuming OK (d800b164.0)
131121 11:37:38 21112 cryptossl_X509::IsCA: certificate has 8 extensions
131121 11:37:38 21112 Xrd: HandleServerError: Received redirection to [192.33.123.52:20533]. Token=[]]. Opaque=[&org.dcache.uuid=38ea88f9-6f38-47d8-95e3-76b90a1eacbc].
131121 11:37:38 21109 Xrd: main: root://xrootd.ba.infn.it//store/user/martinelli_f/test.root --> /tmp//test.root
131121 11:37:38 21119 Xrd: Read: Hole in the cache: offs=0, len=8388608
[xrootd] Total 460.67 MB	|====================| 100.00 % [27.4 MB/s]
Low level caching info:
 StallsRate=0.797909
 StallsCount=229
 ReadsCounter=287
 BytesUsefulness=1
 BytesSubmitted=483049545 BytesHit=483049545

XrdClient counters:
 ReadBytes:                 483049545
 WrittenBytes:              0
 WriteRequests:             0
 ReadRequests:              58
 ReadMisses:                1
 ReadHits:                  57
 ReadMissRate:              0.017241
 ReadVRequests:             0
 ReadVSubRequests:          0
 ReadVSubChunks:            0
 ReadVBytes:                0
 ReadVAsyncRequests:        0
 ReadVAsyncSubRequests:     0
 ReadVAsyncSubChunks:       0
 ReadVAsyncBytes:           0
 ReadAsyncRequests:         114
 ReadAsyncBytes:            474660937
Topic attachments:
  • XroodDcacheIntegrationV2.png (28.5 K, 2011-11-02, BrianBockelman)
  • XrootdDcacheIntegration.png (44.3 K, 2011-03-09, BrianBockelman)
Topic revision: r26 - 2016-08-15 - ChristophWissing
 