Installing and configuring xrootd

Introduction

This wiki page explains the machine configuration and software installation for xrootd on the head node (redirector) and the worker nodes (data servers). This wiki page also explains how to set up a standalone data server (NFS node). This is needed because xrootd also runs on the Grid Storage Element (gridftp machine).

The xrootd documentation (http://www.xrootd.org/docs.html), and in particular the Cluster Management Service (cms) Configuration Reference (http://www.xrootd.org/doc/dev/cms_config.htm) and the File Residency Manager (frm) Reference (http://www.xrootd.org/doc/dev/frm_config.htm), are good places to start understanding the system.

The xrootd redirector runs on the same node as the condor collector/negotiator and the squid and ldap services.

https://twiki.cern.ch/twiki/bin/view/AtlasComputing/XrootdInstallationPackage#SW_and_OS_pre_requirements

The machines run either standard SLC5 or SL5, x86_64.

The software is installed using YUM repositories provided by xrootd.org, with the GSI plugin provided by VDT and OSG.

Prior to the installation and configuration of the xrootd storage space at your Tier 3 site, you have to determine how you want the storage clustered. Do you have a little bit of space on each worker node? Do the worker nodes have many TB of disk space? Do you have an external file server (or servers) with lots of space? Do you want to cluster all of your storage together? The next two subsections explain two possible scenarios. Please choose the one that best suits your needs.

Worker nodes as xrootd cache space

If the amount of storage on all worker nodes is significantly smaller than the storage on the "NFS" standalone file server, then the disk space on the worker nodes can be used as cache space. In this configuration the primary location of data files is the standalone data server, and the xrootd File Residency Manager (FRM) is used to copy files from the standalone data server into the clustered storage of the worker nodes. Further details on the XROOTD FRM are found here: http://www.xrootd.org/doc/dev/frm_config.htm Data comes into the cluster through the gridftp server and is stored on the standalone file server. If this storage is part of the global xrootd federated storage, then the proxy server is used to provide data to the xrootd federation. The standalone data server, having out-bound connectivity, can copy files in from other proxies or data servers within the xrootd federated storage.

The following figure shows the configuration:

Slide2.jpg
worker nodes as xrootd cache space:

All storage clustered together with xrootd

It is simpler to have all of the xrootd data servers clustered together. This is especially true if there is a great deal of storage both on the worker nodes and on the standalone data server. Data comes into the cluster through the gridftp server and is distributed onto each of the nodes. If the cluster is part of a global xrootd federated storage, then the proxy server is used to provide data to the xrootd federation. The data servers, having out-bound connectivity, can copy files in from other proxies or data servers within the xrootd federated storage.

The following figure shows this configuration.

Slide1.jpg
all data servers clustered with xrootd

The XRootd service runs under the non-privileged xrootd account on the redirector and data server nodes. The NFS node contains an xrootd data server, a gridftp server and an xrootd proxy.

YUM software installation

The YUM repositories from xrootd.org and OSG/VDT are used. You must be root to use YUM, modify the YUM repo files and install the software.

The xrootd YUM repository information is found on the page http://www.xrootd.org/dload.html. The YUM repository file /etc/yum.repos.d/xrootd-stable-slc5.repo is found at this URL: http://xrootd.org/binaries/xrootd-stable-slc5.repo

In order to avoid dependency problems, it is recommended that the yum-priorities plugin be installed.

  • Install the yum-priorities plugin
  yum install yum-priorities
  • Ensure that /etc/yum.conf has the following line in the [main] section; this enables yum plugins:
 plugins=1

  • On the nodes in the cluster (interactive, data servers, redirector/head node, worker nodes), edit /etc/yum.repos.d/xrootd-stable-slc5.repo to have the contents:

[xrootd-stable]
name=XRootD Stable repository
baseurl=http://xrootd.org/binaries/stable/slc/5/$basearch http://xrootd.cern.ch/sw/repos/stable/slc/5/$basearch
gpgcheck=0
enabled=1
protect=0
# To use priorities you must have yum-priorities installed
priority=25

  • To install the xrootd software, issue the command:
yum install xrootd-libs.x86_64 xrootd-fuse.x86_64  xrootd-server.x86_64 xrootd-client.x86_64
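
To confirm the packages were installed, a standard rpm query can be used:

rpm -qa 'xrootd*'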

On the node that contains the gridftp server (installed via the OSG or EGI instructions), the xrootd libraries from OSG are needed if you intend to write from gridftp into your xrootd-managed storage.

  • Edit /etc/yum.repos.d/vdt.repo
[vdt-development]
name = VDT RPM repository - development versions for Redhat Enterprise Linux 5 and compatible
baseurl = http://vdt.cs.wisc.edu/native/rpm/development/rh5/$basearch
gpgcheck = 0
enabled = 0
# To use priorities you must have yum-priorities installed
priority=50

  • To install the required xrootd software, issue the command:

yum --enablerepo=vdt-development install xrootd-dsi.x86_64

Further details about OSG Xrootd rpms can be found here: https://twiki.grid.iu.edu/bin/view/SoftwareTeam/XrootdRPMPhase1

Redirector and data server machine settings

You must be root to change the machine settings and install the software. YUM will be used to install the software.

Increasing the maximum number of file descriptors

Note: work done as root

The maximum number of open files must be increased for the xrootd processes on the head node and the xrootd data servers (in the baseline design, that would be the worker nodes and the NFS file server).

The default limit on the maximum number of files open by a process in Linux is 1024. We need to increase this limit, which is done by adding two lines to /etc/security/limits.conf. These steps are done as root.

  • Check that /etc/security/limits.conf has these lines
    # added automatically
xrootd      soft    nofile          16384
xrootd      hard    nofile          63536
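
To verify that the new limits take effect for the xrootd account (limits are applied by PAM when a new session starts; -s /bin/bash is needed if the account has no login shell):

su -s /bin/bash - xrootd -c 'ulimit -Sn; ulimit -Hn'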

Note: work done as root

For bigger systems the default number of file descriptors (ulimit -n, 1024) is insufficient to assure access by many clients simultaneously. The limit should be increased on the redirector and all data servers for the user under which xrootd is running (xrootd). The procedure below is valid for RHEL 5 based Linux distributions such as Scientific Linux.

Configure the system to accept the desired value for the maximum number of open files. Check the value in /proc/sys/fs/file-max (cat /proc/sys/fs/file-max) to see whether it is already larger than the value needed. To increase the system-wide number of file descriptors to 65500, run:

echo 65500 > /proc/sys/fs/file-max

and, to make it persistent across reboots, append the following lines to /etc/sysctl.conf:

echo "# increase the number of file descriptors for XROOTD" >> /etc/sysctl.conf
echo "fs.file-max = 65500" >>/etc/sysctl.conf

Directories needed by the xrootd daemons

Check that the following directories exist and are owned by user xrootd:

/var/log/xrootd
/var/spool/xrootd
/var/spool/xrootd/.xrd
/var/run/xrootd
/etc/xrootd
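
If any are missing, they can be created as root; a minimal sketch (adjust the account name if your xrootd daemons run under a different user):

for d in /var/log/xrootd /var/spool/xrootd /var/spool/xrootd/.xrd /var/run/xrootd /etc/xrootd; do
    mkdir -p $d
    chown xrootd:xrootd $d
done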

Prepare the storage directories

Note: work done as root

We recommend using the same pathnames on all data server nodes. The directories have the following roles:

  • /atlas (storage path common to all nodes)
  • /atlas/local (storage path where users have their files)

On each data server, prepare the directories and give ownership to the xrootd account.

Directories on the redirector machine

Note: work done by the root account

  • Create, if needed, the xrootd name space directories /atlas and /atlas/inventory, and set the ownership to xrootd:
mkdir /atlas
mkdir /atlas/inventory
chown xrootd:xrootd -R /atlas

Data server directories and data partitions

Note: this work is done from the root account

Determine whether you have one data partition to be used for xrootd storage or multiple partitions. These partitions should be either xfs or ext4, because both types of file systems handle metadata and extended attributes well.

Single partition for data files on data servers

  • If your kickstart file does not create a local storage mount point /local/xrootd/a, create this mount point and mount the file system (see the commands after the fstab examples below).

Here is the line in /etc/fstab for an ext4 file system:

/dev/exportvg/atlasvg        /local/xrootd/a    ext4  defaults,user_xattr 1 2

Here is the corresponding line in /etc/fstab for an xfs file system:

/dev/exportvg/atlasvg        /local/xrootd/a    xfs     defaults,attr2  1 2
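
After adding the fstab entry, create the mount point (if missing) and mount it:

mkdir -p /local/xrootd/a
mount /local/xrootd/a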

  • Create the atlas name space directory /local/xrootd/a/atlas and the users' directory, and set the ownership to xrootd:
mkdir /local/xrootd/a/atlas
mkdir /local/xrootd/a/atlas/local
chown -R xrootd:xrootd /local/xrootd/a/atlas

Multiple partitions for data files on data servers

If for various reasons you find it impractical to use just one data partition on a given xrootd data server, then the oss.cache mechanism can be used to spread the files automatically across data partitions. Symbolic links within /atlas/* (and subdirectories within) will point to the actual files.

Create mount points that all have a similar name, for example /local/data1, /local/data2, /local/data3 (ext4 and/or xfs file systems are recommended).

Here is an example of the lines in the /etc/fstab file:

 /dev/exportvg/atlasvg1          /local/data1            xfs     defaults,attr2  1 2
 /dev/exportvg/atlasvg2          /local/data2            xfs     defaults,attr2  1 2
 /dev/exportvg/atlasvg3          /local/data3            ext4    defaults,user_xattr 1 2
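
Then create the mount points and mount everything listed in fstab:

mkdir -p /local/data1 /local/data2 /local/data3
mount -a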

  • Set the ownership of /local/data* to xrootd:
   chown xrootd:xrootd /local/data1
   chown xrootd:xrootd /local/data2
   chown xrootd:xrootd /local/data3

  • Create the /atlas and /atlas/local directories:
mkdir -p /atlas/local
chown -R xrootd:xrootd /atlas

Firewall port usage by xrootd

Machine type    Port number    Protocol
Redirector      1094           tcp
Redirector      1213           tcp
Data Server     1094           tcp
Data Server     1213           tcp
NFS node        1094           tcp

If the simple proxy runs on the NFS node, its ports must also be open (41094 and 41213 in the example configuration below).
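
If the nodes run a host firewall, these ports must be opened there. A minimal iptables sketch for a redirector or data server (iptables is the stock firewall on SL5; adapt to your site policy):

/sbin/iptables -I INPUT -p tcp --dport 1094 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 1213 -j ACCEPT
/sbin/service iptables save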

xrootd sysconfig file /etc/sysconfig/xrootd

Note: the work is done by the root account

The xrootd and cmsd daemons are controlled by the xrootd configuration files and by the file /etc/sysconfig/xrootd. The file /etc/sysconfig/xrootd contains information on the instances of xrootd, cmsd and frmd that might run. It is a self-documented file with good comments that explain how the file works. It also lists the configuration files that will be used.

/etc/sysconfig/xrootd file for head node/redirector

#-------------------------------------------------------------------------------
# Define the instances of xrootd, cmsd and frmd here and specify the option you
# need. For example, use the -d flag to send debug output to the logfile,
# the options responsible for daemonizing, pidfiles and instance naming will
# be appended automatically.
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Define the user account name which will be used to start the daemons.
# These may have many unexpected side effects, so be sure you know what you're
# doing before playing with them.
#-------------------------------------------------------------------------------
XROOTD_USER=xrootd
XROOTD_GROUP=xrootd

#-------------------------------------------------------------------------------
# Define the commandline options for the instances of the daemons.
# The format is:
# DAEMON_NAME_OPTIONS, where:
#   DAEMON - the daemon name, the valid values are: XROOTD, CMSD or FRMD
#   NAME   - the name of the instance, any uppercase alphanumeric string
#            without whitespaces is valid
#-------------------------------------------------------------------------------
XROOTD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
CMSD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg"
FRMD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg"
#-------------------------------------------------------------------------------
# Names of the instances to be started by default, the case doesn't matter,
# the names will be converted to lowercase automatically, use space as a
# separator
#-------------------------------------------------------------------------------
XROOTD_INSTANCES="default"
CMSD_INSTANCES="default"
#FRMD_INSTANCES="default"

/etc/sysconfig/xrootd file for the NFS data server (acting as proxy server and gridftp server also)

#-------------------------------------------------------------------------------
# Define the instances of xrootd, cmsd and frmd here and specify the option you
# need. For example, use the -d flag to send debug output to the logfile,
# the options responsible for daemonizing, pidfiles and instance naming will
# be appended automatically.
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Define the user account name which will be used to start the daemons.
# These may have many unexpected side effects, so be sure you know what you're
# doing before playing with them.
#-------------------------------------------------------------------------------
XROOTD_USER=xrootd
XROOTD_GROUP=xrootd

#-------------------------------------------------------------------------------
# Define the commandline options for the instances of the daemons.
# The format is:
# DAEMON_NAME_OPTIONS, where:
#   DAEMON - the daemon name, the valid values are: XROOTD, CMSD or FRMD
#   NAME   - the name of the instance, any uppercase alphanumeric string
#            without whitespaces is valid
#-------------------------------------------------------------------------------
XROOTD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
CMSD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg"
FRMD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg"

XROOTD_SIMPLEPROXY_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-simple-proxy.cfg"
CMSD_SIMPLEPROXY_OPTIONS="-k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-simple-proxy.cfg"

# use the following line if the nfs server is standalone
#XROOTD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-standalone.cfg"

#-------------------------------------------------------------------------------
# Names of the instances to be started by default, the case doesn't matter,
# the names will be converted to lowercase automatically, use space as a
# separator
#-------------------------------------------------------------------------------
#XROOTD_INSTANCES="default"
#CMSD_INSTANCES="default"
XROOTD_INSTANCES="default simpleproxy"
CMSD_INSTANCES="default simpleproxy"
FRMD_INSTANCES="default"

/etc/sysconfig/xrootd file for data server

#-------------------------------------------------------------------------------
# Define the instances of xrootd, cmsd and frmd here and specify the option you
# need. For example, use the -d flag to send debug output to the logfile,
# the options responsible for daemonizing, pidfiles and instance naming will
# be appended automatically.
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# Define the user account name which will be used to start the daemons.
# These may have many unexpected side effects, so be sure you know what you're
# doing before playing with them.
#-------------------------------------------------------------------------------
XROOTD_USER=xrootd
XROOTD_GROUP=xrootd

#-------------------------------------------------------------------------------
# Define the commandline options for the instances of the daemons.
# The format is:
# DAEMON_NAME_OPTIONS, where:
#   DAEMON - the daemon name, the valid values are: XROOTD, CMSD or FRMD
#   NAME   - the name of the instance, any uppercase alphanumeric string
#            without whitespaces is valid
#-------------------------------------------------------------------------------
XROOTD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
#XROOTD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-standalone.cfg"
CMSD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg"
FRMD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg"

#-------------------------------------------------------------------------------
# Names of the instances to be started by default, the case doesn't matter,
# the names will be converted to lowercase automatically, use space as a
# separator
#-------------------------------------------------------------------------------
XROOTD_INSTANCES="default"
CMSD_INSTANCES="default"
FRMD_INSTANCES="default"

XRootd configuration file for the redirector and clustered data servers: /etc/xrootd/xrootd-clustered.cfg

Note: more information on the XRootd configuration files can be found at: http://www.xrootd.org/docs.html

#COMMON INFORMATION

#change to be your local redirector (private network name)
set head = headprv.hep.anl.gov

set exportpath = /atlas

# change to follow your local convention if different
set localroot = /local/xrootd/a

all.adminpath /var/spool/xrootd/admin

# change to follow your local network convention
cms.allow host *.hep.anl.gov

#  xrootd daemon used for server-side inventory on the redirector machine
if named cns
        all.export $(exportpath)/inventory
        xrd.port 1095
else
      all.export $(exportpath)
      all.role server
      all.role manager if $(head)
      all.manager $(head):1213
      xrd.port 1094 if exec xrootd

      all.manager meta glrd.usatlas.org:1095

      oss.localroot $(localroot)

      xrootd.chksum max 3 adler32 /usr/bin/xrdadler32

      xrootd.seclib /usr/lib64/libXrdSec.so
      # specify the sss authentication module
      sec.protocol /usr/lib64 sss -s /var/spool/xrootd/.xrd/sss.keytab
      # this specify that we use the 'unix' authentication module, additional one can be specified.
      sec.protocol /usr/lib64 unix
      # this is the authorization file
      acc.authdb /etc/xrootd/auth_file
      ofs.authorize

      ofs.notify closew create mkdir mv rm rmdir trunc |  /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(head):1095:$(exportpath)/inventory

      all.export $(exportpath) r/w stage purge nocheck nodread
      all.export $(exportpath)/local r/w

      frm.purge.policy * 500g 750g hold 168h
      frm.xfr.copycmd in stats noalloc /etc/xrootd/new-stagein.sh $SRC $DST $CID
      #frm.xfr.copycmd in stats /etc/xrootd/new-stagein.sh $SRC $DST $CID
      oss.xfr deny 10m
fi

cms.prep echo
cms.space min 10g 15g

If you are using multiple partitions to store files on the data server, then edit the configuration file: comment out the oss.localroot $(localroot) line and add an oss.space line. For example, using the multiple partitions from the example above:

      oss.space public /local/data*

      #oss.localroot $(localroot)

XRootd configuration file for the stand alone data servers: /etc/xrootd/xrootd-standalone.cfg

Note: more information on the XRootd configuration files can be found at: http://www.xrootd.org/docs.html

#COMMON INFORMATION

set exportpath = /atlas

# change to follow your local convention if different
set localroot = /local/xrootd/a

all.adminpath /var/spool/xrootd/admin

# change to follow your local network convention
cms.allow host *.hep.anl.gov

all.export $(exportpath)
all.role server

xrd.port 1094 if exec xrootd

all.manager meta glrd.usatlas.org:1095

oss.localroot $(localroot)

xrootd.chksum max 3 adler32 /usr/bin/xrdadler32

xrootd.seclib /usr/lib64/libXrdSec.so
# specify the sss authentication module
sec.protocol /usr/lib64 sss -s /var/spool/xrootd/.xrd/sss.keytab
# this specify that we use the 'unix' authentication module, additional one can be specified.
sec.protocol /usr/lib64 unix
# this is the authorization file
acc.authdb /etc/xrootd/auth_file
ofs.authorize

ofs.notify closew create mkdir mv rm rmdir trunc |  /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(head):1095:$(exportpath)/inventory

all.export $(exportpath) r/w stage purge nocheck nodread
all.export $(exportpath)/local r/w

frm.purge.policy * 500g 750g hold 168h
frm.xfr.copycmd in stats noalloc /etc/xrootd/new-stagein.sh $SRC $DST $CID
#frm.xfr.copycmd in stats /etc/xrootd/new-stagein.sh $SRC $DST $CID
oss.xfr deny 10m

cms.prep echo
cms.space min 10g 15g

If you are using multiple partitions to store files on the data server, then edit the configuration file: comment out the oss.localroot $(localroot) line and add an oss.space line. For example, using the multiple partitions from the example above:

      oss.space public /local/data*

      #oss.localroot $(localroot)

xrootd Simple Shared Secret (sss) keytab file

Inside the configuration files for the data servers (and on the machines where xrootdfs will be mounted read/write), you will notice a reference to the xrootd simple shared secret keytab file.

# specify the sss authentication module
sec.protocol /usr/lib64 sss -s /var/spool/xrootd/.xrd/sss.keytab.grp -c /var/spool/xrootd/.xrd/sss.keytab.grp

As root on the head node (redirector machine):

  • Use the xrdsssadmin command to create the keytab file:
xrdsssadmin -u anybody -g usrgroup -k xrootdfs_key add /var/spool/xrootd/.xrd/sss.keytab.grp

  • The xrdsssadmin command can be used to check the contents:
xrdsssadmin list /var/spool/xrootd/.xrd/sss.keytab.grp
     Number Len Date/Time Created Expires  Keyname User & Group
     ------ --- --------- ------- -------- -------
          1  32 09/07/11 09:55:10 -------- xrootdfs_key anybody usrgroup

  • Copy the file to /var/spool/xrootd/.xrd/sss.keytab on each data server (for example, with the sketch after these steps).

  • Change the ownership to xrootd:xrootd (or whatever account your xrootd runs under):

chown xrootd:xrootd /var/spool/xrootd/.xrd/sss.keytab.grp
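
For example, the keytab can be pushed from the head node to each data server; a minimal sketch assuming root ssh access and hypothetical host names:

for h in wn1.hep.anl.gov wn2.hep.anl.gov nfsprv.hep.anl.gov; do
    scp /var/spool/xrootd/.xrd/sss.keytab.grp root@$h:/var/spool/xrootd/.xrd/sss.keytab
    ssh root@$h chown xrootd:xrootd /var/spool/xrootd/.xrd/sss.keytab
done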

Further details about sss can be found at this URL: http://www.xrootd.org/doc/prod/sec_config.htm#_Toc248670309

xrootd security file for all data servers

This is the contents of the security file that should be on all data servers. The xrootd configuration file lists the location of this file: look for the acc.authdb entry (acc.authdb /etc/xrootd/auth_file in the configurations above).

The contents of the security file should look like:

# This means that all the users have read access to the datasets
u * /atlas lr

# This means that all the users have full access to their private dirs
u = /atlas/local/@=/ a

# This means that this privileged user can do everything
# You need at least one user like that, in order to create the
# private dir for each user willing to store his data in the facility
u xrootd /atlas a
u atlasadmin /atlas a

Tip: you can add other user names that can be used to write anywhere in the xrootd data space. It is good to have most users write only into their own area; this prevents them from removing someone else's files.
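
For example, once xrootdfs (described below) is mounted read/write on an administrative machine, the privileged account can create a user's private area; a sketch with a hypothetical user name jdoe:

mkdir /headprv/atlas/local/jdoe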

XRootd configuration file for the simple proxy server connected to local redirector

This proxy server serves files from the local data servers, which are clustered through the local redirector.

Note: more information on the XRootd configuration files can be found at: http://www.xrootd.org/docs.html

all.adminpath /var/spool/xrootd/admin

all.export /atlas r/o
ofs.osslib /usr/lib64/libXrdPss.so
# change to the name of your local redirector
pss.origin headprv.hep.anl.gov:1094


# While we call this instance a "server" it is, in fact, a proxy server since we told the ofs
# to use libXrdPss.so as the storage handler (this is really the proxy service library). We
# do this so that we can directly subscribe to the meta-manager instead of doing so
# indirectly via a local manager. We don't need a local manager as there is a single
# proxy running in this cluster.
#
all.role server
all.manager glrd.usatlas.org:1095

# change the port number to fit your needs (do not use 1094)
xrd.port 41094 if exec xrootd

# change the port number to fit your needs (do not use 1213)
xrd.port 41213 if exec cmsd

xrd.allow host *

XRootd configuration file for the simple proxy server connected to stand alone XRootd dataserver
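
By analogy with the configuration in the previous section, a plausible minimal sketch simply points pss.origin at the standalone data server instead of the redirector (the host name nfsprv.hep.anl.gov and the ports are examples; adjust to your site):

all.adminpath /var/spool/xrootd/admin

all.export /atlas r/o
ofs.osslib /usr/lib64/libXrdPss.so
# change to the name of your standalone data server
pss.origin nfsprv.hep.anl.gov:1094

all.role server
all.manager glrd.usatlas.org:1095

# change the port numbers to fit your needs (do not use 1094 or 1213)
xrd.port 41094 if exec xrootd
xrd.port 41213 if exec cmsd

xrd.allow host *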


FRM copy script used to copy files from federated storage

This script, used to copy files into local xrootd storage from the federated storage, should run on all data servers that are connected to the federated xrootd storage. These data servers can send (through a proxy) and fetch data files from other data servers within the system.

#!/bin/bash

#
#  Note this script expects 3 input parameters
#
#  1 - xrootd source file name
#  2 - xrootd destination file name
#  3 - xrootd cluster ID
#

# create a temporary file
tempfoo=`basename $0`
TMPFILE=`mktemp -q /tmp/${tempfoo}.XXXXXX`
if [ $? -ne 0 ]; then
    echo "$0: Can't create temp file, exiting..."
    exit 1
fi

ARGS=3         # Script requires 3 arguments.
E_BADARGS=85   # Wrong number of arguments passed to script.

if [ $# -ne "$ARGS" ]
then
  echo "Usage: `basename $0` LFN PFN CID"
  exit $E_BADARGS
fi

xrdport=1094
bindir=/usr/bin

global_rdr=glrd.usatlas.org:${xrdport}

local_svr=`/bin/hostname`:${xrdport}
lfn=`echo $1 | /usr/bin/tr -d '[:blank:]'`
pfn=`echo $2 | /usr/bin/tr -d '[:blank:]'`
cid=$3

failed_file=${pfn}.fail
rfn="root://${global_rdr}/${lfn}?tried=+${cid}"
#tfn="root://${local_svr}/${pfn}"
tfn="root://${local_svr}/${lfn}"

echo "$0 - "`date`>>  $TMPFILE
echo "failed_file name = $failed_file ">>  $TMPFILE
echo lfn: $lfn>>  $TMPFILE
echo pfn: $pfn>>  $TMPFILE
echo rfn: $rfn>>  $TMPFILE
echo tfn: $tfn>>  $TMPFILE
echo "copy command: ${bindir}/xrdcp -f -s ${rfn} ${tfn} ">>  $TMPFILE

#
# check if the file exists within the federation
#
src=`${bindir}/xrd $global_rdr locatesingle $lfn | \
    tail -2 | head -1 | cut -f2 -d' ' | sed -e 's/^.//g; s/.$//g'`

if [ X$src = X ]; then
  echo "locate of lfn: $lfn failed" >> $TMPFILE
  exit 2
fi

# Now get the file via xrdcp
${bindir}/xrdcp -f -s ${rfn} ${tfn}>> $TMPFILE 2>&1
ret_code=RC$?
if [ ! $ret_code == "RC0" ] ; then  #  xrdcp files failed
    mv -fv $TMPFILE $failed_file
    exit 5
fi

echo "dst - ${bindir}/xrdadler32 $pfn " >> $TMPFILE
${bindir}/xrdadler32 $pfn >> $TMPFILE
echo "src - ${bindir}/xrdadler32 $tfn " >> $TMPFILE
${bindir}/xrdadler32 $tfn >> $TMPFILE

dst_adl=`${bindir}/xrdadler32 $pfn | cut -f1 -d' '`
src_adl=`${bindir}/xrdadler32 $tfn | cut -f1 -d' '`

echo dst: adler32 $dst_adl >> $TMPFILE
echo src: adler32 $src_adl >> $TMPFILE
echo dst: adler32 $dst_adl
echo src: adler32 $src_adl

if [ "$src_adl" == "$dst_adl" ]; then
    /bin/rm $TMPFILE
    exit 0
else
    echo lfn: adler32 of $lfn mismatch >> $TMPFILE
    echo dst: adler32 $dst_adl >> $TMPFILE
    echo src: adler32 $src_adl >> $TMPFILE
    mv $TMPFILE $failed_file
    exit 5
fi

/bin/rm $TMPFILE
exit 0
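
The script can be exercised by hand before frm drives it; a sketch with hypothetical file names and cluster ID (the file must already exist somewhere in the federation):

/etc/xrootd/new-stagein.sh /atlas/somefile /local/xrootd/a/atlas/somefile testcid
echo $?   # 0 on success; on failure inspect the .fail file left next to the destination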

Changes to the frm script to be used on data servers with cached xrootd space

  • In the cluster configuration where the xrootd storage on the worker nodes is just cache space, the frm running on these nodes can automatically copy files from the standalone data server to the clustered storage on the worker nodes. The frm purge daemon will ensure that space on the worker nodes is automatically freed as needed.

  • The change required to the frm script shown above:

Replace:

global_rdr=glrd.usatlas.org:${xrdport}

with the name of your standalone data server (for example nfsprv.hep.anl.gov)

global_rdr=nfsprv.hep.anl.gov:1094

Starting and Stopping XRootd

The XRootd processes are controlled by the /sbin/service command.

  • Initially, on all XRootd nodes (redirector, data servers, proxy), the system must be set up after the configuration files are in place:

/sbin/service xrootd setup

Starting Xrootd services

  • Start xrootd
/sbin/service xrootd start

  • If needed run the /sbin/chkconfig command to ensure xrootd starts after reboot
/sbin/chkconfig xrootd on

  • Start cmsd
/sbin/service cmsd start

  • If needed run the /sbin/chkconfig command to ensure cmsd starts after reboot
/sbin/chkconfig cmsd on

Note: on a standalone data server the cmsd does not need to be running.

  • Start the frm transfer and purge daemons if needed on a data server:
/sbin/service frmd start

  • If needed run the /sbin/chkconfig command to ensure the frm daemons start after reboot:
/sbin/chkconfig frm_xfrd on
/sbin/chkconfig frm_purged on
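
Once started, the init scripts' status action and the log files can confirm that the daemons are up (log locations follow the sysconfig options above):

/sbin/service xrootd status
/sbin/service cmsd status
tail -n 20 /var/log/xrootd/xrootd.log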

Stopping Xrootd services

  • Stop xrootd
/sbin/service xrootd stop

  • Stopping cmsd
/sbin/service cmsd stop

Note: on a standalone data server the cmsd does not need to be running.

  • Stop the frm transfer and purge daemons if needed on a data server:
/sbin/service frmd stop

Installing the XrootdFS file system

Note: this section still needs more work.

XrootdFS is a POSIX file system for an Xrootd storage cluster based on FUSE (Filesystem in Userspace). FUSE is a kernel module that intercepts and services requests to non-privileged user space file systems like XrootdFS. Install XrootdFS on nodes where you want a single Xrootd file system to appear, e.g. on an interactive user node.

FUSE installation requires root privileges but FUSE is normally already installed. XrootdFS can be installed by the Xrootd Administrator (xrootd).

xrootdfs file systems can be mounted like other file systems, so it is advised to put the xrootdfs mount points in the /etc/fstab file on each node where you want xrootdfs mounted.

Install FUSE (root account)

These rpm packages must be installed:
  • fuse
  • fuse-libs
yum -y install fuse fuse-libs

This can be checked by using rpm (e.g. rpm -q [package-name]) and verifying that the package name and version are returned. If the package is not installed, rpm will print a message saying so. The installation can be done via the yum utility (e.g. yum install fuse fuse-libs) or via rpm commands directly. Using yum is preferable since it will bring in any dependencies that fuse or fuse-libs require and automatically install the correct versions of the fuse kernel modules. Alternatively (for those familiar with building kernels) FUSE can be downloaded from http://sourceforge.net/projects/fuse/ and built according to the instructions provided there. Note that root privileges are required. It is essential that the FUSE version (> 2.7.3) and flavor match your kernel.
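
For example:

rpm -q fuse fuse-libs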

To make it easier for users to understand what is in their XRootd system, you need to do the following:

  • On the head node and interactive nodes, create the mount points: empty directories owned by xrootd.

mkdir -pv /headprv/atlas
chown xrootd:xrootd /headprv/atlas

Pick whichever path you like instead of /headprv/atlas, but it will be nice for users if what is labeled atlas here matches your storage path. This is easy to change later after you've tested it so don't worry too much at this time.

If your cluster has a standalone xrootd file server on the NFS node:

mkdir -pv /nfsprv/atlas
chown xrootd:xrootd /nfsprv/atlas
(and as above for the path)

  • Create the file /etc/fuse.conf with the following line (add the line if the file exists):
    user_allow_other
  • Add the user xrootd to the group fuse by editing /etc/group: add xrootd after the last colon, with a comma if other users are already present.
fuse:x:104:xrootd

or if another user is already a member of the fuse group:

fuse:x:104:cvmfs,xrootd
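
Equivalently, the group membership can be added with usermod instead of editing /etc/group by hand:

/usr/sbin/usermod -a -G fuse xrootd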

Install xrootdfs

xrootdfs is now completely integrated into the rest of the xrootd code base and is part of the YUM repository. xrootdfs is installed with the command (as root):

yum install  xrootd-fuse.x86_64

Configure xrootdfs

Assuming that your Xrootd redirector is headprv.myuniv.edu, here is what you need to add to /etc/fstab:

#xrootd mounts

#xrootdfs  /headprv/atlas fuse  rdr=root://headprv.myuniv.edu:1094//atlas,uid=nnn,sss=/var/spool/xrootd/.xrd/sss.keytab 0 0
xrootdfs  /headprv/atlas fuse  ro,rdr=root://headprv.myuniv.edu:1094//storage_path,uid=nnn 0 0
where storage_path is the xrootd storage path, i.e. the path users specify for xrdcp, file access, etc.

Note: uid=nnn should be the id number of the xrootd user. The ro mount option is due to an XRootd feature on machines with both a public and a private network.

xrootdfs understands XRootd simple shared secret (sss) authentication protocol. ( see the XRootd security document for further details: http://www.xrootd.org/doc/prod/sec_config.htm)

Note: unfortunately, in v3.0.4 the sss authentication protocol does not work on machines connected to both public and private networks. This will be fixed in the client in release v3.1.0.

On a machine that does not have general user access, xrootdfs can be mounted read/write.

Start xrootdfs

One can use the information in /etc/fstab to mount xrootdfs from the command line initially. After a machine reboot, xrootdfs will be mounted automatically.

 mount -t fuse -o ro,rdr=root://headprv.myuniv.edu:1094//storage_path,uid=nnnn xrootdfs  /headprv/atlas

If you got your /etc/fstab right, you should be able to just:

 mount /headprv/atlas
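
A quick sanity check after mounting (paths follow the examples above):

ls /headprv/atlas
df -h /headprv/atlas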

